UNIVERSITY PARK, Pa. — Many people use facial recognition technology on their personal devices to quickly and securely unlock a phone or authorize an online transaction. But when that same technology is deployed in public settings, such as screening airport passengers or granting access to secure locations, how do the individuals whose images are captured feel?
According to a new study led by Penn State and the University of Alabama, an organization’s decision to publicly deploy facial recognition technology, and whether stakeholders are informed and involved in that decision, could not only raise users’ concerns about privacy, data security, mass surveillance and bias against minority groups, but also reveal issues of organizational justice, or perceptions of fairness, within the organization itself.
“Technology is created by humans, and humans can be easily biased, so technology is never neutral,” said Yao Lyu, a doctoral student in the Penn State College of Information Sciences and Technology. “The human-computer interaction community has been paying growing attention to justice issues in technology design and development. But as we highlight in our study, besides design and development, the implementation of technology could also engender justice issues, especially in an organizational setting where various stakeholders are involved.”
Unlike facial recognition on a personal device, where the user has full control over the decision to use the tool as a means of authentication, recent implementations of facial recognition technology in public settings capture and use individuals’ images without their consent. This has led to growing controversy, most notably over privacy and the disproportionate misidentification of women and people of color, which has resulted in false arrests stemming from algorithmic bias in the facial recognition tools that law enforcement agencies use to identify suspects and witnesses. Ongoing debates have led U.S. cities and states, including San Francisco, Boston and Maine, to limit or ban the use of facial recognition technology in public spaces. Yet its deployment is on the rise in certain sectors, including public U.S. universities, where Lyu and his team focused their study.
“I was surprised by the fact that the education sector is among the public settings in which this technology is being implemented at scale, through campus security, attendance monitoring and virtual learning,” said Hengyi Fu, assistant professor in the College of Communication & Information Sciences at the University of Alabama and co-author of the study. “In contrast to more contentious discussions about facial recognition technology in other areas of society, there is little sustained opposition to its implementation in schools.”
According to Fu, in a higher education setting, where many young people shape their sense of social identity and may be more willing to share personal information, implementation of facial recognition technology could lead to the “normalized elimination of practical obscurity,” a pre-internet concept holding that private information in publicly accessible records, such as police logs, is largely protected by limitations on accessibility.
“Attempting to manage what is known and disclosed about oneself can be seen as a legitimate way of students ensuring that their actions and intentions are correctly interpreted and understood,” she said.