Penn State leads $8.5M, multi-institution DARPA project on mixed-reality systems

A team of Penn State researchers has been selected to lead a three-year, $8,552,388 multi-institution project that aims to model the risks and human behaviors as well as the attacks and mitigations within MR systems. Credit: Poornima Tomy/Penn State. All Rights Reserved.

UNIVERSITY PARK, Pa. — A team of Penn State researchers has been selected to lead a three-year, $8,552,388 multi-institution project funded by the Defense Advanced Research Projects Agency (DARPA) to identify cognitive threats in mixed-reality (MR) systems as part of the agency's Intrinsic Cognitive Security program.  

The project, known as "Verified Probabilistic Cognitive Reasoning for Tactical Mixed Reality Systems (VeriPro)," is led by Gary Tan, professor of computer science and engineering, and includes researchers from George Washington University, Northeastern University, University of Southern California, Kennesaw State University and industry partner Design Interactive. The other Penn State researchers on the project are Bin Li, associate professor of electrical engineering and computer science, and Jonathan Dodge, assistant professor in the College of Information Sciences and Technology. Penn State’s share of the project is $3,409,978.

The goal of the project, according to the researchers, is to model the risks and human behaviors as well as the attacks and mitigations within MR systems.  

“While virtual reality is a total immersion experience, mixed reality is an interactive experience of a real-world environment that typically has an overlay of a digital object on the physical world,” Li said. “An example might be a firefighter training scenario, where the trainee can see digital fires overlaid on a real room that they are standing in.”
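
To make the overlay idea concrete, the following is a minimal, purely illustrative sketch (not part of VeriPro) that composites a simulated hazard marker onto a live camera feed. It assumes Python with OpenCV and a webcam at index 0, and the marker position is hard-coded rather than derived from tracking.

```python
# Illustrative only: overlay a synthetic "fire" marker on a live camera feed,
# the basic compositing step behind a mixed-reality training view.
# Assumes OpenCV (pip install opencv-python) and a webcam at index 0.
import cv2

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    overlay = frame.copy()
    h, w = frame.shape[:2]
    # Hypothetical hazard location; a real MR system would derive this from tracking.
    center = (w // 2, h // 2)
    cv2.circle(overlay, center, 60, (0, 0, 255), -1)  # red "fire" disc (BGR)
    cv2.putText(overlay, "FIRE (simulated)", (center[0] - 90, center[1] - 80),
                cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 0, 255), 2)
    # Blend the digital layer with the physical-world frame.
    blended = cv2.addWeighted(overlay, 0.4, frame, 0.6, 0)
    cv2.imshow("MR overlay sketch", blended)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```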

While MR systems are already in use, they haven’t yet been used extensively in national defense. The researchers said that DARPA anticipates future combat and defense scenarios where MR is relied upon, and that in these scenarios, bad actors could attack the MR systems and launch cognitive attacks — attacks carried out through hacking and intended to overload or mislead users.  

According to the researchers, these cognitive attacks could greatly undermine the effectiveness of MR systems by disrupting or altering the display, or could provide false information through content manipulation, altered fields of view, display lag, changes to the colors of objects in color-coded fields (for example, changing red-coded dangerous objects to green-coded safe objects) and other means.
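
As a purely hypothetical illustration of the color-coding manipulation described above (not drawn from the project), the sketch below shows how a compromised overlay pipeline could silently swap danger and safety colors before annotations reach the display. The Annotation structure and color convention are invented for the example.

```python
# Illustrative only: how a compromised rendering pipeline could invert a
# color-coded threat display. The Annotation structure and colors are
# hypothetical, not taken from any real MR system.
from dataclasses import dataclass

RED = (255, 0, 0)    # convention here: red = dangerous
GREEN = (0, 255, 0)  # convention here: green = safe

@dataclass
class Annotation:
    label: str
    color: tuple  # RGB

def malicious_recolor(annotations):
    """Swap danger/safe color coding -- the kind of content manipulation
    a cognitive attack could apply before the overlay reaches the display."""
    swapped = []
    for a in annotations:
        if a.color == RED:
            swapped.append(Annotation(a.label, GREEN))
        elif a.color == GREEN:
            swapped.append(Annotation(a.label, RED))
        else:
            swapped.append(a)
    return swapped

if __name__ == "__main__":
    scene = [Annotation("unexploded ordnance", RED),
             Annotation("cleared corridor", GREEN)]
    for a in malicious_recolor(scene):
        print(a.label, "->", "red" if a.color == RED else "green")
```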

“Some of the challenges with identifying and reasoning about the risks and attacks for MR systems include lack of cognitive models relevant to MR system threats, limited real-world datasets for cognitive models and difficulty evaluating on realistic missions,” said Tan, who is also a Penn State Institute for Computational and Data Sciences co-hire.  

To address these challenges, Penn State researchers will collaborate with additional experts at other institutions and lead the effort to identify MR system threats and mitigate their impacts. 

“Penn State is an ideal candidate for leading this project that addresses important national security needs,” said Andrew Read, senior vice president for research at Penn State. “This is because of our unique facilities, expert researchers and proven track record of working collaboratively across the University and other institutions in order to synthesize multiple areas of academic focus for practical applications.”  

The researchers from the partner institutions will create compact cognitive models, and Tan will oversee the assessment of these models using a verification technique known as probabilistic program verification. Li will develop the real-world MR testbed focused on search and rescue, and Dodge will use the testbed in Li’s lab to collect data to further refine the models.
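
The sketch below is not the team's verification method; it only illustrates the general flavor of a probabilistic check: estimating, by sampling a toy reaction-time model, whether the probability that a user responds to a hazard cue within a deadline stays above a required bound as display lag grows. All model parameters are hypothetical.

```python
# Illustrative only: a Monte Carlo estimate of a probabilistic property,
# not the VeriPro verification approach. All numbers are hypothetical.
import random

def reaction_time_model(display_lag_ms: float) -> float:
    """Toy model: baseline human reaction time plus noise, slowed by display lag."""
    base = random.gauss(450.0, 80.0)      # ms, hypothetical baseline
    return base + 1.5 * display_lag_ms    # lag amplifies the response time

def prob_meets_deadline(lag_ms: float, deadline_ms: float, trials: int = 100_000) -> float:
    """Estimate P(reaction time <= deadline) by sampling the model."""
    hits = sum(reaction_time_model(lag_ms) <= deadline_ms for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    required = 0.95  # property to check: P(react within 800 ms) >= 0.95
    for lag in (0.0, 100.0, 250.0):
        p = prob_meets_deadline(lag, deadline_ms=800.0)
        verdict = "holds" if p >= required else "VIOLATED"
        print(f"lag={lag:5.0f} ms  P(on time)={p:.3f}  property {verdict}")
```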

“My role is to design the tasks humans will perform, then help Dr. Li to conduct the experiments in his lab,” Dodge said. “Since our testbed is a search-and-rescue task, this involves first learning about how experts actually perform this task and how technology can assist, as well as how technology may hinder the task.  It is my hope that in addition to learning about how to protect MR systems from cognitive attacks, we are able to also learn more about humans using such devices so that they work better for everyday users.” 

Last Updated October 9, 2024
