UNIVERSITY PARK, Pa. — Patrick McDaniel, the William L. Weiss Chair in Information and Communications Technology in the Penn State College of Engineering, was selected by the National Science and Technology Council, in coordination with the Executive Office of the President of the United States, to lead a June 4-6 technical workshop on cybersecurity with experts from across the nation. He recently co-authored a report summarizing the workshop's discussions, which was presented to several government research agencies and subcommittees and will be widely shared with policymakers.
The purpose of the workshop and subsequent report, “Artificial Intelligence and Cybersecurity: Opportunities and Challenges 2019,” was to determine the needs and opportunities related to artificial intelligence (AI) and machine learning, as well as where and how the U.S. government should allocate resources for the future security of AI and machine learning.
The White House approached McDaniel to lead the workshop and author the report largely because last year, with a five-year, nearly $10 million NSF Frontier Grant, he helped Penn State establish and lead the Center for Trustworthy Machine Learning. Researchers in the multi-institution, multidisciplinary center aim to develop a rigorous understanding of the security risks involved in the use of machine learning and AI. John Launchbury, chief technology officer at Galois, co-led the workshop and co-authored the report with McDaniel; the effort received direct support from the National Security Agency, the Army Research Office and the National Science Foundation.
“We know this technology is coming — it’s here — and it’s revolutionizing everything from automotive to medicine to education,” McDaniel said. “As it revolutionizes and changes all of these industries, we need to understand what the vulnerabilities are in the use of AI and machine learning in these fields. This is why we need to invest in research to help us secure the future.”
In addition to emphasizing the importance of understanding vulnerabilities in the software, systems and algorithms of AI and machine learning, the report also discusses the need to develop an engineering discipline related to these concerns.
“We’re just starting to understand how to use all of this technology effectively and in safe and secure ways, but we don’t really have a foundational engineering discipline the way we have in other areas such as aviation or civil engineering,” McDaniel said. “We need to develop that engineering practice because so much of this is really new.”
A third thrust of the report addresses human factors. The authors noted that interactions between humans and AI are also vulnerable, as people do not yet know how best to use new technologies such as digital assistants and autonomous vehicles securely.
McDaniel said that one of the most surprising aspects to arise out of the workshop was that the experts were concerned not solely with the technology itself but also with its effects on society.
“One of the surprising and lengthy conversations was on developing trust in this machine learning and AI,” McDaniel said. “How do we start trusting computers and algorithms more? Because if we develop technology and provide it to industries and individuals but they don’t trust it, then they’re not going to make effective use of it.”
According to McDaniel, policymakers will use the report to inform decisions related to the priorities of AI and machine learning development.