Research

IST researchers exploit vulnerabilities of AI-powered game bots

Researchers in the College of Information Sciences and Technology have developed an algorithm to train an adversarial bot, which was able to automatically discover and exploit weaknesses of master game bots driven by reinforcement learning algorithms used in popular online games. Credit: Adobe Stock: PARILOV EGENIY. All Rights Reserved.

UNIVERSITY PARK, Pa. — If you’ve ever played an online video game, you’ve likely competed with a bot — an AI-driven program that plays on behalf of a human.

Many of these bots are created using deep reinforcement learning, in which an algorithm learns to achieve a complex goal through trial and error guided by a reward signal. But, according to researchers in the College of Information Sciences and Technology at Penn State, game bots trained with deep reinforcement learning can be defeated with relative ease by attackers who use deception.
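The reward-driven training loop the article describes can be illustrated with a minimal sketch. The snippet below is tabular Q-learning on a toy five-state corridor, where the agent earns a reward only at the goal; the environment, action set, and hyperparameters are illustrative inventions, not part of the StarCraft II work discussed here.

```python
import random

N_STATES = 5
ACTIONS = [-1, +1]          # move left or right along the corridor
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-values start at zero for every (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; reward 1.0 only for reaching the goal state."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

random.seed(0)
for _ in range(500):                      # training episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        target = reward + GAMMA * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state = nxt

# After training, the greedy policy moves right (toward the goal) from every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Production game bots replace the lookup table with a deep neural network, but the principle is the same: behavior is shaped entirely by the reward signal, which is also what an adversary can exploit.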

To highlight this risk, the researchers designed an algorithm to train an adversarial bot, which automatically discovered and exploited weaknesses of master game bots driven by reinforcement learning algorithms. Their bot then defeated a world-class AI bot in the award-winning computer game StarCraft II.
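The core idea, treating a frozen "master" bot as part of the environment and training an adversary against it with a simple reward signal, can be sketched in a few lines. The game (rock-paper-scissors), the victim's biased policy, and the bandit-style learner below are illustrative stand-ins assumed for the example, not the researchers' actual algorithm or the StarCraft II setup.

```python
import random

random.seed(1)
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def victim_policy():
    """Frozen master bot with a hidden weakness: it over-plays rock."""
    return random.choices(MOVES, weights=[0.5, 0.3, 0.2])[0]

# Adversary: an epsilon-greedy bandit estimating the value of each move.
value = {m: 0.0 for m in MOVES}
count = {m: 0 for m in MOVES}

for _ in range(5000):
    if random.random() < 0.1:
        move = random.choice(MOVES)          # explore
    else:
        move = max(MOVES, key=value.get)     # exploit current estimate
    opp = victim_policy()
    reward = 1.0 if BEATS[move] == opp else (-1.0 if BEATS[opp] == move else 0.0)
    count[move] += 1
    value[move] += (reward - value[move]) / count[move]   # incremental mean

# The adversary discovers the exploit: paper beats a rock-heavy victim.
best = max(MOVES, key=value.get)
print(best)
```

The key point is that the adversary never needs to play the game "well" in a general sense; it only needs to find the specific inputs that trigger the victim's blind spots, which is far cheaper than training a master bot from scratch.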

“This is the first attack that demonstrates its effectiveness in real-world video games,” said Wenbo Guo, a doctoral student studying information sciences and technology. “With the success of deep reinforcement learning in some popular games, like AlphaGo in the game Go and AlphaStar in StarCraft, more and more games are starting to use deep reinforcement learning to train their game bots.”

He added, “Our work discloses the security threat of using deep reinforcement learning trained agents as game bots. It should make game developers more careful about adopting deep reinforcement learning agents.”

Guo and his research team presented their algorithm in August at Black Hat USA, a conference in one of the world's most technical and influential information security event series. They also publicly released their code and a variety of adversarial AI bots.

“By using our code, researchers and white-hat hackers could train their own adversarial agents to master many — if not all — multi-party video games,” said Xinyu Xing, assistant professor of information sciences and technology at Penn State.

Guo concluded, “More importantly, game developers could use it to discover the vulnerabilities of their game bots and take rapid action to patch those vulnerabilities.”

In addition to Xing, Guo worked with Xian Wu, a doctoral student studying informatics at Penn State, and Jimmy Su, senior director of the JD Security Research Center, to develop the algorithm.

Last Updated June 28, 2021