
Ask an expert: AI and disinformation in the 2024 presidential election


UNIVERSITY PARK, Pa. — When artificial intelligence (AI) and social media meet politics, disinformation can spread fast. Generative AI can make it easy and cheap to churn out false but convincing text, audio and video content intended to mislead voters.

Penn State News spoke with three faculty experts about how to spot AI-generated election misinformation and what voters can do to protect themselves.

Q: When we’re consuming social media, how do we identify misinformation?

Matthew Jordan, professor of film production and media studies in the Penn State Bellisario College of Communications, studies the impact of local news, misinformation and digital technology on democracy and society and the role of media in everyday culture.

Jordan: I think the way to identify misinformation is first to identify what good information sites look like. These would be sites that offer balanced journalistic coverage of events. An interesting recent study found that the number of what the researchers call “pink slime” news sites, which are funded by partisans and increasingly populated by generative artificial intelligence, has outpaced the number of local newspapers in the United States. You don't know who is funding these sites, the articles often don't have a byline, and they tend to be very critical of one side and full of puff pieces about the other. These sites are so easy to create now with ChatGPT. A group called NewsGuard tracks these sites and has found that the articles often leave the ChatGPT question prompts in the story. So, there are no humans involved here, and these sites pop up like Whack-a-Mole.

These “pink slime” sites and articles tend to be trafficked by way of social media. One problem going into this election environment — which is going to be dominated by AI and misinformation, much of it coming from outside the United States — is that a lot of the major media companies, like Meta, Google's parent Alphabet and Twitter/X, have essentially taken away their guardrails. The policies and tools that were in place to protect users from unreliable information are now gone.

If readers understand what to look for in a newspaper — articles with bylines, a clear sense of where the newspaper is located, and balanced coverage that plays it down the center instead of being critical of one side and easier on the other — then it will be easier to spot these “pink slime” sites, which are largely generated by AI and posted all over social media.

Q: What are the telltale signs that a video or image may be a deepfake generated by AI? Are there tools available to help voters identify AI-generated audio and video?

Shomir Wilson, associate professor in Penn State’s College of Information Sciences and Technology, studies natural language processing, artificial intelligence systems and large language models like ChatGPT.

Wilson: It has become much more difficult to identify deepfakes over just the past year or two, as the technologies used to generate videos, images and, to some extent, text keep evolving. Not long ago, one of the telltale signs that a video was generated by AI was that people's hands would look grotesque and distorted. That's because these models learn which pixels or objects appear next to each other, like fingers occurring next to fingers, but not how many fingers should be on a hand. Subtle indicators to look for now include shadows and inconsistencies in lighting, but even those are starting to be mastered by AI models, which is a threat to our ability to think critically and discriminate between what's reliable and what's not.

The context in which a video or image appears may be more important than the individual characteristics of the content itself. If the content is from a reputable source that you know, then it's more likely to be real, because hopefully the outlet has vetted it. If it's from a news site that you've never heard of, then it could be suspect. And if it truly is big news, then chances are you'll be able to find it on one of those reputable sites anyway. So, individuals can and should take the information, search for it on a source with more established authority and verify it there.

In terms of tools to detect AI-generated content, there are tools that give probabilities that a piece of text was generated by AI. We have to be careful with those probabilities, though, because they are not certain knowledge. It’s possible that a news article was written by AI but then altered by a person, either to make it seem less like AI or to better further the person's goals. GPTZero is one such tool, though it does require some interpretation. For instance, what does it mean for something to be 77% likely to have been written by an AI system as opposed to a person? That's debatable, and even then, it doesn't really speak to the truthfulness of the content, because it could be that a person wrote something with an AI system and then edited it for correctness to save themselves some time. Or it could be that they just churned out a bunch of text, which is more likely to be a problem.

Q: Our phones play a large part in how we consume political campaign ads and information/disinformation. What should voters be aware of when consuming information on their smartphones?

S. Shyam Sundar, Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects in the Penn State Bellisario College of Communications, studies fake news and misinformation, the uses and effects of digital media and social media, and generative AI tools like ChatGPT.

Sundar: We have now gotten to a point where most people get most of their information through their mobile phones. Given how we use these devices in our daily lives, we are unlikely to critically analyze information obtained from them. We process the information appearing on our phones in a relatively superficial manner, which makes us more likely to fall for misinformation and phishing attempts.

In one recent study, my colleagues and I showed that habitual users of mobile phones tend to be less vigilant and therefore more vulnerable to disinformation. In general, people spend less time processing information when using their mobile phones than when using computers. This makes them more likely to be swayed by simple cues on the interface, such as an authority source or bandwagon metrics like the number of likes and retweets. Such cues can be easily faked, and they mislead mobile phone users by triggering “cognitive heuristics,” or mental shortcuts, such as the beliefs that experts can be trusted and that popular opinion is valid. In an experiment we conducted with WhatsApp users, we found that mobile users fall for the “realism heuristic,” or the idea that “seeing is believing,” when they encounter fake news. They tended to believe misinformation more when it was presented in video form than in audio or text.

What’s more, they said they were more likely to share fake videos with their family and friends. Sharing has become another bane of the fast-paced, information-overloaded environment we live in. People are sharing news and public affairs information at unprecedented levels, thanks to the easy sharing tools in social media apps on their phones. They can do this with a simple tap, and they do it in vast quantities, which means they are not being careful about what they are forwarding. Such incessant sharing is contributing to the spread of misinformation.

Voters should be alert to any and all cues and avoid falling for cognitive heuristics. When they encounter news and public affairs information, especially when it concerns the political race and election-season issues, they should curtail their normal tendency to scroll through it on their phones and instead have their antennas up. They should be more than ordinarily careful about persuasive cues such as the use of so-called experts, public opinion or video evidence, because all of these can be easily faked in this age of AI. They should ask, “Who stands to gain from this information?” and then examine the source and its motivation for putting out such a story. If a story seems too good to be true, or falls very much in line with their prior political beliefs, chances are it has been doctored. It is designed to prey on people’s tendency toward confirmation bias, the readiness to believe stories that confirm their pre-existing opinions. In general, it is important to verify information on an independent site, using a search engine, before believing anything encountered on social media or through online tools that use generative AI, such as ChatGPT and Siri.

Also, they should not share anything that they have not independently verified, as it can result in misinformation going viral. In general, mobile phone users will do themselves and society a lot of good by slowing down and deliberating on what they consume as well as what they share.

For more information or to speak with one of our experts, visit media.psu.edu or contact mediarelations@psu.edu.
