UNIVERSITY PARK, Pa. — A real human wrote this article, albeit with the help of transcription software. ChatGPT, or another large language model, probably would have composed it much more quickly, but artificial intelligence (AI) systems are susceptible to hallucinating — generating incorrect information — so could you trust the results?
The accuracy of generative AI systems matters, especially as more people use AI to search for answers online and as search engines incorporate AI into their systems. Penn State News spoke with S. Shyam Sundar, the James P. Jimirro Professor of Media Effects at Penn State and director of the Center for Socially Responsible Artificial Intelligence, and graduate student Yongnam Jung about their research into what makes people trust ChatGPT and other online information sources, and the potential future of AI and online search engines.
Q: Are people using ChatGPT as a search engine?
Sundar: Anecdotal evidence suggests that people are turning to ChatGPT for a first response where previously they used Google search. For example, two New York lawyers used ChatGPT when compiling a brief for a case, and the judge later found that the precedents ChatGPT cited were bogus. My lab conducted a very small, preliminary study that did not find evidence supporting those anecdotes. Our participants tended mostly to use Google first, followed by Wikipedia, but these were mostly people in higher education who have been bombarded with information over the last couple of years about the shortcomings of generative AI. So, it's clearly not a representative sample. Our interest is in finding out which features of ChatGPT, Google search and Wikipedia make a user prone to trust the platforms.
Jung: Our study participants indicated that they use ChatGPT for specific tasks, such as improving their writing or checking a specific format, like a resume. They also use it to search for information, but they don't trust the results. Previous studies and news articles have suggested that users sometimes place blind trust in ChatGPT, but our focus group interviews suggested that is not always the case. Our participants said that they use ChatGPT to search for information, but they are skeptical about the results because ChatGPT does not include reference information the way Wikipedia and Google do.