UNIVERSITY PARK, Pa. — As interactions between humans and artificial intelligence (AI) accelerate, researchers from numerous disciplines are beginning to investigate the societal implications — among them C. Daryl Cameron, Penn State associate professor of psychology and Sherwin Early Career Professor in the Rock Ethics Institute.
Cameron is currently integrating AI research into his role as director of Penn State’s Consortium on Moral Decision-Making, an interdisciplinary hub of scholars focused on moral and ethical decision-making that’s co-funded by the Social Science Research Institute (SSRI), the Rock Ethics Institute and the College of the Liberal Arts, with additional funding from the McCourtney Institute for Democracy and the Department of Philosophy.
On Tuesday, April 9, the consortium will host the “AI, Empathy, and Morality” conference in partnership with the Center for Socially Responsible Artificial Intelligence and University Libraries. Taking place from 10 a.m. to 4 p.m. in Foster Auditorium, 102 Paterno Library, and part of the consortium’s Expanding Empathy series, the event will feature presenters from the United States, England, Scotland, Canada, Italy and Israel. Those interested can register to attend in person or via Zoom.
Meanwhile, Cameron recently collaborated with three other researchers on the article “In Praise of Empathic AI,” published in the journal Trends in Cognitive Sciences.
The article examines the benefits and risks of how people respond to AI’s ability to simulate empathy, and highlights AI’s “unique ability to simulate empathy without the same biases that afflict humans.”
AI chatbots like ChatGPT continue to grow both in popularity and in their ability to demonstrate empathy (recognizing someone’s emotions, seeing things from their perspective, and responding with care and compassion) and other “human” qualities. These “perceived expressions of empathy” can leave people feeling that someone cares for them, even if that someone is an AI, noted Cameron and his co-authors: Michael Inzlicht and Paul Bloom of the University of Toronto and Jason D’Cruz of the University at Albany, SUNY. Inzlicht and D’Cruz are both slated to present at the April 9 conference.
Cameron, who also leads Penn State’s Empathy and Moral Psychology Lab, recently shared some of his thoughts on the AI-empathy dynamic.
Q: What inspired you and your fellow researchers to write the journal article?
Cameron: It’s easy to fall into these conversational modes with AI — it does feel like you’re asking something of it and it gives you a response, as if you’re having an actual human interaction. Given the social and ethical questions involved, we thought this would be a good idea for a short thought paper. Some of the responses we got were intriguing, and it raised a lot of fascinating questions.
Q: You and your co-authors cite a recent study in which licensed health care providers judged ChatGPT to provide better-quality diagnoses and better bedside manner than actual physicians when responding to patients who posted their medical symptoms on a Reddit discussion board. And in your own encounters with it, you were impressed with AI’s simulated empathy, including the way it conveyed sorrow and joy. But you also note that it always stated it was incapable of having actual human feelings. Why is that disclosure critical to the interaction?
Cameron: AI can give us enough of what we want from an empathetic response. It lacks the essence of real empathy, so if you’re treating it as real empathy, you’re making a mistake. We can think about it from the perspective of what the perceiver thinks: ‘What is it doing for them?’ Rather than wholly dismiss AI empathy as something not real, why not look at the social needs people might meet by receiving these responses and then work with that?