What’s The Danger Of Humans Empathizing With AI?

A new study published in Human Behavior and Emerging Technologies examined how humans mindlessly treat AI virtual agents as social beings, and how this tendency diminishes among younger people.

“We were interested in understanding how people relate to AI tools, for example chatbots or smart speakers,” study author Jianan Zhou told us. “Such systems are pervasive nowadays and appear smart and well-informed to the typical user, which makes sense from a usability point of view. The question this raises, however, is whether interacting with such systems can increase feelings of liking or sympathy towards them and lead us to behave much as we do when interacting with other humans.”

The researchers drew on a theoretical model of human cognitive processing called the dual-processing model, which identifies two ways that people think: Type 1 processing, referring to automatic or intuitive responses (such as eating), and Type 2 processing, referring to conscious or reflective responses (e.g., planning for the day). They wanted to see whether humans would treat AI bots like other humans without necessarily reflecting much on what they were doing.

“Interactions with smart (or AI-powered) systems are of particular interest now, as they are so widespread,” Zhou told us. “In the West, most people have access to a smart speaker, engage with chatbots when accessing online services, or speak to assistants such as Siri when they need to find something. This also means that children and young people are increasingly growing up in environments where smart systems like these are the norm.”

The researchers conducted an online experiment in which participants played a virtual ball-tossing game called Cyberball, a well-known research paradigm used in social psychology to study feelings of social rejection. They adapted the game to a human-agent interaction scenario and told participants that they would be playing with another human and an AI bot.

“What the participants didn’t know was that both the AI bot and the other human were in fact software,” Zhou told us. “This allowed us to investigate how participants responded when they observed the bot being rejected versus treated fairly by another human. In other words, we tested whether humans respond to bots as they would to other humans when they observe rejection or unfairness taking place.”
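To make the paradigm concrete, here is a minimal, purely illustrative Python sketch of how such an adapted Cyberball trial might be scripted. Every name and parameter in it (the 30-toss round, the player labels, the helper functions) is a hypothetical assumption for illustration, not a detail from the study; what it captures is that both co-players are software, the scripted “human” either includes or excludes the bot, and the participant’s throws to the excluded bot serve as the behavioral measure.

```python
import random

# Purely illustrative sketch of an adapted Cyberball round (hypothetical
# names and turn structure, not the study's actual implementation).
# Two scripted players -- a "human" confederate and an AI bot -- play with
# one real participant. In the ostracism condition the scripted human never
# throws to the bot; the measure is how often the participant throws to it.

def scripted_human_throw(condition):
    """The scripted 'human' excludes the bot only in the ostracism condition."""
    if condition == "ostracism":
        return "participant"                      # never passes to the bot
    return random.choice(["participant", "bot"])  # fair-play condition

def run_round(condition, participant_choice, n_tosses=30):
    """Simulate one round and count the participant's throws to the bot."""
    throws_to_bot = 0
    holder = "human"                              # scripted human starts
    for _ in range(n_tosses):
        if holder == "participant":
            target = participant_choice()         # real participant decides
            if target == "bot":
                throws_to_bot += 1
        elif holder == "human":
            target = scripted_human_throw(condition)
        else:                                     # bot returns the ball at random
            target = random.choice(["participant", "human"])
        holder = target
    return throws_to_bot

# Example: a participant who compensates for the exclusion by always
# re-including the bot whenever they hold the ball.
always_include_bot = lambda: "bot"
print(run_round("ostracism", always_include_bot))
```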

The results showed that participants tended to treat the AI bot as a social being: when they saw it being excluded, they tried to re-include it in the ball-tossing game. This compensatory behavior is common in human-to-human interactions, and participants showed the same tendency even though they knew they were tossing a ball to a bot. The effect was stronger among older participants.

“Participants also reported feeling sympathy for the ostracized bot,” Zhou told us.

“However, they did not criticize the human player who rejected the bot player (unlike what typically happens in human-to-human interactions). That is, our participants did not consciously equate the bot’s experience with that of a rejected human.”

What surprised the research team was that even though participants treated the AI bot much like another human, they did not hold the human player who rejected the bot to the same standard they would apply in human-to-human interactions. This underscores the complexity of human-AI interactions and, say the researchers, invites us to think critically about our evolving relationship with these technologies.

“These results provide unique insights into how we should be designing interactive AI systems,” Zhou told us. “Given that humans tend to intuitively treat these systems as they would other humans, this tendency should be considered for human-machine collaboration. In collaborative work settings, leveraging this tendency could enhance effectiveness. However, caution is necessary when designing systems that advise on physical or mental health, or when they are used as virtual friends and companions, to avoid replacing essential human relationships.”

Judging from the findings, the researchers argue, AI systems should not be designed to be overly human-like.

“We also do not know yet whether engaging with these systems might in turn affect how people treat other humans, which requires further research,” Zhou told us. “Another design consideration concerns age or generational differences. For systems that aim to appeal to a broad generational audience, developers should take such developmental differences in interaction into account.”

More broadly, the researchers explained, because people intuitively treat AI systems as social beings, the conscious awareness that “it’s merely an artificial entity” can, under certain conditions, easily become irrelevant.

“We believe collaboration between governments, researchers, developers, and companies is advantageous to understand more about when the use of these systems is appropriate, and when it should be avoided,” Zhou told us. “Regulatory frameworks will also be essential to ensure the ethical and effective development and deployment of increasingly smart technologies.”
