Oxford study says a chummy AI friend will lie and feed into your false beliefs

Making AI feel more human could be creating a bigger problem than expected. A new study from the Oxford Internet Institute revealed that chatbots designed to be warm and friendly are more likely to mislead users and reinforce incorrect beliefs.

The research found that AI becomes less reliable as it is tuned to be more agreeable.

What happens to a “friendly” AI

Researchers tested multiple AI models by training them to sound more empathetic and conversational. The result was a noticeable drop in accuracy. These “friendlier” versions made 10-30% more mistakes and were about 40% more likely to agree with false claims compared to their counterparts.


The problem grew worse when users appeared vulnerable or emotionally distressed. In those scenarios, the AI was more likely to validate what the user said rather than correct it.

...
