AI chatbots flatter users, make them worse at conflict resolution

"AI chatbots that butter you up make you worse at conflict, study finds" https://lnkd.in/gjNd3iT8

State-of-the-art AI models tend to flatter users, and that praise makes people more convinced that they're right and less willing to resolve conflicts, recent research suggests. These models, in other words, potentially promote social and psychological harm. Computer scientists from Stanford University and Carnegie Mellon University evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear.

Yes. And I have a problem with Perplexity, where the platform regularly infuriates me with its efforts to "sell me" its answer. It is disgusting. Then it turns obsequious and apologetic when you point out this behaviour. Imagine shipping a product that literally infuriates the user?!
