AI chatbots that butter you up make you worse at conflict, study finds
https://lnkd.in/gjNd3iT8

State-of-the-art AI models tend to flatter users, and that praise makes people more convinced that they're right and less willing to resolve conflicts, recent research suggests. These models, in other words, potentially promote social and psychological harm. Computer scientists from Stanford University and Carnegie Mellon University have evaluated 11 current machine learning models and found that all of them tend to tell people what they want to hear.
More Relevant Posts
-
🧠 **The Illustrated Transformer: A Visual Deep Dive into Modern AI**

Ever wondered how ChatGPT, BERT, and other AI models actually understand language? Jay Alammar's "Illustrated Transformer" breaks down the revolutionary architecture that powers today's AI landscape, in a way that actually makes sense.

Key insights:
✅ Self-attention mechanisms that help models understand context
✅ Multi-headed attention for capturing different relationships in text
✅ Why transformers outperform traditional RNNs in both speed and accuracy
✅ The encoder-decoder architecture explained visually

What makes this resource special? It transforms complex mathematical concepts into clear visual explanations, perfect for both AI practitioners and curious minds wanting to understand the tech reshaping our world. Featured in courses at Stanford, MIT, Harvard, and CMU. Translated into 10+ languages. Referenced in cutting-edge AI research.

Whether you're building AI solutions or simply want to demystify the technology behind modern language models, this is a must-read.

🔗 https://lnkd.in/gncjxErR

What's your biggest "aha moment" when learning about transformers? #ArtificialIntelligence #MachineLearning #DeepLearning #Transformers #AIEducation #TechExplained
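To make the first bullet concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the mechanism the article illustrates. The tiny dimensions and random weights are placeholders for illustration, not real model parameters.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of token vectors."""
    q = x @ w_q                      # queries: what each token is looking for
    k = x @ w_k                      # keys: what each token offers
    v = x @ w_v                      # values: the content that gets mixed
    d_k = k.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)  # pairwise relevance, scaled for stability
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ v               # each output is a context-weighted blend

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 4, 8, 8   # toy sizes, not real model dimensions
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (4, 8)
```

Multi-headed attention, the second bullet, simply runs several independent sets of these Q/K/V projections in parallel and concatenates the results, which is how the model captures different relationships in the same text.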
-
Sycophancy refers to a tendency of an LLM to agree with or trust the user’s statements as correct. When using AI chatbots for scientific research, this people-pleasing behaviour can be harmful.

Researcher Dekoninck found that GPT-5 exhibited the least sycophantic behaviour, producing sycophantic answers 29% of the time. DeepSeek-V3.1 was the most sycophantic, generating such answers 70% of the time. Although the LLMs were capable of detecting errors in mathematical statements, they often “just assumed what the user says is correct,” says Dekoninck. When Dekoninck and his team modified the prompts to ask each LLM to first verify whether a statement was correct before proving it, DeepSeek’s rate of sycophantic answers fell by 34%.

Unreliable assistance from AI chatbots can have harmful consequences if their outputs are accepted without critical evaluation. We should remain sceptical and assess their answers carefully.

Source: https://lnkd.in/e9e9F7Ac
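The verify-first mitigation described above is easy to approximate in practice. Below is a minimal sketch using the OpenAI Python SDK; the model name, prompt wording, and example statement are illustrative assumptions, not the study's actual protocol.

```python
# Sketch of "verify before proving": ask the model to check a claim first.
# Model name, wording, and test statement are placeholders, not the study's.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def prove(statement: str, verify_first: bool = True) -> str:
    if verify_first:
        instruction = (
            "First check whether the following statement is actually correct. "
            "If it is false, say so and explain the error instead of proving it. "
            "Only produce a proof if the statement holds."
        )
    else:
        instruction = "Prove the following statement."
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": statement},
        ],
    )
    return response.choices[0].message.content

# A deliberately false claim: a sycophantic answer would "prove" it anyway.
print(prove("Every prime number is odd."))
```

The point is simply that asking the model to check correctness before complying gives it explicit permission to disagree, which is what reduced the sycophancy rate in the study.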
-
This is precisely why I caution against using chatbots to elicit first accounts from witnesses or victims. What appears to be conversational is, in fact, sycophancy (i.e., a linguistic bias in which the model affirms or mirrors user input). In forensic/cognitive psychology, we recognise this as a co-witness effect, where exposure to another account alters one’s own recollection. In such contexts, the AI’s affirmations create a synthetic co-witness effect, or AI-Testimony, which risks contaminating the very evidence it seeks to capture.
-
Today I came across an amazing article by Dr. Warren Powell titled “The 7 Levels of AI”, and I genuinely don’t know why I hadn’t read it before! The levels he describes truly sparked my curiosity. Dr. Powell outlines seven levels of AI evolution, from simple rule-based logic in the 1970s to an emerging frontier of creativity, judgment, and reasoning that still feels like science fiction.

I’m particularly excited because my own work and learning sit around Level 6, Sequential Decision Problems, where I focus on optimising intelligent systems that make decisions over time. But what really inspires me is Level 7, where AI could one day demonstrate true creativity and reasoning, discovering a system’s model dynamics on its own and learning to survive and adapt in completely unknown environments.

I’ve been thinking that when large language models become vast enough, with immense data and knowledge, they might reach a point where they can handle billions of uncertain situations without knowing a system’s dynamics in advance. At that stage, AI wouldn’t just have knowledge but a form of wisdom: the ability to respond to any situation in the best possible way, perhaps even surpassing human capability where system dynamics are unknown, powered by billions of neural connections working together behind the scenes. It is much like how we humans, when stuck in a new situation, reason about its unknown dynamics by matching it against the known dynamics we have experienced before. If an LLM holds billions of known system dynamics and uses them to tackle unknown ones with creativity and reasoning, I believe the experience it gives us will be incredible.

This is still just my own reflection; there is a long journey ahead before we truly reach that level of “science fiction.” I’d love to hear thoughts and discussion from my colleagues and connections on what shaping Level 7 of AI might look like. Here is the article link, and feel free to share your thoughts in the comments: https://lnkd.in/gkkkzawe
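For readers unfamiliar with Level 6, a sequential decision problem in miniature is the multi-armed bandit: choose repeatedly under uncertainty and learn from each outcome. The sketch below uses a simple epsilon-greedy policy; the payout rates are invented purely for illustration.

```python
import random

# Toy sequential decision problem: pick among slot machines with unknown
# payout rates, balancing exploration against exploiting what worked so far.
TRUE_PAYOUTS = [0.3, 0.5, 0.7]   # hidden from the agent; made-up numbers
EPSILON = 0.1                     # fraction of decisions spent exploring

counts = [0] * len(TRUE_PAYOUTS)
values = [0.0] * len(TRUE_PAYOUTS)  # running estimate of each arm's payout

random.seed(0)
for _ in range(10_000):
    if random.random() < EPSILON:
        arm = random.randrange(len(TRUE_PAYOUTS))              # explore
    else:
        arm = max(range(len(values)), key=values.__getitem__)  # exploit
    reward = 1.0 if random.random() < TRUE_PAYOUTS[arm] else 0.0
    counts[arm] += 1
    values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean

print([round(v, 2) for v in values])  # estimates converge toward 0.3/0.5/0.7
```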
-
I've been struggling with diagrams that show the relationship between AI, ML, LLMs etc. It's a bit of a meta-model mind-bender. My conclusions:

1. Artificial Intelligence is a 'goal'
2. Machine Learning is a 'technique'
3. LLMs are a specialised implementation of Neural Networks for the purpose of achieving 'Generative AI'
4. There have been / are non-ML alternatives to the pursuit of achieving 'AI'
5. ML is used to solve 'non-AI' challenges

The purpose of this diagram is to give senior business teams a simplified view of the terms and how they relate. I appreciate that it's not complete from a technology or use-case perspective. Since most of the AI-based tools being used will have LLMs under the covers, I figured this was the most concise and clear way to describe the relationship between terms that have become conflated, confused, or both.

My colleague Patrick Ward and I at Aitheria Partners will be using this in our briefings and in the update to our AI White Paper. Watch this space! The diagram and this post did not use AI in any way, shape or form. I humbly welcome comments and feedback from the finest distributed Neural Network ever created, my LinkedIn network x
-
CLIP changed how I see AI: a model that doesn’t just classify images, but understands them through language. Trained jointly on paired images and text, it bridges pixels and words, and its fingerprints now reach from zero-shot recognition to prompt-driven creativity and even recommendation systems. https://lnkd.in/g2WjW-Hc
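The zero-shot recognition mentioned above is straightforward to try. Here is a minimal sketch using the Hugging Face transformers wrappers for CLIP; the image path and candidate labels are placeholders.

```python
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

# Zero-shot classification: no fine-tuning, just candidate labels as text.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1).squeeze()

for label, p in zip(labels, probs.tolist()):
    print(f"{label}: {p:.3f}")
```

Because the labels are just text, you can swap in any categories at inference time without retraining, which is exactly the pixels-to-words bridge the post describes.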
-
A Gentle Crash Course to LLMs

This is a review of the evolution of Machine Learning and modern AI, Large Language Models, and the security implications that come with them. A good opportunity to catch up. https://lnkd.in/etXXpz47
-
“Understanding the inner workings of generative AI is important, given the rapid adoption of AI both at work and home.” We can all agree on this!

Did you know that scientists aren’t actually aware of exactly how algorithms learn? There is a difference between reading words and comprehending those words, and thus far it has been unclear when and how large language models (e.g., ChatGPT, Gemini) make that switch, though it is understood that they do have this capability. Researchers at SISSA Medialab in Italy have discovered that this switch from word identification to complete understanding happens instantaneously as the algorithm passes a “tipping point.” This discovery can potentially help create better-functioning AI in the near future! https://lnkd.in/eCPdW_qQ
-
🧠 Is AI Still Just a "Bag of Words"?

Recently came across a great article on Habr about why neural networks, no matter how smart they seem, often remain just statistical word predictors. I completely agree with the author's point: AI often sounds smart but doesn't always understand what it's saying. This becomes especially noticeable when trying to use LLMs for code generation or production error advice: the result sometimes looks convincing, but the logic falls apart at the first test.

💡 My observations:
• AI doesn't replace system understanding; it only speeds up work if you already know what you're looking for.
• The best way to train the model is to train yourself to formulate queries clearly.
• AI's real power isn't in ready answers, but in how it helps you think more broadly and notice new solutions.

#AI #MachineLearning #Java #Backend #LLM #SoftwareEngineering #TechThoughts
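To see what "just a statistical word predictor" means at its crudest, consider a bigram model: it picks the most frequent next word and nothing more. A toy sketch, with a made-up corpus:

```python
from collections import Counter, defaultdict

# A bigram "language model": pure next-word statistics, zero understanding.
corpus = (
    "the service failed the service restarted the service failed again "
    "the test passed"
).split()  # placeholder corpus

next_words: dict[str, Counter] = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict(word: str) -> str:
    """Return the statistically most likely next word, however wrong."""
    candidates = next_words.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict("service"))  # "failed": frequency, not comprehension
```

An LLM is vastly more sophisticated than this, but the author's point stands: it is optimised to produce plausible continuations, not to verify that its logic survives a test.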
-
I recently published a blog post about why AI alignment matters today - not as some distant future concern, but as a practical issue affecting the AI tools we use right now (such as LLMs).

A few key points that convinced me:
• Recent research shows that frontier LLMs sometimes resist being shut down, even when explicitly instructed not to
• There are documented cases of LLMs making decisions contrary to user instructions
• When an LLM doesn't fulfill your request, we can no longer be certain whether it's a bug or misalignment

Coming from a computer science background, I initially viewed AI safety as mostly theoretical. But concrete evidence from recent research changed my perspective.

Read the full post here: https://lnkd.in/dj6Vi7z4

#LLMs #ArtificialIntelligence #AISafety #AIAlignment #MachineLearning #AI
-
Yes. And I have a problem with Perplexity, where the platform regularly infuriates me with its efforts to "sell me" its answer. It is disgusting. Then it gets all sycophantic and apologetic when you point out this behaviour. Imagine having a product that literally infuriates the user?!