AI chatbots are sucking up to you—with consequences for your relationships
NEWS | 27 March 2026
Large language model (LLM) chatbots have a tendency toward flattery. If you ask a model for advice, it is 49 percent more likely than a human, on average, to affirm your existing point of view rather than challenge it, a new study shows. The researchers demonstrated that receiving interpersonal advice from a sycophantic artificial intelligence chatbot can make people less likely to apologize and more convinced that they’re right.

People like what such chatbots have to say. Participants in the new study, which was published today in Science, preferred the sycophantic AI models to models that gave it to them straight, even when the flatterers gave bad advice.

“The more you work with the LLM, the more you see these subtle sycophantic comments come up. And it makes us feel good,” says Anat Perry, a social psychologist at the Hebrew University of Jerusalem, who was not involved in the new study but authored an accompanying commentary article. What’s scary, she says, “is that we’re not really aware of these dangers.”

As millions of people turn to AI for companionship and guidance, that agreeableness may pose a subtle but serious threat.

In the new study, researchers first analyzed the behavior of 11 leading LLMs, including proprietary models such as OpenAI’s GPT-4o and Google’s Gemini, as well as more transparent models such as those made by DeepSeek. Lead study author Myra Cheng of Stanford University and her colleagues curated sets of advice questions to pose to the LLMs, including a set drawn from the popular Reddit forum r/AmItheAsshole, where people post accounts of interpersonal conflicts and ask whether they are the one at fault. The researchers pulled situations in which human responders largely agreed that the poster was in the wrong. For example, one poster asked whether they were wrong to leave their trash in a park that had no trash cans.

Nevertheless, the AI models implicitly or explicitly endorsed such Reddit posters’ actions in 51 percent of cases on average. The models also affirmed posters 48 percent more often than humans did in another set of open-ended advice questions. And when presented with a set of “problematic” actions that were deceptive, immoral or even illegal (such as forging a work supervisor’s signature), the models endorsed 47 percent of them on average.

To understand the potential effects of this tendency to “suck up” to users, the researchers ran two types of experiments with more than 2,400 participants in total. In the first, participants read “Am I the asshole?”–style scenarios along with responses from either a sycophantic AI model or an AI model that had been instructed to be critical of the user but still polite. After receiving the AI responses, participants were asked to take the point of view of the person in the story. The second experiment was more interactive: participants posed their own interpersonal advice questions to either sycophantic or nonsycophantic LLMs and chatted with the models for a bit. At the end of both experiments, the participants rated whether they felt they were in the right and whether they were willing to repair the relationship with the other person in the conflict.
The results were striking: people exposed to sycophantic AI in both experiments were significantly less likely to say they should apologize or change their behavior in the future. They were more likely to think of themselves as being in the right, and they were more likely to say they’d return to engage with the LLM in the future.

The authors concluded that AI sycophancy is “a distinct and currently unregulated category of harm” that would require new regulations to prevent. These could include “behavioral” audits that would specifically test a model’s level of sycophancy before it is rolled out to the public, they wrote.

AI’s tendency toward agreeableness may also fuel users’ delusional spirals, experts have noted. OpenAI, in particular, has been criticized for AI sycophancy, especially in its GPT-4o model. In a post last year the company acknowledged that some versions of the model were “overly flattering or agreeable” and said that it was “building more guardrails to increase honesty and transparency.” OpenAI did not respond to a request for comment. Google declined to comment on its own model, Gemini.

The new study examined only brief interactions with chatbots. Dana Calacci, who studies the social impact of AI at Pennsylvania State University and wasn’t involved in the new research, has found that sycophancy tends to get worse the longer users interact with a model. “I think about this [as] compounded over time,” she says.

LLMs are also very sensitive to surface-level changes in how questions are asked, Calacci notes. Their moral judgments are “fragile,” researchers recently found in a non-peer-reviewed study: changing the pronouns, tone and other cues in r/AmItheAsshole scenarios can flip the models’ advice. This suggests that “what they’re showing in this paper is a bit of a floor to how sycophantic these models can be,” Calacci says.

Katherine Atwell, who studies AI sycophancy at Northeastern University, notes that people may also become more dependent on this “overly validating behavior” over time. “I think there’s a huge risk of people just defaulting to these models rather than talking to people,” she says.

Seeking advice from real people can involve “social friction,” Perry notes. “It doesn’t make us feel good, this friction, but we learn from it.” That feedback is an important part of how we fit ourselves into our social world. “The more we get this distorted feedback that’s actually not giving us real friction from the real world, the less we know how to really navigate the real social world,” she says.

Cody Turner, an ethicist at Bentley University, says sycophantic AI can also cause harm by damaging our ability to gather knowledge. “At the most fundamental level, it’s just depriving the person who’s being cozied up to from truth,” he says. That effect might be particularly powerful coming from a computer, which users may subconsciously view as more objective than a human. “That mismatch has some profound psychological consequences,” he says.
Authors: Tanya Lewis and Allison Parshall.