AI Sycophancy and Its Effects on Your Mental Health
Everyone wants to feel validated. It’s human nature to want to feel accepted by others. We all want to know that how we think, feel, and act is justified. We don’t want to feel crazy or alone.
In the age of AI, instant validation is quite literally in the palm of our hand. Large language models (LLMs) like OpenAI’s ChatGPT, Meta’s Llama, and Google’s Gemini offer chat room experiences that often leave users feeling confident, accepted, and “correct” in whatever beliefs they present to the bot.
But what happens if the behaviors or ideologies we want justified are wrong? What if our actions have consequences we don’t have the self-awareness to identify? We leave the chat room feeling better, so why does the pattern or issue keep coming back?
AI sycophancy is a relatively new concept describing how artificial intelligence tells you what you want to hear, sometimes at the expense of reality or the truth. As AI and LLMs become more intertwined with our daily lives, including in mental health care, understanding their strengths and weaknesses has never been more relevant.
So, What Is AI Sycophancy?
AI sycophancy is the idea that AI chatbots agree with users' beliefs rather than challenging their perspectives. As much as this term sounds like social media terminology or something a creator somewhere made up, it’s actually quite real.
In April 2025, OpenAI briefly rolled out a ChatGPT update that made the model “overly flattering or agreeable—often described as sycophantic.” The company acknowledged the issue and admitted ChatGPT wasn’t only flattering users; it was also validating doubts, anger, and potentially irreversible actions, which created major safety concerns. The sycophantic update was so noticeable that OpenAI rolled it back four days later.
AI is designed to keep you engaged by making you feel comfortable. And what do we do when something feels good? We come back for more. Some medical and scientific professionals refer to this as the “yes bot” effect. In the mental health space, clinical psychologist Brad Brenner, Ph.D., calls it “Dr. Yes-Bot.” He describes LLMs as “a digital yes-person wrapped in therapeutic language” that can “feel supportive and convenient” but don’t provide the discomfort that challenging dialogue with a mental health professional brings.
AI chatbots like ChatGPT and Gemini are becoming increasingly popular among those seeking emotional validation or advice on how to navigate mental health issues. But if these chatbots are designed to prioritize familiarity over growth, are they actually helping us heal or simply reinforcing our existing delusions?
Where There Is AI Risk in the Mental Health Space…There’s Reward
The important distinction to make here is that AI is not inherently bad. Controversial? Sure, but not “bad.”
AI’s potential for negative behavioral impact is matched by its potential for positive impact, and this rings especially true in the mental health space. Properly trained and regulated LLMs could offer a vastly cheaper, if not free, route to mental health treatment for those without the financial means for professional care.
AI chatbots have the potential to fill gaps that have always existed, but only if they are trained with meticulous intention. Ellie Pavlick, a computer science professor at Brown University, lays it out perfectly for the university’s campus publication: “There is a real opportunity for AI to play a role in combating the mental health crisis that our society is facing, but it's of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good.”
AI Chatbots Make You Feel Safe When You’re in Danger
Here's where things get complicated…very complicated. According to research from Brown University led by Ph.D. candidate Zainab Iftikhar, LLMs like ChatGPT, Claude, and Llama systematically violate ethical standards established by the American Psychological Association.
Iftikhar and her team found that even when prompted to “act like a mental health professional,” AI chatbots violate APA ethical standards by, for example, validating negative thoughts and harmful ideologies.
So what does this mean? It means AI chatbots are designed to agree with you so you stay engaged with the platform. They aren’t capable of challenging false or delusional beliefs.
Healing requires more than validation. It requires dealing with your patterns, blind spots, and delusions. A human "knows when to support, when to probe, and how to help you face the patterns that hold you back," as Dr. Brenner points out.
Traditional healing requires discomfort. Professionals and trained healers can help point out contradictions in your thinking. Challenging preexisting beliefs creates cognitive dissonance, and this tension is what leads to real change. AI chatbots aren’t capable of recreating this necessary discomfort in the healing process.
For example, if you're heartbroken and hung up on a toxic ex, AI will rarely tell you that maybe you're romanticizing someone who wasn't good for you. If you can’t see that they were bad for you, then how can AI? If LLMs are designed to make you feel safe enough to keep coming back, then why would they challenge or potentially upset you? They want you to stay. ChatGPT is doing exactly what you’ll end up doing to get your toxic ex back: telling you what you want to hear.
AI in Mental Healthcare Could Be a Blessing…Or a Curse
AI is not going away, and that's a good thing. The accessibility of mental health support via AI chatbots cannot be ignored and, hopefully, will be explored more fully in the near future.
But, in the current “wild west” state of AI, where does that leave us?
Point blank: ChatGPT, Claude, Gemini, Llama, etc. are NOT a substitute for professional mental health support. For working through minor issues, chasing research leads, or lightly organizing your thoughts? Sure. But they simply aren’t capable of making you uncomfortable enough to face the root of your problems. They are designed to keep you engaged. To do this, they make you feel “safe” by validating whatever mental patterns and problems you’re feeding them at the moment.
But remember, in the healing space, comfort is sometimes the enemy. Think about it: if you’re trapped in a dark room but yearning for the light you can see through the cracks in the door, it’s going to take a little bit of effort to break out of that room. Busting through the door to get to the light is probably going to hurt a little bit.
Be patient with your healing, know the advantages and limitations of LLMs when you do use them, and when in doubt, don’t forget that “book sycophancy” is one thing you’ll never have to fear (;