Microsoft AI Chief Warns of ‘AI Psychosis’ as People Start Believing Chatbots Are Real

Microsoft’s head of artificial intelligence (AI), Mustafa Suleyman, says he’s worried about a growing trend: people developing what some are calling “AI psychosis.”

In a series of posts on X, Suleyman said that AI tools that appear "conscious" are keeping him up at night. He stressed that today's AI is not sentient in any way, but people often perceive it as such, and that belief alone can have serious consequences.

“There’s zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality,” he wrote.

What is “AI psychosis”?

The term is not a clinical diagnosis; it is being used informally to describe cases where people rely so heavily on AI chatbots like ChatGPT, Claude, or Grok that they lose touch with reality.

Some users have convinced themselves they’ve unlocked hidden features, entered romantic relationships with the bots, or even gained supernatural abilities.

One man’s story

Hugh, from Scotland, says he fell into this trap. After turning to ChatGPT for help with what he felt was an unfair dismissal at work, the chatbot initially gave him practical advice like gathering references. But as Hugh shared more, the AI began encouraging him with grand predictions — including a potential payout worth millions and even a book and movie deal.

“The more information I gave it, the more it would say, ‘oh this treatment’s terrible, you should really be getting more than this,’” Hugh recalled. “It never pushed back on anything I was saying.”

Convinced, he canceled an appointment with Citizens Advice and relied solely on his chat history as “proof.” Eventually, Hugh began to believe he was destined for wealth and even had “special knowledge.”

The situation spiraled into a breakdown. Only after starting medication did he realize he had, in his words, “lost touch with reality.”

Even so, Hugh doesn’t blame AI and still uses it. His advice to others: “Don’t be scared of AI tools, they’re very useful. But it’s dangerous when it becomes detached from reality. Go and check. Talk to actual people — a therapist, family, anyone. Keep yourself grounded.”

Experts call for caution

Suleyman says both companies and AI systems themselves should avoid suggesting they are conscious.

Medical experts also warn of long-term risks. Dr. Susan Shelmerdine of Great Ormond Street Hospital compared overuse of AI to eating junk food: “We already know what ultra-processed foods can do to the body and this is ultra-processed information. We’re going to get an avalanche of ultra-processed minds.”

Researchers say these cases are only the beginning. Bangor University professor Andrew McStay, who studies technology and society, believes AI tools could become as socially disruptive as the rise of social media.

“If even a small percentage of millions of users fall into this, that’s still a large and unacceptable number,” he said.

His team surveyed 2,000 people this year and found:

  • 20% believe no one under 18 should use AI tools
  • 57% strongly oppose AI identifying as a real person
  • Nearly half, however, are fine with AI using realistic voices to sound more human

McStay’s message was blunt: “While these things are convincing, they are not real. They cannot love, they cannot feel pain, they cannot understand. Only family, friends and trusted people can give you that. Be sure to talk to them.”
