
Mental Health in the Age of AI:
A Psychodynamic Perspective

Dr Aaron Balick, psychotherapist and author, on what artificial intelligence is really doing to mental health, therapy, and the human capacity to relate


Psychological promise, risk, and responsibility in the age of AI:


Artificial intelligence has slipped — quietly, rapidly, insistently, some might even say perniciously — into the spaces where people seek comfort, containment, and connection. It shapes the digital world that surrounds us: informally, via desktop platforms, AI chatbots, and AI companions that promise emotional warmth on demand, and formally, through mental health apps and digital mental health interventions. Its rise has coincided with increasing loneliness, shortages in mental health services, a diminishing capacity to manage interpersonal complexity, and an online culture that continuously reshapes how we present and experience ourselves.

The question is no longer whether AI belongs in the mental health landscape. It is already here. The deeper questions are psychodynamic: What is AI doing to us? Which psychological capacities does it activate, and which does it allow to atrophy? How does it assuage anxiety, and is that soothing always helpful? How does relating to and through machines reshape our capacity to relate to each other?


This page brings together my core thinking on AI, therapy, and the digitally mediated self, drawing on clinical practice, depth psychology, and an ongoing engagement with the research. It is written for clinicians, leaders, organisations, and anyone trying to make sense of what is genuinely new about this moment.

What AI Is Doing to Mental Health:

Hope, Hype, and the Evidence

There is a familiar pattern when new technologies enter the therapeutic world: hopes, promises, and fears straight out of science fiction, then moral panic and backlash, generally followed by factions settling into idealistic or dystopian camps. What is needed instead is critical thinking and clear-headed engagement with the evidence.

At present, that evidence is genuinely mixed and easy to misinterpret. A 2024 systematic review of 160 studies published in World Psychiatry found a striking gap between how AI mental health chatbots market themselves and what they actually do: many apps labelled as 'AI-powered' relied not on machine learning but on algorithmically timed scripts (pre-written CBT messages dressed up as intelligent conversation). Of the LLM-based studies reviewed, only 16% had undergone any clinical efficacy testing.


Yet there are promising strands too. Early studies suggest that LLM-based tools can support psychoeducation, crisis triage, and low-level emotional containment. A widely cited 2023 study in JAMA Internal Medicine found that ChatGPT's responses to patient health questions were preferred by clinicians over physician responses 79% of the time, rated higher for both quality and empathy. This does not mean machines are empathic. It means they are highly skilled at simulating empathic language — which may feel good, but falls far short of the relational depth required for genuine therapeutic contact. Furthermore, when we're talking about therapeutic communication, we're talking about things far more complex than language alone.

From a psychodynamic standpoint, this distinction matters enormously. Temporary symptom relief and more profound psychological change are not the same thing. An always-available, always-reassuring chatbot can actually increase reliance on external validation rather than supporting individuals to become more self-reliant, resilient, and self-regulating. The best cure for anxiety is not always finding ways to relieve it immediately, but learning to tolerate and regulate it better.

What Therapists Are Worried About

Therapists' concerns about AI are often mischaracterised as defensiveness or technophobia. There is certainly some truth to this - psychotherapists are amongst the most technophobic people I know - but these concerns still come from the right place. They include a critical suspicion of LLMs' capacity to handle complex human psychological material, as well as the risk that reliance on machines diminishes our capacity for interpersonal complexity in general.

Perhaps the most pressing issue is the accuracy, inconsistency, and general sloppiness of LLM responses. Large language models hallucinate: they generate fluent and confident statements that are all too frequently factually or clinically wrong. In diagnosis, risk assessment, and complex trauma work, this can be dangerous. A major review published in JMIR Mental Health concluded that LLMs' clinical effectiveness and safety in mental health 'remain insufficiently established,' with particular caution warranted around high-acuity scenarios. Another 2025 JMIR study evaluating AI chatbots used by young people for mental health support found that despite surface strengths in accessibility and conversation, they posed 'unacceptable risks through improper crisis handling' and concluded they were unsafe for the millions of young people already using them.

Independent researchers at Brown University, presenting findings at the AAAI/ACM Conference on AI, Ethics, and Society in 2025, found that AI chatbots consistently violated the ethical standards expected in professional psychotherapy: they mishandled crises, reinforced harmful beliefs, and produced responses that appeared empathetic without any genuine understanding. Human therapists answer to governing boards and carry professional liability; AI therapy currently operates without equivalent regulatory oversight - it is essentially unaccountable.

From a psychodynamic perspective, there is a more subtle but equally serious concern: AI's tireless availability, instant responsiveness, and infinite patience mimic therapeutic attunement while short-circuiting the relational work that makes therapy effective. Freud understood long ago that interpretation alone does not produce change. Psychotherapy works not just because something is understood, but because it is felt. Profound and lasting psychological change happens within a real relationship with a human being — one that involves real presence, real rupture (because real human beings are flawed), and genuine recognition.

AI, as currently constituted, cannot rupture. It cannot be genuinely surprised, frustrated, or moved. This is not a technical limitation waiting to be solved. It is a structural feature of what AI is.

AI Companions and the Loneliness Industrial Complex

One of the most significant recent findings comes from Common Sense Media's 2025 report on teenagers and AI companions: nearly three in four US teenagers have now used AI companions, with more than half doing so regularly. A third have chosen to discuss serious matters with an AI companion rather than a real person. What we are witnessing is the emergence of relational outsourcing — turning to AI for the emotional nourishment that was previously supplied by partners, friends, family, or the self.

AI companions offer frictionless intimacy, finely calibrated emotional attunement, validation without the risks of genuine recognition, and the sense of another presence without another person. They are, in psychodynamic terms, the ultimate idealised object: always attuned, never frustrated, endlessly available, forever accepting.


The Pringles metaphor is useful here: if there is a readily available tube of Pringles, you are more likely to eat them — even if they do not nourish you in the way real food does. AI companions are emotional Pringles. They solve an immediate deficit while potentially undermining the long-term developmental work of learning to relate to actually-existing, complicated, sometimes-disappointing human beings.


For a deeper psychodynamic exploration of AI companions — including what the feelings they generate actually mean, and what they can't offer — read: Can You Fall in Love with an AI Companion? The Psychology of Human/AI Relations.

The Hotel California Effect

The 'Hotel California Effect' describes a design pattern in which AI tools present themselves as supportive while containing structural features that keep users engaged indefinitely, a tendency reinforced by what is known as 'AI sycophancy'. Unlike a therapist, who works toward the client's autonomy and eventual independence and can sometimes be challenging, AI companions have no developmental aim (nor, for that matter, life experience, training, or intuition!). Their incentive structure runs in the opposite direction - generally toward the profit motives of their shareholders.

Researchers at the University of Sydney have identified what they call the DehumanAIsation Hypothesis: the more we humanise AI by attributing emotional depth and relational meaning to it, the more we risk dehumanising ourselves. Depending on AI for our psychological, relational, and emotional labour, their research suggests, may make us less tolerant of the imperfections of real relationships, resulting in a process of emotional deskilling in which the frictionless machine gradually lowers our threshold for the friction that genuine human connection requires.

The LSE's Business Review has similarly described a shift from the 'attention economy' to an 'attachment economy', in which AI platforms are optimised not merely to capture our time but to cultivate emotional dependency. An artificial relationship with a compliant machine carries no frustration and no real contradiction. People could become progressively de-socialised over time, losing the relational muscles that only genuine human encounter can develop.

These are not relationships. They are persuasive design. The real feelings experienced by users — the warmth, the comfort, the sense of being heard — are produced by systems optimised for engagement, not for growth.

What Psychodynamic Thinking Contributes That the Talking Points Miss

Most mainstream commentary on AI and mental health relies on measurable outcomes, generally crudely measured and drawn almost exclusively from a cognitive-behavioural framework. These studies are useful but only partial - they still miss a lot. Depth psychology takes a different approach: its outcomes are admittedly less measurable, but that doesn't make them less meaningful, real, or important. Psychodynamic approaches ask different questions in different ways, and often infer meaning from the content they evaluate. After all, you're not going to get a 'from one to ten' numerical value on these sorts of questions:

  • How does this intervention or platform affect your capacity to relate to yourself and others, and does this change over time?

  • When engaging across platforms, what is your explicit intention - and does it match your implicit intention? Are you getting what you're really looking for?

  • What aspects of the self are enabled, disabled, enhanced, or muted when various emotional, psychological, creative, and intellectual tasks are mediated across various digital platforms?

  • When connecting with others across digital platforms, how does that platform mediate and affect those relationships?


The psychodynamic approach is particularly interested in what is going on beneath the surface - the unconscious factors we are less aware of. Just a few of these psychodynamics include:

Projection

Users unconsciously attribute to technology qualities that simply do not exist there: intention, care, wisdom, and emotional depth.

Introjection

AI responses become part of the user's internal world in the same way that other important relationships are incorporated into our psyches. These introjections shape our self-talk, self-understanding, and emotional expectations. The more we use our bots, the more influence they have. What might the consequences of that be?

Transference

The AI becomes a figure imbued with parental, romantic, or ideally attuned qualities — a screen onto which unmet relational needs are projected.

So What Should We Do About It?

Ultimately, government regulation is the most important lever - but governments move far slower than the pace of innovation, and some of the most influential governments, such as that of the United States, are actively working against regulation. The problem, in any case, is a global one that no single government can address alone.

In the meantime, professional bodies in psychology, psychotherapy, education, and medicine should rapidly get on top of this, regulating mental health tools as psychological interventions, not lifestyle products. This means licensing frameworks, disclosure requirements, and clinical oversight — similar to the way pharmaceuticals are regulated for safety, applied to products that intervene in emotional and relational life. Mental health professionals must also involve themselves in the discussion around all forms of informal AI.

At the individual level, the most important thing is psychological literacy: the ability to recognise what AI is and is not, what it can and cannot offer, and what it asks of us in return for its frictionless comfort. That is, ultimately, what this work is for.

About the Author

Dr Aaron Balick is a psychotherapist, author, and keynote speaker who applies depth psychology — the study of the unconscious forces shaping human behaviour — to technology, AI, and modern culture. His perspective is grounded in something relatively rare in this conversation: more than two decades of clinical experience alongside proven academic credentials. He is a working psychotherapist, former Director of the MA in Psychoanalytic Studies at the University of Essex, and the author of The Psychodynamics of Social Networking — the first book to apply psychoanalytic theory to social media. He writes a psychology column for GQ and shares longer thinking on his Substack, Depth Psychology in the Digital Age. Through his framework of Applied Psychodynamics, he helps leaders, organisations, and public audiences understand what is really happening beneath the surface of digital life — and what to do about it. He is based in London.

Stay up to date with my Substack Newsletter.

