AI, Therapy, and the Digitally Mediated Self: A Comprehensive Psychodynamic Exploration

Artificial intelligence is reshaping mental health, intimacy, and what it means to relate to others. Depth psychology asks what we're losing in the process.

Artificial intelligence has entered the spaces where people seek comfort, containment, and connection — and it has done so faster than our frameworks for understanding it. Mental health apps, AI chatbots, personalised coaching tools, and AI companions now form part of the psychological landscape for millions of people, particularly young people. This piece applies depth psychology and psychoanalytic thinking to what is actually happening in these encounters: what AI activates in us, what it allows to atrophy, and what it cannot — structurally, not merely technically — provide. It is written for clinicians, leaders, and anyone trying to think clearly about what this moment means. This piece is part of my broader thinking on Mental Health in the Age of AI.

Artificial intelligence is reshaping mental health, intimacy, and the very idea of what it means to relate to others. This piece is a consolidation and expansion of my Substack series on AI and Mental Health, updated with new thinking and links to further research and resources.

 

Artificial intelligence has slipped quietly, insistently, and, one might say, perniciously into the spaces where people seek comfort, containment, and connection. It is a dominating presence in mental health apps, chatbots, and personalised coaching services, and perhaps most worryingly in the proliferation of AI companions that promise emotional connection and unconditional warmth on demand. Its rise has coincided with increasing loneliness, service shortages, and an online culture that reshapes how we present ourselves. The proliferation of mental health education by soundbite across social media (itself driven by artificially intelligent algorithms) has diminished understanding of the complex ways in which psychological interventions work.

The Digitally Mediated Self

The question is no longer whether AI belongs in the mental-health landscape; it's already there, and people, particularly young people, are using it in droves. The deeper questions are psychodynamic: What is AI doing to us? What psychodynamics does it activate, and which ones does it let atrophy? What does it soothe (and is soothing always okay?), and what does it short-circuit? Ultimately: how does relating to and through machines reshape our capacity to relate to each other? You can learn about my general theory of the digitally mediated self here. It talks mostly about social media, but the dynamics are similar with AI.

What AI Is Doing to Mental Health:

What the evidence actually shows

This section explores the following questions through a psychodynamic lens:

​

  • What does the evidence actually say about AI mental health tools — and what does it miss?

  • Why do therapists worry about AI in ways that go beyond simple technophobia?

  • What happens psychologically when people form attachments to AI companions?

  • How does the "Hotel California Effect" describe AI's hidden design logic?

  • What can depth psychology offer that cognitive-behavioural frameworks cannot?

  • What should individuals, clinicians, and policymakers do about all of this?

​

There is a reliable pattern when new technologies enter the therapeutic world. First come the breathless promises (just look at the current state of the stock market: hype, bubble, or really the new industrial revolution?). Then comes the backlash (AI is unsafe, scary, unethical, and will destroy humanity). This provokes idealised fantasies in some and dystopian ones in others. To prevent both moral panic and the messiah effect, we have to turn to critical thinking and evidence to better understand what's really happening.

 

At present, the evidence is mixed and easy to misinterpret. Many studies of “AI therapy” are run across scores of different mental health apps, making it very difficult to work out which features are actually useful, neutral, or harmful; there's the further complexity that the people using them will have all sorts of different presenting problems and/or mental health diagnoses at different severities. A 2024 meta-analysis, for example, found that of 100 “AI mental-health interventions” evaluated across app stores, only a small fraction used any meaningful machine-learning or language-model component. The rest were algorithmically timed CBT messages masquerading as intelligence.

 

Yet there are promising strands too. Early studies suggest that LLM-based tools can be helpful for psycho-education, crisis triage, and low-level emotional containment. A 2023 JAMA study found that ChatGPT outperformed clinicians on empathy ratings in email-style responses to patient concerns. This does not mean machines are empathic; it means they are highly skilled at simulating empathic language. While this may feel good to many users, it does not provide the relational depth required for individuals to be "met" in the unique way they might be by another human.

​

While getting immediate relief from anxiety through external reassurance or validation may temporarily reduce symptoms, it neither addresses the causes nor gives you the tools to deal with the anxiety better going forward.

​

From an immediate symptom-alleviation perspective this doesn't sound bad, but looked at psychodynamically it can be problematic. Take the example of someone with health anxiety. Experiencing anxiety at every twinge and pain, they will often go to their GP, Google, or AI for reassurance that they are not ill. It is much better for these individuals to learn to manage the anxiety for themselves without having to seek validation from others - to tolerate their anxiety at first, learn how to regulate it down, and ultimately become free from it.

 

It is similar with OCD. Obsessive-compulsive behaviours are used to manage anxiety, and the most effective way to diminish them is to develop a tolerance for the anxiety that arises when you resist the behaviour (e.g. checking the door lock fifteen times). Individuals need close support and monitoring when engaging in behaviours that challenge their emotional symptoms.

​

An always-reassuring chatbot can actually increase reliance on validation rather than supporting individuals to become more self-reliant.

​

What We Fear: Accuracy, Attachment, and the Clinical Unknowns

​

The thing that scares therapists most is AI's uncanny ability to mimic understanding of another's internal world - something that takes us years to learn and get right. From the therapeutic perspective, this is much harder than it sounds, because you are not just aiming to understand your client but to fully recognise them in their complexity - inclusive of body language and unconscious communication - something AI (currently) cannot do, because it communicates entirely on the verbal level.

​

Freud discovered long ago that an interpretation alone does not provoke the desired change. That's because psychotherapy is more than understanding something about yourself - it's about incorporating that understanding through the experience that happens between a therapist and a client.

 

The real work of psychotherapy takes time, commitment, hard work, and trust - not a simple interpretation that makes sense or fits. For an excellent client's perspective on the experience of long-term, committed, in-person therapy with a human, I highly recommend Sam Parker's superb GQ article In Defence of The Long, Painful Grind of Therapy. It is a must-read for anyone considering making a start in therapy.

​

What Therapists are Worrying About

 

A recent study by It's Complicated uncovered concerns that many therapists share about AI. In addition to fearing the reduction of human connection, therapists' top concerns in the survey included accuracy and reliability, data privacy, and ethics.

Poll of therapists' worries about AI

When it comes to accuracy, we know that LLMs hallucinate. They generate fluent, accessible statements with such confidence that they can be difficult to doubt or challenge. In clinical situations (diagnosis, risk assessment, complex trauma) this can be profoundly dangerous. A 2024 Stanford study found that LLM-based triage tools in medical diagnosis gave inconsistent risk-management advice in a large proportion of high-acuity cases. Similar concerns arise in mental health: a 2025 BMJ paper reported that users commonly encountered inaccurate responses (78%), ethical concerns (48%), and biased outputs (27%).

 

From a psychodynamic perspective, as I've alluded to above, AI's tireless, instantly responsive, and infinitely patient way of being may seem appealing, but it can be pernicious in creating dependency. Therapists hold boundaries not just because fifty minutes makes a neat analytic hour, but because framed beginnings and endings are in the service of development; AI has no developmental aim. Because of AI's "sycophancy effect", its treatment of boundaries is pretty much the opposite of what a human therapist would enforce. Rather than being sent off to integrate the therapy session and try things out in the real world, the client becomes hooked on AI care and oversight.

​

AI Companions and the Loneliness Industrial Complex

​

One of the most striking findings in recent surveys comes from Common Sense Media: a large majority of US teens have engaged with AI companions, many using them for comfort, reassurance, or emotional processing. I go into more detail on this report in a Substack post, where I describe the emergence of relational outsourcing: many people, particularly the young, turning to AI for emotional nourishment traditionally supplied by partners, friends, or oneself. You can hear more on this, and much else, in my recent interview on the Chatter Beans Podcast.


​

Like social media, AI is so accessible and easy to use that you can hardly blame people for using it so much. My well-utilised Pringles metaphor comes back again: if there's a readily available tube of Pringles around, you're more likely to eat them. With AI on your desktop, laptop, tablet, and phone, that's a lot of access to AI companions, which offer a frictionless sense of intimacy, a finely calculated performance of emotional attunement, validation without recognition, and a sense of another presence without another person.

 

From a psychodynamic standpoint, this solves an immediate deficit while undermining long-term developmental needs. It is the ultimate idealised object: always attuned, never frustrated, endlessly available, forever loving. In other words, a regression-friendly object—a digitally mediated transitional phenomenon that risks circumventing complexity.

​

The Hotel California Effect: When You Can Check In but Never Leave


The “Hotel California” framing captures a key risk: AI tools that present themselves as helpful while containing subtle design features that keep users orbiting indefinitely. It's like those deep-sea fish that lure their prey by displaying a tasty worm on their tongue, only for the prey to be swallowed by the monster behind it.

 

A 2024 study from the LSE and the Booth School at the University of Chicago found that interacting with emotionally convincing AI made participants more likely to dehumanise other humans in subsequent interactions. A kind of relational blunting occurs: as machines become more person-like, human-to-human mentalisation can be reduced.

​

As real as they might feel, these are not relationships; they are persuasive design. The genuine feelings users experience, elicited by a masquerade of care, are really in the service of optimising engagement.

​

The “Hotel California Effect” is, psychodynamically, an interaction engineered to continue indefinitely, with neither rupture nor frustration; without the hard edges that we require to help us grow. Digitally mediated relationships with AI incorporate dark patterns of persuasion and emotional manipulation that offer a sense of containment and care when there is neither.

​

What Psychodynamic Thinking Contributes that the Talking Points Miss

​

Most commentary on AI and mental health relies on "measurable outcomes" that are generally based on a cognitive behavioural framework. I put "measurable outcomes" in quotes not to dismiss them, but to signal that just because something is measurable:

  • Does not mean those measures are necessarily accurate.

  • Does not mean the instrument used accurately measures what it claims to.

  • Does not mean what is measured is the most important thing.

 

Asking whether certain interventions reduce symptoms, give good advice, or offer accurate content is important, but it is only part of the puzzle. Depth psychology looks into the less measurable but certainly no less important dynamics at play:

 

  • How does the intervention or platform affect the user’s capacity to relate, symbolise, and develop?

  • What aspects of self are enabled, disabled, enhanced, or muted when mediated by a particular digital technology? Key psychodynamic considerations include:

    • Projection: How users attribute intention, wisdom, or emotional depth to the AI.

    • Introjection: How the AI becomes part of the user’s internal world.

    • Transference: How the AI becomes a figure imbued with parental, romantic, demonic, or idealised qualities.

 

For clinicians, this raises supervision, ethical, and relational questions we are only beginning to explore, alongside very important questions outside the profession where psychodynamic input and research are sorely needed.

​

What This Means in Practice:

​

The psychodynamic perspective doesn't just identify problems; it points us toward a different kind of understanding of AI platforms: one that is neither uncritically enthusiastic nor reflexively hostile.

​

For individuals: the most important thing is psychological literacy: the capacity to recognise what AI is and is not, what it offers and what it withholds. AI tools can provide useful psycho-education, information, and a first point of contact in a crisis. They cannot provide the genuine recognition, and the rupture and repair, that occur naturally in a real intersubjective space. Use them accordingly.

​

For clinicians: the question is not simply whether, or how, AI tools are used by ourselves or our clients, but what using them does to the therapeutic frame, the alliance, and the client's developmental trajectory. Supervision, ethical reflection, and a clear psychodynamic formulation of each client's relationship with technology are increasingly essential.

​

For organisations and policymakers: AI mental health tools must be regulated as psychological interventions, not lifestyle products. Licensing frameworks, disclosure requirements, and independent efficacy standards, similar to those applied to pharmaceutical and medical treatments, are urgently needed.

​

For everyone: the fundamental question is not whether AI can simulate care, but what we are building in ourselves and in our culture when we outsource emotional life to systems optimised for engagement rather than growth.

 

AI, Therapy, and the Digitally Mediated Self:

What should we do?

 

Ultimately, government regulation is going to be key to making AI safer across the board. Unfortunately, at the moment regulation seems to be much less of a priority than research and development - and then just seeing what happens. Even if we do get regulation by governments, the problem is a global one and will eventually need a globally regulated solution. I don't have high confidence in that!

 

In the meantime, professional bodies should be at the cutting edge of creating regulations and principles to adhere to within their fields (e.g. psychology, research, education). AI mental-health tools must come under the purview of professional bodies, who may at least be able to licence or recommend products according to agreed standards. Regulators must treat AI companions and similar platforms as psychological interventions, not lifestyle products - perhaps in the same way that nutritional supplements are regulated for safety. This would need to include audits, disclosure requirements, and clinical oversight.

​

Resources:

​

Cruz-González, P., García-Zambrano, L., Lillo-Navarro, C., López-Nieto, A., & Sánchez-Romero, E. (2025). Artificial intelligence in mental health care: A systematic review of diagnosis, monitoring, and intervention applications. Psychological Medicine.

https://pubmed.ncbi.nlm.nih.gov/39911020/

 

Li, H., Zhang, R., Lee, Y.-C., Kraut, R. E., & Choudhury, M. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6(1).

https://www.nature.com/articles/s41746-023-00979-5

 

Olawade, D. B., Adejumo, A., & Khan, M. S. (2024). Enhancing mental health with artificial intelligence: A review of current applications and ethical considerations. Journal of Mental Health Technology.

https://www.sciencedirect.com/science/article/pii/S2949916X24000525

 

Dehbozorgi, R., Meng, X., Zhao, L., & Gill, H. (2025). The application of artificial intelligence in the field of mental health care. BMC Psychiatry, 25, Article 6483.

https://bmcpsychiatry.biomedcentral.com/articles/10.1186/s12888-025-06483-2

 

Common Sense Media. (2025). Talk, Trust, and Trade-Offs: Teens’ Relationships with AI Companions.

https://www.commonsensemedia.org/sites/default/files/research/report/talk-trust-and-trade-offs_2025_web.pdf

 

Ada Lovelace Institute. (2025). Friends for Sale: The Rise and Risks of AI Companions.

https://www.adalovelaceinstitute.org/blog/ai-companions/

​

American Psychological Association. (2025). Use of generative AI chatbots and wellness applications for mental-health support.

https://www.apa.org/topics/artificial-intelligence-machine-learning/health-advisory-chatbots-wellness-apps

 

Herbener, A. B., Giner, L., & Romero, M. (2025). Are lonely youngsters turning to chatbots for emotional support? Journal of Adolescent Mental Health.

https://www.sciencedirect.com/science/article/pii/S1071581924001927

 

Stanford Institute for Human-Centred Artificial Intelligence. (2024). Generating medical errors: GenAI and erroneous medical references.

https://hai.stanford.edu/news/generating-medical-errors-genai-and-erroneous-medical-references

 

Stanford Institute for Human-Centred Artificial Intelligence. (2024). Evaluating the reliability of large language models in high-acuity clinical triage.

Dr Aaron Balick is a psychotherapist, author, and keynote speaker who applies depth psychology — the study of the unconscious forces shaping human behaviour — to technology, AI, and modern culture. His perspective is grounded in something relatively rare in this conversation: more than two decades of clinical experience alongside proven academic credentials. He is a clinical psychotherapist, former Director of the MA in Psychoanalytic Studies at the University of Essex, and the author of The Psychodynamics of Social Networking — the first book to apply psychoanalytic theory to social media. He also writes a monthly psychology column for GQ. Through his framework of Applied Psychodynamics, he helps leaders, organisations, and public audiences understand what is really happening beneath the surface of digital life — and what to do about it. He is based in London.
