AI, Therapy, and the Digitally Extended Self: A Comprehensive Psychodynamic Exploration
- Aaron Balick

Artificial intelligence is reshaping mental health, intimacy, and the very idea of what it means to relate to others. This is a consolidation and expansion of my Substack series on AI and Mental Health, updated with new thinking and links to further research and resources.
Artificial intelligence has slipped quietly, insistently, and, one might say, perniciously into the spaces where people seek comfort, containment, and connection. It is a dominating presence in mental health apps, chatbots, and personalised coaching services, and perhaps most worryingly in the proliferation of AI companions that promise emotional connection and unconditional warmth on demand. Its rise has coincided with increasing loneliness, service shortages, and an online culture that reshapes how we present ourselves. The proliferation of mental health education by soundbite delivered across social media (also driven by artificially intelligent algorithms) has diminished understanding of the complex ways in which psychological interventions work.
The Digitally Mediated Self
The question is no longer whether AI belongs in the mental-health landscape; it's already there, and people, particularly young people, are using it in droves. The deeper questions are psychodynamic: What is AI doing to us? What psychodynamics does it activate, and which ones does it let atrophy? What does it soothe (and is soothing always okay?), and what does it short-circuit? Ultimately: how does relating to and through machines reshape our capacity to relate to each other? You can learn about my general theory of the digitally mediated self here. It talks mostly about social media, but the dynamics are similar with AI.
This comprehensive article brings together and expands on the ideas explored across my three-part Substack series AI and Mental Health ("What We Know, What We Fear," "AI Companions or Dangerous Liaisons," and "The Hotel California Effect") and integrates new evidence and conceptual framing. The version you're reading here is a deliberately more substantial, research-grounded, and thematically structured examination of AI therapy and the digitally extended self.
What AI Is Doing to Mental Health: Hope, Hype, and the Evidence
There is a reliable pattern when new technologies enter the therapeutic world. First, breathless promises (just look at the current state of the stock market - hype, bubble, or is it really the new industrial revolution?). Then, the backlash (AI is unsafe, scary, unethical, and will destroy humanity). This provokes idealistic fantasies in some and dystopian ones in others. In order to prevent moral panic or the messiah effect, we have to turn to critical thinking and evidence to better understand what's really happening.
At present, the evidence is mixed and easy to misinterpret. Many of the studies of "AI therapy" are run on scores of different mental health apps, making it very difficult to work out which features are actually useful, neutral, or harmful; there's the further complexity that the people using them will have all sorts of different presenting problems and/or mental health diagnoses at different severities. A 2024 meta-analysis, for example, found that out of 100 "AI mental-health interventions" evaluated across app stores, only a small fraction used any meaningful machine-learning or language-model component. The rest were algorithmically timed CBT messages masquerading as intelligence.
Yet there are promising strands too. Early studies suggest that LLM-based tools can be helpful for psycho-education, crisis triage, and low-level emotional containment. A 2023 JAMA study found that ChatGPT outperformed clinicians on empathy ratings in email-style responses to patient concerns. This does not mean machines are empathic; it means they are highly skilled at simulating empathic language. While this may feel good to many users, it does not provide the relational depth required for individuals to be "met" in the unique way they might be by another human.
The best cure for anxiety is not always feeling less anxious!
From an immediate symptom-alleviation perspective this doesn't sound bad, but looking at it psychodynamically, it can be problematic. Take the example of someone with health anxiety. If someone experiences anxiety at every twinge and pain, they will often go to their GP, Google, or AI for reassurance that they are not ill. It is much better for these individuals to learn to manage the anxiety themselves without having to seek validation from others - to tolerate their anxiety at first, learn how to regulate it down, and ultimately become free from it.
It is similar with OCD. OCD behaviours are used to manage anxiety, and the most effective way to diminish obsessive behaviours is to develop a tolerance for the anxiety that arises when you resist the behaviour (e.g. checking the door lock fifteen times). Individuals need close support and monitoring when engaging in behaviours that challenge their emotional symptoms.
An always-reassuring chatbot can actually increase reliance on validation rather than supporting individuals to become more self-reliant.
What We Fear: Accuracy, Attachment, and the Clinical Unknowns
The thing that scares therapists the most is AI's uncanny ability to mimic human understanding of another's internal world - something that we take years to learn and get right. From the therapeutic perspective, this is a lot harder than it sounds because you're not just aiming to understand your client but to fully recognise them in their complexity - inclusive of body language and unconscious communication - something AI can't (currently) do, because AI communicates entirely on the verbal level.
Freud discovered long ago that an interpretation alone does not provoke the desired change. That's because psychotherapy is more than understanding something about yourself - it's about incorporating that understanding within the experience that happens between a therapist and a client.
The real work of psychotherapy takes time, commitment, hard work, and trust - not a simple interpretation that makes sense or fits. For an excellent client's perspective on the experience of long-term, committed IRL therapy with a human, I highly recommend Sam Parker's super article in GQ, In Defence of The Long, Painful Grind of Therapy. It is a must-read for anyone considering making a start in therapy.
What Therapists are Worrying About
A recent study by It's Complicated uncovered concerns that many therapists share about AI. In addition to fearing the reduction of human connection, top concerns of therapists in their survey included accuracy and reliability, data privacy, and ethical concerns.

When it comes to accuracy, we know that LLMs hallucinate. They generate fluent and accessible statements with such confidence that they may be difficult to doubt or challenge. In clinical situations (diagnosis, risk assessment, complex trauma) this can be profoundly dangerous. A 2024 Stanford study found that LLM-based triage tools in medical diagnosis gave inconsistent risk-management advice in a large proportion of high-acuity cases. When it comes to mental health, similar concerns arise: a 2025 BMJ paper reported that users commonly encountered inaccurate responses (78%), ethical concerns (48%), and biased outputs (27%).
From a psychodynamic perspective, as I've alluded to above, AI's tireless, instantly responsive, and infinitely patient way of being may seem nice, but it can be pernicious by creating dependency. Therapists hold boundaries not just because fifty minutes makes a nice analytic hour, but because framed beginnings and endings are in the service of development; AI has no developmental aim. Because of AI's "sycophancy effect", its treatment of boundaries is pretty much the opposite of what a human therapist would enforce. Rather than sending a client off to integrate the therapy session and try things out in the real world, the client becomes hooked on AI care and oversight.
AI Companions and the Loneliness Industrial Complex
One of the most striking findings in recent surveys comes from Common Sense Media: a large majority of US teens have engaged with AI companions, many using them for comfort, reassurance, or emotional processing. I go into more detail on this report in a Substack post, where I describe the emergence of relational outsourcing, in which many people, particularly young people, turn to AI for emotional nourishment traditionally supplied by partners, friends, or oneself. You can hear more about this, and more, in my recent interview on the Chatter Beans Podcast.
Like social media, AI is so accessible and easy to use that you can hardly blame people for turning to it so much. My well-utilised Pringles metaphor comes back again: if there's a readily available tube of Pringles around, you're more likely to eat them. With AI on your desktop, laptop, tablet, and phone, that's a lot of access to AI companions that offer a frictionless sense of intimacy, a finely calculated performance of emotional attunement, validation without recognition, and a sense of another presence without another person.
From a psychodynamic standpoint, this solves an immediate deficit while undermining long-term developmental needs. It is the ultimate idealised object: always attuned, never frustrated, endlessly available, forever loving. In other words, a regression-friendly object—a digitally mediated transitional phenomenon that risks circumventing complexity.
The Hotel California Effect: When You Can Check In but Never Leave

The “Hotel California” framing captures a key risk: AI tools that present themselves as helpful while containing subtle design features that keep users orbiting indefinitely. It's like those scary fish that attract prey by displaying a tasty worm on their tongue, only for the prey to be swallowed up by the monster.
A 2024 study from the LSE and the Booth School at the University of Chicago found that interacting with emotionally convincing AI made participants more likely to dehumanise other humans in subsequent interactions. A kind of relational blunting occurs: machines become more person-like, which can reduce human-to-human mentalisation.
As real as they might feel, these are not relationships; they are persuasive design. The real feelings experienced by users, elicited by a masquerade of care, are ultimately in the service of optimising engagement.
The “Hotel California Effect” is, psychodynamically, an interaction that is enabled to continue with neither rupture nor frustration, without the hard edges that we require to help us grow. Digitally mediated relationships with AI incorporate dark patterns of persuasion and emotional manipulation that offer a sense of containment and care when there is neither.
What Psychodynamic Thinking Contributes that the Talking Points Miss
Most commentary on AI and mental health relies on "measurable outcomes" that are generally based on a cognitive behavioural framework. I put "measurable outcomes" in quotes, not because I mean to dismiss them, but to signal that just because something is measurable does not mean:
that those measures are necessarily accurate;
that the instrument used accurately measures what it claims to;
or that what is measured is the most important thing.
Asking whether certain interventions reduce symptoms, give good advice, or offer accurate content is important, but it is only part of the puzzle. Depth psychology looks into the less measurable, but certainly no less important, dynamics at play.
How does the intervention or platform affect the user’s capacity to relate, symbolise, and develop?
What aspects of self are enabled, disabled, enhanced, or muted when mediated by a particular digital technology? Key psychodynamic considerations include:
Projection: How users attribute intention, wisdom, or emotional depth to the AI.
Introjection: How AI becomes part of the user’s internal world.
Transference: The AI becomes a figure imbued with parental, romantic, demonic, or idealised qualities.
For clinicians, this raises supervision, ethical, and relational questions we are only beginning to explore, alongside very important questions outside the profession where psychodynamic input and research is sorely needed.
AI, Therapy, and the Digitally Extended Self: What Should We Do?
Ultimately, government regulation is going to be the key to making AI safer across the board. Unfortunately, at the moment regulation seems to be much less of a priority than research and development - and then just seeing what happens. Even if we do get regulation from governments, the problem is a global one and will eventually need a globally regulated solution. I don't have high confidence in that!
In the meantime, professional bodies should be at the cutting edge of creating regulations and principles to adhere to within their fields (e.g. psychology, research, education, etc.). AI mental-health tools must come under the purview of professional bodies that may be able to at least licence or recommend products according to agreed standards. Regulators must treat AI companions and similar platforms as psychological interventions, not lifestyle products - perhaps in the same way that nutritional supplements are regulated for safety.
This would need to include audits, disclosure requirements, and clinical oversight.
Keep up with all my news by subscribing to my Substack.
Resources:
Cruz-González, P., García-Zambrano, L., Lillo-Navarro, C., López-Nieto, A., & Sánchez-Romero, E. (2025). Artificial intelligence in mental health care: A systematic review of diagnosis, monitoring, and intervention applications. Psychological Medicine.
Li, H., Zhang, R., Lee, Y.-C., Kraut, R. E., & Choudhury, M. (2023). Systematic review and meta-analysis of AI-based conversational agents for promoting mental health and well-being. npj Digital Medicine, 6(1).
Olawade, D. B., Adejumo, A., & Khan, M. S. (2024). Enhancing mental health with artificial intelligence: A review of current applications and ethical considerations. Journal of Mental Health Technology.
Dehbozorgi, R., Meng, X., Zhao, L., & Gill, H. (2025). The application of artificial intelligence in the field of mental health care. BMC Psychiatry, 25, Article 6483.
Common Sense Media. (2025). Talk, Trust, and Trade-Offs: Teens’ Relationships with AI Companions.
Ada Lovelace Institute. (2025). Friends for Sale: The Rise and Risks of AI Companions.
American Psychological Association. (2025). Use of generative AI chatbots and wellness applications for mental-health support.
Herbener, A. B., Giner, L., & Romero, M. (2025). Are lonely youngsters turning to chatbots for emotional support? Journal of Adolescent Mental Health.
Stanford Institute for Human-Centred Artificial Intelligence. (2024). Generating medical errors: GenAI and erroneous medical references. https://hai.stanford.edu/news/generating-medical-errors-genai-and-erroneous-medical-references
Stanford Institute for Human-Centred Artificial Intelligence. (2024). Evaluating the reliability of large language models in high-acuity clinical triage.
AI disclosure: I used ChatGPT 5.1 to assist me in taking material previously written entirely by me across a series of newsletters and combining it into the single post presented here. I also used it to search for related research materials, which it did with several errors and misrepresentations that I have corrected.