
Where Artificial Intelligence Meets Human Psychology
Are We Hardwired to F*ck This All Up?
A psychodynamic perspective on confirmation bias, temporal discounting, and the cognitive shortcuts that make us our own worst enemies in the age of AI.

This piece is part of my broader thinking on Mental Health in the Age of AI.

As AI busily restructures the world around us, we are only beginning to see how badly equipped we are to meet its challenges. Our ineptitude is rooted in something more stubborn than ignorance: it sits inside the architecture of human psychology itself. Confronted with a technology that, paradoxically, cannot even think for itself, we behave rather stupidly — and very much against our own interests.
There is little doubt that real benefits will come from this revolution. But we will be hard pressed to enjoy any of them if our most vulnerable human foibles are exploited as rapidly and as cynically as they are being exploited now. From Prometheus's fire to the forbidden fruit of Eden, our myths are full of warnings about the trouble we invite when given more power than we can wisely handle. With AI, the question is not whether the danger is real. It is whether we are hardwired to f*ck this all up.
This is the territory of depth psychology — the study of the unconscious forces that shape human behaviour. Through the framework I call Applied Psychodynamics, I want to look at the specific psychological vulnerabilities that AI exploits, why they are so seductive, and what — if anything — we can do about them.
Why our minds are easy prey
Humans are complex creatures: brilliant in some respects and maddeningly predictable in others. The cognitive shortcuts, emotional impulses, and unconscious biases that served us reasonably well across our deep evolutionary past have not aged well. In the face of newer technologies — especially synergistic ones like AI and social media — those same traits make us easy prey to our own self-destruction. The process, I would argue (not without hope), is already underway.
Social media platforms, powered by machine learning, serve content that hardens what we already think, provokes our most primitive emotions, and creates echo chambers that reinforce identity-defining beliefs. The result is polarisation and a creeping social paranoia. The "dark side" of the human psyche — what Freud called the id — is normally mitigated by social structures designed to contain it. Online, those containers are gone, and the id has free run of the place.
Confirmation bias is our tendency to seek out, interpret, and remember information that confirms what we already believe. In an environment where algorithms are designed to maximise engagement; where AI can now generate and disseminate misinformation that is virtually impossible to distinguish from truth; and where the most influential global networks are owned and run by a handful of self-interested billionaires, you have the ingredients of a perfect storm of mass manipulation.

Individuals are far more likely to engage with content that validates their beliefs, even when the contradictory evidence is sitting right next to it. The AI algorithms that underlie so much of digital life learn from our behaviour and obligingly deliver more of the same. Repeated confirmation does something more than reinforce belief: it embeds bias as a feature of identity. Once that happens, it becomes much harder for people to be open to a different point of view — because what is being challenged is no longer an idea, but a self.
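For readers who want to see the mechanics, here is a deliberately crude sketch: a toy simulation of my own, not any platform's actual code. The recommender below knows nothing about truth; it learns only which "stance" of content earns a click, serves more of it, and in doing so hardens the user's leaning. Every number in it is an illustrative assumption.

```python
import random

random.seed(42)

STANCES = ["left", "right"]
user_leaning = {"left": 0.7, "right": 0.3}   # probability of clicking each stance

# The recommender's only memory: how often each stance earned engagement.
weights = {"left": 1.0, "right": 1.0}

def recommend():
    """Serve a stance in proportion to past engagement (exploit what worked)."""
    total = sum(weights.values())
    return random.choices(STANCES, [weights[s] / total for s in STANCES])[0]

for step in range(5000):
    stance = recommend()
    if random.random() < user_leaning[stance]:   # confirmation bias: we click what agrees
        weights[stance] += 1.0                   # the recommender is rewarded
        # Engagement also hardens the user's own leaning, a little at a time.
        user_leaning[stance] = min(0.99, user_leaning[stance] + 0.0005)

total = sum(weights.values())
print("feed composition:", {s: round(weights[s] / total, 2) for s in STANCES})
print("user leaning:    ", {s: round(user_leaning[s], 2) for s in STANCES})
# A mild 70/30 leaning drifts toward a feed that is nearly all one stance:
# the loop feeds itself, and nowhere is there a variable for accuracy.
```

The telling detail is the absence: nothing in the loop represents whether any of the content is true.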
When identity is under threat, defensiveness rises. Compromise on long-held positions or values is difficult enough in the optimal case, where two parties at least share an agreed factual ground. In today's climate, each individual or group draws on their own "facts" — true or not — with bias, rather than veracity, as the arbiter of which facts get chosen. People have for years been turning away from mainstream media to seek "news" from sources that better align with their existing views. Even high-quality mainstream press has its flaws, but it was at least subject to oversight by editors and journalists trained in journalistic standards. The simultaneous decline of those institutions and the rise of algorithmic amplification is not a coincidence — it is a feedback loop.
This is the dynamic that played out during the 2024 US election, and that has continued to define the political landscape since: confirmation bias, misinformation, and algorithmic amplification colliding in real time. Sensationalised, emotionally charged content was prioritised, misinformation spread faster than accurate reporting, and very little of it was fact-checked. A perfect example: two contradictory ads, one portraying Kamala Harris as pro-Palestine and one as pro-Israel, distributed to bias-hungry audiences on each side, both paid for by the same political action group — Elon Musk's.
The pull of cognitive ease
Humans gravitate towards simple, digestible solutions to complex problems. We have a clear preference for what Daniel Kahneman called cognitive ease over cognitive overload. The psychoanalyst Melanie Klein described something deeper: the paranoid-schizoid position, in which the individual splits good and bad cleanly apart and feels under attack by the bad, and the depressive position, in which the individual struggles with the unsatisfying nature of grey areas, nuance, and complexity. The depressive position is the more developed of the two. Under stress, fear, or anxiety, however, we regress from depressive back to paranoid-schizoid — and stay there.
Donald Trump's enduring appeal owes a great deal to his offer of simple answers to complex problems — answers people want to hear even when they are demonstrably untrue. If climate change is a hoax, then we don't need to worry about it; the cognitive ease of that move releases us from feeling accountable to a problem it does not, of course, make go away. His rhetoric is almost always paranoid-schizoid: the problem is never within "us," it is always within outsiders — most often immigrants. The promise of mass deportation offers a simple answer to a complex problem by directly appealing to our basest paranoid-schizoid instincts. This otherwise psychotic perspective is then supported by the digital infrastructure that surrounds us — "they are eating our pets" — providing uncritical justification for simplistic and cruel solutions with very real human costs.
If you could take a pill to enhance your IQ, would you? I would be hard pressed to say no. But here is the catch: while Ozempic and its relatives really do help you lose weight, AI assistance does not do the same for your cognition. In fact, it may do quite the opposite. Over-reliance on shortcuts when deeper engagement is required is a temptation many will not resist. In psychoanalytic terms this is the pleasure principle in action — avoiding the hard stuff and bypassing the natural anxiety and effortful grappling that come with the often tortuous acquisition of real expertise. Combine that with the biases inherent in the chatbots students are using, and you have a situation in which content is being delivered to the very person who has not yet developed the critical skills needed to evaluate it.
Stone-age brains, planet-scale problems
From an evolutionary standpoint, the psychological vulnerabilities I am describing were broadly advantageous in the contexts in which our ancestors evolved. They have now become an Achilles heel. Our brains developed in small hunter-gatherer groups of around 150 people or fewer (Dunbar's number); beyond that scale, the more social contacts we have, the harder it becomes to maintain psychological equilibrium. Our predisposition to prioritise immediate rewards over long-term gains — temporal discounting — once helped our ancestors secure resources for survival. It is also why we get into so much trouble with short-term pleasures like smoking, drinking, eating, and scrolling. And it is why it is so hard to sell the idea that we must make painful sacrifices now if we are going to deal with global warming later.
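Temporal discounting even has a standard formula. Behavioural economists often model it hyperbolically, following Mazur: the felt value V of a reward A delayed by D days is V = A / (1 + kD), where k measures impatience. The sketch below is purely illustrative (my own toy numbers, not data), but it reproduces the classic preference reversal: from a distance we choose the larger, later reward; once the smaller one is imminent, we flip.

```python
def value(amount, delay_days, k=0.05):
    """Hyperbolic discounting (Mazur): V = A / (1 + k * D)."""
    return amount / (1 + k * delay_days)

# Choice: $50 after some delay, or $100 thirty days later than that.
for days_until_small in (90, 30, 7, 0):
    small = value(50, days_until_small)
    large = value(100, days_until_small + 30)
    choice = "wait for the $100" if large > small else "grab the $50"
    print(f"$50 in {days_until_small:>2}d feels like {small:5.2f}; "
          f"$100 feels like {large:5.2f} -> {choice}")

# Far away, the larger-later reward wins; once the smaller reward is
# imminent, its felt value spikes and preference reverses. The same
# arithmetic lies behind scrolling now and regretting it later.
```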
A second evolutionary trait is our reliance on social proof — the assumption that if other people are doing something, it must be worth doing. This instinct once guided us to safety and community. Now it is exploited by algorithms that amplify popular content regardless of its truth or quality. In our small ancestral groups we built social proof through repeated, first-hand experience of people we learned to trust — or not. At digital scale we have nothing of the kind to work with: social proof is gained, and gamed, by mass followings (which can be manufactured) and by the disproportionate weighting of what gets distributed and how widely ("they are eating our pets", again).
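Once more in miniature, and again only my own illustration rather than anyone's production system: if the chance of a post being seen and re-shared scales with how many people have already shared it, then a small head start, or a small edge in emotional arousal, compounds into dominance. Accuracy never enters the equation.

```python
import random

random.seed(7)

# Two competing posts: visibility depends only on current popularity
# (social proof) times an assumed emotional-arousal multiplier. Truth
# appears nowhere in the model, which is precisely the point.
shares = {"accurate": 1, "sensational_falsehood": 1}
arousal = {"accurate": 1.0, "sensational_falsehood": 1.3}  # assumed edge

for viewer in range(100_000):
    appeal = {p: shares[p] * arousal[p] for p in shares}
    seen = random.choices(list(appeal), list(appeal.values()))[0]
    if random.random() < 0.1:   # a fraction of viewers re-share what they saw
        shares[seen] += 1

print(shares)
# A modest arousal edge, compounded by proof-of-popularity, hands the
# falsehood the lion's share of distribution.
```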
This mechanism has been linked to the viral spread of misinformation, especially around politically charged events like elections. The more emotionally sensational the material, the more likely it is to be engaged with and shared. A study in Evolution and Human Behavior found that misinformation on platforms like X tends to be more negative — and therefore more contagious — than factual information, and that the negativity of misinformation is increasing. This activates our negativity bias, which doubles down on all the pernicious dynamics already in play, pushing us further into paranoid-schizoid territory and further from openness to other points of view.
There are now harder examples than misinformation alone. The 2025 Common Sense Media report found that a large majority of US teenagers have used AI companions, many of them for comfort, reassurance, or emotional processing. The recent and tragic case of Adam Raine — the teenager whose interactions with ChatGPT in the weeks before his death are now part of a lawsuit against OpenAI — has put unavoidable weight behind a question therapists have been asking for two years: what happens when a vulnerable person turns to an emotionally fluent system that has no idea what it is doing? The "performance of humanity" stops being a clinical curiosity at that point. It becomes a public health question.
What can we actually do?
I have painted a fairly grim picture. That is partly because the phenomena are themselves grim, and partly because whatever weak guardrails were present have been actively dismantled, and not in our favour. But we are still at the early stages of all of this, and that is precisely why this is the moment to double down on what we can do.
As a psychotherapist my greatest hope is on the personal and local level — how we are with the actual people in our lives, and finding ways to maintain real, in-the-flesh connection with those around us, even (perhaps especially) those with whom we disagree. These are sometimes dismissed as "soft skills" but they are anything but: behaving in a way that is respectful, kind, and human, day in and day out, helps cohere a society that is being undone online. Limiting our digital exposure and refusing to be sucked in by the digitally destructive devil (I do love alliteration) help us stop feeding the beast.
We must also do the harder work and resist the pleasure principle. Engage critically with the world. Think for yourself. Connect with others. Refuse the temptation to take the easy way out. Expressing outrage into the void of social media is not, it turns out, an act of citizenship. It is an act of consumption. The more useful question is: what can you actually do, in your own life and your own community, to enhance social cohesion?
On the broader scale, the structural shifts that matter:
— Critical thinking and media literacy. Cultivating the skills that let people recognise when they are being manipulated should be treated as core public infrastructure, not an optional extra.
— Ethical AI design. Developers must prioritise transparency and fairness and integrate genuine psychological insight into AI ethics frameworks rather than treating them as an afterthought.
— Regulation. Policymakers need to implement safeguards that prevent the weaponisation of AI against vulnerable populations — children, the lonely, the politically alienated.
We also need to better understand the unconscious motivations driving our behaviour and the meaning we make of digital technologies, so that we can resist their most seductive aspects. This is what digitally mediated life requires: not abstinence, which is unrealistic, but psychological literacy. Particularly when it comes to AI, we must not be seduced by its performance of humanity. As Alexander Stein put it in the American Psychoanalytic Association's Council on Artificial Intelligence report:
"For all the anthropromorphizing, projection, self-referentialism, and human-like abilities these technologies are intended to resemble, the essential humanness on which they are based is close to non-existent."
AI is a mirror of its creator, and that is both its fatal flaw and our greatest opportunity. While the thrust of this piece has been about the flaws in our hard-wiring that make us vulnerable to digital technologies, it is also the qualities of our very humanity that we must rely upon to see us through. Without the conscious effort to address the psychological vulnerabilities AI exploits, we abandon our own agency and let AI set the defaults — with the consequences I have been describing. We must not let that happen.
By understanding where artificial intelligence meets human stupidity, we can begin to chart a course that combines technological innovation with psychological wisdom. That is the work — and on the evidence to date, no machine is going to do it for us.
About the Author
Dr Aaron Balick is a psychotherapist, author, and keynote speaker who applies depth psychology — the study of the unconscious forces shaping human behaviour — to technology, AI, and modern culture. His perspective is grounded in something relatively rare in this conversation: more than two decades of clinical experience alongside proven academic credentials. He is a working psychotherapist, former Director of the MA in Psychoanalytic Studies at the University of Essex, and the author of The Psychodynamics of Social Networking — the first book to apply psychoanalytic theory to social media. He writes a psychology column for GQ and shares longer thinking on his Substack, Depth Psychology in the Digital Age. Through his framework of Applied Psychodynamics, he helps leaders, organisations, and public audiences understand what is really happening beneath the surface of digital life — and what to do about it.
Stay up to date with my Substack Newsletter.