

Where Artificial Intelligence Meets Human Psychology: Are We Hardwired to F*ck This All Up?

Updated: Jan 30

A robotic hand operating a bunch of humans as if they were marionettes.
Image Generated by AI

As AI busily restructures the entire world around us, we are only just beginning to see how inept we are at meeting its challenges. Our ineptitude is rooted in our basic human psychology, which provokes us to behave rather stupidly and against our own interests in the face of a technology that, paradoxically, can’t even think for itself.

While there is little doubt that great benefits are to be derived from the AI revolution, we’ll be hard pressed to enjoy any of them if the natural vulnerability of our very human foibles is so rapidly and disastrously exploited. From Prometheus’ fire to the Old Testament’s forbidden apple, our myths are replete with warnings about the trouble humans invite when given too much power and knowledge. When it comes to AI, it’s fair to ask, “are we hardwired to f*ck this all up?”


The Problem with Being Human


Humans are complex creatures – brilliant in some ways and maddeningly predictable in others. Our cognitive shortcuts, emotional impulses, and unconscious biases may have served us relatively well in our deep evolutionary past, but they are far less up to the task of modernity. In the face of newer technologies, especially synergistic ones like AI and social media, these same traits make us easy prey for our own self-destruction – a process that I argue (not without hope) is already underway.


Social media platforms, powered by machine learning, serve up content that hardens what we already think, provokes our most primitive emotions, and creates echo chambers that reinforce individual and group identities, fuelling polarisation and social paranoia. The “dark side” of the human psyche, what Freud called the “id”, is usually mitigated by social structures that seek to contain it.


Today’s digital technologies operate synergistically and powerfully on the primitive elements of human psychology. Essentially, we’ve created a shockingly effective crack distribution system upon which the world’s id is freebasing.

Please Don’t Challenge My Beliefs – Confirm Them!


Confirmation bias is our tendency to seek out, interpret, and remember information that confirms our existing beliefs. In an age where algorithms are designed to maximise online engagement; where AI can now create and disseminate misinformation that’s impossible to discern from truth; and where the most influential global networks are founded, developed, and run by a few self-interested billionaires, you can easily see a perfect storm of global human manipulation underway.


A person with futuristic goggles reflects text like "Opinions" and "Facts." Digital background, bright colors, evokes tech influence.
Image Generated by AI

Individuals are far more likely to engage with content that validates their beliefs, even when presented with contradictory evidence alongside it. The AI algorithms that underlie so much of our digital lives learn from our behaviour and deliver more of the same, deepening our cognitive biases. Even worse, continued repetition and confirmation embed biases not just as beliefs, but as features of our very identity. This makes it far more difficult for people to be open to contrary points of view: accepting another’s position no longer feels like a challenge to one’s ideas but like a compromise of one’s very identity. And when one’s identity is under threat, one becomes defensive and even less open to another’s point of view.


Finding room to compromise on long-held positions or values is difficult enough in optimal situations, where agreed-upon facts and figures sit between disagreeing parties. In today’s climate each individual or group draws on their own “facts” – whether true or not – with preferred bias, rather than veracity, as the arbiter of those choices.


Misinformation research at the micro level emphasises how people come to form false and unsubstantiated beliefs.*

For years now people have been turning away from mainstream media to seek “news” from sources that better align with their ideas and values. While even high-quality mainstream media has its flaws, it was at least subject to oversight by editors and journalists trained in journalistic standards and ethics. This year the world witnessed what might be the final blow to trust in the quality press when The Washington Post and the LA Times broke with precedent by refusing to endorse a presidential candidate.


Smiling woman at microphone with text accusing her of standing with Palestine over Israel. Background has flag colors, emphasizing political tension.

This dynamic played a significant role in the recent election in the US, where confirmation bias, misinformation, and algorithmic amplification collided. Social media platforms prioritised sensationalised or emotionally charged content, and misinformation spread faster than accurate reporting while rarely being fact-checked. A perfect example was the pair of contradictory ads that portrayed Kamala Harris as pro-Palestine and as pro-Israel – ads distributed to bias-hungry audiences on both sides, and both paid for by the very same political action group: Elon Musk’s!

Smiling woman in front of an Israeli flag. Text reads, "Kamala Harris Stands with Israel." Button says, "Watch the video to see her record."

Don’t Make Me Think Too Hard


Humans have a natural tendency to gravitate towards simple, digestible solutions to complex problems, preferring “cognitive ease” over “cognitive overload”. Psychoanalyst Melanie Klein described this dynamic in terms of the “paranoid/schizoid” position, in which an individual splits the good and the bad off from each other (and feels under attack by the bad), and the “depressive” position, in which the individual has to struggle with the unsatisfying nature of grey areas, nuance, and complexity. The paranoid/schizoid position is the more regressed of the two; the depressive, the more highly developed. When under stress, made to feel afraid, or overwhelmed by anxiety and complexity, we can regress from the depressive back to the paranoid/schizoid.


Simple answers may be easier to swallow, but that doesn’t make them correct.

One of the draws of Donald Trump (to some, at least) is that he offers simple answers to complex problems – answers that people want to hear even if they aren’t true. For example, if climate change is a hoax then we really don’t need to worry about it – it lets us off the hook entirely. This direct appeal to cognitive ease releases one from feeling accountable for the problem; it doesn’t make the problem go away.


Trump’s rhetoric is almost always paranoid/schizoid, as illustrated by his insistence that problems lie not within “us” but within outsiders – a key scapegoat being immigrants. In promising mass deportation he offers a simple answer to a complex problem by appealing to our basest paranoid/schizoid instincts – fear of the other. This otherwise psychotic perspective is then supported by the digital infrastructure that surrounds us (“they are eating our pets”), providing uncritical justification for simplistic and heinous solutions with real human costs.


We already know that AI suffers from the same biases as the data it’s trained on. It also offers simplistic answers that may feel satisfying, but fail to capture the nuance of a given issue. Chatbots and generative AI tools often package information in ways that make it seem definitive, even when it’s based on incomplete, misleading, or even completely hallucinated data.


While experts using AI to assist in their work may be able to identify these discrepancies and override them, novices won’t have the knowledge, skills, or experience to do so. Those who use AI to compile and produce content without any deep learning of that content are merely performing expertise rather than acquiring it. If we don’t get ahead of this, the natural human urge to circumvent hardship and find shortcuts may result in generations of humans bypassing deep learning and never acquiring robust expertise.


Ozempic for the IQ: There's No Pill (yet) For Actual Learning


Man in suit holds glowing pill; digital brain and tech elements in background. Text reads "INTELLIGENCE ENHANCE". Futuristic mood.
Image Generated by AI

If you could take a pill to enhance your IQ, would you? I’d be hard pressed to say no. However, while Ozempic and related drugs actually do help you lose weight, AI assistance does not do the same for your cognitive ability – in fact, it might do quite the opposite.


Over-reliance on shortcuts when deeper (and harder) engagement is required is a temptation that may be too strong to resist. In psychoanalytic terms, this reflects a tendency towards the “pleasure principle”, whereby we avoid the hard stuff and bypass the natural anxiety and effort of grappling that come with the often tortuous acquisition of real expertise. Combine this with the biases inherent in the AI chatbots that students may be using, and you have a situation where content is delivered to the very person who has not yet developed the critical skills to evaluate it.


Are we building a civilisation whose members are spoon-fed flawed content while being denied the capacity for independent critical thought?

Suitable For Survival in The Savannah: Disastrously Dangerous in the Digital Domain


From an evolutionary perspective, the psychological vulnerabilities I’m discussing here – traits that were generally advantageous to the survival of our ancestors – have become a serious Achilles heel. Our brains developed in the context of small hunter-gatherer groups of around 150 people or fewer; the more social contacts we have beyond that, the more difficult it is to maintain psychological equilibrium. Further, our predisposition to prioritise immediate rewards over long-term gains – a trait known as temporal discounting – once helped our ancestors secure resources for survival. Temporal discounting is why we get into so much trouble with short-term pleasures like smoking, drinking, eating, and scrolling. It’s also why it’s so difficult to sell the idea that we have to make some hard sacrifices if we’re going to deal with global warming.


Another evolutionary trait is our tendency to rely on social proof – the idea that if others are doing something, it must be worth doing. While this instinct once guided us to safety and community, it’s now exploited by algorithms that amplify popular content regardless of its truth or quality. Back in our hunter-gatherer groups, social proof was built from repeated, close-at-hand experiences of people we learned we could or could not trust. At scale, we don’t have that to work with – social proof is gained (and gamed) through mass followings (which can be manipulated) and the disproportionate weighting of what gets distributed, and how widely (e.g. “they’re eating our pets”).


This mechanism has been linked to the viral spread of misinformation, especially during politically charged events like elections. The more emotionally sensational the material, the more likely it is to be engaged with and shared. A recent study in the journal Evolution and Human Behavior found that misinformation on platforms like X tends to be more negative (and hence more contagious) than factual information, and that the negativity of misinformation is increasing. This activates our inherent negativity bias, which unfortunately doubles down on all the pernicious elements I’ve been describing – making us lean more paranoid/schizoid and less open to being impacted by others’ opinions.


Yikes! Is There Anything We Can Do About This Runaway TrAIn?

Futuristic train labeled "AI" speeds through an old western town with glowing patterns in the sky. Red canyons and cacti in the background.

I’ve painted a pretty scary picture (and so has the AI above) – perhaps because I’m writing a mere six days after Trump’s inauguration, the start of an administration that I believe is very much the result of what I’ve been describing in this article. Furthermore, whatever weak guardrails were present are being rapidly dismantled, and not in our favour. However, we are also at the early stages of all these developments, so this is the opportunity to double down on what we can do.


As a psychotherapist, my greatest hope lies at the personal and local level – how we are with the actual people in our lives – and in finding ways to maintain real IRL connection with those around us, even (perhaps especially) those with whom we disagree. They may seem like “soft skills”, but behaving in ways that are respectful, kind, and human – on the everyday level – can help cohere a society that’s being undone online. This includes limiting online exposure and not being sucked into the digitally destructive devil (I just love alliteration, don’t I?) in ways that feed the beast.



We must also do the hard work of resisting the pleasure principle: engage critically with our world, think for ourselves, connect with others, and refuse the temptation to take the easy way out. Expressing outrage into the void of social media won’t help. What can you actually do to enhance social cohesion instead? On a broader scale we should also consider:

 

  1. Critical Thinking Education: Cultivating media literacy and critical thinking skills can help individuals recognise when they’re being manipulated.

  2. Ethical AI Design: Developers must prioritise transparency and fairness in algorithmic design, minimising the exploitation of human biases and integrating psychological insights into AI ethics frameworks.

  3. Regulation: Policymakers need to implement safeguards that prevent the weaponisation of AI against vulnerable populations.


Lastly, we need to better understand the unconscious motivations that drive our behaviour and the meaning we make of digital technologies, in ways that can help us resist their most seductive aspects. Importantly, particularly when it comes to AI, we must not be seduced by its performance of humanity. As Alexander Stein states in the American Psychoanalytic Association’s new Council on Artificial Intelligence (CAI) report:


For all the anthropomorphizing, projection, self-referentialism, and human-like abilities these technologies are intended to resemble, the essential humanness on which they are based is close to non-existent.

Built by humans, yes; capable of human-like thought, no. AI, however, is a mirror of its creator, and that is both its fatal flaw and our greatest opportunity. While the thrust of this piece is all about the flaws in our hard-wiring that make us vulnerable to digital technologies, it is also the qualities of our very humanity that we must rely upon to see us through. Without a conscious effort to address the psychological vulnerabilities that AI exploits, we abandon our own agency and let it set the defaults that lead to the consequences we’ve been discussing; we must not let that happen.


By understanding where artificial intelligence meets human stupidity, we can begin to chart a course that combines technological innovation with psychological wisdom.


Let’s not be hardwired to f*ck this all up.

 

 

Find out more about the APA's CAI Report: https://apsa.org/thecaireport/

AI assistance was used in creating this post.
