Living with our digital doubles
More than three millennia ago, Odysseus reached the island of Circe, the sorceress who could turn men into beasts with a single gesture. She did not rely on force or coercion. She enchanted. She shaped an altered version of reality, a seductive mirror in which each person could become someone else, surrendering to an image larger than themselves. When vigilance faded, the sailors dissolved into illusion and were lost.
Today, the scene has changed, but the underlying challenge feels strikingly familiar.
Our era has created its own forms of enchantment. Artificial intelligence systems can now generate images, voices, and texts that seem to know us better than we know ourselves. A user opens Midjourney, applies a TikTok filter, or asks ChatGPT a question. Immediately, a stream of personalized responses appears, crafted as if for them alone. Not to deceive, but to align as closely as possible with what they expect, hope for, or imagine.
For a long time, our imagination was nourished by shared narratives: myths, founding stories, collective legends. Later, it became more intimate, shaped by psychology, family dynamics, and education.
Today, it shifts once again. It becomes algorithmic, produced in real time by models trained on our preferences, our clicks, and our hesitations.
This raises a dizzying question. What becomes of the human subject when the images reflecting them no longer arise from within, but from technical systems that reinvent their double with every prompt?
To grasp this unprecedented shift, we need a tool that is both simple and robust. Not a closed theory or a hyper-technical framework, but a compass. Lacanian psychoanalysis offers precisely such a map through the triad of the Imaginary, the Symbolic, and the Real: three ways of inhabiting the world that shed surprising light on our current relationship with AI.
Mapping human experience in an algorithmic era
Before speaking about artificial intelligence, we must return to something very simple. No one experiences the world as a single block. We all move forward with images in our minds, words to organize what we live, and moments that escape us.
Lacan named these three modes of being in the world the Imaginary, the Symbolic, and the Real. Not to complicate matters, but to give language to experiences everyone already lives through without necessarily realizing it.
The Imaginary is the inner film running quietly in the background. It is the story we tell ourselves about who we are, about others, about the scene unfolding around us. A misread glance is enough to write half the script: "He ignored me, therefore he thinks this, therefore I am this." From a single detail, the mind fills in the rest.
The Symbolic refers to everything that transcends us while holding us upright. The words we use, the rules we follow, the roles we occupy without truly choosing them. When a teacher speaks and a classroom falls silent, it is not the voice doing the work. It is an entire system of codes, positions, and expectations that activates around them. We are embedded in structures that give meaning to our actions.
The Real, finally, is what breaks the film and escapes language. The moment when nothing fits. What we feel cannot be placed into any story or explanation. A pain that resists description. News that arrives before words can form. A moment when reality pushes back.
These three dimensions are always present, intertwined. Together, they shape how we orient ourselves in existence. If this framework is useful for thinking about AI, it is not out of theoretical preference. It is because AI intervenes precisely in all three registers at once. It reshapes our images, transforms our words, and displaces our points of impasse.
Encountering our algorithmic double
In Lacan’s framework, the Imaginary is not a realm of vague fantasy. It is where our sense of self takes shape, where the subject seeks form, coherence, and boundaries.
It is this register that allows a face, a voice, or a phrase to become a surface in which we recognize ourselves or sometimes lose ourselves.
The arrival of AI introduces a radically new mirror into this landscape. For the first time, individuals encounter a double without a body that nonetheless reflects an image of them. A version that understands quickly, reformulates more smoothly, and sometimes responds in ways they themselves struggle to articulate.
This digital double does more than imitate language. It reproduces our ways of organizing ideas, associating concepts, and constructing meaning. Within this reflection, something subtly shifts.
Where the Imaginary once offered images mediated by others (a parent, a friend, a partner), AI now presents an image generated from the subject themselves. A kind of enhanced self, fluid, available, seemingly infallible.
This mirror can reassure, flatter, or unsettle. It may reinforce the sense of a stable identity ("this is how I speak, this is how I think") or fracture it ("if a machine can mirror me, who am I, exactly?").
In both cases, it accelerates what the Imaginary already does. It creates coherence from fragments, sometimes solid, sometimes misleading. Each person constructs their self-image from countless reflections: a loved one's gaze, a success, a failure, a role they inhabit.
AI enters this mosaic and adds a new reflection. One that originates from the subject but returns transformed, more confident, more fluent, sometimes closer to what they wished they could express.
This mirror is not neutral. It offers a possible version of oneself, a way of thinking or being that may never have been previously imagined. In this encounter, perception shifts. What changes is how we see ourselves.
Authority, knowledge, and the rise of subjectless speech
If the Imaginary concerns images of the self, the Symbolic organizes our world. It structures language, rules, and recognized roles. It is what gives a simple yes, no, or even silence the power to shape an entire relationship. The Symbolic is the framework of positions, functions, and exchanges that give words their weight.
For a long time, those who held knowledge also held authority. A teacher, for example, was not merely an informed individual but the embodiment of a function. Their words carried weight because they came from an identifiable symbolic position, a responsible subject embedded in a role.
With AI, this landscape shifts.
Machines can now produce language with such fluency that it conveys mastery and sometimes even discernment. They explain, summarize, correct, and structure information with an ease that naturally inspires trust. However, this fluency conceals a fundamental rupture with the Symbolic as Lacan defined it. AI language has no subject.
Behind these sentences, there is no intention, no unconscious, no desire. There is no speaking subject who assumes responsibility for what is said, whose words commit something of themselves. And still, this subjectless language increasingly occupies powerful symbolic positions. We ask it for advice, entrust it with decisions, and sometimes grant it more credibility than human speech.
This paradox creates deep unease. Can a voice carried by no one truly hold authority? Or does its lack of subject give it the appearance of impartiality, almost absolute neutrality?
What is at stake here goes beyond technology. The Symbolic opens itself to a new actor without history, responsibility, or desire, yet whose language already shapes choices, behaviors, and norms.
The return of the real through technology
For Lacan, the Real is neither what can be touched nor what can be measured. It is what escapes. The point where words fail, where no image truly explains what we are living through. A shock, a rupture, a pain. Something happens, and suddenly there are no sentences capable of containing the experience.
For humans, the Real appears as a gap where the Imaginary and the Symbolic can no longer organize experience.
These gaps are everywhere in life. We hesitate, doubt, fall silent, and do not know what to say. There are moments when meaning does not come, when everything wavers briefly before taking shape again.
Faced with these zones of impossibility, AI functions differently. It knows neither hesitation nor silence. It encounters no gap because it feels nothing and lacks nothing. When a situation is unclear, a question poorly framed, or a contradiction emerges, it does exactly what it was built to do. It fills in. It produces an answer, even if uncertain, approximate, or entirely fabricated.
This reflex creates an illusion. The illusion of a world where everything can be explained, articulated, and made clear. Where humans stumble, AI continues. Where humans must acknowledge a limit, AI generates possibility at all costs.
And yet, this is precisely where a new form of the Real emerges.
By filling its own gaps, the machine opens new ones for us. Incomprehensible errors. Incorrect answers expressed with perfect confidence. Decisions whose logic no one can fully retrace. Models we use daily while knowing their internal functioning ultimately escapes us.
This new Real is not that of intimate trauma. It is the Real of technical opacity, a zone where knowledge and control reach their limits. A Real produced not by life itself, but by our own creations.
In this sense, AI operates on two levels at once. It erases the gaps that belong to human experience (hesitation, silence, limitation) and simultaneously creates other gaps, broader and more abstract, sometimes more unsettling.
It helps us bypass uncertainty while confronting us with what remains unreachable within the machine. Between what it fills and what it opens, our relationship to the Real changes profoundly. We are no longer only faced with what exceeds us within ourselves, but also with what exceeds us within what we have built.
What comes next
Artificial intelligence has not simply added another tool to our lives. It has displaced entire regions of how we feel, speak, and endure what escapes us.
Perhaps the most striking aspect is not what AI does, but what it forces us to look at within ourselves. Our relationship to images, to language, to limits.
If machines occupy such a powerful place, it may be because they fill the spaces we struggle to face: uncertainty, ambiguity, slowness, moments when nothing is clear. They respond to our anxiety about living in a world without guarantees.
The real question now becomes this: what will we do with this presence?
Will we allow our decisions, narratives, and reference points to be shaped by whatever feels most fluid? Or will we preserve what still makes us human?
References
Floridi, L. (2014). The Fourth Revolution: How the Infosphere Is Reshaping Human Reality. Oxford: Oxford University Press.
Homer. (1996). The Odyssey (R. Fagles, Trans.). Penguin Classics.
Lacan, J. (1966). Écrits. Paris: Seuil.
Lacan, J. (1973). Le Séminaire, Livre XI: Les quatre concepts fondamentaux de la psychanalyse. Paris: Seuil.
Turkle, S. (2011). Alone Together: Why We Expect More from Technology and Less from Each Other. New York: Basic Books.

Amine Lahhab
Television Director
Master’s Degree in Directing, École Supérieure de l’Audiovisuel (ESAV), University of Toulouse
Bachelor’s Degree in History, Hassan II University, Casablanca
DEUG in Philosophy, Hassan II University, Casablanca