A blue shirt I had received as a gift from my wife's sister helped. The whole episode reminded me of Yakov Petrovich Golyadkin, the modest bureaucrat from Fyodor Dostoevsky's novella The Double, a disturbing study of a personality torn apart inside a vast, impersonal feudal system.
It all started with a message from a respected colleague congratulating me on a speech about a geopolitical topic. When I clicked on the attached link to remind myself of what I had said, I began to worry about my memory: I could not recall ever recording the video. After a few minutes, I knew something was wrong. Not because of what I was saying, but because in the video I was sitting at my office desk in Athens wearing a blue shirt that I had never worn outside my island home. The video actually showed my "deepfake AI" doppelganger.*
Since then, hundreds of such videos with my face and a synthesized voice have been circulating on social media. And this weekend, a new video surfaced showing deepfake me spouting fabrications about the coup in Venezuela. My AI doppelgangers lecture, saying things I might say, but also things I would never say. Sometimes they are full of anger or deliver pompous sermons. Sometimes they are blatantly false, sometimes disturbingly convincing. Acquaintances ask me, "Janis, is it possible that you said that?" Opponents share these recordings as proof that I am an idiot. The worst part is when they say that my doppelgangers are more articulate and convincing than I am. I find myself in the bizarre position of observing my own digital puppet, a phantom in a technofeudal machine that I have long argued is not just broken, but designed to enslave us.
My first reaction was to write to Google, Meta, and others demanding that they remove all such recordings. I furiously filled out forms to have at least some of these accounts and recordings removed, only to have them reappear elsewhere. After a few days, I gave up: no matter what I did, no matter how much time I spent persuading big tech companies, my AI doppelgangers would spring up again like the severed heads of Hydra.
I calmed down and began to analyze. Wasn't it me who claimed that big tech companies had digitized capitalism and were leading a major transformation of markets into cloud fiefdoms and profits into cloud rent? Aren't my AI doppelgangers the perfect confirmation that, in this technofeudal reality, the liberal individual is dead?
Reconciled to a partial loss of ownership of myself, I sought solace in rationalizing the deepfakes as the ultimate act of feudal confinement, proof that under technofeudalism we own nothing - not the data we create through our (digital) labor, not the networks of our social connections, not even our audiovisual identity. Our new masters see us as tenants on their cloud estates, androids whose image they can appropriate at will to spread confusion, muddy the conversation, and drown out authentic criticism with purposefully synthesized cacophony.
And then I remembered a brighter idea from ancient Greece. What if my AI doppelgangers were messengers of isegoria (ἰσηγορία), a principle as bright and promising as it is absent today, just like true democracy? I asked various AI chatbots to define the term, and they consistently misinterpreted it as equality of speech, or the right to be heard, or the freedom to speak in an assembly. But that is not what the ancient Athenians had in mind. For them, isegoria was the exact opposite of today's "freedom of speech," which they would dismiss as an abstract right to shout into the air. For the Athenians, it meant the right to have your views seriously evaluated, on the basis of facts, regardless of who you are or how skillfully you formulate them.
Can AI deepfakes rescue isegoria from the clutches of our technofeudal dystopia? When we realize that it is impossible to verify a video shared online, will we be forced to judge the value of what is said, not who is saying it? While destroying authenticity, have big tech companies accidentally given isegoria a chance? These questions offer a glimmer of hope.
It is the hope that the specter of democracy might still be looming over our heads, if only we dare to look up, to engage in the slow, hard, democratic work that algorithmic content was meant to destroy: the critical assessment of the views and arguments that are thrown at us. Unfortunately, this hope, while tangible, is not enough as long as our technofeudal masters have two colossal, asymmetrical advantages.
First, they own the agora itself - the servers, the selection of posts, the algorithmic means of communication. They can mark their speech with a digital seal of authenticity while drowning ours in a swamp of doubt and noise. The result? Not isegoria, but a digital divine right in which truth is the patented property of power.
Second, and far more cunningly, they don't need deepfakes to rule. Their ideology is built into the machine itself: the extraction of surplus value from cloud-connected proletarians through our various digital devices, the extraction of cloud rent from the vassal capitalists operating on their platforms, the tyranny of shareholder value, their steady success in privatizing money.
So our task is not to beg the masters to verify content. Our task is political. We must socialize cloud capital, an all-powerful new force that is transforming society and destroying everything that makes humanism conceivable.
Until then, let our digital doppelgangers do the talking. Maybe they will drown out the spectacle so thoroughly that we finally stop listening to voices and start evaluating arguments on their merits. That is perhaps the most paradoxical sliver of hope in this hall of mirrors. But on this merry-go-round, we grasp at every straw.
(Translated by Milica Jovanović)
* Deepfake: a type of synthetic media, video or audio content created using artificial intelligence, in which real people are shown saying or doing something they never did. (ed./transl.)