Last November, state police in Georgia stopped a young African-American man, Randal Quran Reid, on his way to Atlanta. He was arrested on warrants issued by Louisiana police for two thefts in New Orleans. Reid had never set foot in Louisiana, let alone New Orleans. His protests were ignored, and he spent six days in jail while his family spent thousands of dollars on lawyers in Georgia and Louisiana to get him out.
It turned out that the arrest warrant was based solely on a software-generated facial match, though no police document mentioned this: the warrant said only that Reid had been identified as the perpetrator by a "credible source". When the facial recognition result was shown to be wrong, the case collapsed and Reid was released.
He was lucky to have a family with the resources to clear the matter up. Millions of Americans could not count on such social and financial support in a similar situation. Reid, however, is not the only victim of faulty facial recognition. The numbers are small, but so far everyone arrested in the US on the basis of a false facial match has been black. This is not surprising: we know not only that facial recognition software is, by design, worse at correctly identifying people of color, but also that the algorithms replicate the biases of the human world.
Reid's case, and others like it, should be at the center of one of the most urgent contemporary debates - about artificial intelligence and the risks it brings. Yet such stories remain marginal, and few consider them particularly significant, which shows how distorted the AI debate is and how badly it needs a reset. We have long nursed fears about the world artificial intelligence might create, and we have carried those fears into the conversation about smart technologies. The emergence of a new generation of chatbots last year, followed by the latest version in March, has provoked both awe and panic: awe at their perfected mimicry of human language, and panic at the possible falsification of everything from school essays to news reports.
Two weeks ago, prominent members of the tech community raised the level of alarm. Sam Altman, CEO of OpenAI, the company that makes ChatGPT, Demis Hassabis, CEO of Google DeepMind, and Geoffrey Hinton and Yoshua Bengio, often called the godfathers of modern artificial intelligence, were among those who issued a joint statement claiming that artificial intelligence could herald the end of humanity. "Mitigating the risk of extinction from AI", they warned, "should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".
If so many people in Silicon Valley really believe, as they claim, that they are building something very dangerous, why do they keep spending billions of dollars to design, develop and refine these products? It is like a drug addict begging to be forcibly sent to rehab. Promoting these products as super-smart and super-powerful certainly flatters the egos of tech entrepreneurs, quite apart from swelling their profits. And yet artificial intelligence is neither that smart nor that powerful. ChatGPT is extremely adept at generating text that appears human, but it has negligible understanding of the real world. It is, as one study put it, little more than a "stochastic parrot".
We are far from the holy grail of "artificial general intelligence" - a machine able to understand or learn any intellectual task, and so to manifest even a basic, let alone a superior, form of human intelligence.
The obsession with imagined threats masks more mundane but far more important problems with artificial intelligence - the ones that landed Reid in trouble and could befall us all. Those problems remind us that we already live in a world shaped by artificial intelligence, from surveillance to disinformation. A key feature of this "new world of ambient surveillance", the tech entrepreneur Maciej Cegłowski told a US Senate committee hearing, is that "we cannot opt out of it, any more than we might opt out of automobile culture by refusing to drive". We have slipped into a digital panopticon almost without realizing it. And yet to suggest that we live in a world shaped by artificial intelligence is to misstate the problem. There is no machine without a human, and nor is there likely to be.
The reason Reid was wrongfully jailed has less to do with artificial intelligence than with decisions made by humans. The humans who created the software and trained it. The humans who deployed it. The humans who accepted the facial recognition result without question. The humans who obtained an arrest warrant by claiming Reid had been identified by a "credible source". The humans who refused to reconsider the identification even as Reid insisted they were wrong. And so on.
When we talk about the "problem" of artificial intelligence, we too often leave the human out of the picture. We practice a kind of "moral outsourcing", as the sociologist and programmer Rumman Chowdhury puts it: we blame machines for human decisions. We worry that AI will "eliminate jobs" and make millions of workers redundant, rather than recognizing that the real decisions are made by governments and corporations and the people who run them. Headlines warn in panic about "racist and sexist algorithms", while the humans who built those algorithms, and those who deploy them, remain almost invisible.
In other words, we have begun to treat the machine as the agent and people as the victims of its work. Ironically, it is our fears of dystopia, not artificial intelligence itself, that help create a world in which humans are marginal and machines central. Such fears also distort the possibilities of regulation. Instead of seeing regulation as a means by which we can collectively shape our relationship to artificial intelligence and new technologies, law becomes something imposed from above to protect humans from machines. What should worry us is not so much artificial intelligence as our own fatalism, and our blindness to the ways human societies are already using machine intelligence for political ends.
(The Guardian; Peščanik.net; translation: M. Jovanović)