
Technology and Democracy

Debates about the risks of powerful tools raise the question of social control: who deserves the right to know and to decide? Will artificial intelligence be our undoing because of too much democracy, or too little?


On the eve of the First World War, in 1914, H. G. Wells published a novel in which he contemplated the prospect of even greater devastation. In "The World Set Free", 30 years before the Manhattan Project, Wells imagined the emergence of atomic weapons that would allow a man to "carry about in a handbag an amount of latent energy sufficient to wreck half a city". A global war breaks out, leading to an atomic apocalypse. For peace to reign, a "world government" must be established.

What troubled Wells was not only the risks of the new technology, but also the dangers of democracy. Wells's world government is not created by democratic will; it is imposed as a benevolent dictatorship. "Those who are governed will show their consent by their silence," Wells's King Egbert declares menacingly. For Wells, the "common man" was "a violent fool in social and public affairs". Only an educated, scientifically minded elite could "save democracy from itself".

A century later, another technology inspires a similar mix of awe and anxiety: artificial intelligence. From Silicon Valley boardrooms to the halls of Davos, political leaders, tech moguls and academics alike are in thrall to the enormous benefits artificial intelligence will bring, while fearing it might also herald the downfall of humanity as superintelligent machines take over the world. And just as a hundred years ago, questions of democracy and social control sit at the center of the debate.

In 2015, the journalist Steven Levy interviewed Elon Musk and Sam Altman, two of the founders of OpenAI, the technology company that captured the public's attention two years ago when it launched ChatGPT, a seemingly human-like chatbot. Fearing the possible consequences of artificial intelligence, a group of influential Silicon Valley figures had founded the company as a non-profit charity, with the goal of developing the technology ethically, for the benefit of "all humanity".

Levy asked Musk and Altman about the future of artificial intelligence. "There are two schools of thought," Musk said. "Do you want many AIs, or a small number of AIs? We think many is probably good."

"If I'm an evil scientist and I get to use AI, won't that just make me more powerful?" Levy asked. The evil scientist is more likely to be empowered, Altman replied, if only a small number of people control the technology: "Then we're really in trouble."

In reality, that "sauce" is made by the technology companies themselves. Musk, who left OpenAI's board of directors six years ago to develop his own AI projects, is now suing his former company for breach of contract for putting profit above the public good and not developing AI "for the benefit of humanity."

Namely, in 2019 OpenAI established a for-profit subsidiary so that it could raise money from investors, most notably Microsoft. When ChatGPT was launched in 2022, the way the model works was not disclosed. It was necessary to be less open, said Ilya Sutskever, one of the company's founders and at the time its chief scientist, in response to criticism, in order to prevent those with evil intentions from using it "to cause great harm". Fear of the technology had become a shield against scrutiny.

In response to Musk's lawsuit, OpenAI last week released a series of emails exchanged between Musk and other board members. The emails make it clear that all board members agreed from the beginning that OpenAI could not really be open.

As artificial intelligence develops, Sutskever wrote to Musk, "it makes sense to be less open. 'Open' in OpenAI means that everyone should benefit from the fruits of artificial intelligence after it's built, but it's perfectly fine not to share the science." "Yes," Musk replied. Whatever his lawsuit claims, Musk is no more committed to openness than other tech moguls. His legal challenge to the company is less an attempt to establish accountability than a power struggle within Silicon Valley.

Wells wrote his "Free World" at a time of great political turmoil, when many questioned whether it was wise to extend the right to vote to the working class.

"Was it desirable, was it at all safe to entrust [the masses]," asked the Fabian Beatrice Webb, "with the ballot box, the formation and control of the government of Great Britain with her vast wealth and her distant colonies?" That was also the question. in the center of Wells's novel - who can be trusted with the future?

A century later, we are once again engaged in a heated debate about the virtues of democracy. For some, the political turmoil of recent years is the product of an excess of democracy, in which the irrational and the uneducated are allowed to make important decisions. "It is unfair to put the responsibility for making historic decisions of great complexity and sophistication on unqualified fools," said Richard Dawkins after the Brexit referendum, a sentiment with which Wells would have agreed.

For others, it is precisely this disdain for ordinary people that has fueled democracy's deficit, leaving entire sections of the population feeling denied a say in how society is governed.

It is a disdain that also sharpens debates about technology. As in "The World Set Free", the discussion of artificial intelligence is not only about the technology itself, but also about questions of openness and control. Despite the hype, we are far from "superintelligent" machines. Today's AI models, such as ChatGPT or Claude 3, unveiled last week by another AI company, Anthropic, are so good at predicting the next word in a sequence that they can lead us to imagine they are capable of carrying on a human conversation. They are not, however, intelligent in any human sense, they have a negligible understanding of the real world, and they will not wipe out the human species.

The problems raised by AI are not existential but social. From algorithmic bias to mass surveillance, from disinformation and censorship to copyright theft, our concern should not be that machines may one day exercise power over humans, but that they already do so, entrenching inequality and injustice and handing those in power the tools to consolidate it.

That is why what we might call the "Egbert maneuver" - the insistence that some technologies are so dangerous that they must be shielded from democratic pressure and placed under the control of a select few - is so menacing. Our problem is not just the odd evil scientist, but all those for whom fear of the evil scientist serves as a shield against scrutiny.

(The Guardian; Peščanik.net; translation: M. Jovanović)

