
What is the political agenda of artificial intelligence?

Can AI alone decide the course of our history? Or will it end up as another technological invention that benefits only a certain group of people?

Photo: Shutterstock

“The hand mill gives you society with the feudal lord; the steam mill, society with the industrial capitalist,” Karl Marx once said. And he was right. We have seen again and again throughout history how technological achievements determine the dominant mode of production and, with it, the type of political power present in society.

So, what will artificial intelligence (AI) give us? Who will capitalize on this new technology, which is not only becoming a dominant productive force in our societies (as the hand mill and the steam mill once were) but also, as we read in the news, appears to be "fast escaping our control"?

Can AI take on a life of its own, as many believe it will, and single-handedly determine the course of our history? Or will it end up as just another in a long line of technological inventions that serve a particular political agenda and benefit a particular group of people?

Recent examples of hyper-realistic AI-generated content have caused great concern among intellectuals, politicians and academics about the dangers the new technology could pose to our societies: an "interview" with former Formula 1 champion Michael Schumacher, who has been unable to speak to the media since a serious skiing accident in 2013; a "photograph" of the arrest of former US President Donald Trump; and seemingly authentic student essays "written" by the famous chatbot ChatGPT.

In March, such concerns prompted Apple co-founder Steve Wozniak, AI expert Yoshua Bengio and Tesla/Twitter CEO Elon Musk (among others) to sign an open letter accusing AI labs of being "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict or reliably control", and calling for a pause in AI development. More recently, Geoffrey Hinton, known as one of the three "godfathers of AI", quit his job at Google in order to "speak freely about the dangers of AI", adding that he regrets, at least in part, his contribution to the field.

We accept that AI - like every technology that marks an era - comes with significant drawbacks and dangers, but, unlike Wozniak, Bengio, Hinton and others, we do not believe it can determine the course of history on its own, without any human guidance. We do not share their concerns because we know that, just as with all of our other technological systems and devices, our political, social and cultural agendas are embedded in AI technologies as well.

As philosopher Donna Haraway explains, “technology is not neutral. We are inside what we make and it is inside us”.

Before we explain further why we are not afraid of a so-called AI takeover, we need to define what the AI we are dealing with today actually is. This is a difficult task, not only because of the complexity of the technology itself but also because of the media's mythologizing of AI.

What is constantly being marketed to the public today is the idea that conscious machines are (almost) here, and that our daily lives will soon resemble those depicted in films such as "2001: A Space Odyssey", "Blade Runner" and "The Matrix".

That's the wrong narrative. Although we are undoubtedly building ever more capable computers and calculators, there is no indication that we have created - or are close to creating - a digital mind that can truly "think".

Noam Chomsky recently wrote (along with Ian Roberts and Jeffrey Watumull) in a New York Times article that “we know from the science of linguistics and the philosophy of knowledge that [machine learning programs like ChatGPT] differ profoundly from the way humans reason and use language”. Despite its uncannily convincing answers to all manner of questions, ChatGPT is “a massive statistical pattern-matching engine, powered by terabytes of data, that extrapolates the most likely conversational response or most probable answer to a scientific question”. Paraphrasing the German philosopher Martin Heidegger (and at the risk of reigniting the centuries-old feud between continental and analytic philosophers), we could say: AI doesn't think, it simply calculates.

Federico Faggin, the inventor of the first commercial microprocessor, the legendary Intel 4004, makes this clear in his 2022 book Irriducibile ("Irreducible"): "There is a clear distinction between symbolic machine 'knowledge'... and human semantic knowledge. The former is objective information that can be copied and shared; the latter is a subjective and private experience that occurs in the intimacy of the sentient being."

Interpreting the latest theories in quantum physics, Faggin arrives at a philosophical conclusion that sits remarkably well with ancient Neoplatonism - a conclusion that may forever brand him a heretic in scientific circles, despite his remarkable achievements as an inventor.

But what does all this mean for our future? If our super-intelligent centaur Chiron cannot really "think" (and thus cannot become an independent force capable of determining the course of human history), whom will it serve and to whom will it give political power? In other words, on whose values will it depend?

Chomsky and his colleagues posed a similar question to ChatGPT.

“As an AI, I have no moral beliefs or the ability to make moral judgements, so I cannot be considered either moral or immoral. My lack of moral convictions is simply a result of my nature as a machine learning model,” the chatbot answered them.

Where have we heard this attitude before? Isn't it eerily similar to the ethically neutral vision of hardline liberalism?

Liberalism tends to confine to the private sphere of the individual all the religious, civil and political values that proved so dangerous during the 16th and 17th centuries. It wants all aspects of society to be regulated by a particular - and somehow mysterious - form of rationality: the market.

AI seems to promote exactly that form of mysterious rationality. True, it is arriving as the new global "big business" innovation that will take jobs away from people, making manual workers, doctors, bartenders, journalists and others redundant. But the moral values of the new bots are identical to those of the market. It is hard to imagine all the possible scenarios now, yet a troubling one is already taking shape.

David Krueger, assistant professor of machine learning at the University of Cambridge, recently commented in New Scientist: "Basically, every AI researcher (including me) is paid by big tech companies. At some point, society may stop believing the reassurances of people with such serious conflicts of interest and conclude, as I have, that their dismissal [of AI warnings] reflects wishful thinking rather than real counter-arguments."

If society were to oppose AI and the interests promoting it, it could prove Marx wrong and prevent the leading technological development of this age from deciding who holds political power.

But AI does not seem to be going anywhere for now. And its political agenda is fully synchronized with that of free-market capitalism, whose main (unstated) goal and purpose is to tear apart every form of social solidarity and community.

The danger of AI is not that it is an "uncontrollable" digital intelligence that could destroy our sense of self and of truth through the "fake" images, essays, news reports and histories it creates. The danger lies in the fact that this undeniably monumental invention bases all of its decisions and actions on the same dangerous values that drive predatory capitalism.

(Al Jazeera)


(Opinions and views published in the "Columns" section are not necessarily the views of the "Vijesti" editorial office.)