“The hand mill gives you fellowship with the feudal lord; the steam mill society with the industrial capitalist,” Karl Marx once said. And he was right. We have seen time and again throughout history how technological inventions determine the dominant mode of production and with it the type of political authority that exists in society.
So what will artificial intelligence give us? Who will take advantage of this new technology, which has not only become a dominant productive force in our societies (much like the hand mill and the steam mill of the past) but which, if we are to believe the news, also appears to be “fast escaping our control”?
Will AI take on a life of its own, as many seem to believe it will, and single-handedly decide the course of our history? Or could it be just another technological invention that serves a particular agenda and benefits a segment of people?
Recently, examples of hyperrealistic, AI-generated content, such as an “interview” with former Formula One world champion Michael Schumacher, who has not spoken to the press since a devastating skiing accident in 2013; “photos” showing former President Donald Trump being arrested in New York; and seemingly authentic student essays “written” by OpenAI’s famous chatbot ChatGPT, have raised serious concerns among intellectuals, politicians and academics about the dangers this new technology poses to our societies.
In March, such concerns led Apple co-founder Steve Wozniak, AI heavyweight Yoshua Bengio and Tesla/Twitter CEO Elon Musk, among many others, to sign an open letter accusing AI labs of being “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control” and calling on AI developers to pause their work. More recently, Geoffrey Hinton – known as one of the three “godfathers of AI” – quit Google “to speak freely about the dangers of AI” and said he, at least in part, regrets his contributions to the field.
We accept that AI – like all era-defining technologies – comes with many downsides and risks, but unlike Wozniak, Bengio, Hinton and others, we do not believe that it can determine the course of history on its own, without any input or guidance from humans. We do not share such concerns because we know that, as is the case with all our other technological devices and systems, our political, social and cultural agendas are also built into AI technologies. As philosopher Donna Haraway explained, “Technology is not neutral. We’re inside of what we make, and it’s inside of us.”
Before we explain why we are not afraid of a so-called AI takeover, we need to define and explain what AI – the technology we are actually dealing with today – is. This is a challenging task, not only because of the complexity of the product at hand but also because of the media’s mythologisation of AI.
What is persistently announced to the public today is that truly thinking machines are (almost) here, and that our everyday world will soon resemble the ones depicted in movies such as 2001: A Space Odyssey, Blade Runner and The Matrix.
This is a false narrative. While we have undoubtedly built more capable computers and calculators, there is no sign that we have created – or come anywhere close to creating – a digital mind that can actually “think”.
Noam Chomsky recently argued (with Ian Roberts and Jeffrey Watumull) in a New York Times article that “we know from the science of linguistics and the philosophy of knowledge that [machine learning programmes like ChatGPT] differ profoundly from how humans reason and use language”. Despite its surprisingly convincing answers to a wide variety of questions, ChatGPT is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question”. Paraphrasing the German philosopher Martin Heidegger (and at the risk of reigniting the old battle between continental and analytical philosophers), we can say, “AI does not think. It simply calculates.”
Federico Faggin, the inventor of the first commercial microprocessor, the fabled Intel 4004, explained this in his 2022 book Irriducibile (Irreducible): “There is a clear distinction between symbolic machine ‘knowledge’ … and human semantic knowledge. The former is objective information that can be copied and shared; the latter is a subjective and private experience that occurs in the intimacy of the conscious being.”
Interpreting the latest theories of quantum physics, Faggin appears to have reached a philosophical conclusion that fits well with ancient Neoplatonism – a feat that may ensure he is forever regarded as a heretic in scientific circles, despite his extraordinary achievements as an inventor.
But what does all this mean for our future? If our super-intelligent Centaur Chiron can never “think” (and therefore can never emerge as an independent force that determines the course of human history), whom exactly will it benefit and endow with political authority? In other words, on what values will its decisions be based?
Chomsky and his colleagues put a similar question to ChatGPT itself.
“As an AI, I do not have moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral,” the chatbot told them. “My lack of moral beliefs is simply a result of my nature as a machine learning model.”
Where have we heard this position before? Isn’t it eerily similar to the ethically neutral vision of hardcore liberalism?
Liberalism seeks to confine to the private sphere of the individual all the religious, civic and political values that proved so dangerous and destructive in the 16th and 17th centuries. It wants all aspects of society to be regulated by a particular – and somewhat mysterious – form of rationality: the market.
AI appears to promote the same brand of mysterious rationality. The truth is, it is emerging as the next global “big business” innovation, one that will take jobs away from humans – making workers, doctors, lawyers, journalists and many others redundant. The moral values of the new bots are the same as those of the market. It is hard to imagine all the possible developments now, but a frightening scenario is already taking shape.
David Krueger, assistant professor of machine learning at the University of Cambridge, recently commented to New Scientist: “Virtually every AI researcher (myself included) has received funding from big tech. At some point, society may stop believing assurances from people with such strong conflicts of interest and conclude, like me, that their dismissal [of warnings about AI] betrays wishful thinking rather than sound counterarguments.”
If society stands up to AI and its proponents, it may yet prove Marx wrong and prevent the defining technological advances of our day from determining who holds political authority.
But for now, AI seems to be here to stay. And its political agenda is fully aligned with that of free market capitalism, whose main (unstated) goal and purpose is to destroy any form of social cohesion and community.
The danger of AI is not that it is an impossible-to-control digital intelligence that destroys our sense of self and reality through the “fake” images, writings, news and histories it creates. The danger is that this undeniably wonderful invention appears to base all of its decisions and actions on the same harmful and dangerous values that drive predatory capitalism.
The views expressed in this article are those of the authors and do not necessarily reflect the editorial stance of Al Jazeera.