On Monday, Ethereum creator Vitalik Buterin reflected on his own take on “techno-optimism,” inspired by Marc Andreessen, who opined about AI in his Techno-Optimist Manifesto last October. While Buterin agreed with Andreessen’s positive outlook, Buterin also noted the importance of how AI is developed and the future direction of the technology.
Buterin acknowledged the existential danger posed by artificial intelligence, including the possibility that it could cause the extinction of the human race.
“This is a serious claim: as much damage as the worst-case scenario of climate change, an artificial pandemic, or a nuclear war could cause, there would still be many islands of civilization left intact to pick up the pieces,” he said.
“But a superintelligent AI, if it decides to turn against us, may leave no survivors and end humanity for good,” Buterin said. “Even Mars may not be safe.”
Buterin pointed to a 2022 survey by AI Impacts, in which participants put the odds of human extinction at 5% from AI itself and 10% from humanity’s failure to control AI, respectively. He said an open-source movement focused on security is better suited to lead the development of AI than closed, proprietary corporations and venture capital funds.
“If we want a future that is both superintelligent and ‘human’ – one where people are not just pets, but actually retain meaningful agency over the world – then something like this feels like the most natural option,” he said.
What is needed, Buterin continued, is active human intention to choose the direction and the outcome. “The ‘maximize profit’ formula doesn’t come to them automatically,” he said.
Buterin said he loves technology because it expands human potential, pointing to the history of innovations from hand tools to smartphones.
“I believe that these things are very good, and that extending humanity’s reach to the planets and stars is very good, because I believe that humanity is very good,” Buterin said.
Buterin said that while he believes transformative technology will lead to a brighter future for humanity, he rejects the idea that the ideal world is simply today’s world with less greed and more public health care.
“There are certain types of technology that much more reliably make the world better than other types of technology,” Buterin said. “There are certain types of technology that can, if developed, mitigate the negative effects of other types of technology.”
Buterin warned about the rise of digital authoritarianism and surveillance technology wielded against those who oppose or resist a government controlled by a small cabal of technocrats. He said most people would rather see highly advanced AI delayed by a decade than monopolized by a single group.
“My basic fear is that the same kinds of managerial technologies that allow OpenAI to serve over a hundred million customers with 500 employees will also allow a 500-person political elite, or even a 5-person board, to maintain an iron fist over an entire country,” he said.
While Buterin said he is sympathetic to the effective accelerationism (also known as “e/acc”) movement, he has mixed feelings about its enthusiasm for military technology.
“Enthusiasm about modern military technology as a force for good seems to require the belief that the dominant technological power will reliably be one of the good guys in most conflicts, now and in the future,” he said, citing the idea that military technology is good because it is built and controlled by America, and America is good.
“Does being an e/acc require being an America maximalist, betting everything on the present and future morality of the government and the future success of the country?” he said.
Buterin cautioned against giving “extreme and opaque power” to a small group of people in the hope that they will use it wisely, preferring instead a philosophy of “d/acc” – defense, decentralization, democracy, and diversity. This thinking, he said, can appeal to effective altruists, libertarians, pluralists, blockchain advocates, and solarpunks and lunarpunks alike.
“A defense-favored world is a better world, for many reasons,” Buterin said. “First of course is the direct safety benefit: fewer people die, less economic value is destroyed, less time is wasted in conflict.
“What is less appreciated though is that a defense-favored world makes it easier for healthier, more open and more freedom-respecting forms of governance to flourish,” he concluded.
While he emphasized the need to build and accelerate, Buterin said society should always question what it is accelerating toward. He suggested that the 21st century could be “the pivotal century” for mankind, one that may decide the fate of humanity for millennia to come.
“These are challenging problems,” Buterin said. “But I look forward to watching and participating in the great collective effort of our species to find the answers.”
Edited by Ryan Ozawa.