Brace yourself: the arrival of a superintelligent AI is just around the corner.
A blog post coauthored by OpenAI CEO Sam Altman, President Greg Brockman, and Chief Scientist Ilya Sutskever warned that the development of artificial intelligence requires heavy regulation to avoid potentially catastrophic scenarios.
“Now is a good time to start thinking about the governance of superintelligence,” the authors wrote, noting that future AI systems could be dramatically more capable than even AGI. “Given the picture as we see it now, it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.”
Echoing concerns Altman raised in his own recent testimony before Congress, the three outlined three pillars they consider essential for strategic planning going forward.
The “starting point”
First, OpenAI believes there must be a balance between control and innovation, and pushed for coordination among leading development efforts “that allows us to both maintain safety and help smooth integration of these systems with society.”
Next, they promoted the idea of an “international authority” tasked with inspecting systems, requiring audits, testing for compliance with safety standards, and placing restrictions on deployment and security levels. They point to the International Atomic Energy Agency as a model for what a global AI regulatory body could look like.
Finally, they emphasized the need for the “technical capability” to keep a superintelligence under control and “safe.” What this entails remains nebulous, even to OpenAI, but the post warned against onerous regulatory measures such as licenses and audits for technology that falls below the bar for superintelligence.
In essence, the idea is to keep superintelligence aligned with the intentions of its trainers, preventing a “foom scenario,” a rapid, unstoppable explosion of AI capabilities that outpaces human control.
OpenAI also warns of the potentially catastrophic impact that uncontrolled development of AI models could have on future societies. Other experts in the field have already raised similar concerns, from the “godfather of AI,” Geoffrey Hinton, to founders of AI companies such as Stability AI, and even former OpenAI employees who previously worked on training the GPT LLMs. This urgent call for a proactive approach to AI governance and regulation has caught the attention of regulators around the world.
The Challenge of a “Safe” Superintelligence
OpenAI believes that once these points are addressed, the potential of AI can be exploited more freely for good: “This technology will improve our societies, and the creative ability of everyone to use these new tools will surely surprise us,” they said.
The authors also acknowledged that the field is developing at a rapid pace, and that this is not going to change. “Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work,” the post read.
Despite these challenges, OpenAI’s leadership remains committed to exploring the question of how the technical capability to keep a superintelligence safe can be achieved. The world doesn’t have an answer right now, but it desperately needs one, and it’s not an answer ChatGPT can provide.