For the second time this month, OpenAI CEO Sam Altman went to Washington to discuss artificial intelligence with US lawmakers. Altman appeared before the US Senate Committee on the Judiciary along with IBM’s Chief Privacy & Trust Officer Christina Montgomery and Gary Marcus, Professor Emeritus at New York University.
When Louisiana Senator John Kennedy asked how they would regulate AI, Altman said a government office should be formed and tasked with setting standards.
“I would create a new agency that licenses any effort above a certain threshold of capabilities, and that can take that license away and ensure compliance with safety standards,” said Altman, adding that the agency should require independent audits of any AI technology.
“Not just from the company or the agency, but experts who can say the model is or is not in compliance with these stated safety thresholds and these percentages of performance on question X or Y,” he said.
While Altman said the government should regulate technology, he balked at the idea of leading the agency himself. “I love my job now,” he said.
Using the FDA as an example, Professor Marcus said there should be a safety review for artificial intelligence similar to how drugs are reviewed before being allowed to go on the market.
“If you’re going to introduce something to 100 million people, someone has to have eyes on it,” added Professor Marcus.
The agency, he said, must be agile and able to keep pace with the industry. It would review projects before launch and again after they are released to the world, with the authority to recall the technology if necessary.
“It comes back to transparency and clarity in AI,” IBM’s Montgomery added. “We need to define the highest-risk uses of AI, [and] require things like impact assessments and transparency, require companies to show their work, and protect the data used to train AI in the first place.”
Governments around the world continue to grapple with the mainstream rise of artificial intelligence. Last December, the European Union advanced its AI Act, which promotes regulatory sandboxes established by public authorities to test AI before it is released.
“To ensure a human-centric and ethical development of artificial intelligence in Europe, MEPs endorse new transparency and risk-management rules for AI systems,” wrote the European Parliament.
In March, citing privacy concerns, Italy banned OpenAI’s ChatGPT. The ban was lifted in April after OpenAI changed its privacy settings to let users opt out of having their data used to train the chatbot and to turn off their chat history.
“On the question of whether we need an independent agency, I don’t think we want to slow down regulation to address the real risks right now,” Montgomery continued, adding that regulatory authorities already exist that can regulate their respective fields.
She acknowledged, however, that those regulatory bodies are under-resourced and lack the necessary powers.
“AI should be regulated at the point of risk, essentially,” Montgomery said. “And that’s the point where technology meets society.”