The much-anticipated arrival of generative AI has renewed a familiar debate about trust and safety: Can technology executives be trusted to act in society’s best interests?
Because its training data is created by humans, AI is inherently biased and therefore subject to our own imperfect, emotional way of seeing the world. We are well aware of the risks, from reinforcing discrimination and racial inequality to promoting polarization.
OpenAI CEO Sam Altman has asked for our “patience and good faith” as the company works to “make it right.”
For decades, we’ve put our faith in tech execs at our peril: Because they built the technology, we believed them when they said they could fix it. Yet trust in tech companies continues to decline: according to the 2023 Edelman Trust Barometer, 65% of people globally worry that technology will make it impossible to know whether what they see or hear is real.
It’s time for Silicon Valley to embrace a different approach to earning our trust — one that has proven effective in the country’s legal system.
A justice system approach to trust and legitimacy
Grounded in social psychology, procedural justice is based on research showing that people regard institutions and actors as more trustworthy and legitimate when they are listened to and experience neutral, unbiased and transparent decision-making.
Four key areas of procedural justice are:
- Neutrality: Decisions are unbiased and guided by transparent reasoning.
- Respect: Everyone is treated with respect and dignity.
- Voice: Everyone has a chance to tell their side of the story.
- Credibility: Decision makers convey trustworthy motives toward those affected by their decisions.
Using this framework, police departments have improved trust and cooperation with their communities, and some social media companies have begun applying these ideas to shape their approaches to governance and content moderation.
Here are some ideas for how AI companies can adapt this framework to build trust and legitimacy.
Build the right team to answer the right questions
As UCLA Professor Safiya Noble has argued, the questions surrounding algorithmic bias cannot be solved by engineers alone, because they are systemic social problems that require humanistic perspectives from outside any one company to ensure societal dialogue, consensus and, ultimately, regulation, both self-imposed and governmental.
In “System Error: Where Big Tech Went Wrong and How We Can Reboot,” three Stanford professors critically examine the shortcomings of computer science training and engineering culture, whose obsession with optimization often pushes aside the values at the core of a democratic society.
In a blog post, OpenAI says it values societal input: “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”
However, the company’s hiring page and CEO Sam Altman’s tweets show that the company is hiring mostly machine learning engineers and computer scientists because “ChatGPT has an ambitious roadmap and is bottlenecked by engineering.”
Are these computer scientists and engineers equipped to make decisions that, as OpenAI puts it, “will require more caution than society typically applies to new technologies”?
Tech companies need to hire multi-disciplinary teams that include social scientists who understand the effects of technology on people and society. With different perspectives on how to train AI applications and implement safety parameters, companies can articulate clear reasoning for their decisions. This, in turn, will improve the public perception of technology as neutral and trustworthy.
Include outside views
Another element of procedural justice is giving people a role in the decision-making process. In a recent blog post on how the company is addressing bias, OpenAI said it seeks “external input on our technology,” pointing to a recent red-teaming exercise, a process of assessing risk through an adversarial approach.
While red teaming is an important process for assessing risk, it must include genuinely external input. In OpenAI’s red-teaming exercise, 82 of the 103 participants were employees. Of the remaining participants, most were computer science scholars from predominantly Western universities. To gain diverse perspectives, companies must look far beyond their own employees, disciplines and geographies.
They can also gather more direct feedback on AI products by giving users greater control over how the AI behaves, and they might consider providing opportunities for public comment on new policies or product changes.
Companies must ensure that all rules and related safety processes are transparent and convey credible motives for how decisions are made. For example, it is important to provide the public with information about how applications are trained, where the training data comes from, what role humans play in the training process and what safety layers exist to minimize misuse.
Allowing researchers to audit and understand AI models is key to building trust.
Altman captured this in a recent interview with ABC News when he said, “Society, I think, has a limited amount of time to figure out how to respond to that, how to regulate that, how it’s handled.”
By taking a procedural justice approach, rather than relying on the opacity and blind faith demanded by the technology pioneers before them, companies building AI platforms can engage society in the process and earn, not demand, trust and legitimacy.