On May 16, the US Senate Subcommittee on Privacy, Technology, and the Law held a hearing to discuss the regulation of artificial intelligence (AI) algorithms. The chairman of the subcommittee, Sen. Richard Blumenthal (D-Conn.), said that “artificial intelligence urgently needs rules and safeguards to address its enormous promise and pitfalls.” During the hearing, OpenAI CEO Sam Altman said, “If this technology goes wrong, it can go quite wrong.”
As the capabilities of AI algorithms become more advanced, some voices in Silicon Valley and beyond are warning of the hypothetical threat of “superhuman” AI destroying human civilization. Think Skynet. But these vague concerns receive an outsized amount of airtime, while the real, concrete but not very “sci-fi” dangers of AI bias are largely ignored. These dangers are not hypothetical, and they are not in the future: They are here now.
I am an AI scientist and physician who has focused my career on understanding how AI algorithms perpetuate biases in the medical system. In a recent publication, I showed that previously developed AI algorithms for detecting skin cancers performed significantly worse on images of skin cancer in brown and Black skin, which could cause misdiagnoses in patients of color. These dermatology algorithms are not yet in clinical practice, but several companies are working to gain regulatory approval for AI in dermatology applications. In talking to companies in this space as a researcher and consultant, I’ve learned that many continue to underrepresent diverse skin tones when building their algorithms, despite research showing how doing so may cause biased performance.
Outside of dermatology, medical algorithms that have already been deployed have caused significant harm. A 2019 paper published in Science analyzed the predictions of a proprietary algorithm already deployed on millions of patients. The algorithm is intended to help predict which patients have complex needs and should receive additional support, by assigning each patient a risk score. But the study found that for any given risk score, Black patients were actually sicker than white patients. The algorithm is biased, and when followed, it results in fewer resources being allocated to Black patients who should have qualified for additional care.
The risks of AI bias extend beyond medicine. In criminal justice, algorithms are used to predict which individuals who have previously committed a crime are most at risk of reoffending within the next two years. While the inner workings of one such algorithm are unknown, studies have found that it has racial biases: Black defendants who did not recidivate were incorrectly predicted to reoffend at twice the rate of white defendants who did not recidivate. AI-based facial recognition technologies are known to perform worse on people of color, and yet, they are already being used and leading to arrests and jail time for innocent people. For Michael Oliver, one of the men wrongly arrested because of AI-based facial recognition, the false accusation caused him to lose his job and disrupted his life.
Some say that because people themselves are biased, algorithms can provide more “objective” decision-making. But when these algorithms are trained on biased data, they produce outputs at least as biased as those of human decision-makers in the best-case scenario — and can amplify those biases in the worst. Yes, society is already biased, but don’t we want to build our technology to be better than the current broken reality?
As AI continues to enter more avenues of society, it’s not the Terminator we need to worry about. It is us, and the models that reflect and reinforce the most unequal aspects of our society. We need legislation and regulation that promotes deliberate and thoughtful model development and ensures that technology leads to a better world, rather than a more unjust one. As the Senate subcommittee continues to ponder AI regulation, I hope its members realize that the dangers of AI are already here. These biases — in already deployed and in future algorithms — must be addressed now.
Roxana Daneshjou, MD, Ph.D., is a board-certified dermatologist and postdoctoral scholar in Biomedical Data Science at the Stanford School of Medicine. She is a Paul and Daisy Soros fellow and a Public Voices fellow at The OpEd Project. Follow her on Twitter @RoxanaDaneshjou.
Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or distributed.