We already knew an artificial intelligence could reflect the racial bias of its creator.
But San Francisco thinks the tech could also do the opposite: identify and counteract racial prejudice. The city plans to put that theory to the test in a way that could change the legal system forever.
On Wednesday, San Francisco District Attorney George Gascón announced that city prosecutors will begin using an AI-powered “bias-mitigation tool” created by Stanford University researchers on July 1.
The tool analyzes police reports and automatically redacts any information that may allude to an individual’s race. This could include their last name, eye color, hair color, or location.
It also removes any information that might identify the law enforcement officers involved in the case, such as their badge numbers, a DA spokesperson told The Verge.
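To make the idea concrete, here is a toy sketch of field-based redaction. This is not the Stanford tool, which reportedly works on free-text police reports; the field names and the `[REDACTED]` marker here are purely hypothetical, chosen to mirror the categories the article lists (last name, eye color, hair color, location, badge number).

```python
# Hypothetical illustration of redacting fields that could allude to a
# suspect's race or identify the officer. NOT the actual Stanford tool.

REDACTED_FIELDS = {"last_name", "eye_color", "hair_color", "location", "badge_number"}

def redact_report(report: dict) -> dict:
    """Return a copy of the report with identifying fields masked."""
    return {
        field: "[REDACTED]" if field in REDACTED_FIELDS else value
        for field, value in report.items()
    }

report = {"last_name": "Doe", "eye_color": "brown", "charge_code": "459 PC"}
print(redact_report(report))
# {'last_name': '[REDACTED]', 'eye_color': '[REDACTED]', 'charge_code': '459 PC'}
```

The real system faces a harder problem: race-alluding details are scattered through narrative text, not neatly labeled fields, which is why it takes machine learning rather than a simple lookup like this one.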
Prosecutors will look at these redacted reports, record their decision on whether to charge a suspect, and then see the unredacted report before making their final charging decision.
According to Gascón, tracking changes between the first and final decisions could help the DA suss out any racial bias in the charging process.
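The audit step the article describes amounts to comparing each blind decision with its final counterpart and tallying the cases where seeing the full report changed the outcome. A minimal sketch of that comparison, with entirely hypothetical field names and labels:

```python
# Hypothetical sketch of the DA's audit: count how often a prosecutor's
# final decision (made with the unredacted report) diverged from the
# blind decision (made with the redacted one).

from collections import Counter

def audit_decisions(cases: list[dict]) -> Counter:
    """Tally decision reversals, keyed as 'blind->final'."""
    changes = Counter()
    for case in cases:
        if case["blind_decision"] != case["final_decision"]:
            changes[f'{case["blind_decision"]}->{case["final_decision"]}'] += 1
    return changes

cases = [
    {"blind_decision": "charge", "final_decision": "charge"},
    {"blind_decision": "decline", "final_decision": "charge"},
    {"blind_decision": "charge", "final_decision": "decline"},
    {"blind_decision": "decline", "final_decision": "charge"},
]
print(audit_decisions(cases))
# Counter({'decline->charge': 2, 'charge->decline': 1})
```

A systematic skew in one direction for suspects of a particular race, visible only after unredaction, is the kind of pattern such tracking could surface.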
“This technology will reduce the threat that implicit bias poses to the purity of decisions which have serious ramifications for the accused,” Gascón said in a statement, according to the San Francisco Examiner. “That will help make our system of justice more fair and just.”
READ MORE: San Francisco says it will use AI to reduce bias when charging people with crimes [The Verge]