Can artificial intelligence be racist?

Experts say AI tools lack transparency and could cause disparities at an unprecedented level.

Illustration by Daniel Zender for Yahoo News

What’s happening

The rise of artificial intelligence has led to the creation of generative AI tools, like ChatGPT, that provide automated predictions based on large amounts of data. But as the use of AI reaches new heights, experts say the booming technology can amplify racial bias and discriminatory practices.

“Those tools are trained to make predictions based on historical data of what's happening or has happened before. So automated predictions will mirror and amplify the existing discrimination in the context in which it's used,” Olga Akselrod, senior staff attorney in the Racial Justice Program at the American Civil Liberties Union, told Yahoo News.

In a 2020 study from Cambridge University, researchers found that AI can create unequal opportunities for marginalized groups. But AI continues to grow despite the inequities: 35% of companies are currently using AI, and another 42% are exploring future adoption of the technology, according to Tech Jury.

“Predictive technologies, such as artificial intelligence, have been implemented in virtually every facet of our day, by both government and private entities, and impact truly critical decisions, such as who gets a job, who gets a loan, who goes to jail, and a host of other decisions,” Akselrod said.

Why there’s debate

According to experts, AI tools lack transparency and could cause disparities at an unprecedented level.

“Predictive tools pose a great threat to civil rights protections [because] they are used at an incredible scale that is sort of unmatched by individual decisions of the past, or even systemic decisions that weren't made with the kind of speed and frequency that decisions are made today, using predictive tools,” Akselrod said.

Since racial and economic inequities already exist in society, Akselrod says AI tools will only add to that burden.

“Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria,” Darrell West and John Allen wrote in a report published by the Brookings Institution.

But according to a recent Pew Research poll, more than 50% of Americans think racial bias in workplaces will decline if employers use AI during the hiring process, and that AI will ultimately help fight discrimination.

Broderick Turner, director of the Technology, Race and Prejudice Lab, says AI is not racist, because it is only a tool. “However, depending on the data and rules it is trained on — both created by humans — it can be used in a racist manner,” Turner said at an Assembly Talk at Harvard.

What’s next

In July, President Biden announced plans to work alongside seven AI development companies on guidelines intended to ensure the safe and trustworthy development of AI systems.

"Realizing the promise of AI by managing the risks is going to require new laws, regulations and oversight," Biden said on July 21. "In the weeks ahead, I’m going to continue to take executive action and help America lead the way to responsible innovation." He also called on Congress to pass AI legislation.

But Akselrod says the government is playing catch-up. “These more modern tools of discrimination have not yet been met with the regulation, legislation and government enforcement that's needed to protect civil rights and civil liberties,” she said.

Perspectives

AI 'learns by example'

“AI is just software that learns by example. So, if you give it examples that contain or reflect certain kinds of biases or discriminatory attitudes ... you’re going to get outputs that resemble that.” — Reid Blackman, author of "Ethical Machines," on CNN

'AI has a race problem'

"AI has a race problem. What it tells us is that AI research, development and production is really driven by people that are blind to the impact that race and racism has on shaping not just technological processes, but our lives in general." — Mutale Nkonde, former journalist and technology policy expert who runs the nonprofit AI for the People, to CBC News

AI creates new roadblocks for marginalized groups

“Artificial intelligence, invisible but pervasive, affects vast swaths of American society and will affect many more. Biden must ensure that racial equity is prioritized in AI development.” — ReNika Moore, director of the Racial Justice Program at the ACLU

Solutions are top of mind for experts

“The solution isn't just to make tech more inclusive, but to root out the algorithms that inherently classify certain demographics as 'other.' There is a need for accountability and transparency in AI, as well as diversity in the development of AI systems. Regulatory oversight is also a critical part of this solution. Without these changes, the future could see current racial inequities become increasingly entrenched in our digital infrastructure.” — Meredith Broussard, data journalism professor at New York University, to Yahoo News

AI systems can have gender and racial bias

“We often assume machines are neutral, but they aren’t. My research uncovered large gender and racial bias in AI systems sold by tech giants like IBM, Microsoft, and Amazon. Given the task of guessing the gender of a face, all companies performed substantially better on male faces than female faces. The companies I evaluated had error rates of no more than 1% for lighter-skinned men. For darker-skinned women, the errors soared to 35%.” — Joy Buolamwini, founder of the Algorithmic Justice League, Time

The dangers of AI should be prevented before it's released to the public

“Fundamentally, we have to have a robust human and civil rights framework for evaluating these technologies. And I think, you know, they shouldn't be allowed into the marketplace to propagate harm, and then we find out after the fact that they are dangerous, and we have to do the work of trying to recall them.” — Safiya Noble, professor of gender studies and African American studies at UCLA, to NPR