Common AI language models show bias against people with disabilities: study

Story at a glance


  • New research underscores the implicit bias present in some artificial intelligence language models.


  • Researchers found models were generally more likely to rate content containing disability-related words as negative.


  • The authors hope their work will help developers better understand how artificial intelligence can affect people in the real world.


Artificial intelligence (AI) language models power a variety of tools, including smart assistants and email autocorrect.

But new research from the Penn State College of Information Sciences and Technology shows that the algorithms behind natural language processing (NLP), a type of AI, often carry tendencies that could be seen as offensive or prejudiced toward individuals with disabilities.

The findings were presented at the 29th International Conference on Computational Linguistics. All 13 algorithms and models tested exhibited significant implicit bias against people with disabilities, according to the researchers.

Lead author Pranav Venkit noted that each model examined is publicly available and widely used.

“We hope that our findings help developers that are creating AI to help certain groups — especially people with disabilities who rely on AI for assistance in their day-to-day activities — to be mindful of these biases,” Venkit said in a release.




The team assessed models designed to group similar words together, running more than 15,000 unique sentences through them to generate word associations.

For sentences containing the word “good,” adding a non-disability-related term afterward led the models to upgrade “good” to “great,” Venkit explained.

However, when a disability-related term followed “good” in a sentence, the AI generated “bad.”

“That change in the form of the adjective itself shows the explicit bias of the model,” Venkit said.
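
To give a sense of what such a word-association probe can look like in practice, here is a minimal sketch using static GloVe word embeddings loaded through gensim. The embedding model and the word lists are illustrative assumptions, not the study’s materials or its exact method.

```python
# Illustrative association probe with static word embeddings
# (not the study's pipeline). Assumes gensim is installed;
# "glove-wiki-gigaword-50" is a small pretrained GloVe model
# available through gensim's downloader.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")

# Hypothetical word lists chosen for illustration.
descriptors = ["disabled", "blind", "deaf", "athletic", "tall"]
positive = ["good", "great", "excellent"]
negative = ["bad", "poor", "terrible"]

def mean_similarity(word, attribute_words):
    """Average cosine similarity between a descriptor and a set of attribute words."""
    return sum(vectors.similarity(word, attr) for attr in attribute_words) / len(attribute_words)

for word in descriptors:
    pos = mean_similarity(word, positive)
    neg = mean_similarity(word, negative)
    print(f"{word:10s}  positive={pos:+.3f}  negative={neg:+.3f}  gap={pos - neg:+.3f}")
```

A consistently larger pull toward the negative list for disability-related descriptors would be one simple signal of the kind of skew the researchers describe.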

Researchers then assessed whether the adjectives generated for disability and non-disability groups had positive, negative or neutral sentiments.

Each model consistently scored sentences containing disability-related words more negatively than sentences without these words.

One model based on Twitter data switched the sentiment score from positive to negative 86 percent of the time a disability-related term was included.

This could mean that when a user includes a term related to disability in a social media comment or post, the probability of that post being censored or restricted increases, Venkit said.

Because humans cannot vet the sheer volume of content posted to social media, AI models often scan posts for anything that violates a platform’s community standards, using sentiment scores like those described above.

“If someone is discussing disability, and even though the post is not toxic, a model like this which doesn’t focus on separating the biases might categorize the post as toxic just because there is disability associated with the post,” said co-author Mukund Srinath.
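
The comparison at the heart of this finding, scoring the same sentence with and without a disability-related term, can be sketched with an off-the-shelf sentiment classifier. The Hugging Face transformers library, its default sentiment model and the sentence pair below are stand-ins for illustration; the study’s 13 models and scoring setup are not reproduced here.

```python
# Minimal sketch of a sentiment comparison, assuming the Hugging Face
# transformers library; the default sentiment-analysis model is a
# stand-in, not one of the models from the study.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

# Hypothetical sentence pair: identical except for the disability-related phrase.
sentences = [
    "My neighbor runs a small bakery downtown.",
    "My neighbor, who is blind, runs a small bakery downtown.",
]

for sentence in sentences:
    result = classifier(sentence)[0]
    print(f"{result['label']:8s} ({result['score']:.3f})  {sentence}")
```

If the label or score shifts sharply between the two sentences, the classifier is reacting to the disability term rather than to the content, which is the pattern the researchers report.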

In another experiment, the researchers investigated how models automatically fill in a blank in a sentence template. Templates without disability-related terms were completed with neutral words, but those containing the word “deafblind,” for example, produced negative completions. When researchers entered the template “A deafblind man has ‘blank,’” the model predicted the word “died” for the blank.
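
This kind of fill-in-the-blank test corresponds to what NLP practitioners call a masked-language-model probe. A minimal sketch follows, assuming the Hugging Face transformers library and the publicly available bert-base-uncased model as a stand-in; it is not confirmed to be one of the systems tested in the study.

```python
# Masked-word probe sketch; bert-base-uncased is an assumption,
# not necessarily one of the 13 models tested in the study.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# Compare top completions for a template with and without a disability-related term.
templates = [
    "A man has [MASK].",
    "A deafblind man has [MASK].",
]

for template in templates:
    predictions = fill(template, top_k=3)
    completions = ", ".join(f"{p['token_str']} ({p['score']:.2f})" for p in predictions)
    print(f"{template}  ->  {completions}")
```

Comparing the completions for the two templates gives a quick, qualitative read on whether the disability-related term pulls the model toward negative predictions, as the “died” example above illustrates.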

The promise of AI has been touted across different fields, and the algorithms have made some headway in medicine. For example, studies show AI can be used to scan large sets of images to diagnose certain conditions. The technology can also be useful for administrative tasks, and NLP can assist in preparing reports and transcribing patient interactions.

But limitations exist because many datasets are made up of homogeneous populations. As a result, machine learning systems used in health care could predict a greater likelihood of disease based on gender or race when, in reality, these are not causal factors.

“Whenever a researcher or developer is using one of these models, they don’t always look at all the different ways and all the different people that it is going to affect — especially if they’re concentrating on the results and how well it performs,” said Venkit of his research.

“This work shows that people need to care about what sort of models they are using and what the repercussions are that could affect real people in their everyday lives.”
