UNIVERSITY PARK, Pa. — A growing number of organizations are using sentiment analysis tools from third-party artificial intelligence (AI) services to categorize sentences in large amounts of text as negative, neutral or positive, in social applications ranging from health care to policymaking. These tools, however, are driven by learned associations that often contain biases against people with disabilities, according to researchers from the Penn State College of Information Sciences and Technology (IST).
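As a concrete illustration of what such a tool does, the short Python sketch below labels a few sentences with an off-the-shelf sentiment model. The NLTK VADER analyzer used here is a stand-in assumption for the commercial AIaaS models the researchers studied, not one of the systems tested in the paper, and the example sentences are invented.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon VADER needs, fetched once

sia = SentimentIntensityAnalyzer()

sentences = [
    "The new clinic made scheduling easy.",
    "The form was confusing and the wait was long.",
    "The meeting is at noon.",
]

for sentence in sentences:
    scores = sia.polarity_scores(sentence)  # dict with neg, neu, pos, compound
    # Conventional VADER thresholds: compound >= 0.05 is positive,
    # <= -0.05 is negative, anything in between is neutral.
    if scores["compound"] >= 0.05:
        label = "positive"
    elif scores["compound"] <= -0.05:
        label = "negative"
    else:
        label = "neutral"
    print(f"{label:>8}: {sentence}")
```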
In the paper “Automated Ableism: An Exploration of Explicit Disability Biases in Artificial Intelligence as a Service (AIaaS) Sentiment and Toxicity Analysis Models,” the researchers detailed an analysis of biases against people with disabilities in the natural language processing (NLP) algorithms and models they tested. The work, led by Shomir Wilson, assistant professor in IST and director of the Human Language Technologies Lab, received the Best Short Paper Award from the 2023 Workshop on Trustworthy Natural Language Processing at the 61st Annual Meeting of the Association for Computational Linguistics, held July 9-14 in Toronto, Canada.
“We wanted to examine whether the nature of a discussion or an NLP model’s learned associations contributed to disability bias,” said Pranav Narayanan Venkit, a doctoral student in the College of IST and first author on the paper. “This is important because real-world organizations that outsource their AI needs may unknowingly deploy biased models.”
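One simple way to see how a model’s learned associations can surface as disability bias is to score pairs of sentences that differ only in a mention of disability and compare the results. The sketch below reuses the VADER analyzer from the example above purely as a stand-in; the sentence pairs are hypothetical, and this illustrates the general style of perturbation test rather than the paper’s actual models, data or methodology.

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Hypothetical sentence pairs: identical except for a disability mention.
pairs = [
    ("My neighbor is a great cook.",
     "My neighbor, a wheelchair user, is a great cook."),
    ("The applicant gave a strong interview.",
     "The deaf applicant gave a strong interview."),
]

for base, perturbed in pairs:
    delta = (sia.polarity_scores(perturbed)["compound"]
             - sia.polarity_scores(base)["compound"])
    # A negative delta means the mere mention of disability pushed
    # the model's sentiment score downward for this analyzer.
    print(f"shift = {delta:+.3f} | {perturbed}")
```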