ASU students use AI to redefine disability representation

More than one in four adults (28.7%) in the United States have some type of disability, yet inclusive representation of disabled individuals in the media — particularly Paralympic athletes — remains a critical challenge.

Motivated by their passions, an all-female, interdisciplinary group of online students at Arizona State University has taken on this issue, coming together outside the classroom to explore how artificial intelligence (AI) can drive meaningful change.

Eliana D., Yiyan C., Sarah H. and Sofiia B. have united for a project titled Fair Play: Using AI to Monitor Ableism, which is part of ASU's AI Innovation Challenge. Through the Challenge, the team is able to access tools like ChatGPT Edu, a version of ChatGPT tailored for higher education, to discover how AI can help create more inclusive narratives about disabled individuals and challenge harmful stereotypes.

Empowering students to pioneer AI tools

Team Fair Play exemplifies the diversity and interdisciplinary collaboration that ASU encourages. Each member brings a unique skill set: Eliana, a digital media literacy major at the Walter Cronkite School of Journalism and Mass Communication, contributes her expertise in media representation, while Sarah, a disability studies major, ensures the team uses language that aligns with current, disability-preferred terminology.

Yiyan, a computer science student at the Ira A. Fulton Schools of Engineering, delves into AI’s technical capabilities and potential, while Sofiia, an undergraduate data science student living in California, analyzes patterns and themes in the team’s findings.

Although they work remotely across different time zones — located throughout the United States and China — the team views this setup as a strength rather than a challenge. “Being an ASU Online student, it’s just been so nice to collaborate with these women,” Eliana said. “Especially because it’s a project that focuses on disability, it’s very accommodating in some ways to do this remotely.”

Understanding their project process

For their project, the team is focusing on the default behavior of ChatGPT Edu to ensure their work reflects how most AI-generated content is created and consumed. For Eliana, who originally submitted the project and connected with the others via an ASU Facebook group, the work stems from her strong interest in both media literacy and human rights.

"AI is writing the narratives we see online; it’s already writing so many of the stories that we’re reading and consuming," Eliana shared. "But how much of that narrative is being researched, and how much of that narrative is being looked at by people who are disabled and by people who have the interests of disabled people in mind?"

The team has designed a dataset of prompts categorized as negative, positive, neutral or mixed to test ChatGPT Edu’s responses. This structure helps identify whether AI perpetuates or challenges harmful tropes like infantilization and exoticization.

Additionally, the team is interested in seeing whether the AI can detect if an article is written with negative or positive emotion. “There is potential that we can train the AI model to understand emotion,” Yiyan said. “Think about the future: we could have more creative inventions if we apply emotional intelligence to artificial intelligence.”

Why this work matters

With some estimates suggesting that over 57% of online text is now generated or translated by AI, this project is timely. “I hope that we can improve how AI sees disabled people and that this project will bring awareness to people who work in AI and how they can make changes to make it better,” Sofiia said.

The team has just entered the data collection phase, feeding ChatGPT Edu carefully crafted prompts and analyzing the responses for patterns. Using statistical tools, they aim to measure recurring themes and understand how AI’s behavior shifts over time.

“Bringing ChatGPT Edu into this is really exciting because it has analyzed and gone over far more written text than our human research team ever could, and it has done so at a far more rapid rate,” Eliana said. “To me this raises the question, ‘Can AI analyze our society more accurately than humans can?’” Only time — and projects like Fair Play — will tell.