How AI Needs To Be Redesigned For People With Disabilities
As a completely blind screen reader user, I’m excited by AI. I’m not afraid of big data, and I’m optimistic that the changes coming to our world will bring far more benefits than harms.
When I think about AI, the things I imagine are dreams come true for many in the disabled community. Accurate image descriptions, meaning I’m never left out of anything online ever again. Improved text and image recognition, offering access to visual elements of the physical world in ways that I’ve never been able to achieve. Experiences customized exactly to my needs, in ways that were previously difficult or impossible. More accurate maps of both outdoor and indoor locations, offering step-by-step instructions to get to where I’m going.
Why talk about this now?
If you’re looking for a job, it’s possible that AI reads your resume, and even analyzes your video interview. If you’re taking an online exam, AI is often used to detect if you’re cheating and to decide your results. When you attend an online meeting, AI may be used to judge your attention level.
Almost every advertisement you see, every product recommended to you, and many of the movies and music that are suggested to you, are influenced or entirely controlled by AI.
Despite widespread adoption, this technology is still in its early days. So it’s critical that we begin working to solve the problems caused by AI and big data now, rather than later.
Even though AI is still in its early stages of development, companies are already bringing many of these ideas to reality. Apps like Microsoft’s Seeing AI and Envision can already recognize products, objects, and faces, as well as read text and barcodes in real time. Facebook is using AI to suggest automatic alt text on images, Google is using AI to automatically describe images missing alt text, and Microsoft Office is using AI to describe images in Word and PowerPoint.
And of course, there’s the holy grail for those of us who can’t drive: self-driving cars. The ability to go anywhere, at any time, without getting a ride or depending on public transit, would be life changing. And while self-driving cars aren’t quite a reality yet, companies like Waymo are already using AI to make cars more accessible.
If AI and big data are really going to bring all of us the future they promise, there are still many critical challenges we need to work to solve.
The problems
1. Recommendation systems do not consider the needs of people with disabilities
Netflix has spent many years, and many millions of dollars, creating a state-of-the-art system to recommend videos. It classifies everything on its platform into hundreds of different categories, based on genre, mood, and many other characteristics. It uses the latest techniques in data analytics to create algorithms that try to recommend just the right video to every person, at just the right time. However, if you are someone who relies on audio descriptions or closed captions to enjoy movies, many of these recommendations will be useless to you.
Why? Because none of Netflix’s algorithms seem to take those needs into account. Whether or not a movie or show includes audio description or closed captions doesn’t seem to be a data point used by the system at all. And it’s impossible for users to tell Netflix that they only want to consume content that has audio descriptions.
Similarly, games recommended by Steam, or apps recommended by the Apple App Store, are equally useless to people with disabilities. Sure, I might enjoy the things that they recommend. But they’ve failed to consider the most important first step: are any of those things even accessible to me? Because of this lack of consideration, instead of making life easier or better, they just add irrelevant noise to my experience, and to that of many other people with disabilities.
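To make this concrete, here is a minimal sketch of what treating accessibility as a first-class signal in a recommender could look like. Nothing below reflects Netflix’s, Steam’s, or Apple’s actual systems; every type, field, and function name is hypothetical.

```python
# A minimal sketch (not any real platform's API) of a recommender that
# treats accessibility metadata as a first-class signal. All type,
# field, and function names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Title:
    name: str
    predicted_rating: float           # output of the existing ranking model
    has_audio_description: bool = False
    has_closed_captions: bool = False

@dataclass
class UserNeeds:
    requires_audio_description: bool = False
    requires_closed_captions: bool = False

def recommend(candidates: list[Title], needs: UserNeeds, top_n: int = 5) -> list[Title]:
    """Filter out titles the user can't actually enjoy, then rank as usual."""
    usable = [
        t for t in candidates
        if (t.has_audio_description or not needs.requires_audio_description)
        and (t.has_closed_captions or not needs.requires_closed_captions)
    ]
    return sorted(usable, key=lambda t: t.predicted_rating, reverse=True)[:top_n]

catalog = [
    Title("Show A", 4.8),  # highly rated, but no audio description
    Title("Show B", 4.1, has_audio_description=True, has_closed_captions=True),
]
print([t.name for t in recommend(catalog, UserNeeds(requires_audio_description=True))])
# ['Show B'] -- the top-rated title is skipped because it's unusable for this viewer
```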
2. Advertising systems have no concept of accessibility
Personalized advertising has revolutionized the ad industry and user experiences. It promises to show people only the ads they might be interested in, and it promises businesses that their ads will be shown only to potential customers. To make this happen, enormous databases track what we click on, what we search for, and where we surf.
However, for people with disabilities, the interest profile that these systems build is likely to be far less accurate. Unfortunately, many ads for products that I might be interested in purchasing are completely inaccessible, so they never get my clicks or my attention.
On the other hand, some ads for products that I find far less interesting are accessible, so I do focus on them, read them, and maybe even click on them. So, the “interest profile” that’s been built for me is highly tilted towards products that have ads that are accessible to me – rather than products I might actually want to purchase.
Sadly, this makes advertising far less useful to me, and can even affect the deals that I’m offered when shopping online, all because the system hasn’t been designed to take my needs into account.
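A toy simulation can show how quickly this skew appears. The numbers and product categories below are invented purely for illustration; the key assumption is that a click requires both genuine interest and an accessible ad.

```python
# A toy simulation of interest-profile skew. All numbers are invented.
# Key assumption: a click requires both genuine interest AND an ad
# that is accessible to the user.

true_interest = {"cameras": 0.9, "audiobooks": 0.4, "board games": 0.6}
ad_is_accessible = {"cameras": False, "audiobooks": True, "board games": True}

# Observed clicks: interest only "shows up" when the ad is accessible.
clicks = {
    product: (interest if ad_is_accessible[product] else 0.0)
    for product, interest in true_interest.items()
}

total = sum(clicks.values()) or 1.0
inferred_profile = {product: round(c / total, 2) for product, c in clicks.items()}

print(inferred_profile)
# {'cameras': 0.0, 'audiobooks': 0.4, 'board games': 0.6}
# The user's strongest real interest (cameras) vanishes from the profile,
# because the camera ads were never accessible enough to earn a click.
```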
3. AI-based video analyses do not account for people with disabilities
AI is being used to analyze videos of people, in all sorts of contexts, for all sorts of reasons. One quickly growing field where this technology is used is proctoring online exams. To make sure students aren’t cheating, AI is used to track students’ eyes to see if they’re frequently looking away from the screen. However, these systems fail to consider people who can’t see – and who never look at the screen at all. Similarly, AI fails to correctly track emotions and attention in videos of people with disabilities.
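To illustrate the failure mode, here is a simplified sketch of gaze-based flagging. The threshold and detection logic are hypothetical, not any real proctoring vendor’s implementation.

```python
# A simplified sketch of gaze-based proctoring. The threshold and logic
# are hypothetical, not any real vendor's implementation.

def flag_for_review(gaze_on_screen_frames: int, total_frames: int,
                    min_on_screen_ratio: float = 0.7) -> bool:
    """Flag the session if the student 'looked away' too often."""
    return gaze_on_screen_frames / total_frames < min_on_screen_ratio

print(flag_for_review(gaze_on_screen_frames=850, total_frames=1000))  # False
print(flag_for_review(gaze_on_screen_frames=0, total_frames=1000))    # True
# A blind student is flagged every single time: the model's core
# assumption ("attentive students look at the screen") never holds for them.
```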
Another rapidly growing use of AI is in online job interviews. Using microexpressions, eye tracking, and other physical indicators, companies claim that AI can judge someone’s emotions, honesty, and other factors that help decide whether someone is a good job candidate. These systems can fail far more people than just those who are blind. Someone who has had a stroke may be unable to make facial expressions that the AI can recognize. People who are not neurotypical may also react in ways that the system has not been trained to correctly understand.
So instead of eliminating bias in the hiring process, AI can reinforce it – preventing diversity and the hiring of the best candidate.
4. AI-based image descriptions are not as accurate as we think
When AI-generated captions go wrong, the result is usually obvious gibberish. But when AI-generated image descriptions are off, they can instead leave behind biased impressions, and correct-sounding but completely inaccurate information.
An excellent example of the problem is highlighted by a recent study of image recognition technology from Amazon, Google, and Microsoft. It found that across all products, the AI used keywords like “official” and “professional” most often for images of men, but keywords like “pretty” and “smile” for images of women.
In fact, while writing this, I encountered an example of the problem myself. Working on a presentation with Alwar Pillai, Fable’s CEO, I found that my picture on the introductory slide of our deck had been automatically described as “a man wearing a suit and tie”, while her picture was described as “a woman smiling at the camera”. We were both dressed professionally and both smiling. And yet my professional clothes were described, whereas Alwar was only described as smiling. But she’s my boss!
5. Voice recognition does not work for everyone
A study conducted by Stanford last year found that popular voice recognition systems from five major technology companies made nearly twice as many errors when transcribing Black speakers as when transcribing white speakers.
However, the problem runs deeper than accent or dialect. Those with a stutter, a speech impediment, or any other condition affecting their ability to speak clearly may find these systems entirely unusable. This means that not only may they be unable to take advantage of the accessibility improvements offered by products like Google Assistant, Alexa, or Siri, but they may also be unable to interact with AI-based telephone systems. Further, automatic captioning systems may be completely unable to interpret what they’re saying.
These issues with voice recognition can have far-reaching effects: an inability to receive timely customer service over the phone, reduced access to apps and services, and difficulty communicating with anyone who relies on captions.
The solutions
1. People with disabilities must be included in training and design
The first solution to the problems that AI systems cause is not a new one. In fact, it’s something people in the accessibility community have been saying for many years now: people with disabilities must be included in the design and development of products, to ensure they are fully usable for everyone. Big data and AI just magnify this need.
Unlike humans, artificial intelligence systems cannot adapt to deal with situations or data they have not been exposed to. Thus, to ensure that these systems can work for everyone, a wide variety of different types of people must be included when training the system. For example, Google launched Project Euphonia to collect speech data from people with various speaking difficulties. This way, Google’s AI will have been exposed to and trained on many different types of speech, making it better able to understand anyone, no matter who they are.
Similarly, the ORBIT project is collecting photographs of various objects taken by blind and visually impaired people, to help AI better recognize these types of photos and understand what features of the images are most important to blind users.
However, these projects are just the beginning. To work for everyone, AI training data needs to be diverse and inclusive. As well, models and architectures should be designed with the needs of everyone in mind. If a system is given no information about things like closed captions or audio descriptions, it can never learn that these features are important – even critical – to many users. If the model contains no measure of how accessible an app or game may be, recommendation systems can never become useful to people with disabilities, as they simply lack the data required.
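As a sketch of what that missing data might look like, here is a hypothetical training record for an app-recommendation model, with accessibility attributes encoded alongside the usual ones. The field names are invented; the point is simply that a model can only learn from what the schema contains.

```python
# A hypothetical training record for an app-recommendation model.
# None of these field names come from a real schema; the point is that
# attributes absent from the data can never be learned by the model.

from dataclasses import dataclass

@dataclass
class AppTrainingRecord:
    app_id: str
    genre: str
    average_rating: float
    # Accessibility attributes, missing from most real-world schemas:
    screen_reader_compatible: bool
    supports_captions: bool
    supports_switch_control: bool

record = AppTrainingRecord(
    app_id="example-app",
    genre="puzzle",
    average_rating=4.5,
    screen_reader_compatible=True,
    supports_captions=True,
    supports_switch_control=False,
)
# Only when fields like these exist in the training data can a recommender
# learn that, for some users, they matter more than genre or rating.
```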
2. People with disabilities must be included in testing
Currently, there seems to be an assumption that an AI system which works for one person will work for everyone. However, just as with every other part of product design, this does not hold true. Unfortunately, none of the mainstream voice recognition providers publish data about the accuracy of their systems broken down by ethnicity, gender, or disability. Instead, this research is published entirely by third parties. If the producers of these systems are not even asking the questions, or doing testing and development with diverse groups, it’s impossible to know how to improve these systems, how much improvement is needed, or where.
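What would asking those questions look like in practice? Here is a minimal sketch of disaggregated evaluation: reporting error rates per group instead of a single aggregate number. The data and group labels are invented for illustration.

```python
# A minimal sketch of disaggregated evaluation: report error rates per
# group instead of a single aggregate. Data and labels are invented.

from collections import defaultdict

# Each result: (group_label, transcription_was_correct)
results = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

totals: dict[str, int] = defaultdict(int)
errors: dict[str, int] = defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

for group in totals:
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
# group_a: error rate 33%
# group_b: error rate 67%
# A single aggregate number (50%) would have hidden this disparity entirely.
```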
Similarly, data about how well exam proctoring systems and AI-based video interviews work for people with disabilities is sparse. While the intent behind some of these systems is to remove bias, the question of whether that’s actually happening isn’t being asked. Instead, the assumption seems to be that because a computer does it, it can’t be biased; it’s assumed to be more precise. We need to be talking, instead, about how the design of these systems may have built-in biases from day one. We must understand what those might be and where they might come from. Only then can we begin ensuring that AI is as inclusive and fair as we want it to be.
3. People with disabilities must be included in implementation
Lastly, it’s vital that people with disabilities are included during implementation of new AI systems. This way, we can ensure that AI systems are serving the needs they are intended to serve and understand any side effects that they may cause.
As one example, automatic image description systems intended to assist those of us who are blind fail to give us the information we need to correctly use the descriptions they generate. While most systems do indicate when a description is automatically generated, they never give any information about how confident they are in that description. If a system is only 60 percent confident about the description it created for one image, and 99 percent confident about the description of the next one, that is critical information for a blind person to have. Also, which parts of the image is it sure about? Maybe the AI is sure the image shows a person, but only 50-50 on whether the person is a man or a woman. Without getting into the debate about when and how AI should gender people – or if it should do so at all – that’s important information for anyone reading the description to have.
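Here is a sketch of what surfacing that confidence to a screen reader user could look like. The data structure and wording are hypothetical, not any existing system’s output.

```python
# A sketch of an image-description payload that surfaces confidence to
# the end user. The structure and wording are hypothetical; this is not
# any existing system's output format.

from dataclasses import dataclass

@dataclass
class DescribedElement:
    label: str
    confidence: float  # 0.0 to 1.0

def render_for_screen_reader(elements: list[DescribedElement],
                             overall_confidence: float) -> str:
    parts = [f"{e.label} ({e.confidence:.0%} sure)" for e in elements]
    return (f"Auto-generated description, {overall_confidence:.0%} overall confidence: "
            + ", ".join(parts))

print(render_for_screen_reader(
    [DescribedElement("a person", 0.98), DescribedElement("a man", 0.51)],
    overall_confidence=0.60,
))
# Auto-generated description, 60% overall confidence: a person (98% sure), a man (51% sure)
```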
The ultimate issue is transparency. When it’s impossible to understand what a particular AI system is doing, it’s impossible to know if it’s taking our needs as people with disabilities into account. Can we trust it to be accurate? Can we trust what it recommends? Can we trust that it is judging and understanding us correctly? Without a lot more transparency during the implementation of AI systems, none of this is possible to tell. Unfortunately, as so-called edge cases, people with disabilities can’t afford to simply assume that any given AI system is trustworthy, accurate, or designed to meet our needs.
Every year, the pace of technological change increases. That pace is exciting, but it magnifies both the good and the bad: our potential for progress, and also our potential to cause unintended harm. AI and big data mark the next exponential uptick in this process.
If you work in the field of AI, it’s time to confront bias. This article has outlined, at a high level, some things to think about: more inclusive training data, for example, and greater transparency in implementation. Now is the time to get down to work and begin figuring out how to take tangible action on these ideas.
If you’re a person with a disability, be aware! AI has made you lots of promises. But how well is it keeping them? Speaking as a person with a disability myself, it’s important that we don’t let the promise of a brighter tomorrow obscure the problems of today. Because if we do, today’s problems will remain unsolved, and our brighter tomorrow will never materialize.
If you’re an ally, or work in technology outside of AI development, be a part of the conversation. As AI becomes more and more ubiquitous in your life, how does it affect you and your community? As we build our AI-based future, it’s time for everyone’s voices to be heard.
Ultimately, the time to begin having these vital conversations about inclusion in AI development is now. Not next year, or next decade. Now. So, let’s begin – before it’s too late!
Reading recommendations on AI and the inclusion of people with disabilities
- Steam uses machine learning for its new game recommendation engine by Kyle Orland | Ars Technica
- Netflix Recommender System — A Big Data Case Study by Chaithanya Pramodh Kasula | Towards Data Science
- Computer Vision in the EdTech Industry — What Can AI See by Diana (Fangyuan) Yin | Alef Education
- Automated Online Exam Proctoring by Atoum et al. | IEEE Transactions on Multimedia
- Will A Robot Be Interviewing You For Your Next Job? by Steve Pearlman | Forbes
- When AI sees a man, it thinks “official.” A woman? “Smile” by Tom Simonite | Wired
- Janelle Shane: “One fun thing I discovered about Visual Chatbot. It learned from answers that humans gave, and apparently nobody ever asked “how many giraffes are there?” when the answer was zero.” | Twitter
- The Blogger Behind “AI Weirdness” Thinks Today’s AI Is Dumb and Dangerous by Janelle Shane | IEEE Spectrum
- Automated speech recognition less accurate for blacks by Edmund L. Andrews | Stanford News
- How AI can improve products for people with impaired speech by Julie Cattiau | Google
- The ORBIT (Object Recognition for Blind Image Training) Dataset – Meta learning for personalised object recognition aimed at visually impaired people | ORBIT
- Google Cloud Model Cards
- You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place by Janelle Shane | Goodreads
- Underspecification Presents Challenges for Credibility in Modern Machine Learning by D’Amour et al. | arXiv
- “We Count!” by Jutta Treviranus | The Walrus Talks Inclusion 2019
About the author
Sam Proulx, Community Lead, Fable
Samuel Proulx is the Community Lead at Fable. Sam has managed online communities in various spaces for 18 years; he brings this expertise to Fable, helping us build an inclusive team of people from all walks of life, spanning the entire country. Completely blind himself, he understands the importance of accessibility and diversity in all aspects of life. Sam is an expert in accessibility, accessibility testing, community management, Drupal, WordPress, and Ubuntu.