[Illustration: a clipboard showing an AUS score of 55, flanked by icons of a bar graph, a person speaking into a headset, a pie chart, and a volume control.]

Accessible Usability Scale (AUS) analysis of desktop screen readers 

User research results from over 1,000 AUS scores and recommendations for screen reader research engagements 

Assistive tech type     Total responses    AUS average    AUS median
Screen reader totals    1191               55             55

Screen reader           Total responses    AUS average    AUS median
JAWS                    568                57             58
NVDA                    374                55             55
VoiceOver               249                50             50

Background 

Measuring web accessibility 

Today, the most relied-upon marker for measuring web accessibility is adherence to the Web Content Accessibility Guidelines (WCAG). WCAG is made up of 4 principles, 12+ guidelines, and 60 to 80 success criteria, depending on the version and conformance level you’re referencing. WCAG criteria are effective as an evaluative tool, but difficult for product-led organizations to leverage.

In 2020, Fable began developing the Accessible Usability Scale (AUS) with the goal of measuring user experiences for assistive technology users. The AUS consists of ten questions administered at the end of a user research session to calculate a score. It is inspired by the System Usability Scale, but specifically adapts questions for people using assistive technology.  

What are assistive technologies?

Assistive technology is often described as any product or piece of equipment used to improve or increase the functional ability of people with disabilities. This understates the widespread and mainstream adoption of assistive technologies, which are used by the majority of the global population. We consider assistive technology to be features, tools, and products that adapt experiences to provide access to the full range of human diversity and experience. This reframing acknowledges that disability can be permanent, temporary, or situational, and that environments are as relevant in the conversation as the individual.

Learn more about assistive technologies in Fable’s Assistive Technology Glossary 

The AUS is available online (Creative Commons Attribution 4.0) and generates a usability score out of 100. We have collected thousands of AUS submissions, and new trends continue to emerge as our data pool grows.

Read about how an AUS score is calculated
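
The AUS, like the SUS it adapts, turns ten 1-to-5 agreement ratings into a single score out of 100. As a rough illustration, the standard SUS arithmetic is sketched below; the AUS’s exact weighting is described at the link above, so treat this as the general approach rather than Fable’s implementation.

    # A sketch of SUS-style scoring, which the AUS adapts; not Fable's exact formula.
    # Assumes ten responses on a 1-5 agreement scale, with odd-numbered items
    # positively worded and even-numbered items negatively worded.
    def sus_style_score(responses: list[int]) -> float:
        assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
        total = 0
        for item, r in enumerate(responses, start=1):
            # Normalize each item to a 0-4 contribution.
            total += (r - 1) if item % 2 == 1 else (5 - r)
        return total * 2.5  # scale the 0-40 sum to a 0-100 score

    print(sus_style_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0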

Technology configurations used by assistive technology users (AT users) are captured with every AUS submission to provide a deeper layer of insights. Specifically, we capture the following (sketched in code after the list):

  • The type of device (e.g., Apple iOS, Android mobile, Windows computer)
  • The type of product (web-based or native application)
  • If web-based, the type of browser being used
  • The type of assistive technology being used, including:
      • Screen magnification (assistive technology that presents enlarged screen content and other visual modifications)
      • Alternative navigation (assistive technology that replaces a standard keyboard or mouse)
      • Screen readers (assistive technology that outputs on-screen text using text-to-speech)
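
For illustration, one submission’s configuration metadata could be represented with a record like the following; the field names are hypothetical, not Fable’s actual schema:

    # Hypothetical record for one AUS submission's technology configuration.
    # Field names are illustrative only, not Fable's actual schema.
    from dataclasses import dataclass
    from typing import Literal, Optional

    @dataclass
    class AUSSubmission:
        device: str  # e.g., "Apple iOS", "Android mobile", "Windows computer"
        product_type: Literal["web-based", "native application"]
        browser: Optional[str]  # set only when product_type is "web-based"
        assistive_tech: Literal["screen magnification",
                                "alternative navigation",
                                "screen reader"]
        responses: list[int]  # the ten 1-5 AUS answers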

Screen reader user experiences are poor 

A screen reader is software that converts digital experiences into spoken language or Braille. Apple, Microsoft, Google, and others build screen readers into their operating systems to help users with visual and cognitive impairments interact with digital products.

The blind community is the primary user group for screen readers. Research by Bourne et al. (2021) estimates that 43.4 million people worldwide are blind, and 295 million people have moderate to severe vision impairment. Screen readers are also used by millions of people who are not visually impaired – for example, people with cognitive impairments, dyslexia, and low literacy levels all report leveraging screen readers.

In our 2022 AUS benchmarks blog post, we identified that the experience of screen reader users is worse than that of other assistive technology users, with the average AUS score for screen readers coming in 12 points lower than for alternative navigation users and a full 17 points lower than for screen magnification users.

[Bar graph, "Screen readers are falling behind": screen magnification users with an AUS score of 72, alternative navigation users with a score of 67, and screen reader users with a score of 55.]

What does that mean practically? According to our AUS data, 1 in 3 screen reader users believe that they would need the support of another person to use all the features of web-based products and applications, compared to 1 in 7 screen magnification and alternative navigation users. That represents tens of millions of people around the world who are unable to shop, learn, or bank independently.

The magnitude of the challenges that screen reader users face deserves concentrated attention. The purpose of this study is to dig deeper into the experiences of screen reader users. Specifically, we explore how three popular screen readers – JAWS for Windows, NVDA for Windows, and VoiceOver for Mac – perform on the AUS, and we recommend questions to ask screen reader users in user research engagements.

Methodology 

The AUS comprises ten questions, each speaking to a different aspect of an assistive tech user’s experience with digital products. Specifically, the AUS asks:

1. I would use this web-based product/native application frequently, if I had a reason to.
2. I found the web-based product/native application unnecessarily complex.
3. I thought the web-based product/native application was easy to use.
4. I think that I would need the support of another person to use all the features of this web-based product/native application.
5. I found the various functions of the web-based product/native application made sense and were compatible with my technology.
6. I thought there was too much inconsistency in how this web-based product/native application worked.
7. I would imagine that most people with my assistive technology would learn to use this web-based product/native application quickly.
8. I found this web-based product/native application very cumbersome or awkward to use.
9. I felt very confident using the web-based product/native application.
10. I needed to familiarize myself with the web-based product/native application before I would use it effectively.

Our goal in this study was to dig into how screen reader users respond to these questions, depending on their screen reader of choice. While it is important to identify whether the average AUS score of each screen reader differs, we are most interested in the potential reasons why.

We examined AUS data submissions from screen reader users testing a wide range of digital products. Specifically, we examined data from 568 JAWS users, 374 NVDA users, and 249 VoiceOver users. 

We receive AUS submissions from a wide range of screen reader users. Some might be experienced with multiple screen reader types, and others might have used multiple types of assistive technology including a screen reader. Some users have used screen readers their entire lives and others might have started using a screen reader more recently. 

We are not comparing apples to apples, but that’s also why we are interested in better understanding assistive technology users – every user is unique. 

Takeaways 

This is how the screen readers scored on the AUS on a scale from 0 to 100: 

[Illustration: icons showing AUS scores of 57 for JAWS, 55 for NVDA, and 50 for VoiceOver.]

According to SUS researchers, a score in the 50s represents a grade of D. Our data suggests that the experience of screen reader users, regardless of screen reader choice, is poor when using desktop products. In many cases, the experiences of screen reader users are frustrating and insufficient.

But as we dig deeper, the data also highlights how not all screen readers are made equal. Specifically, VoiceOver users scored significantly lower than both JAWS (p < .01, 95% C.I. = -10.8, -2.3) and NVDA (p < .05, 95% C.I. = -9.7, -1.5) on the AUS. By contrast, the difference in AUS scores between JAWS and NVDA was not statistically significant (p = .63, 95% C.I. = -2.3, 5.2).
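
Fable doesn’t state which test produced these figures; as a sketch, a pairwise comparison like VoiceOver vs. NVDA could be run as a two-sample t-test on the groups’ AUS scores. The arrays below are synthetic placeholders, and the confidence_interval method requires SciPy 1.10 or newer.

    # Sketch of a pairwise AUS comparison (here VoiceOver vs. NVDA).
    # The score arrays are synthetic placeholders, not Fable's data, and the
    # exact test behind the published figures isn't stated.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    voiceover = rng.normal(50, 20, 249).clip(0, 100)  # synthetic stand-in scores
    nvda = rng.normal(55, 20, 374).clip(0, 100)

    result = stats.ttest_ind(voiceover, nvda, equal_var=False)  # Welch's t-test
    ci = result.confidence_interval(confidence_level=0.95)      # SciPy >= 1.10
    print(f"p = {result.pvalue:.3f}, 95% C.I. = ({ci.low:.1f}, {ci.high:.1f})")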

JAWS was the easiest screen reader to use of the three, but 1 in 3 JAWS users still found the desktop product they were testing to be unnecessarily complex. NVDA users felt the most confident compared to JAWS and VoiceOver users, but even so, 1 in 2 NVDA users did not feel confident using the desktop product they were testing. Despite VoiceOver leading the way in making experiences feel familiar, for every 10 users, 6 still felt they needed to familiarize themselves before they could use a product effectively.

VoiceOver: Familiar, but cumbersome 

VoiceOver, the native screen reader for macOS, has some impressive claims to fame. It premiered in 2005 on Mac OS X Tiger and fundamentally changed how a screen reader navigates digital content by introducing a robust soundscape that provided access to visual information such as location on the screen or stylistic formatting. Since then, Apple has continued to improve VoiceOver across all their devices, introducing ground-breaking features like sonification of charts and graphs, AI-based screen recognition to identify unlabeled elements, Siri integration, and new methods of making touchscreen-only maps fully accessible on mobile.

Despite these impressive features, VoiceOver demonstrated the lowest AUS score of the three screen readers. We wanted to know why. Where does it succeed and where does it fall behind? 

  • Average AUS Score: 50/100
  • Median AUS Score: 50/100
  • 75% of scores fall below 65/100 

Where VoiceOver succeeds

It’s important to note that VoiceOver doesn’t fall behind in all areas – it brings a familiarity to the user experience. Specifically, only 57% of VoiceOver users felt as though they needed to familiarize themselves with a desktop product before they could use it effectively (AUS question #10), significantly fewer than the 67% of JAWS users who felt the same (p < .05, 95% C.I. = -.18, -.01).
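
The comparison above reads like a two-proportion test; a sketch using counts approximated from the published percentages (the exact method isn’t stated) might look like this:

    # Sketch of a two-proportion comparison for AUS question #10; counts are
    # approximated from the published percentages, and the exact method Fable
    # used isn't stated.
    from statsmodels.stats.proportion import (confint_proportions_2indep,
                                              proportions_ztest)

    count = [round(0.57 * 249), round(0.67 * 568)]  # users agreeing, VoiceOver / JAWS
    nobs = [249, 568]

    z_stat, p_value = proportions_ztest(count, nobs)
    low, high = confint_proportions_2indep(count[0], nobs[0], count[1], nobs[1])
    print(f"z = {z_stat:.2f}, p = {p_value:.3f}, 95% C.I. = ({low:.2f}, {high:.2f})")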

This familiarity may be the result of being able to operate entirely in one company’s ecosystem. Specifically, VoiceOver users benefit from consistency across Apple products; they are using Apple’s screen reader on an Apple operating system on Apple hardware. This helps to create a consistency of experience from computer to computer, from app to app, and from site to site. This is an important distinction from Windows-based screen readers, which are developed by different companies with different release cycles. It’s in this consistency that VoiceOver likely succeeds in its familiarity.

Where VoiceOver falls behind

While VoiceOver outperforms JAWS and NVDA in familiarity, it falls behind in its ease of use. Specifically, only 44% of VoiceOver users believed that the desktop product they were testing was easy to use (compared to 51% for both JAWS and NVDA users – AUS question #3), and only 38% imagined that other people with their assistive tech would learn to use the desktop products quickly (compared to 50% for JAWS and NVDA – AUS question #7; F(2,1188) = 5.67, p < .01). 
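
The reported F(2,1188) is consistent with a one-way ANOVA across the three screen reader groups: group sizes of 568, 374, and 249 sum to 1191, giving 3 - 1 = 2 between-group and 1191 - 3 = 1188 within-group degrees of freedom. A sketch on synthetic responses:

    # Sketch of the one-way ANOVA implied by F(2,1188); the response arrays
    # are synthetic, not Fable's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    jaws = rng.integers(1, 6, 568)       # synthetic 1-5 answers to question #7
    nvda = rng.integers(1, 6, 374)
    voiceover = rng.integers(1, 6, 249)

    f_stat, p_value = stats.f_oneway(jaws, nvda, voiceover)
    print(f"F(2, {568 + 374 + 249 - 3}) = {f_stat:.2f}, p = {p_value:.3f}")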

"Where VoiceOver falls behind". Bar graph showing the percentage of users who agree that people with their assistive tech would learn to use the same web-based / native application product quickly. VoiceOver users agreed 38%, NVDA users 49% and JAWS users 51%.