Measuring usability for assistive technology users
As companies build and mature their accessibility programs, we often see common themes emerge. After a team builds accessibility into its development cycle and addresses its backlog of WCAG issues, the focus shifts to user experience. Companies begin to ask, “Do AT users like our product?” and “How does the UX compare to that of the rest of our users?”
In hindsight, “time to complete a task” was the least meaningful metric we found in our efforts to measure user experience at Fable. The time it takes an AT user to complete a task can be a valuable data point at the individual level, but it loses utility in aggregate. The problem lies in averaging and comparing: comparing an expert JAWS user to a novice Dragon NaturallySpeaking user is not meaningful, because these experiences are fundamentally different. When task-completion times are aggregated across such different experiences, the likelihood of misinterpretation increases.
Build on something that works
The System Usability Scale (SUS) was developed in 1986 by John Brooke. It’s one of many psychometrically designed surveys, but it stands out in that it remains popular and has been used consistently by organizations for decades. Brooke’s objective with the SUS was to take a quick snapshot of people’s satisfaction when using systems, or what we’d commonly refer to today as digital products like apps and websites.
The context is different for an AT user
The SUS consists of ten statements, each answered on a Likert scale: a multiple-choice scale with options ranging from ‘Strongly disagree’ to ‘Strongly agree’.
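For readers who want to turn those ten Likert responses into the familiar 0–100 SUS score, the standard scoring procedure is: for odd-numbered (positively worded) items, subtract 1 from the response; for even-numbered (negatively worded) items, subtract the response from 5; then sum and multiply by 2.5. The sketch below assumes responses are encoded 1 (‘Strongly disagree’) to 5 (‘Strongly agree’); the function name is ours, not part of any SUS tooling.

```python
def sus_score(responses: list[int]) -> float:
    """Compute the 0-100 SUS score from ten Likert responses (1-5).

    Odd-numbered items (index 0, 2, ...) are positively worded:
    contribution = response - 1.
    Even-numbered items (index 1, 3, ...) are negatively worded:
    contribution = 5 - response.
    The summed contributions (0-40) are scaled by 2.5 to give 0-100.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    if any(not 1 <= r <= 5 for r in responses):
        raise ValueError("Each response must be between 1 and 5")

    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)
        for i, r in enumerate(responses)
    )
    return total * 2.5


# A neutral response (3) to every item yields the midpoint score of 50.
print(sus_score([3] * 10))  # 50.0
```

Note that a SUS score is not a percentage; a score of 68 is generally cited as the average across studies, so scores should be interpreted relative to that benchmark rather than as “68% usable.”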
One of the statements in the SUS survey is, “I think that I would need the support of a technical person to be able to use this system.” Imagine a screen reader user going through a sign-up form that requires them to draw their signature. In this case, the user likely needs assistance not from a technical person, but from another person who doesn’t rely on a screen reader.
Another statement is, “I would imagine that most people would learn to use this system very quickly.” Assistive technology users are highly aware that their experiences are unlike those of most users, which makes this statement difficult to answer meaningfully.
By adapting the SUS statements to relate more meaningfully to the context of an assistive technology user, we can quantify and measure the perceived usability of digital products for assistive technology users in the same way. The framework still measures usability, but now it does so for the people most affected by accessibility issues.