Case study: Fable’s cognitive accessibility pilot

Fable’s Cognitive Accessibility Working Group, made up of experts in cognitive disability and accessibility research, is helping to shape Fable’s approach to advancing cognitive accessibility.

  • Alwar Pillai, CEO at Fable, Chair
  • Kate Kalcevich, Head of Innovation at Fable, Co-Chair
  • Greg Weinstein, Inclusive Research Lead
  • Betty Troy, Senior UX Researcher, Accessibility and Inclusive Design
  • Aidan Bryant, Accessibility User Researcher
  • Maya Alvarado, Senior Accessibility Researcher
  • Anna Tendera, Senior Researcher
  • Sepideh Shahi, Senior Inclusive Designer
  • Larissa McKetton, Neuroscientist and Director (Working Group Invited Expert)

The working group is made up of four task forces, each with a specific objective:

  • Participants task force: Develop strategies to recruit and organize research participants while ensuring that the approach to informed consent works for various cognitive needs.
  • Methodologies task force: Conduct an in-depth literature review of user research methods for engaging with participants with cognitive disabilities and contribute best practices based on their own research experiences.
  • Research task force: Implement and iterate on the recommendations of the participants and methodologies task forces by designing and running a task-based evaluative user research study.
  • Authoring task force: Document the findings of the other task forces in a case study to share learnings with the wider technology industry.

“For too long, cognitive accessibility has been an afterthought in digital product development. This is despite the fact that cognitive disability is the most prevalent form of disability, affecting nearly 14% of the U.S. population. We can’t build truly inclusive experiences without including people with disabilities in the research process — including individuals with cognitive accessibility needs.”

Alwar Pillai
Co-founder and CEO, Fable

Building the cognitive panel

The participants task force chose early on not to use medical diagnosis as a recruiting criterion. Relying on diagnoses has pitfalls: it excludes participants who lack a formal diagnosis, and medical labels don’t account for individual experiences. In practice, challenges arise from a mismatch between technology and user needs, not from a diagnosis itself.

We decided it was more useful, and less harmful, to recruit based on needs, challenges, and assistive technology usage. This approach is consistent with Fable’s recruitment for assistive technology user panels.

We chose to focus on the most prevalent cognitive needs. This meant starting with people who experience challenges with memory, attention, and/or reading and writing. These categories overlap a wide range of permanent and temporary cognitive disabilities including neurodivergence and are likely to have high impact on how people use digital products. Our focus may expand to other cognitive needs in the future.

To identify participants, we created a screener survey using questions drawn from a wide range of sources including clinical research, UX/UI practices, and academic needs-based studies. The screener confirmed that cognitive accessibility needs are deeply interconnected with the use of assistive tools and technologies. It also showed the intersectionality of cognitive needs, with many participants experiencing challenges in more than one domain.
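As a concrete illustration, the sketch below shows one way screener responses could be tallied by need domain to surface the kind of overlap we observed. The domain names follow the focus areas described above; the data structure and logic are our own illustration, not Fable’s actual screener.

```python
# Hypothetical sketch: tallying screener responses by cognitive need domain.
# Domain names follow the focus areas above; respondent data is illustrative.
from collections import Counter

# Each respondent maps to the need domains their screener answers indicate.
respondents = {
    "p01": {"memory", "attention"},
    "p02": {"reading_writing"},
    "p03": {"memory", "attention", "reading_writing"},
}

domain_counts = Counter(d for needs in respondents.values() for d in needs)
multi_domain = sum(len(needs) > 1 for needs in respondents.values())

print("Respondents per domain:", dict(domain_counts))
print(f"Needs in more than one domain: {multi_domain}/{len(respondents)}")
```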

Developing methods

With our participant panel in place, the team turned its focus to developing research methods tailored to the needs of participants with cognitive disabilities, along with ways to quantify study findings.

We reviewed 24 academic and industry studies spanning topics such as aging, learning disabilities, cognitive disabilities, Autism, intellectual disabilities, Down Syndrome, Dementia, and others.

Findings from the literature review were put into a cognitive accessibility engagement guide to support the research task force in planning and running research sessions.

We also explored ways of measuring participants’ experiences and discarded metrics that we didn’t think would align with participant needs. One example was the smiley-o-meter, a scale that lets participants indicate how they felt about a task:

A row of yellow faces representing a scale of emotion: awful, not very good, okay, really good, and fantastic

Anna Tendera, Methodologies task force lead, explains why we discarded it: “There are cultural differences country to country. Plus, people with Autism are known to struggle more with identification of emotions and expressions.”

Ultimately, we adapted Fable’s Accessible Usability Scale (AUS) survey for cognitive participants to build on Fable’s years of use of the AUS and the well-known System Usability Scale (SUS). We also created a simplified Single Ease Questionnaire (SEQ) that researchers could administer during sessions.
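For context on how such scores are calculated, here is a minimal sketch of SUS-style scoring alongside a single-item ease rating. It assumes the cognitive AUS keeps the public SUS convention of ten five-point items scored out of 100; Fable’s actual adaptation may differ, and representing the simplified SEQ as a normalized single rating is our assumption.

```python
# Hypothetical sketch: scoring a SUS-style 10-item questionnaire (like the AUS)
# and a single-item SEQ ease rating. Scoring rules follow the public SUS
# convention; Fable's cognitive AUS and simplified SEQ may differ.

def score_sus_style(responses: list[int]) -> float:
    """Convert ten 1-5 responses into a 0-100 score using SUS rules."""
    if len(responses) != 10 or any(r < 1 or r > 5 for r in responses):
        raise ValueError("Expected ten responses on a 1-5 scale")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd-numbered items are positively worded, even-numbered negatively.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw total to 0-100


def score_seq(rating: int, scale_max: int = 7) -> float:
    """Normalize a single-ease-question rating (1 = very difficult) to 0-100."""
    if rating < 1 or rating > scale_max:
        raise ValueError(f"Rating must be between 1 and {scale_max}")
    return (rating - 1) / (scale_max - 1) * 100


# Example: one participant's post-session responses and a per-task SEQ rating.
print(score_sus_style([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
print(round(score_seq(6), 1))                            # 83.3
```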

Facilitating the study

For our pilot research study, we evaluated one of Fable’s products – Fable Pathways – which offers free courses for people with disabilities through our own learning management system.

The protocol was designed to identify areas where cognitive load might be challenging. It included reviewing content, evaluating search results, determining course progress, and recalling lessons.

The research task force ran 13 moderated and 11 unmoderated user testing sessions with a cognitive panel that Fable recruited. During the moderated sessions researchers guided participants through tasks, sometimes using the SEQ to gather task-specific feedback. After each research session participants completed the cognitive AUS survey.

Five key observations from the pilot

1. Hypothetical questions are confusing

We learned that hypothetical instructions such as “Let’s say you have already watched this video…” confused participants when we referenced a task they hadn’t completed.

2. Building rapport and being clear about the purpose helps participants

Engaging in casual conversation at the beginning of the session helped foster a safe space, and participants seemed more at ease sharing feedback. Stating that the study is “not a test of your ability, but to gather feedback” helped participants focus on the tasks rather than on performing them “correctly.”

3. Participants often strayed from tasks

Some pilot study participants strayed from the intended tasks, even when clear instructions were given. We settled on leading the participant to the starting point for each task since the task evaluation didn’t include exploring the ease of locating the starting point.

“It was challenging to keep [participants] on task, and where we needed to be for what we wanted to focus on because they would go off to a different place. So, it was really important to keep prompting at the right time.”
– Betty Troy, Working group member

4. Unmoderated sessions were more challenging

Participants had more challenges completing tasks and sharing feedback in unmoderated sessions than in moderated sessions. Fable continues to iterate on methods of performing unmoderated research with this audience group outside of the pilot study.

“As researchers, we were supposed to be there just to help them to get to the right place, and afterwards we were there just to help with technical difficulties. We were not supposed to help them with completing the tasks.”
– Sepideh Shahi, Research task force lead

5. Talk aloud protocols won’t always work

Our research into facilitation methodologies suggested that a cognitive audience might not be able to follow a talk aloud protocol, in which participants share their thoughts and feelings while completing tasks. We did indeed find that some participants needed to complete tasks silently first and then share their feedback, rather than talking during tasks.

Analyzing the findings

As a starting point, we created a spreadsheet with the issues found by each participant categorized by:

  • severity
  • category (navigation, content, images, etc.)
  • type of issue (accessibility, usability, or both)

We also evaluated participants on their tech savviness, noted their use of assistive technology and other supports, and took their cognitive AUS scores into consideration.

Our analysis found that content-related issues were the most common category, accounting for 29% of all issues. This differs from what we see with assistive technology audiences, where tech compatibility is the biggest issue category. We also saw significant overlap between accessibility and usability, with 42% of issues tagged as both. Lastly, 26% of issues were categorized by the researcher as high severity, underscoring the importance of this audience’s insights.
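To make the tally behind these percentages concrete, here is a minimal aggregation sketch over issues tagged with the three fields above. The field names and example records are our own illustration, not Fable’s analysis spreadsheet.

```python
# Hypothetical sketch of the issue tally behind the percentages above.
# Field names and example records are illustrative, not Fable's data.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Issue:
    severity: str   # "low", "medium", or "high"
    category: str   # "navigation", "content", "images", ...
    kind: str       # "accessibility", "usability", or "both"

issues = [
    Issue("high", "content", "both"),
    Issue("medium", "navigation", "usability"),
    Issue("low", "content", "accessibility"),
    # ... one record per issue found across all sessions
]

def share(count: int, total: int) -> str:
    return f"{count / total:.0%}"

total = len(issues)
by_category = Counter(i.category for i in issues)
print("Most common category:", by_category.most_common(1))
print("Tagged as both:", share(sum(i.kind == "both" for i in issues), total))
print("High severity:", share(sum(i.severity == "high" for i in issues), total))
```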

“It was interesting to see how much intersectionality these participants had. Most had multiple cognitive needs and many also used assistive technology (AT), but in different ways than Fable’s community members who rely on AT for vision loss and mobility disabilities. For example, we saw participants using screen readers and captions to help with reading comprehension.”

Kate Kalcevich
Working group co-chair

What’s next?

We’re excited to learn more about this audience and the research insights they provide as we roll out cognitive accessibility research and testing to our customers. As we collect more cognitive AUS data points and run more studies, our understanding of the value of this audience will continue to grow.

Our initial findings indicate that teams could replace some of their general-population user research with a cognitive audience and obtain the same usability findings, plus additional insights that, if acted on, could dramatically improve the usability and delight of digital products.

Contributors

  • Alwar Pillai, CEO at Fable, Chair
  • Kate Kalcevich, Head of Innovation at Fable, Co-Chair
  • Greg Weinstein, Inclusive Research Lead
  • Betty Troy, Senior UX Researcher, Accessibility and Inclusive Design
  • Aidan Bryant, Accessibility User Researcher
  • Maya Alvarado, Senior Accessibility Researcher
  • Anna Tendera, Senior Researcher
  • Sepideh Shahi, Senior Inclusive Designer
  • Larissa McKetton, Neuroscientist and Director (Working Group Invited Expert)
