
How Salesforce uses real-world accessibility feedback to shape AI UX design
AI experts at Stanford University predict that 2026 will be the year “AI evangelism gives way to AI evaluation.” In other words, it’s time to test AI utility at scale in real-world products. As product and UX leaders face increased pressure to integrate AI into their offerings quickly, many will find themselves standing at an accessibility crossroads.
- One path prioritizes speed to market, which will inadvertently exclude many people with disabilities who want to use AI products.
- The other path gives teams permission to test product assumptions against real-world user experiences. This approach takes AI products from checkbox “accessible” to actually usable for people with disabilities.
As a leader in accessibility, it’s no surprise Salesforce chose the second path, treating accessible AI as a strategic advantage that enhances overall team agility.
In this article, we’ll share real examples of accessibility issues that Salesforce teams uncovered in AI UX design by testing with people with disabilities. We’ll also share the company’s practical, repeatable framework for weaving real-world accessibility research into everyday AI development — without slowing teams down.
Accessibility is critical to building trustworthy AI
Salesforce views compliance with accessibility standards and regulations — like the Web Content Accessibility Guidelines (WCAG) or the European Accessibility Act (EAA) — as a bare minimum. Their ultimate goal is to create exceptional user experiences.
“No one designs or develops a product so that only 50% or 80% of people can use it,” says Adam Rodenbeck, Lead Digital Accessibility Engineer on the Product Accessibility team at Salesforce. “You want to design and develop products so everyone can use them. Our goal with user testing is to ensure that all users can use our products delightfully and easily.”
Salesforce teams pursue these usability goals through a partnership with Fable that dates back to 2023.
Fable and Salesforce partnership
Three real usability issues Salesforce surfaced by testing with people with disabilities
Testing Agentforce with people with disabilities revealed critical AI UX design issues that automated tools can’t catch. Below, we share three video clips taken from actual Salesforce moderated research sessions so you can see the usability insights for yourself.
Session 1: Kamil can’t access the Agentforce chat
“Everything seems to be accessible, except the AI section.”
– Kamil, on-screen keyboard user
Task: Find and interact with the Agentforce chat at the bottom of a Salesforce web page.
What went wrong: Kamil used the Tab key on his on-screen keyboard to navigate through the elements on the page, yet he couldn’t reliably reach the AI chat input. When he finally landed near the component, the focus indicator was so subtle that it blended into the chat box’s thick blue border, making it hard to tell that he had reached it.
AI UX design insights the Salesforce team gained by working with Kamil:
- It was difficult for Kamil to navigate to Agentforce at the bottom of the page using an on-screen keyboard.
- Design choices can make it hard to find page elements. For example, the outline around the Agentforce chat made it hard for Kamil to spot it on the page (see the sketch after this list).
- The process was quite time-consuming, putting Kamil at high risk of abandoning the AI chat experience altogether.
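To make this kind of finding concrete, here is a minimal sketch, assuming a hypothetical `.ai-chat-launcher` element, of how a team might keep a chat entry point in the keyboard Tab order and give it a high-contrast, offset focus indicator that doesn’t blend into the component’s own border. The selector, label text, and color are illustrative assumptions, not Salesforce’s implementation.

```ts
// A minimal sketch: keep the chat entry point reachable with Tab and give it
// a focus indicator that stands apart from the widget's own decorative border.
// ".ai-chat-launcher" is a hypothetical selector, not a Salesforce class name.
const launcher = document.querySelector<HTMLElement>(".ai-chat-launcher");

if (launcher) {
  launcher.tabIndex = 0;                               // reachable with the Tab key
  launcher.setAttribute("role", "button");             // announced as a control
  launcher.setAttribute("aria-label", "Open AI chat"); // meaningful accessible name
}

// A high-contrast outline with an offset, so the focus ring is not mistaken
// for the chat box's existing thick blue border.
const style = document.createElement("style");
style.textContent = `
  .ai-chat-launcher:focus-visible {
    outline: 3px solid #032d60;
    outline-offset: 4px;
  }
`;
document.head.append(style);
```

Pairing a logical Tab order with a clearly visible focus style addresses both barriers Kamil hit: reaching the chat at all, and knowing when he had reached it.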
Session 2: Caitlin is forced to use workarounds to access AI
“If I wasn’t testing, I just would have given up.”
– Caitlin, screen reader user
Task: Navigate to the Agentforce chat panel on a web page.
What went wrong: The keystrokes required to get in and out of the Agentforce chat panel were unpredictable for Caitlin.
AI UX design insights the Salesforce team gained by working with Caitlin:
- Caitlin would find it most helpful to choose Agentforce in the toolbar and be automatically placed into the panel (a sketch of this pattern follows the list).
- The inability to get into the chat panel contributed to negative perceptions of other areas of the interface.
- Users’ confidence collapses quickly when they can’t predict how focus and navigation will work, and they’re inclined to stop exploring. If they do get into the element, they may also hesitate to navigate away from it for fear that they won’t be able to easily reach it again.
- The team discovered that Caitlin’s navigation issue had more to do with the screen reader she was using than with the AI product itself. The Salesforce team used its existing feedback loops with screen reader vendors to share that finding.
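The focus-management pattern Caitlin describes can be sketched in a few lines. The snippet below is a hypothetical illustration, not the Agentforce implementation: activating a toolbar trigger moves focus straight into the panel’s input, and Escape returns focus to the trigger so navigation stays predictable. The element IDs are assumptions made for the example.

```ts
// A minimal sketch of predictable focus management for a chat panel.
// The element IDs are hypothetical, not actual Agentforce markup.
const trigger = document.getElementById("ai-chat-trigger");
const panel = document.getElementById("ai-chat-panel");
const input = document.getElementById("ai-chat-input");

if (trigger && panel && input) {
  trigger.addEventListener("click", () => {
    panel.removeAttribute("hidden");
    input.focus(); // place the user directly in the conversation input
  });

  panel.addEventListener("keydown", (event) => {
    if (event.key === "Escape") {
      panel.setAttribute("hidden", "");
      trigger.focus(); // return focus so the user always knows where they'll land
    }
  });
}
```

Because focus always moves to a known place when the panel opens and back to the trigger when it closes, users like Caitlin can explore the chat without fearing they won’t find their way back.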
Session 3: David wishes for a speech-to-text feature
“Typing is difficult, painful, slow… I’d love to see speech-to-text here.”
– David, sip and puff device user
Task: Type an AI prompt.
What went wrong: Prompting assumes easy, fast text entry. For David, typing is very difficult, so he prefers speech-to-text. He keeps his prompt short and hopes that it will make sense to the AI.
AI UX design insights the Salesforce team gained by working with David:
- Typing with a sip and puff device is highly fatiguing.
- If users can’t prompt AI precisely, they won’t get the best outcomes. There’s a risk that AI UX design will underdeliver by default.
- Offering a speech-to-text input option would drastically change the value David gets from this AI feature (see the sketch after this list).
- The team has been thinking about how to make the experience better for users like David, including pre-generated prompt suggestions.
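As one illustration of the kind of option David is asking for, the sketch below wires an optional dictation button to the browser’s Web Speech API and appends the transcript to a prompt field. It is a minimal, hypothetical example: the element IDs are invented, the API is only available in some browsers, and it does not describe Salesforce’s implementation.

```ts
// A minimal sketch of an optional speech-to-text path into an AI prompt field,
// using the Web Speech API where the browser supports it. The element IDs
// below are hypothetical, for illustration only.
const promptInput = document.querySelector<HTMLTextAreaElement>("#ai-prompt-input");
const Recognition =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

if (promptInput && Recognition) {
  const recognition = new Recognition();
  recognition.lang = "en-US";
  recognition.interimResults = false;

  recognition.onresult = (event: any) => {
    // Append the transcribed phrase so a short spoken sentence can replace
    // slow, fatiguing character-by-character typing.
    promptInput.value += event.results[0][0].transcript;
  };

  // A dedicated "Dictate prompt" button starts listening on demand.
  document.getElementById("ai-dictate-button")?.addEventListener("click", () => {
    recognition.start();
  });
}
```

Even as a progressive enhancement, an input path like this changes prompting from a fatiguing typing task into something much closer to the conversational experience AI promises.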
A final overarching takeaway
One of the most striking insights to come from these three user experiences is that the AI feature being tested was the least accessible part of the product. Many of the issues uncovered were not WCAG violations. They were usability failures that only surfaced through AI UX design research with real assistive technology users.
“If you work in accessibility, if you aren’t currently doing user research like this, it can be daunting to get started. But I cannot say enough good things about our partnership with Fable, and how valuable it’s been for our product teams. It’s having real impact in the decisions that we’re making, the patterns that we’re creating, and just also really helping the product teams understand different user needs in a new way.”

Katie Puskarich
Senior Manager, Product Accessibility Enablement, Salesforce
Borrow Salesforce’s accessible AI playbook
The team at Salesforce threads accessibility throughout the software development lifecycle.
“We include accessibility reviews when we’re looking at designs, talking with engineers about how to implement the instructions they’re getting from designers, and into testing,” says Adam. “Not only do we test to see whether the components are acting the way that we would like them to, but we also want to know how users experience the products that we’re designing.”
Conducting usability research with people with disabilities gives Salesforce product and UX design teams the opportunity to interact with people who are at different stages in their assistive technology journeys.
- Working with the Fable Community eliminates the need for time-consuming recruitment. Instead, teams get instant access to a panel of people with vision, mobility, and cognitive accessibility needs.
- This gives teams a deeper understanding of how different users perceive and interact with Salesforce AI UX designs.
Salesforce also chooses to partner with Fable’s in-house team of accessibility research experts to do a lot of the heavy lifting. There are several stages in the Salesforce research feedback loop:

Step 1: Collaborative planning
Salesforce’s product and accessibility teams meet with Fable researchers to plan out each research study, including features they are testing, what they hope to learn, and which assistive technologies they need to see in action.
Step 2: Moderated sessions
Salesforce trusts Fable’s accessibility experts to run live sessions with people with disabilities who use screen readers, magnification, on-screen keyboards, sip and puff devices, or have other access needs.
Step 3: Product team share-outs with clips
Fable’s experts analyze the research findings, delivering key insights and prioritized guidance aligned with Salesforce business goals. Fable also surfaces salient “a-ha” moments by sharing short video clips that show testers working through the tasks. Product teams can ask questions and connect what they’re seeing to their own AI UX design decisions.
Step 4: Accessibility validation and logging
A Salesforce digital accessibility engineer (DAE) is assigned to help the product team apply the findings. The DAE validates what testers experienced with the product, separates clear accessibility bugs from broader usability feedback, and logs both in the same systems used for other product work.
“Sometimes it might be that there’s an actual bug in something, and if it is an accessibility bug, that digital accessibility engineer will then log that bug for the product team to fix it,” says Katie. “More often than not it’s usability feedback. There’s a different mechanism for tracking that feedback.”
If you’re interested in the top accessibility and usability findings related to Salesforce’s Agentforce experiences over almost a year of testing, watch the clip below.
Step 5: Prioritization and scaling
The Salesforce DAE works with the product team to manage prioritization and backlog. This individual also evaluates whether it’s possible to extrapolate findings to other areas of Agentforce where teams are working on similar interaction patterns.
Step 6: Improvements, fixes, and release notes
Changes ship as part of normal releases and are reflected in release notes to keep customers up to date. Internally, the loop is considered “closed” before the next study starts, and it repeats with every Salesforce AI UX design study.
“We really just want to make sure that people know—hey, we heard you, we’re listening, we fixed it,” says Adam. “We want to build that trust and accountability with the disability community, because when you solve for one, you can extend it to many. And we want to keep our users with disabilities at the heart of what we’re doing with Agentforce. It’s a high priority for us.”
Including people with disabilities leads to better AI for all
AI products offer amazing potential to elevate the experience of every user. For people with disabilities in particular, AI offers the promise of reducing cognitive load, enabling voice interaction for users with physical disabilities, and even acting as assistive technology itself.
But this can only happen if product, UX, and accessibility leaders work together to include diverse perspectives when designing, developing, and releasing products.
Otherwise, AI adds another layer that people must fight through to make products usable.
“There were a number of people on the product team who had never seen someone use a sip and puff device like that before. It was an enlightening experience,” says Katie. “It really puts a human element into the choices that we make. If we aren’t considering those needs, we’re unintentionally creating barriers for them.”
Watch the full webinar recording to see exactly how Salesforce is ensuring AI technology doesn’t just move faster — it moves towards usability for all.

