Person with hearing aid using a mobile device

Why You Need to Do User Testing with Deaf and Hard of Hearing People

Reading time: estimated 7 minutes


Meryl K. Evans, Director of Marketing at Equal Entry, argues for conducting user testing with deaf and hard of hearing people, describes why their needs are often overlooked, and explains how to test captions and transcripts for maximum accessibility.

Introduction

One of the most powerful and effective ways to ensure your digital content is accessible and inclusive is by involving people with disabilities. However, one group is often overlooked. And that’s people who are deaf and hard of hearing (deaf/HH).

Development teams think the deaf/HH can navigate websites and digital products with little trouble. And that all they need is captions and transcripts for any audio.

Thus, development teams do their own testing with captions and transcripts. They figure turning off the sound is enough to mimic the deaf/HH experience.

It’s not.

And besides, there’s more to testing captions than checking accuracy.

How to Test Captions

While many people who aren’t deaf/HH use captions, they don’t depend on captions like deaf/HH people do. It’s a wholly different experience for those who need the captions. They have no fail-safe backup like hearing people do by turning on the sound. 

Testing captions isn’t as simple as you might think. Here are 10 factors to pay attention to when checking the quality of captioned videos. 

1. Readability

If the video contains closed captions, you don’t have to worry much about readability. The player displays them in its standard caption format. And depending on the platform, viewers can customize the captions to their preference.

Development and QA cannot reliably judge whether open captions are readable. Open captions are captions that always show up on the video; you cannot turn them off and on. The captions are essentially an image burned onto the video.

Sometimes you need to use open captions for a video. This is especially true of mobile social networks like Instagram and TikTok. If you post from the Facebook, Twitter, or LinkedIn mobile app, you’ll need to use open captions, because these networks only accept separate caption files when you upload from a desktop, laptop, or another computer.

Unfortunately, many of the open caption styles offered in mobile apps are not accessible. They violate many of the quality-caption factors that follow. It’s not just contrast you need to consider: ALL CAPS or animated captions also create a poor captioning experience.

2. Accuracy

Deaf/HH people watch captioned videos and review transcripts every day, for hours and hours. By comparison, development and QA spend only a few minutes watching a video to check the captions for accuracy. They can do that, but they have to be careful not to let their hearing fill in the blanks for missing captions.

3. Synchronized

Hearing developers and testers can also check synchronization. But it’s very important to pay attention to both the audio and the captions; they must be in sync. With the sound off, it’s harder for hearing people to catch out-of-sync captions. That’s because they don’t depend on lipreading the way some deaf/HH people do.

Yes, deaf/HH people can tell when a video is out of sync. Even with no sound. It could be the captions don’t match the action on the screen. Or they don’t match the lip movements. 

4. Length

The next big factor in high-quality captions is length. This is another one where many companies falter. The lines will be too wide or too short. Or they’ll show three or more lines of captions. The ideal length is one or two lines with no more than 32 characters per line.

Long lines of captions convert the captioning experience from scanning to reading. The most effective captions are scannable. They allow people to view the video without being hung up on the captions. But when the captions are long, it forces people to read and miss out on the action in the video.

Another problem with length is bad breaking points, or line division: where each caption line ends. When not done right, it causes cognitive overload and confusion.

It’s one of those things deaf/HH people will notice and hearing people won’t. Captioning Key’s line breaks section is the best resource on the topic. Check out the examples on the page and you can see what a difference it makes.

Here are examples of bad breaking points: 

ending. Starting another sentence
in one line. 

Splitting names like the author is Meryl
Evans. 

Meryl Evans should stay together. It’d be better to do this: 

Splitting names like the author
is Meryl Evans. 

Ending a line with a conjunction is another bad break:

She ate an orange, banana, and
apple for breakfast.

Avoid ending a line with a conjunction, so the following works better.

She ate an orange, banana, 
and apple for breakfast. 

And then there are captions that leave you hanging by ending on words like “to,” “and,” or “for,” so you have to wait for the next caption to complete the thought. Getting these little things right minimizes cognitive overload. If you produce caption files yourself, some of these checks can be automated, as in the sketch below.
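Here’s a minimal sketch in Python of a pre-check for the length and line-break guidelines above. The two-line and 32-character limits come from this article; the list of words that shouldn’t end a line is an illustrative assumption, not a complete style guide, and no automated pass replaces review by deaf/HH testers.

```python
MAX_LINES = 2
MAX_CHARS = 32
# Connecting words that shouldn't end a caption line (assumed, partial list).
HANGING_WORDS = {"a", "an", "the", "to", "and", "or", "but", "for", "of"}

def check_cue(text: str) -> list[str]:
    """Return a list of guideline problems found in one caption cue's text."""
    problems = []
    lines = text.splitlines()
    if len(lines) > MAX_LINES:
        problems.append(f"{len(lines)} lines (max {MAX_LINES})")
    for line in lines:
        if len(line) > MAX_CHARS:
            problems.append(f"over {MAX_CHARS} characters: {line!r}")
        words = line.rstrip(" .,!?").split()
        if words and words[-1].lower() in HANGING_WORDS:
            problems.append(f"line ends on {words[-1]!r}: {line!r}")
    return problems

# The article's conjunction example fails; the corrected break passes.
print(check_cue("She ate an orange, banana, and\napple for breakfast."))
# ["line ends on 'and': 'She ate an orange, banana, and'"]
print(check_cue("She ate an orange, banana,\nand apple for breakfast."))
# []
```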

5. Position

The next factor in quality captions is position. This is the easiest one. Put the captions on the bottom. You can move the captions up temporarily to display on-screen text. Just be sure to bring them back down once the text clears. I’ve conducted many polls on position and 99 percent choose the bottom. 

When I watch videos with the captions at the top the whole time, I miss a lot more of the action on the screen. Some explain this by saying the captions on the bottom put them closer to people’s faces.  

However, the camera pans out for many scenes where you can see the whole person or maybe no one. There’s no scientific study, but many of us agree that we can watch more of the video with captions on the bottom than on the top. Captions belong on the bottom with occasional exceptions. 
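For teams authoring their own caption files, the bottom-with-occasional-exceptions rule is straightforward to encode. Below is a minimal sketch that emits WebVTT cues: WebVTT’s line setting controls vertical placement, where line:0 pins a cue to the top row and omitting the setting leaves the player’s default bottom position. The timestamps and cue text are invented for illustration.

```python
def vtt_cue(start: str, end: str, text: str, top: bool = False) -> str:
    """Format one WebVTT cue; top=True pins it to the top of the frame."""
    settings = " line:0" if top else ""  # no setting = default bottom position
    return f"{start} --> {end}{settings}\n{text}\n"

print("WEBVTT\n")
print(vtt_cue("00:00:01.000", "00:00:04.000",
              "Captions start on the bottom."))
# On-screen text appears here, so raise the captions temporarily.
print(vtt_cue("00:00:04.000", "00:00:07.000",
              "Moved up while a lower-third\nname card is on screen.", top=True))
# The on-screen text has cleared, so bring the captions back down.
print(vtt_cue("00:00:07.000", "00:00:10.000",
              "And back down once it clears."))
```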

6. Sound

Sound, or lack thereof, is a real problem that frequently happens in international films. These films contain captions for audio that’s not in the viewer’s language. And that’s it. Say you’re watching a Japanese film: anytime they speak Japanese, the captions will show the English version.

But if they switch to English, there will be no captions. If there is music, song lyrics, or important sounds, those won’t be captioned either. This is one of those things you can’t test by turning off the sound. And living with hearing, you become so accustomed to sounds that you don’t think about them being missing for the deaf/HH.
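For comparison, here are a few invented cue texts showing the kind of coverage complete captions provide. The bracketed descriptions and music-note markers are common captioning conventions rather than a formal standard.

```python
# Cue texts that cover more than translated dialogue: language switches,
# meaningful sound effects, and song lyrics all get captioned.
cue_texts = [
    "[speaking English]",            # the language switch subtitles skip
    "[phone buzzing]",               # a sound that matters to the story
    "♪ Row, row, row your boat ♪",   # lyrics, marked with music notes
]
for text in cue_texts:
    print(text)
```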

7. Credits

The next factor is also easy to catch. Whenever text appears on the screen, the captions should not overlap the text. Viewers want to see both the captions and the onscreen text or credits. Simple as that. 

8. Voice Changes

A voice changes for a reason. A person speaking hoarsely may be sick, losing their voice, or something else. Sometimes a person imitates someone, something, or an animal. Those changes need to be highlighted. When a person suddenly speaks softly or loudly, this needs to be mentioned.

The deaf/HH can catch confusing points in voice changes and speaker identification. Like the time I saw a performer singing nursery rhymes on a show. I couldn’t figure out why people applauded his singing. The next time he was on the show, the captions revealed the reason his singing impressed the audience. He imitated famous rappers. 

9. Speaker Identification

Hearing people can often identify a speaker because they recognize the person’s voice. That’s not the case for the deaf/HH. I was watching a scene and replayed it over and over because I needed to know who spoke a specific line. There was no telling from watching people’s lips, as it was a fast conversation among four people. Speaker identification matters.
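A small sketch of one common convention, assuming you control the caption text: add a visible label whenever a viewer couldn’t otherwise tell who is talking. WebVTT also has &lt;v Name&gt; voice tags, but not every player displays the name, so a plain text label is the safer choice.

```python
# Prefix a caption line with its speaker whenever the viewer couldn't
# otherwise tell who is talking (fast conversations, off-screen voices).
def identify(speaker: str, line: str) -> str:
    """Return a caption line with a visible speaker label."""
    return f"[{speaker}] {line}"

print(identify("Meryl", "Speaker identification matters."))
# [Meryl] Speaker identification matters.
```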

10. Flow / Movement

And finally, flow or movement is mainly a problem with open captions and live captions. Some mobile apps provide caption options with moving or scrolling captions. The only time captions can get away with scrolling or roll-up is during a live show.

Moving captions are a problem for people with vestibular disorders, migraines, and reading disabilities. Pop-in captions, where one or two lines of captions pop in and then pop out, create a better experience because they allow viewers to read at their own pace.

When I attended a captioned virtual reality presentation, I ran into a problem few hearing people will notice. I didn’t feel so great because reading the captions was like watching a ping pong match. No matter what I tried to do to minimize the head movement, it didn’t work. 

Why You Need Transcripts 

Captioned videos aren’t the only thing the deaf/HH need. Transcripts for videos and podcasts are also important. Whenever possible, you want to offer both captions and transcripts. Some deaf people prefer captions. Some prefer transcripts. And people who use refreshable Braille displays or screen readers need transcripts; those tools don’t work with captions.

And too many transcripts are not readable. They contain large blocks of text with few paragraph breaks, if any. That’s not scannable. That’s not readable. It causes cognitive overload, and no one wants to read it.