Interactive visualizations have changed the way we understand our lives. For example, they can present the number of coronavirus infections in every state.
But these graphics are often not accessible to people who use screen readers, software that scans the content of a computer screen and makes the content available through synthesized speech or Braille. Millions of Americans use screen readers for a variety of reasons, including blindness or partial blindness, learning disabilities, or motion sensitivity.
The team presented this project May 3 at CHI 2022 in New Orleans.
“If I look at a chart, I can pull out all the information I’m interested in, maybe it’s the general trend or maybe it’s the maximum,” said lead author Ather Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Currently, screen reader users get little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life or death. The goal of our project is to give screen reader users a platform where they can extract as much or as little information as they want.”
Screen readers can tell users about the text displayed on a screen because text is what researchers call “one-dimensional information.”
“There’s a beginning and an end to a sentence and everything else comes in between,” said co-lead author Jacob O. Wobbrock, UW professor in the School of Information. “But as soon as you move things into two-dimensional spaces, such as visualizations, there’s no clear beginning and end. It’s just not structured the same way, which means there is no obvious entry point or sequencing for screen readers.”
The team began the project by working with five screen reader users who were partially or completely blind to understand how a potential tool might work.
“In the area of accessibility, it’s really important to follow the principle of ‘nothing about us without us,'” Sharif said. “We’re not going to build something and then see how it works. We’re going to build it based on user feedback. We want to build what they need.”
To implement VoxLens, visualization designers only need to add a single line of code.
“We didn’t want people jumping from one visualization to another and getting inconsistent information,” Sharif said. “We’ve made VoxLens a public library, which means you’ll hear the same type of summary for all visualizations. Designers can just add that line of code and we do the rest.”
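The article doesn’t show VoxLens’s actual API, so the following is only an illustrative sketch. It uses a hypothetical `summarize` function to show the kind of one-shot spoken summary (point count, maximum, average, overall trend) that a tool like VoxLens might hand to a screen reader for a simple data series:

```javascript
// Illustrative only: VoxLens's real API is not described in this article.
// This sketch mimics the kind of consistent chart summary a screen-reader
// user might hear, built from an array of { label, value } points.
function summarize(data) {
  const values = data.map((d) => d.value);
  const max = Math.max(...values);
  const min = Math.min(...values);
  const mean = values.reduce((a, b) => a + b, 0) / values.length;
  const maxLabel = data[values.indexOf(max)].label;
  // Crude trend heuristic: compare the last point with the first.
  const trend = values[values.length - 1] >= values[0] ? "upward" : "downward";
  return (
    `Chart with ${data.length} data points. ` +
    `Maximum is ${max} at ${maxLabel}, minimum is ${min}, ` +
    `average is ${mean.toFixed(1)}. Overall trend is ${trend}.`
  );
}

// Example: hypothetical daily case counts.
const cases = [
  { label: "Monday", value: 120 },
  { label: "Tuesday", value: 180 },
  { label: "Wednesday", value: 150 },
];
console.log(summarize(cases));
// → Chart with 3 data points. Maximum is 180 at Tuesday, minimum is 120,
//   average is 150.0. Overall trend is upward.
```

Because every chart is summarized by the same routine, users hear the same type of summary regardless of which visualization they land on, which is the consistency the team describes.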
Researchers evaluated VoxLens by recruiting 22 fully or partially blind screen reader users. Participants learned how to use VoxLens and then completed nine tasks, each involving answering questions about a visualization.
Compared with participants in an earlier study who did not have access to the tool, VoxLens users completed the tasks with 122% higher accuracy and 36% less interaction time.
“We want people to interact with a graph as much as they want, but we also don’t want them to spend an hour trying to find the maximum,” Sharif said. “In our study, interaction time refers to the time it takes to extract information, and that’s why reducing it is a good thing.”
The team also interviewed six participants about their experiences.
“We wanted to make sure that those accuracy and interaction time numbers we saw were reflected in how participants felt about VoxLens,” Sharif said. “We received very positive feedback. Someone told us that he had been trying to access visualizations for 12 years and this was the first time he could do it easily.”
“This work is part of a much larger agenda for us – removing bias in design,” said co-lead author Katharina Reinecke, UW associate professor in the Allen School. “When we build technology, we tend to think of people who are like us and who have the same abilities as us. For example, D3 has really revolutionized access to online visualizations and improved how people can understand information. But there are values ingrained in it, and people are being left out. It’s really important that we start thinking more about how to make technology useful to everyone.”
The other co-authors on this paper are Olivia Wang, a UW undergraduate student in the Allen School, and Alida Muongchan, a UW undergraduate student studying human centered design and engineering. This research was funded by the Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences.
For more information, contact Sharif at email@example.com, Wobbrock at firstname.lastname@example.org, and Reinecke at Reinecke@cs.washington.edu.