Accessible Data Visualization Utilizing SVG Metadata


I attended a local talk entitled “Future Vocabulary for Accessible Data Visualization” with the Austin Accessibility and Inclusive Design group this week. The presentation concerned an experimental method of making images accessible via the browser-based Web Speech API. The group used the Describler screen reader for the majority of the talk, though folks in the audience did try it in Safari and Chrome on their own devices.

Doug Schepers, the W3C Developer Relations Lead, has produced a system for introducing ARIA-readable text that can be read aloud via a standard browser’s speech synthesis feature. Leveraging the API and the tags in the SVG’s XML markup, a visually impaired user is able to navigate through a complex vector image and derive the same content that is accessible to a sighted visitor; in fact, as presented, they are able to glean even deeper levels of data and information embedded in the image that may go un-accessed by a visual user.
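Here is a rough sketch of what that embedded metadata can look like; the chart, labels, and values below are hypothetical, not taken from the demo. The `<title>` and `<desc>` elements provide the accessible name and description, and `tabindex="0"` makes each data point keyboard-focusable:

```xml
<!-- Hypothetical bar chart with per-element accessible metadata -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 300 160" role="img"
     aria-labelledby="chart-title chart-desc">
  <title id="chart-title">Quarterly sales</title>
  <desc id="chart-desc">Bar chart of sales for Q1 through Q3.</desc>

  <!-- Each bar is focusable and carries its own spoken description -->
  <rect x="20"  y="95" width="40" height="60"  tabindex="0"
        role="img" aria-label="Q1: 60 units, 20 percent of total"/>
  <rect x="80"  y="65" width="40" height="90"  tabindex="0"
        role="img" aria-label="Q2: 90 units, 30 percent of total"/>
  <rect x="140" y="5"  width="40" height="150" tabindex="0"
        role="img" aria-label="Q3: 150 units, 50 percent of total"/>
</svg>
```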

The presentation examined a rudimentary rendering of a car and allowed for tabbing through the SVG, each tab stop associated with a data point that the browser read aloud in real time, describing attributes such as body panels, tires, and the orientation of the vehicle to the viewer on screen. Things got very interesting when the conversation steered toward graphical representations. Graphical waypoints such as axis labels and nodes became accessible for a general interpretation of the image. Additionally, comparative values, such as what percentage the currently focused data point constituted of the entire data set, were enunciated; that information is not directly available to visitors relying solely on a visual reading of the image. In this instance, a sighted visitor not utilizing the speech feature would actually miss out on information in the image.
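A minimal sketch of how that read-aloud behavior could be wired up with the standard Web Speech API follows; the `data-value` and `data-label` attributes and the selector are my own assumptions, not Describler’s actual implementation:

```typescript
// Speak each focused data point, including its share of the whole series.
const points = Array.from(
  document.querySelectorAll<SVGRectElement>("svg rect[data-value]")
);
const total = points.reduce((sum, el) => sum + Number(el.dataset.value), 0);

for (const el of points) {
  el.addEventListener("focus", () => {
    const value = Number(el.dataset.value);
    const pct = ((value / total) * 100).toFixed(1);
    // speechSynthesis is the browser's text-to-speech entry point;
    // cancel() stops any announcement still in flight before speaking.
    speechSynthesis.cancel();
    speechSynthesis.speak(
      new SpeechSynthesisUtterance(
        `${el.dataset.label}: ${value}, ${pct} percent of the total`
      )
    );
  });
}
```

The percentage is computed on the fly, which is exactly the kind of derived value a purely visual reading of the chart never surfaces.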

There was some discussion about how best to implement this in the build lifecycle. Illustrator was floated as an obvious go-to, with some talk that a script could be written to pull info from layers in the AI file into the XML when exporting the SVG. Finally, one of the neatest new things I learned about was sonification: the representation of information with sound rather than visuals, used here as a way of audibly representing the mean of the graphical data point distribution with an arcing tone.
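Sonification itself can be sketched with the standard Web Audio API; the pitch mapping below is my own guess at the general technique, not the presenter’s code. An oscillator sweeps its frequency across the data series so the listener hears the shape of the distribution as a rising or falling tone:

```typescript
// Sweep an oscillator's pitch across the series so the listener hears
// the distribution's shape as a continuous tone.
function sonify(values: number[], secondsPerPoint = 0.25): void {
  const ctx = new AudioContext(); // note: some browsers require a user gesture first
  const osc = ctx.createOscillator();
  const max = Math.max(...values);

  osc.connect(ctx.destination);
  values.forEach((v, i) => {
    // Map each value into a 200–1000 Hz range (an arbitrary choice).
    const freq = 200 + (v / max) * 800;
    osc.frequency.linearRampToValueAtTime(freq, ctx.currentTime + i * secondsPerPoint);
  });
  osc.start();
  osc.stop(ctx.currentTime + values.length * secondsPerPoint);
}

// e.g. sonify([60, 90, 150]) plays a rising tone for the chart sketched earlier.
```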

Doug goes into more detail on metadata inclusion via vector rendering software such as Inkscape here.

Here is a thorough write-up where he touches further on several of these points.

accessibility