Section 508, 1194.22(b)


“For any time-based multimedia presentation (e.g., a movie or animation), synchronize equivalent alternatives (e.g., captions or auditory descriptions of the visual track) with the presentation.”

WCAG 1.0 Checkpoint: 1.4


Auditory presentations must be accompanied by text transcripts (textual equivalents of auditory events). When these transcripts are presented synchronously with a video presentation, they are called captions and are used by people who cannot hear the audio track of the video material.

Some media formats, such as QuickTime and SMIL (Synchronized Multimedia Integration Language), allow captions and video descriptions to be added to the multimedia clip. The following example shows that captions should include not only speech but also other sounds in the environment that help viewers understand what is going on.


Captions for a scene from "E.T." in which the phone rings three times and is then answered:

[phone rings]
[ring]
[ring]
Hello?

Until the format you are using supports alternative tracks, you can make two versions of the movie available: one with captions and descriptive video, and one without. Some technologies, such as SMIL and SAMI (Synchronized Accessible Media Interchange), allow separate audio and video files to be combined with text files via a synchronization file to create captioned audio and movies.
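For example, a SMIL 1.0 file can play a video clip and a separate caption text stream in parallel, with the captions shown only when the viewer has turned captions on in the player. The sketch below uses hypothetical file names (movie.rm, captions.rt) and assumes a player that honors the system-captions test attribute; the exact features supported vary by player.

    <smil>
      <head>
        <layout>
          <!-- One region for the picture and one below it for the captions -->
          <root-layout width="320" height="280"/>
          <region id="video-area" top="0" left="0" width="320" height="240"/>
          <region id="caption-area" top="240" left="0" width="320" height="40"/>
        </layout>
      </head>
      <body>
        <par>
          <!-- The video and the caption text stream play at the same time -->
          <video src="movie.rm" region="video-area"/>
          <textstream src="captions.rt" region="caption-area" system-captions="on"/>
        </par>
      </body>
    </smil>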

Some technologies also allow the user to choose from multiple sets of captions to match their reading skills. For more information see the latest SMIL specification.
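One way to offer such a choice is SMIL's switch element: the player evaluates each child in order and plays the first one whose test attributes match the user's settings. The sketch below, again with hypothetical file names, chooses between an English and a French caption stream based on the player's preferred language; a similar arrangement could offer, say, verbatim and easier-reading caption files where the player supports it.

    <par>
      <video src="movie.rm" region="video-area"/>
      <switch>
        <!-- The first caption stream acceptable to the player is used -->
        <textstream src="captions-en.rt" region="caption-area"
                    system-language="en" system-captions="on"/>
        <textstream src="captions-fr.rt" region="caption-area"
                    system-language="fr" system-captions="on"/>
      </switch>
    </par>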

Equivalents for sounds can be provided as a text phrase on the page that links to a text transcript or description of the sound file. The link to the transcript should appear in a highly visible location, such as the top of the page. If a script loads a sound automatically, it should also display a visual indication that the sound is currently playing and provide a description or transcript of the sound.
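As an illustration, the HTML fragment below (file names and sample text are hypothetical) places the transcript link before the link to the sound file, and shows how a page that starts a sound automatically can announce that fact in visible text:

    <!-- The transcript link appears near the top of the page,
         before the link to the sound file itself -->
    <p><a href="speech-transcript.html">Read the transcript of the speech</a></p>
    <p><a href="speech.wav">Listen to the speech (WAV, 2 MB)</a></p>

    <!-- If a script starts a sound automatically, the page should also
         state visibly that sound is playing and point to its transcript -->
    <p>A welcome message is now playing.
       <a href="welcome-transcript.html">Read the transcript of the welcome message.</a></p>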

