Note: The creation of this article on testing prerecorded media alternatives was human-based, with the assistance of artificial intelligence.

Explanation of the success criteria

WCAG 1.2.8 Media Alternative (Prerecorded) is a Level AAA Success Criterion. It states that an alternative for time-based media is provided for all prerecorded synchronized media and for all prerecorded video-only media.

This success criterion ensures audiovisual content is accessible to people with both vision and hearing impairments by providing a text alternative in the same language as the video or page.

This approach provides all visual and auditory information from synchronized media in text form. Unlike audio description, it isn’t limited to pauses in dialogue, offering a full narrative of visual context, actions, expressions, non-speech sounds, and complete dialogue transcripts, following the same sequence as the media for a more complete representation.

See the note under Intent for additional detailed guidance as it pertains to similar success criteria at different conformance levels.

Note that this Success Criterion is at conformance level AAA, which is generally considered aspirational, going beyond the standard A and AA conformance levels. It addresses more specific accessibility needs and is not mandatory for all websites or content. However, achieving Level AAA can provide additional benefits in terms of inclusivity.

Who does this benefit?

  • People who are deaf-blind, with limited or no vision and hearing, can access information in audiovisual content.

An appeal

Text transcripts should not be a heavy lift to implement. If you have a video with captions, you already have the basis for a transcript: the dialogue and non-speech sounds are captured, and the main addition is a description of the important visual information. Make transcripts a default offering as an alternative to time-based media.
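As a rough illustration, the sketch below converts a WebVTT caption file into a plain-text transcript that can then be expanded with visual descriptions. It is a minimal sketch under stated assumptions, not a standard tool: the file name is hypothetical, inline tags are stripped with a simple regex, and cue identifiers are assumed to be numeric.

```python
"""Minimal sketch: turn a WebVTT caption file into a plain-text transcript.
Assumptions: hypothetical file name, numeric cue identifiers, simple tag stripping."""

import re
from pathlib import Path


def vtt_to_transcript(vtt_path: str) -> str:
    """Strip the WebVTT header, cue timings, and inline tags, keeping the caption text."""
    lines = Path(vtt_path).read_text(encoding="utf-8").splitlines()
    text_lines = []
    for line in lines:
        stripped = line.strip()
        if not stripped or stripped.startswith(("WEBVTT", "NOTE", "STYLE")):
            continue  # skip blank lines, the header, comments, and style-block openers
        if "-->" in stripped:
            continue  # skip cue timing lines
        if stripped.isdigit():
            continue  # skip numeric cue identifiers
        stripped = re.sub(r"<[^>]+>", "", stripped)  # drop inline markup such as <v Speaker>
        text_lines.append(stripped)
    return "\n".join(text_lines)


if __name__ == "__main__":
    # Hypothetical file name for illustration only.
    print(vtt_to_transcript("captions.vtt"))
```

The output is only a starting point; descriptions of visual information still need to be written in by a person for the result to satisfy this Success Criterion.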

Testing via automated testing

Automated testing for transcripts can quickly scan a page or its code to detect the presence of linked or embedded transcript elements. It’s highly scalable across platforms, making it an efficient and fast method for identifying whether transcript content is available.
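An automated check of this kind might look something like the following sketch. It assumes BeautifulSoup (bs4) is available, and the keyword list and tags searched are illustrative assumptions rather than a standard rule set; it is a heuristic, not a definitive test.

```python
"""Minimal sketch of an automated transcript-presence check.
Assumptions: bs4 is installed; the keywords and tag choices are illustrative."""

from bs4 import BeautifulSoup

TRANSCRIPT_KEYWORDS = ("transcript", "text version", "text alternative")


def passes_transcript_presence_check(html: str) -> bool:
    """Return True when the page has no embedded media, or when a
    transcript-like link or heading appears alongside the media."""
    soup = BeautifulSoup(html, "html.parser")
    if not soup.find_all(["video", "iframe"]):
        return True  # no embedded media, so nothing to check
    for el in soup.find_all(["a", "h2", "h3", "summary", "details", "section"]):
        text = el.get_text(" ", strip=True).lower()
        if any(keyword in text for keyword in TRANSCRIPT_KEYWORDS):
            return True  # something on the page is labeled as a transcript
    return False  # media present, but no transcript reference detected
```

A check like this can only confirm that something labeled a transcript exists on the page; it cannot judge what that transcript contains.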

Automated testing for transcripts has significant limitations. It cannot assess the completeness or accuracy of the transcript, nor can it evaluate whether the transcript format is readable or easy to follow. The tools are prone to high rates of false positives and negatives; for example, they may detect a transcript link and assume compliance even if the transcript is missing or non-functional. Additionally, automated tests do not evaluate whether visual descriptions are included, making them insufficient for ensuring full accessibility.

Testing via Artificial Intelligence (AI)

AI-based testing for transcripts offers several advantages. It can automatically locate transcript text, whether it’s embedded directly on the page or linked externally. The process is fast, as it can analyze audio, visual, and transcript components simultaneously. Additionally, it scales well, making it especially effective for evaluating large video libraries efficiently.

AI-based testing for transcripts offers useful capabilities but comes with limitations. It may attempt to detect missing audio or visual descriptions to assess transcript completeness, though this detection is not always accurate. Some AI tools can compare the transcript with audio and visual elements to check for alignment, helping assess transcript accuracy. These tools can also evaluate basic formatting aspects, such as paragraph structure and speaker labels, contributing to overall usability. However, false positives and negatives remain a concern, particularly with partial or low-quality transcripts. While AI may identify missing non-verbal visual descriptions by analyzing video content, its effectiveness in this area is still moderate.
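One way such an alignment check might work is sketched below: it compares automatically recognized speech, assumed to come from a separate speech-to-text service, against the published transcript and flags large mismatches for human review. The normalization step and the 0.8 threshold are illustrative assumptions, not a standard.

```python
"""Minimal sketch of one alignment check an AI-assisted workflow might run.
Assumptions: asr_text comes from a speech-to-text service; 0.8 is an arbitrary threshold."""

import re
from difflib import SequenceMatcher


def normalize(text: str) -> str:
    """Lowercase and drop punctuation so the comparison focuses on wording."""
    return " ".join(re.sub(r"[^a-z0-9\s]", "", text.lower()).split())


def transcript_alignment_score(asr_text: str, transcript_text: str) -> float:
    """Rough 0-1 similarity between recognized speech and the published transcript."""
    return SequenceMatcher(None, normalize(asr_text), normalize(transcript_text)).ratio()


def flag_for_review(asr_text: str, transcript_text: str, threshold: float = 0.8) -> bool:
    """Flag the transcript for human review when similarity falls below the threshold."""
    return transcript_alignment_score(asr_text, transcript_text) < threshold
```

A low score suggests missing or inaccurate dialogue, while a high score still says nothing about whether visual descriptions are present.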

Testing via manual testing

Manual testing for transcripts involves several key checks to ensure quality and accessibility. It confirms the presence of a full transcript and verifies its completeness by ensuring all audio and visual content is described. Accuracy is assessed by checking grammar, timing, and faithful representation of speech and actions. The format and usability are also evaluated, focusing on readability and logical flow. When reviewed properly, the likelihood of false positives or negatives remains low. Finally, it ensures the inclusion of visual descriptions, capturing actions and context beyond just dialogue.

Manual testing for transcripts is slow and time-consuming, as it requires listening to or watching the entire content. This approach also lacks scalability, being labor-intensive and inefficient for large volumes of content.

Which approach is best?

No single approach for testing prerecorded media alternatives is perfect. However, using the strengths of each approach in combination can have a positive effect.

Automated testing can be used initially to detect missing transcript references. AI-based testing can then be used to flag incomplete transcripts, supporting broad, large-scale QA. Manual testing is the best overall approach for ensuring that the transcript is complete, accurate, and includes both audio and visual details.
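Put together, a hypothetical triage pipeline reusing the helpers sketched earlier might look like this; the routing messages are an illustrative assumption about how a team could order the checks, not a prescribed workflow.

```python
"""Minimal sketch of chaining the three approaches, reusing the hypothetical
helpers sketched above (passes_transcript_presence_check and flag_for_review)."""


def triage_page(html: str, asr_text: str, transcript_text: str) -> str:
    """Route a page to the cheapest check that can still catch its likely problems."""
    if not passes_transcript_presence_check(html):
        return "fail: no transcript reference found (automated check)"
    if flag_for_review(asr_text, transcript_text):
        return "review: transcript may be incomplete or misaligned (AI check)"
    # Even a clean automated result still needs a human pass for visual
    # descriptions, accuracy, and readability.
    return "review: confirm completeness, accuracy, and visual descriptions manually"
```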

Related Resources