Note: This content on prerecorded sign language was created by humans, with the assistance of artificial intelligence.

Explanation of the success criterion

WCAG 1.2.6 Sign Language (Prerecorded) is a Level AAA Success Criterion. It states: "Sign language interpretation is provided for all prerecorded audio content in synchronized media."

This success criterion ensures that people who are deaf or hard of hearing and fluent in sign language can fully understand the audio content in synchronized media. Sign language conveys nuances such as intonation and emotion that captions cannot, providing richer and more effective access. Additionally, many sign language users can process signed content more quickly than they can read captions, making sign language better suited for time-based presentations.

Note that this Success Criterion sits at conformance level AAA, which means it is generally considered aspirational. It addresses more specific accessibility needs and is not mandatory for all websites or content. However, achieving Level AAA can provide additional benefits in terms of inclusivity.

Who does this benefit?

Sign language primarily benefits individuals who are deaf or hard of hearing and prefer to communicate using sign language. It provides a more accessible and natural mode of communication for these individuals, allowing them to express themselves and understand others more effectively. Additionally, sign language can benefit:

  • People with speech or language impairments who may find sign language an easier or more effective communication method.
  • Hearing individuals who interact with deaf or hard of hearing people, such as family members, friends, educators, or coworkers.
  • Early learners who use sign language to develop communication skills before verbal language acquisition.
  • Healthcare professionals, educators, and others who work in inclusive environments and need to communicate with a diverse population.

Testing via automated testing

Automated testing for sign language accessibility includes detecting its presence through media file tags or overlay elements. This process is notably fast, relying on basic metadata scanning, which allows it to scale efficiently across large volumes of video content.
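
As a minimal sketch of this kind of metadata scan, the Python snippet below uses ffprobe to look for stream tags that hint at an embedded sign language track. The keyword list and the assumption that ffprobe is installed are illustrative; real-world tagging conventions vary widely.

    import json
    import subprocess

    # Keywords that sometimes appear in stream tags for sign language tracks.
    # Purely illustrative: there is no universal tagging convention.
    SIGN_HINTS = ("sign", "sgn", "asl", "bsl")

    def probe_streams(path: str) -> list[dict]:
        """Return the stream metadata that ffprobe reports for a media file."""
        result = subprocess.run(
            ["ffprobe", "-v", "quiet", "-print_format", "json",
             "-show_streams", path],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout).get("streams", [])

    def may_contain_sign_language(path: str) -> bool:
        """Flag files whose stream tags mention sign language."""
        for stream in probe_streams(path):
            tags = stream.get("tags", {})
            text = " ".join(str(value).lower() for value in tags.values())
            if any(hint in text for hint in SIGN_HINTS):
                return True
        return False

A check like this only confirms that something was labeled as sign language; it cannot tell whether the signing is accurate or visible, which is exactly the limitation described next.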

Automated testing has several limitations. It cannot assess the content or accuracy of the sign language presented, nor can it evaluate synchronization with the audio or spoken content. Additionally, it is unable to detect whether the signer is clearly visible or properly positioned on screen. The reliance on metadata can lead to high rates of false positives or negatives, as the system may incorrectly assume the presence or absence of sign language content. Furthermore, it cannot distinguish between different sign languages or dialects.

Testing via Artificial Intelligence (AI)

AI-based testing for sign language can identify the presence of a signer through visual pattern detection, making it more efficient than manual review while still slower than purely metadata-based automation. This approach offers good scalability for initial screenings, providing a practical balance between accuracy and speed when assessing large volumes of video content.
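
As an illustration of this kind of visual screening, the sketch below samples frames with OpenCV and uses MediaPipe hand detection as a weak proxy for signer presence. The sampling interval, the hit-ratio threshold, and the premise that sustained hand activity suggests a signer are all assumptions; detecting hands is not the same as verifying sign language.

    import cv2
    import mediapipe as mp

    def signer_likely_present(path: str, sample_every: int = 30,
                              min_hit_ratio: float = 0.5) -> bool:
        """Heuristic: treat sustained hand activity in sampled frames as a
        hint that a signer may be on screen. Assumes the opencv-python and
        mediapipe packages are installed."""
        hands = mp.solutions.hands.Hands(static_image_mode=True,
                                         max_num_hands=2)
        cap = cv2.VideoCapture(path)
        sampled = hits = frame_index = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if frame_index % sample_every == 0:
                sampled += 1
                # MediaPipe expects RGB; OpenCV decodes frames as BGR.
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                if hands.process(rgb).multi_hand_landmarks:
                    hits += 1
            frame_index += 1
        cap.release()
        hands.close()
        return sampled > 0 and hits / sampled >= min_hit_ratio

In practice, a heuristic like this is best used to prioritize videos for human review rather than to certify conformance.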

Artificial Intelligence offers promising capabilities for testing sign language but still has notable limitations. Most AI systems cannot fully interpret or validate the meaning of sign language content. While some can estimate synchronization by analyzing visual and auditory cues, such as facial movements and timing, they are generally limited in their ability to assess the accessibility or visibility of the signer on screen. The accuracy of AI-based assessments often depends on the clarity and quality of the video. Although a few experimental models show potential in detecting regional sign variations, their reliability remains inconsistent.

Testing via manual testing

Manual testing for sign language accessibility involves a thorough human review of video content to ensure quality and accuracy. Testers verify the presence of embedded or interpreted sign language and evaluate whether the signs accurately convey the spoken content. They also assess the synchronization between the signer and the spoken audio, as well as the signer’s size, clarity, and placement on the screen to ensure visibility and usability. Unlike automated methods, manual testing produces low false positive rates because humans can interpret the actual video context. Additionally, human reviewers can identify the specific sign language used, such as American Sign Language (ASL) or British Sign Language (BSL), adding another layer of accuracy.
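
A structured checklist helps keep such reviews consistent. The sketch below is one hypothetical shape for recording a reviewer's findings; the field names mirror the checks described above and are illustrative rather than any standard.

    from dataclasses import dataclass, field

    @dataclass
    class SignLanguageReview:
        """Hypothetical record of one manual review."""
        video_id: str
        sign_language_present: bool = False
        content_accurate: bool = False        # signs convey the spoken content
        synchronized_with_audio: bool = False
        signer_clearly_visible: bool = False  # size, clarity, placement
        sign_language_used: str = ""          # e.g. "ASL" or "BSL"
        notes: list[str] = field(default_factory=list)

        def passes(self) -> bool:
            """A review passes only when every core check succeeds."""
            return all((self.sign_language_present,
                        self.content_accurate,
                        self.synchronized_with_audio,
                        self.signer_clearly_visible))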

Manual testing is highly accurate but also time-consuming, as it requires reviewers to watch and assess the full video content. This makes scalability a significant challenge; as the volume of content increases, so does the effort and time required, limiting the feasibility of manual review for large-scale or frequent video updates.

Which approach is best?

No single approach guarantees effective testing of sign language for prerecorded video. However, combining the strengths of each approach yields far more dependable results than any one method alone.

Automated testing can identify indicators of sign language presence, such as metadata or tags, but it cannot assess content accuracy or signer visibility. AI-based testing offers improved capabilities by recognizing the visual presence of a signer and estimating synchronization with audio, helping to flag potential issues, yet it still lacks the maturity to verify linguistic accuracy. Manual testing remains essential for confirming correctness, visibility, and the appropriate use of sign language, providing the most reliable and context-aware evaluation.
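
Putting the pieces together, a layered triage pipeline might look like the sketch below, which reuses the hypothetical helpers from the earlier sketches: cheap metadata scanning first, the visual heuristic second, and human review as the final authority.

    # Reuses the hypothetical may_contain_sign_language() and
    # signer_likely_present() helpers sketched in the sections above.
    def triage_video(path: str) -> str:
        if not may_contain_sign_language(path):
            # No metadata hints; check visually before concluding absence.
            if not signer_likely_present(path):
                return "flag: no sign language detected"
        # Presence is plausible; only a human reviewer can judge accuracy,
        # synchronization, and signer visibility.
        return "queue for manual review"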

Related Resources