Note: This article on testing Language of Parts was written by a human with the assistance of artificial intelligence.
Explanation of the success criterion
WCAG Success Criterion 3.1.2, Language of Parts, is a Level AA requirement. It addresses a crucial dimension of digital accessibility: the handling of multilingual content within a single page or document. On an increasingly globalized web, pages often contain several languages, and users must be able to engage with each section meaningfully. The criterion requires that any passage or phrase whose human language differs from that of the surrounding text be programmatically determinable, with exceptions for proper names, technical terms, words of indeterminate language, and words or phrases that have become part of the vernacular of the surrounding text. When languages are marked up correctly, assistive technologies such as screen readers can switch to the correct pronunciation rules, providing a seamless and intelligible reading experience.
Proper implementation typically involves the precise use of language attributes in HTML or other markup to define the language of each segment. For example:
<span lang="fr">Déjà vu</span>
By specifying language at the level of individual sections or phrases, developers ensure that users who rely on text-to-speech, translation tools, or other assistive technologies can navigate and comprehend multilingual content effectively. This makes digital content genuinely inclusive, allowing people who speak different languages, or who depend on assistive tools, to interact with the content without confusion or frustration.
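To make "programmatically identifiable" concrete, the following sketch uses Python's standard-library `html.parser` to pull out each text segment together with its effective language, exactly as an assistive technology would need to. The `LangSegmentParser` class and the sample markup are illustrative, not part of any real tool.

```python
from html.parser import HTMLParser

class LangSegmentParser(HTMLParser):
    """Collects (lang, text) pairs for text inside elements that declare a lang attribute."""
    def __init__(self):
        super().__init__()
        self._lang_stack = []   # lang values of currently open elements (None if undeclared)
        self.segments = []      # (effective lang, text) pairs found in the document

    def handle_starttag(self, tag, attrs):
        self._lang_stack.append(dict(attrs).get("lang"))

    def handle_endtag(self, tag):
        if self._lang_stack:
            self._lang_stack.pop()

    def handle_data(self, data):
        # The effective language is inherited from the nearest ancestor with a lang attribute.
        effective = next((l for l in reversed(self._lang_stack) if l), None)
        if effective and data.strip():
            self.segments.append((effective, data.strip()))

parser = LangSegmentParser()
parser.feed('<p lang="en">He felt a strange sense of <span lang="fr">déjà vu</span>.</p>')
print(parser.segments)
# [('en', 'He felt a strange sense of'), ('fr', 'déjà vu'), ('en', '.')]
```

Because the French phrase is wrapped in its own `lang="fr"` element, it surfaces as a separate segment with the correct language, which is precisely the information a screen reader uses to switch pronunciation rules.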
Who does this benefit?
- Screen reader users: Correct language attribution ensures accurate pronunciation and reading flow.
- Users of translation tools: Programmatically defined language sections allow software to translate only the relevant content, avoiding misinterpretation.
- Individuals with cognitive or learning disabilities: Clear language separation reduces potential confusion when content switches between languages.
- Non-native speakers: Accurate language marking helps browsers and assistive tools present multilingual content in a comprehensible manner.
Testing via Automated Testing
Automated tools provide speed and consistency, rapidly scanning large websites to detect missing or incorrect language attributes in HTML or other markup. They are particularly effective at flagging sections where the language is unspecified or inconsistently applied, giving developers a clear starting point for remediation.
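The kind of structural check an automated tool performs can be sketched in a few lines. This simplified, standard-library example flags a missing `lang` on the `<html>` element and any `lang` value that fails a loose well-formedness test; the regex is a deliberate simplification of BCP 47 language tags, and real tools apply many more rules.

```python
import re
from html.parser import HTMLParser

# Loose well-formedness check: a 2-3 letter primary subtag plus optional subtags.
# This is a simplification of BCP 47, used here only for illustration.
LANG_TAG = re.compile(r"^[a-zA-Z]{2,3}(-[a-zA-Z0-9]{1,8})*$")

class LangAuditParser(HTMLParser):
    """Flags a missing lang on <html> and malformed lang values anywhere."""
    def __init__(self):
        super().__init__()
        self.issues = []

    def handle_starttag(self, tag, attrs):
        lang = dict(attrs).get("lang")
        if tag == "html" and lang is None:
            self.issues.append("missing lang on <html>")
        if lang is not None and not LANG_TAG.match(lang):
            self.issues.append(f"malformed lang value {lang!r} on <{tag}>")

def audit(html_text):
    p = LangAuditParser()
    p.feed(html_text)
    return p.issues

print(audit('<html><body><span lang="french">Bonjour</span></body></html>'))
# ['missing lang on <html>', "malformed lang value 'french' on <span>"]
```

Note what this check cannot do: it has no idea that "Bonjour" is French. It can only verify that declared attributes exist and look plausible, which is exactly the gap described below.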
The limitation, however, is that automated testing cannot assess context or linguistic accuracy. Subtle language changes, such as phrases embedded in paragraphs or mixed-language content, often escape detection, making automated testing necessary but insufficient on its own.
Testing via Artificial Intelligence (AI)
Artificial intelligence introduces a more sophisticated layer of analysis. Leveraging natural language processing, AI can examine content, predict language shifts, and identify inconsistencies that automated tools might miss. For instance, it can detect code-switched text or sections where the language attribute does not match the written content.
Despite its power, AI is not infallible; it can produce false positives, misidentify dialects, or fail to recognize uncommon languages. Human validation remains essential to confirm findings and resolve ambiguities.
Testing via Manual Testing
Manual testing remains the gold standard for precision and contextual understanding. Human evaluators can accurately identify language changes, verify the correctness of applied attributes, and observe real-world assistive technology behavior. Manual assessment ensures that screen readers, translation tools, and other accessibility aids function as intended across multilingual content.
The downside is that manual testing is time-intensive and laborious, and results may vary across evaluators unless strict guidelines are followed.
Which approach is best?
The most effective method for ensuring WCAG 3.1.2 compliance is a hybrid approach that integrates automated, AI-based, and manual testing.
The process begins with automated scans to quickly identify missing or misapplied language attributes, providing a foundation for remediation. AI-based testing then examines the actual content, detecting nuanced language shifts, code-switching, or semantic inconsistencies that automated tools alone cannot catch. Finally, manual testing validates the experience in real-world conditions, confirming that assistive technologies pronounce and interpret each language segment correctly, while also capturing subtleties like embedded foreign phrases or mixed-language formatting.
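The three-stage process above can be sketched as a simple pipeline. Everything here is illustrative: the stage callables are hypothetical stand-ins for real tools (or a human checklist), passed in as parameters so each stage can be swapped out.

```python
def hybrid_audit(page, automated_scan, ai_review, manual_review):
    """Run the three stages in order, accumulating findings.

    Each stage is a callable so real tools or human checklists can be
    plugged in; the stage names and toy logic below are illustrative.
    """
    findings = []
    findings += automated_scan(page)           # step 1: fast structural checks
    findings += ai_review(page, findings)      # step 2: content-level language analysis
    findings += manual_review(page, findings)  # step 3: human validation with assistive tech
    return findings

# Toy stand-ins for demonstration:
issues = hybrid_audit(
    "<html><p>Bonjour</p></html>",
    automated_scan=lambda page: ["missing lang on <html>"] if "lang=" not in page else [],
    ai_review=lambda page, prior: ["French text without lang markup"] if "Bonjour" in page else [],
    manual_review=lambda page, prior: [],
)
print(issues)  # ['missing lang on <html>', 'French text without lang markup']
```

Passing the earlier findings into the later stages mirrors the remediation flow described above: AI review refines what the scan flagged, and manual review validates both.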
By combining the strengths of these three methods, the hybrid approach ensures that all portions of a page or document are programmatically identifiable in their correct languages. The result is a richer, more inclusive digital experience that empowers users of assistive technologies, translation tools, and multilingual audiences alike, delivering accessibility that is both technically robust and human-centered.
Related Resources
- Understanding WCAG 3.1.2 Language of Parts
- Mind the WCAG automation gap
- Using language attributes to identify changes in the human language
- List of common primary language subtags
- Specifying the language for a passage or phrase with the Lang entry in PDF documents
- Language tags in HTML and XML
- Authoring HTML: Language declarations
- Declaring language in HTML