Note: This article on testing Unusual Words was written by a human, with the assistance of artificial intelligence.
Explanation of the success criterion
WCAG 3.1.3 Unusual Words is a Level AAA Success Criterion. It addresses one of the subtler but critically important aspects of digital accessibility: ensuring that all users can comprehend content, regardless of their language proficiency, cognitive abilities, or prior knowledge. This criterion requires that words, phrases, or terminology that are uncommon, specialized, idiomatic, or otherwise potentially unfamiliar to the audience be clarified. By providing definitions, explanations, or contextual cues, content creators can make their digital experiences more inclusive and navigable.
Examples of unusual words:
- Technical jargon: “To resolve the issue, you need to flush the DNS cache.”
- Idiom: “The candidate was really trying to get the genie back in the bottle after the gaffe.”
- Specialized term: “The experiment showed an increase in the hertz of the signal.”
Clarifying these terms enhances comprehension for diverse audiences, from people with cognitive or learning disabilities to non-native speakers, ultimately supporting a more inclusive digital environment.
While Level AAA is aspirational rather than mandatory, striving for 3.1.3 Unusual Words signals a deeper commitment to accessibility. Organizations that embrace this standard demonstrate that accessibility is not just a compliance checkbox; it is a strategic priority that puts real user experiences first.
Who does this benefit?
- People with cognitive or learning disabilities: Individuals who may struggle with complex vocabulary or abstract terms gain clarity through explicit explanations.
- Non-native speakers: Defining idioms, jargon, or uncommon words helps readers whose primary language differs from the content language.
- People with limited literacy skills: Simplifying or providing context for unusual words allows readers with basic literacy to access information more effectively.
- Assistive technology users: Screen readers, text-to-speech tools, and other assistive technologies can convey explanations or definitions, making content more understandable.
- General audience: Even users without disabilities benefit from clear explanations of technical, specialized, or domain-specific terms, improving comprehension, engagement, and overall usability.
Testing via Automated Testing
Automated tools can rapidly scan large volumes of content to flag uncommon words, jargon, acronyms, or specialized terminology. They typically rely on dictionaries, frequency analysis, and rule-based heuristics, making them highly efficient for initial assessments. However, these tools lack contextual understanding and audience sensitivity, meaning they may generate false positives or overlook words that are unusual for a specific audience segment.
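As a rough illustration, a frequency-based pass might look like the sketch below. It assumes the third-party wordfreq package and an arbitrary rarity threshold; both the tokenizer and the cut-off are placeholders to tune for your audience, not part of the criterion itself.

```python
"""A minimal sketch of rule-based flagging for 3.1.3: mark words whose
general-English frequency falls below a threshold. Assumes the third-party
`wordfreq` package; the threshold and tokenizer are illustrative choices."""
import re
from wordfreq import zipf_frequency

ZIPF_THRESHOLD = 3.0  # roughly "rarer than ~1 per million words"; tune per audience

def flag_unusual_words(text: str, lang: str = "en") -> list[str]:
    words = re.findall(r"[A-Za-z']+", text)
    flagged = []
    for word in words:
        # Zipf scale: ~7 for "the", ~1 for very rare words; 0 if the word is unknown.
        if zipf_frequency(word.lower(), lang) < ZIPF_THRESHOLD:
            flagged.append(word)
    return sorted(set(flagged), key=str.lower)

if __name__ == "__main__":
    sample = "To resolve the issue, you need to flush the DNS cache."
    print(flag_unusual_words(sample))  # e.g. ['DNS'], depending on the frequency list
```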
Testing via Artificial Intelligence (AI)
AI-driven analysis introduces contextual intelligence by using natural language processing to evaluate whether terms are likely unfamiliar and suggesting clarifications or simplified alternatives. AI can account for nuances like reading level, domain specificity, and target audience knowledge, capturing subtleties that automated tools often miss. Yet, AI recommendations can be inconsistent, susceptible to bias from training data, and require human oversight to validate appropriateness and accuracy.
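One way such a review could be wired up is sketched below. The call_llm function is a hypothetical placeholder for whichever model client an organization uses, and the prompt wording is purely illustrative; the model's output is a draft suggestion for a human to confirm, not an authoritative definition.

```python
"""A minimal sketch of AI-assisted review: ask a language model whether a
flagged term is likely unfamiliar to the stated audience and to propose a
plain-language gloss. `call_llm` is a hypothetical placeholder for a real
model API; the prompt wording is illustrative only."""

PROMPT_TEMPLATE = """Audience: {audience}
Sentence: "{sentence}"
Term: "{term}"

1. Is the term likely to be unfamiliar to this audience? Answer yes or no.
2. If yes, suggest a one-sentence plain-language definition or a simpler alternative.
"""

def review_term(term: str, sentence: str, audience: str, call_llm) -> str:
    prompt = PROMPT_TEMPLATE.format(audience=audience, sentence=sentence, term=term)
    # The model's answer is a suggestion only; a human reviewer validates it.
    return call_llm(prompt)

# Example wiring (the lambda stands in for a real model client):
suggestion = review_term(
    term="flush the DNS cache",
    sentence="To resolve the issue, you need to flush the DNS cache.",
    audience="general consumers with no IT background",
    call_llm=lambda prompt: "yes - 'clear the list of website addresses your computer has saved'",
)
print(suggestion)
```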
Testing via Manual Testing
Human review and user testing remain the gold standard for evaluating comprehension. Manual testing allows reviewers to assess whether explanations are genuinely clear, relevant, and culturally appropriate, and whether unusual words are effectively conveyed to real users, including those with cognitive or language-related challenges. The trade-off is that manual testing is resource-intensive, time-consuming, and less scalable, particularly for large or frequently updated content.
Which approach is best?
In practice, a hybrid approach combining automated flagging, AI contextual analysis, and manual review often provides the most thorough and practical method for ensuring compliance with 3.1.3, balancing efficiency with accuracy and real-world usability.
The process begins with automated testing, which efficiently scans large volumes of content to identify uncommon words, technical jargon, idioms, acronyms, and specialized terminology. Automated tools leverage frequency analysis, dictionaries, and glossaries to flag potential issues quickly, providing a broad baseline for further investigation. While fast and scalable, this stage cannot reliably account for context or audience-specific familiarity, making it an initial but incomplete step.
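A simple way to turn those flags into actionable findings is to compare them against an existing glossary, as in the sketch below. The term-to-definition mapping shown is a made-up example; the point is only that any flagged term without a definition is surfaced for the later stages.

```python
"""A minimal sketch of pairing flagged terms with a site glossary: any flagged
term without an entry is reported for follow-up. The glossary contents here
are invented for illustration."""

def missing_definitions(flagged_terms: list[str], glossary: dict[str, str]) -> list[str]:
    known = {term.lower() for term in glossary}
    return [term for term in flagged_terms if term.lower() not in known]

glossary = {"dns": "The system that translates website names into numeric addresses."}
# Terms flagged by the automated pass that still need a definition or inline explanation:
print(missing_definitions(["DNS", "hertz"], glossary))  # -> ['hertz']
```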
Following automated scanning, AI-based testing adds a layer of nuanced, contextual understanding. AI algorithms analyze the surrounding content to assess whether flagged words are likely to be unfamiliar to the target audience. Advanced natural language processing can suggest simplified alternatives, definitions, or contextual explanations tailored to specific reading levels or user demographics. This approach captures subtle cases that automated tools often miss, including domain-specific terminology or culturally dependent idioms. However, AI recommendations are not infallible; they may reflect biases in training data or misinterpret nuanced language, necessitating human oversight.
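Where a concrete check is wanted, an AI-suggested rewrite can be screened against a readability score before it is accepted. The sketch below assumes the third-party textstat package and an illustrative grade-level target; neither the metric nor the target value is mandated by WCAG.

```python
"""A minimal sketch of vetting an AI-suggested rewrite: accept it only if it
actually lowers the estimated reading level. Assumes the third-party
`textstat` package; the grade-level target is an illustrative choice."""
import textstat

TARGET_GRADE = 9.0  # roughly lower secondary education; pick per audience

def rewrite_is_simpler(original: str, rewrite: str) -> bool:
    before = textstat.flesch_kincaid_grade(original)
    after = textstat.flesch_kincaid_grade(rewrite)
    # Accept the suggestion only if it reduces grade level and meets the target.
    return after < before and after <= TARGET_GRADE

original = "The experiment showed an increase in the hertz of the signal."
rewrite = "The experiment showed the signal repeating more times each second."
print(rewrite_is_simpler(original, rewrite))
```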
The final stage, manual testing, ensures true comprehension and accessibility. Human reviewers, ideally including representatives of the intended audience, assess whether explanations are meaningful, clear, culturally appropriate, and effectively support understanding. Manual evaluation can also validate AI suggestions, provide qualitative insights, and uncover edge cases that neither automation nor AI could reliably detect. While this step is resource-intensive and less scalable, it is indispensable for ensuring that content is genuinely understandable to all users, particularly those with cognitive, linguistic, or learning challenges.
By combining these three approaches, the hybrid methodology achieves an optimal balance between efficiency, scalability, and real-world usability. It ensures that unusual words are not merely flagged or defined but are communicated in ways that enhance comprehension, promote inclusivity, and deliver a truly user-centered digital experience. This comprehensive approach transforms accessibility testing from a checkbox exercise into a strategic practice that strengthens both content clarity and audience engagement.
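As a rough outline of how the stages might be chained, the sketch below passes automatically flagged terms to an AI suggestion step and records everything for mandatory human sign-off. The stage functions are stand-ins for whatever tooling a team actually uses, not a prescribed pipeline.

```python
"""A minimal sketch of a hybrid pipeline: automated flagging produces
candidates, AI review drafts clarifications, and every finding is queued for
human approval before anything ships. Stage functions are illustrative
stand-ins."""
from dataclasses import dataclass

@dataclass
class Finding:
    term: str
    sentence: str
    suggested_definition: str = ""
    human_approved: bool = False  # nothing ships without manual review
    reviewer_notes: str = ""

def hybrid_review(sentence: str, audience: str, flag_fn, suggest_fn) -> list[Finding]:
    findings = []
    for term in flag_fn(sentence):                         # stage 1: automated flagging
        suggestion = suggest_fn(term, sentence, audience)  # stage 2: AI-drafted gloss
        findings.append(Finding(term=term, sentence=sentence,
                                suggested_definition=suggestion))
    return findings                                        # stage 3: hand off to reviewers

# Example wiring with stand-in stage functions:
findings = hybrid_review(
    "To resolve the issue, flush the DNS cache.",
    audience="general consumers",
    flag_fn=lambda text: ["DNS"],
    suggest_fn=lambda term, sentence, audience: "the address book your computer uses to find websites",
)
for finding in findings:
    print(finding)
```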
Related Resources
- Understanding WCAG 3.1.3 Unusual Words
- Mind the WCAG automation gap
- Providing the definition of a word or phrase used in an unusual or restricted way
- Linking to definitions
- Using description lists
- Using inline definitions
- Using the dfn element to identify the defining instance of a word
- Providing a glossary
- Providing a function to search an online dictionary