Testing Methods: Images of Text (No Exception)

A colorful graphic with the text "Your Logo" presented in the middle. This graphic could be considered an exception to this WCAG Success Criterion.

Note: The creation of this article on testing Images of Text (No Exception) was human-based, with the assistance of artificial intelligence.

Explanation of the Success Criterion

WCAG 1.4.9 Images of Text (No Exception) is a Level AAA Success Criterion. Text must be displayed as real text, not as images of text, so it can be resized, restyled, and read by assistive technologies. The only exception is for essential images of text, where a particular visual presentation is critical to the meaning, such as a logotype (text that is part of a logo or brand name).

Unlike WCAG Success Criterion 1.4.5 Images of Text (Level AA), 1.4.9 does not allow images of text just for design convenience, only when absolutely essential.

Note that this Success Criterion is at a conformance level of AAA. This means that this Success Criterion is generally considered aspirational, going beyond the standard A & AA conformance levels. It addresses more specific accessibility needs and is not mandatory for all websites or content. However, achieving Level AAA can provide additional benefits in terms of inclusivity.

Who does this benefit?

  • People with low vision, who may struggle with the default font, size, or color.
  • People with visual tracking difficulties, who may find the line spacing or alignment hard to follow.
  • People with cognitive disabilities that impact reading comprehension.

Testing via Automated testing

Automated testing is fast and scalable. It quickly spots clear cases of text rendered as images, such as text in <img> elements or CSS background images, across large sets of pages, saving time in initial reviews.

But these tools have limits: they can’t reliably tell if text images are decorative or essential, nor can they interpret complex visuals like logos or infographics, leaving important context gaps.
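To make the automated step concrete, here is a minimal sketch of the kind of heuristic an automated scan might apply. The heuristic is an assumption, not a real tool's rule: it flags any <img> element whose alt text is a long phrase, on the theory that wordy alt text often describes text rendered inside the image. Everything it flags still needs human review.

```python
from html.parser import HTMLParser

class ImageOfTextScanner(HTMLParser):
    """Flag <img> elements whose alt text suggests they may render text.

    Hypothetical heuristic: an alt attribute of min_words or more words
    often indicates a sentence or heading baked into the image. The tool
    cannot confirm the image shows text -- only that it might.
    """

    def __init__(self, min_words=4):
        super().__init__()
        self.min_words = min_words
        self.flagged = []  # (src, alt) pairs queued for human review

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt", "")
        if len(alt.split()) >= self.min_words:
            self.flagged.append((attrs.get("src", ""), alt))

html = (
    '<img src="hero.png" alt="Save 20% on all plans this week only">'
    '<img src="logo.svg" alt="Acme logo">'
)
scanner = ImageOfTextScanner()
scanner.feed(html)
print(scanner.flagged)  # only hero.png is flagged; "Acme logo" is too short
```

Note that this is exactly the limitation described above: the scanner cannot tell a promotional banner from an essential logotype, so its output is a review queue, not a verdict.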

Testing via Artificial Intelligence (AI)

AI-based testing is a powerful tool, using techniques like OCR to detect text within images across complex layouts at scale. It can quickly flag potential issues that automated tools might miss.

However, AI still struggles with context, distinguishing essential images of text, like logos, from decorative ones, and judging whether image-based text is necessary or just a design choice.
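The triage step that follows OCR can be sketched as a small decision function. This is an illustrative assumption, not a real product's logic: in practice the ocr_text argument would come from an OCR engine such as Tesseract, and the allow-list of branding terms would be maintained per site by the audit team.

```python
# Hypothetical allow-list: alt text containing these terms hints the image
# may qualify as an essential branding exception (e.g. a logotype).
BRAND_TERMS = {"logo", "logotype", "wordmark"}

def triage(ocr_text: str, alt_text: str) -> str:
    """Classify an image after OCR has run (ocr_text is the detected text)."""
    if not ocr_text.split():
        return "pass"                # OCR found no text in the image
    if BRAND_TERMS & {w.lower() for w in alt_text.split()}:
        return "possible-exception"  # may be essential branding; human decides
    return "fail-candidate"          # likely a non-essential image of text

print(triage("Summer Sale 50% off", "promotional banner"))  # fail-candidate
print(triage("ACME", "Acme corporate logo"))                # possible-exception
print(triage("", "decorative swirl"))                       # pass
```

The "possible-exception" outcome is deliberately not a pass: as the paragraph above notes, judging whether image-based text is genuinely essential remains a human call.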

Testing via Manual testing

Manual testing shines at spotting images of text that seem necessary but aren't; human testers can judge context and intent, deciding whether a stylized image adds meaning or is merely decorative. They can also verify exemptions such as logos used for branding.

That said, manual testing is time-consuming, less consistent, and requires expertise to determine what’s truly essential, which can vary by case. It doesn’t scale well, making it inefficient for large sites compared to automated tools.

Which approach is best?

No single approach for testing 1.4.9 Images of Text (No Exception) is perfect. However, combining the strengths of each approach produces the most reliable results.

The best approach combines automated, AI-based, and manual methods for thorough coverage. Automated tools can quickly scan large volumes of pages to detect obvious images of text, flagging potential issues for review. AI-based testing adds value by analyzing image content more intelligently, helping to distinguish between purely decorative images and those that might convey meaningful text or branding elements. However, because this criterion hinges on context and intent, manual testing is essential to confirm whether images of text are truly necessary or if real text alternatives could be used instead. Manual reviewers can evaluate design intent, branding needs, and user impact, ensuring accurate and nuanced decisions. Together, these approaches balance efficiency with depth, providing a comprehensive and practical testing strategy.
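The combined workflow described above can be sketched as a short pipeline. The stage functions here are hypothetical stand-ins: automated_scan represents an axe-style crawl, ocr_scan represents an OCR pass, and anything both stages flag is routed to a manual review queue rather than decided automatically.

```python
def automated_scan(page):
    # Stand-in for an automated crawl: return images the tool flagged.
    return [img for img in page["images"] if img.get("suspect")]

def ocr_scan(image):
    # Stand-in for an OCR pass: True when text was detected in the image.
    return bool(image.get("ocr_text"))

def audit(page):
    """Run the staged pipeline; output is a queue for human reviewers."""
    manual_queue = []
    for img in automated_scan(page):
        if ocr_scan(img):
            manual_queue.append(img["src"])  # a human decides "essential?"
    return manual_queue

page = {"images": [
    {"src": "banner.png", "suspect": True, "ocr_text": "Buy now"},
    {"src": "photo.jpg", "suspect": True, "ocr_text": ""},
    {"src": "icon.svg", "suspect": False},
]}
print(audit(page))  # ['banner.png']
```

The design choice matters: each automated stage only narrows the candidate set, and the final "essential or not" judgment stays with a person, which is exactly the balance of efficiency and depth the paragraph above describes.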

Related Resources