Note: This article on testing methods for Sensory Characteristics was written by a human, with the assistance of artificial intelligence.
Explanation of the success criterion
WCAG 1.3.3 Sensory Characteristics is a Level A Success Criterion. It ensures that instructions for understanding and using content do not rely solely on sensory attributes like shape, color, size, visual position, orientation, or sound.
The following examples demonstrate failures of this success criterion due to reliance on sensory cues:
- Click the green button.
- Select the button on the right.
- Enter the code you hear.
- View the terms and conditions below the chart.
Writers must supplement sensory cues with additional textual descriptions. For example, instead of saying “Click the green button,” write “Click the Submit button.”
Who does this benefit?
- Blind and low-vision users may struggle to understand instructions that rely solely on shape or location.
- Deaf and hard-of-hearing users may miss instructions conveyed only through sound, such as audio codes or spoken prompts.
- Users with cognitive disabilities that affect spatial understanding may also need clearer, more descriptive instructions to navigate content effectively.
What’s involved in testing this Success Criterion
To test this success criterion, check whether instructions or prompts rely solely on visual, auditory, or spatial cues, and if they do, confirm that they also provide textual or programmatic support that doesn’t depend on those sensory characteristics.
- Identify instructional content
- Determine if sensory characteristics are used
- Check for supplementary information
- Test with assistive technologies
- Test with styles disabled
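The first step above, identifying instructional content, can be partially scripted. The sketch below, using only Python's standard-library `html.parser`, collects text from elements that commonly carry instructions. The tag list is an assumption about typical markup, not a complete rule; adjust it for the site under test.

```python
# Sketch of step 1, "Identify instructional content": collect text from
# elements that commonly carry instructions. INSTRUCTION_TAGS is an
# illustrative assumption, not an exhaustive list.
from html.parser import HTMLParser

INSTRUCTION_TAGS = {"p", "label", "button", "li", "legend", "figcaption"}

class InstructionCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self._depth = 0          # > 0 while inside an instruction-bearing tag
        self.snippets = []

    def handle_starttag(self, tag, attrs):
        if tag in INSTRUCTION_TAGS:
            self._depth += 1

    def handle_endtag(self, tag):
        if tag in INSTRUCTION_TAGS and self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and data.strip():
            self.snippets.append(data.strip())

collector = InstructionCollector()
collector.feed("<p>Click the green button.</p><div>Footer text</div>")
print(collector.snippets)  # ['Click the green button.']
```

The collected snippets then feed the later steps: checking for sensory characteristics and for supplementary information.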
Testing via Automated testing
Automated testing for 1.3.3 Sensory Characteristics helps identify potential structural issues and flags areas for further investigation. It detects keywords like “above,” “below,” “left,” “right,” “red button,” and “green icon,” and scans large volumes of content quickly to highlight possible violations for human review, reducing the manual burden.
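A keyword scan of the kind described above can be sketched in a few lines. The word list here is illustrative, not any tool's actual rule set, and the matches are hints for human review rather than conformance verdicts.

```python
# Minimal sketch of a keyword-based flagger for sensory language.
# SENSORY_TERMS is an illustrative assumption; real tools use larger,
# curated rule sets. Matches only flag content for human review.
import re

SENSORY_TERMS = ["above", "below", "left", "right", "round", "square",
                 "red", "green", "blue", "larger", "smaller"]
PATTERN = re.compile(r"\b(" + "|".join(SENSORY_TERMS) + r")\b", re.IGNORECASE)

def flag_sensory_terms(text):
    """Return the sensory keywords found in an instruction string."""
    return [m.group(1).lower() for m in PATTERN.finditer(text)]

print(flag_sensory_terms("Click the green button on the right."))
# ['green', 'right']
```

An instruction like "Click the Submit button." produces no matches, which is exactly the supplemented wording the criterion asks for.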
However, this type of testing remains limited because it focuses on instructional language that requires understanding human intent, context, and semantics—areas where automation struggles. Automated tools can’t reliably determine whether an instruction like “click the icon on the right” is necessary, supplemented, or ambiguous. They often misinterpret terms like “right” or “red” when used in non-sensory contexts (e.g., “the right decision”), leading to false positives or negatives. These tools frequently miss implied sensory cues and aren’t suitable for determining final conformance.
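The false-positive problem is easy to demonstrate. A naive match flags "right" in "the right decision" even though no sensory cue is involved. The sketch below uses a small exclusion list of non-sensory collocations, an assumption that is never exhaustive, to trim some false positives; it cannot replace human judgment.

```python
# Sketch of why naive keyword matching misfires, and a partial fix.
# EXCLUDED_PHRASES is an illustrative assumption; no list of this kind
# can cover every non-sensory use of a word like "right".
import re

EXCLUDED_PHRASES = ["right decision", "right choice", "right away",
                    "all right", "left over"]

def likely_sensory(text, term):
    """Heuristic: treat the term as sensory unless it appears only
    inside a known non-sensory phrase."""
    lowered = text.lower()
    if not re.search(rf"\b{term}\b", lowered):
        return False
    for phrase in EXCLUDED_PHRASES:
        lowered = lowered.replace(phrase, "")
    return bool(re.search(rf"\b{term}\b", lowered))

print(likely_sensory("Making the right decision matters.", "right"))  # False
print(likely_sensory("Select the button on the right.", "right"))     # True
```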
Testing via Artificial Intelligence (AI)
AI-based testing for 1.3.3 Sensory Characteristics quickly scans large volumes of content for potential sensory-based instructions, flagging patterns commonly associated with sensory language. It dramatically reduces manual review time, especially on large or dynamic sites with frequent content updates.
However, AI-based testing cannot be the sole method for this success criterion. It lacks contextual understanding and often misinterprets figurative language or subtle sensory cues, especially in dynamic interfaces. Without computer vision, it struggles to interpret visual layouts and may miss issues in tooltips, alt text, or interactive elements. AI cannot determine whether instructions make sense without sensory cues, lacks empathy, and fails to grasp user intent. It may also overlook key ARIA roles and relationships essential for identifying when sensory characteristics are the only way to convey information.
Testing via Manual testing
Human testers provide the most accurate, context-aware evaluation of WCAG 1.3.3 Sensory Characteristics. They interpret instruction intent, assess layout, color, and sound, and verify that users with assistive technologies can complete tasks without relying on sensory cues. Unlike automated tools, manual testing detects subtle, ambiguous, and layout-dependent issues, ensuring clarity and inclusivity, especially in complex, dynamic, or high-impact interfaces requiring precision and empathy.
However, manual testing demands skilled testers, consumes significant time and resources, and struggles to scale for large or frequently updated sites. Results can vary without standardized procedures, and sensory issues in dynamic content may be missed without a thorough workflow. These limitations reduce its efficiency as a standalone approach in fast-paced development.
Which approach is best?
No single approach to testing WCAG 2.1 Success Criterion 1.3.3 Sensory Characteristics is sufficient on its own. The most reliable strategy combines automated, AI-based, and manual testing in a layered, complementary workflow. Each method has distinct strengths and limitations, and when used together they provide broad coverage, efficiency, and contextual accuracy.
Use basic automation, such as accessibility linters or rule-based tools, to check for semantic issues that may indicate unclear UI components. Use AI-based tools to scan text for sensory instructions and flag candidate content. Then rely on manual testing to validate the flagged content and evaluate additional UI for instruction clarity, including assistive technology testing with a screen reader to ensure that users with visual or cognitive disabilities can follow the instructions.
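The layered workflow above can be sketched as a simple triage: an automated pass flags candidate instructions, and everything flagged goes into a manual-review queue rather than being auto-failed. The keyword pattern below is a minimal stand-in for a real automated or AI-based pass.

```python
# Sketch of the layered strategy: automated flagging feeds a manual
# review queue; the automated pass never issues a conformance verdict.
# PATTERN is a tiny illustrative stand-in for a real rule set.
import re

PATTERN = re.compile(r"\b(above|below|left|right|red|green)\b", re.IGNORECASE)

def triage(instructions):
    """Split instructions into a manual-review queue and a pass list."""
    review_queue, passed = [], []
    for text in instructions:
        hits = sorted({m.group(1).lower() for m in PATTERN.finditer(text)})
        if hits:
            review_queue.append((text, hits))   # a human decides conformance
        else:
            passed.append(text)                 # no sensory keywords found
    return review_queue, passed

queue, ok = triage(["Click the green button.", "Click Submit."])
print(queue)  # [('Click the green button.', ['green'])]
print(ok)     # ['Click Submit.']
```

Items in the pass list still deserve spot checks, since implied sensory cues can slip past any keyword rule.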