
Testing Methods: Identify Purpose

WCAG Success Criterion 1.3.6 – Identify Purpose (Level AAA)

Note: This article on testing Identify Purpose was created by humans, with the assistance of artificial intelligence.

Explanation of the success criterion

WCAG 1.3.6 Identify Purpose is a Level AAA Success Criterion. It aims to make web content more personalizable and adaptable, particularly for users with cognitive or learning disabilities.

Both WCAG 2.1 Success Criteria 1.3.5 and 1.3.6 aim to enhance accessibility through semantic identification of purpose, but they focus on different types of elements and serve different user needs.

  • SC 1.3.5 – Identify Input Purpose (AA) focuses on form fields that collect personal user data.
  • SC 1.3.6 – Identify Purpose (AAA) focuses on all interface components (not just form inputs) and regions of content.

This success criterion builds on 1.3.5 to programmatically identify the purpose not only of selected text inputs, but also of icons, regions of a page, and other user interface components. The page code contains metadata that assistive technologies can interpret to give users additional information or to adapt the page for them.

Example: Buttons and Icons with Programmatic Meaning

Here’s a button whose only visible content is an envelope icon. Many users will recognize that it communicates an action related to email, but the icon alone may not convey that purpose to everyone.

Using aria-label to label an icon button programmatically defines its purpose, allowing assistive technologies to identify the button’s function and adapt indicators or terminology to meet the user’s needs.

<button type="button" class="btn btn-primary" aria-label="Email">
   <svg xmlns="http://www.w3.org/2000/svg" width="36" height="36" fill="currentColor" class="bi bi-envelope" viewBox="0 0 16 16">
      <path ... ></path>
   </svg>
</button>
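Where the design allows, pairing the icon with visible text makes the purpose plain to everyone. Here is one possible variant of the button above (a sketch, not the only valid approach): aria-hidden="true" keeps the decorative SVG out of the accessibility tree, and the visible text "Email" supplies the accessible name.

<button type="button" class="btn btn-primary">
   <svg xmlns="http://www.w3.org/2000/svg" width="36" height="36" fill="currentColor" class="bi bi-envelope" viewBox="0 0 16 16" aria-hidden="true">
      <path ... ></path>
   </svg>
   Email
</button>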

Example: Landmarks to Identify Page Regions

Using ARIA landmarks and appropriate labeling helps identify the purpose of different page regions, aiding navigation. In this example, we have a navigation region labeled “primary.” Assistive technology, such as a screen reader, will announce the region below as “navigation, primary.”

<nav role="navigation" aria-label="primary">
  <!-- Navigation links -->
</nav>
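The same technique scales to the rest of the page. Modern HTML sectioning elements such as <main> and <aside> expose landmark roles implicitly, so explicit role attributes are usually unnecessary. A sketch of a fuller landmark structure (the labels and comments here are illustrative) could look like:

<header>
   <!-- Banner region: site logo and search -->
</header>
<nav aria-label="primary">
   <!-- Primary navigation links -->
</nav>
<main>
   <!-- The page’s unique main content -->
</main>
<aside aria-label="Related content">
   <!-- Complementary region -->
</aside>
<footer>
   <!-- Contentinfo region: copyright, contact details -->
</footer>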

Now to address the elephant in the room.

Difference between Success Criteria 1.3.6 and 4.1.2

WCAG 2.1 Success Criteria 1.3.6 and 4.1.2 both relate to semantics and assistive technology, but they serve different purposes and apply to different aspects of the user interface.

Feature           | SC 1.3.6 – Identify Purpose                                  | SC 4.1.2 – Name, Role, Value
WCAG Level        | AAA                                                          | A
Goal              | Enable personalization and support cognitive accessibility   | Ensure assistive technologies understand interactive elements
Focus             | Purpose of UI components and regions                         | Accessibility of dynamic UI elements
Primary Technique | Semantic roles, ARIA landmarks, personalization metadata     | Proper ARIA usage for name, role, state, and updates
User Benefit      | Supports personalization for cognitive/learning disabilities | Enables screen reader interaction for all users
Applicability     | All UI elements and page regions                             | Custom widgets and dynamic controls

Note that this Success Criterion sits at conformance level AAA, which is generally considered aspirational, going beyond the standard A and AA levels. It addresses more specific accessibility needs and is not mandatory for all websites or content. However, achieving Level AAA can provide additional benefits in terms of inclusivity.

Who does this benefit?

  • Primarily users with cognitive, language, and learning disabilities, but also supports personalization for other user groups.

Testing via Automated testing

Automated tools quickly scan large codebases or pages for semantic roles, ARIA landmarks, and metadata. They reliably detect recognized programmatic identifiers like ARIA roles, HTML landmarks, and standardized or custom metadata. By applying rules consistently, they reduce human error and flag missing or misused semantics for manual review.

However, most tools don’t assess personalization support or whether roles accurately reflect an element’s purpose. They can’t interpret visual cues, understand intent, or evaluate if semantics improve usability, especially for users with cognitive disabilities. They may falsely signal compliance or overlook helpful non-standard implementations.
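As a concrete illustration, here is a minimal TypeScript sketch using the open-source axe-core library. The rule names are real axe-core rules; limiting the scan to just these rules and the logging format are assumptions made for this example.

import axe from "axe-core";

// Scan the current document for a few rules related to programmatic
// purpose: landmark coverage, unique landmark labels, and accessible
// names on buttons and links.
async function scanForPurposeIssues(): Promise<void> {
  const results = await axe.run(document, {
    runOnly: {
      type: "rule",
      values: ["region", "landmark-unique", "button-name", "link-name"],
    },
  });

  // Each violation is a lead for manual review, not a final verdict.
  for (const violation of results.violations) {
    console.log(`${violation.id}: ${violation.help}`);
    for (const node of violation.nodes) {
      console.log(`  offending markup: ${node.html}`);
    }
  }
}

scanForPurposeIssues();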

Testing via Artificial Intelligence (AI)

AI-based tools quickly scan large interfaces, identify visual patterns (like common icons or button shapes), and infer likely purposes (e.g., a gear icon suggesting settings), helping flag elements that need semantic identification. They use natural language processing and visual context to suggest ARIA roles or metadata, even when markup is missing. AI can also prioritize components lacking semantics, streamlining manual testing. When integrated into development pipelines, AI tools can continuously monitor changes and alert teams to new UI elements missing programmatic identification.

However, AI can’t confirm author intent or verify whether assigned roles or metadata are accurate or meaningful. It may misinterpret custom icons, layouts, or unfamiliar interactions, especially without relevant training data. AI often relies on visual cues over code, producing false positives (e.g., labeling an icon “settings” without checking its role). Even when it flags missing purpose, AI may not grasp the impact on personalization or cognitive accessibility.
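In practice, this means AI output is best treated as a set of hypotheses to verify against the markup. The TypeScript sketch below is purely illustrative: the InferredPurpose shape and the vision model that would produce it are hypothetical, and only the DOM checks are real.

// Hypothetical shape of output from a vision model that guesses the
// purpose of on-screen controls. The model itself is assumed here.
interface InferredPurpose {
  selector: string;   // CSS selector for the element the model saw
  purpose: string;    // the model's guess, e.g. "settings"
  confidence: number; // 0..1
}

// Cross-check each visual guess against the element's programmatic
// name. A mismatch is a lead for manual review, not a verdict.
function flagForReview(inferences: InferredPurpose[]): string[] {
  const leads: string[] = [];
  for (const guess of inferences) {
    const el = document.querySelector(guess.selector);
    if (!el) continue;
    // Rough accessible-name check: aria-label, then text content.
    const name = (el.getAttribute("aria-label") ?? el.textContent ?? "")
      .trim()
      .toLowerCase();
    if (!name) {
      leads.push(`${guess.selector}: no programmatic name; model guessed "${guess.purpose}"`);
    } else if (!name.includes(guess.purpose.toLowerCase())) {
      leads.push(`${guess.selector}: name "${name}" differs from model guess "${guess.purpose}"`);
    }
  }
  return leads;
}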

Testing via Manual testing

Human testers can interpret the purpose of UI components and regions based on context, design intent, and user flow, something automation often misses. They assess whether ARIA roles, landmarks, or metadata are not just present but accurate and meaningful for assistive technologies and personalization. Manual testing reveals how missing or incorrect semantics affect users, especially those with cognitive or learning disabilities, ensuring a user-centered evaluation. Testers adapt to custom or non-standard designs where automation may fail. They also review the interface holistically, evaluating relationships between components to ensure a consistent semantic structure.

However, manual testing is time-consuming and resource-intensive, particularly for large or complex interfaces. It requires deep knowledge of ARIA, HTML semantics, and assistive technologies, skills that not all teams possess. Even experienced testers can miss elements, apply inconsistent judgments, or misinterpret ambiguous components. Manual testing doesn’t scale easily and lacks ongoing monitoring, making it harder to catch semantic regressions between reviews.

Which approach is best?

No single approach to testing 1.3.6 Identify Purpose is perfect. Combining the strengths of each approach, however, produces the most reliable coverage.

Automated and AI-based tools offer a useful first pass for identifying structural or potential semantic issues under WCAG 1.3.6, especially in large or dynamic interfaces. However, meaningful testing of this criterion requires manual evaluation to accurately assess purpose, intent, and support for personalization. Human testers provide the necessary context and judgment to ensure semantic accuracy and user impact, particularly for users with cognitive disabilities. While manual testing delivers deeper insights than automation, it’s resource-intensive and most effective when combined with automated methods for scalable, sustainable coverage.

Related Resources