Note: This article on testing On Focus was created by a human, with the assistance of artificial intelligence.
Explanation of the success criterion
WCAG 3.2.1 On Focus is a Level A Success Criterion. It ensures that when interactive elements, such as form fields, buttons, or links, receive focus, they do not trigger unexpected changes of context. A “change of context” includes actions like loading a new page, submitting a form, opening a new window, or significantly altering the page’s layout or content. The rationale is simple yet profound: users must retain full control over their interactions. When focus behaviors are predictable, users relying on keyboards, screen readers, or other assistive technologies can navigate digital interfaces without disorientation or confusion. Ultimately, this criterion underpins a more usable and inclusive web, improving the experience for people with motor disabilities, cognitive impairments, low vision, and anyone who relies on structured, controlled navigation.
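To make the criterion concrete, the sketch below contrasts a violating pattern with a compliant one. The element IDs, the navigation `<select>`, and the handler bodies are illustrative inventions, not taken from any real site.

```typescript
// Hypothetical navigation menu. IDs and logic are illustrative only.
const menu = document.querySelector<HTMLSelectElement>('#nav-menu')!;
const goButton = document.querySelector<HTMLButtonElement>('#go')!;

// Violation of 3.2.1: merely tabbing to the select changes context.
menu.addEventListener('focus', () => {
  window.location.href = menu.value; // unexpected navigation on focus
});

// Compliant alternative: the change of context requires explicit activation.
goButton.addEventListener('click', () => {
  window.location.href = menu.value; // user-initiated, predictable
});
```

The difference is not the navigation itself but when it happens: on focus (which a keyboard user cannot avoid while moving through the page) versus on deliberate activation.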
Who does this benefit?
- Keyboard-only users, including individuals with motor disabilities or repetitive strain injuries, benefit because they can move through content without inadvertently triggering actions like submitting forms or opening new pages.
- Screen reader users gain a more stable, predictable experience. Unexpected focus-triggered changes can disrupt navigation or cause them to lose their place in the reading order, making content difficult or frustrating to access.
- Users with cognitive or learning disabilities experience less confusion and anxiety, as interactions behave in a consistent and understandable way.
- Users with low vision benefit from predictable visual cues, ensuring the interface does not shift unexpectedly while they orient themselves.
In short, WCAG 3.2.1 reinforces user control, building trust, consistency, and accessibility across digital platforms.
Testing via Automated testing
Automated tools can rapidly scan large sites or applications, identifying obvious focus-related issues such as elements that trigger page reloads, form submissions, or modal pop-ups upon receiving focus. Their strength lies in efficiency and consistency, allowing teams to detect recurring violations across many pages. However, automated testing is inherently limited: it cannot assess context, user intent, or the subjective experience of disorientation. As a result, automated tools may produce false positives or overlook nuanced focus behaviors that impact real users.
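As an illustration of what such a scan can (and cannot) do, here is a minimal sketch using Playwright. The heuristic, a regular-expression match on inline onfocus attributes, is an assumption for demonstration; real tools inspect far more, and every match is a candidate for review rather than a confirmed violation.

```typescript
// Sketch of an automated scan. Assumes Playwright; the regex heuristic
// is illustrative and will miss handlers attached via addEventListener.
import { chromium } from 'playwright';

async function scanForOnFocusHandlers(url: string) {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  // Collect elements with inline onfocus handlers whose code hints
  // at a change of context (navigation, submission, new window).
  const suspects = await page.$$eval('[onfocus]', (els) =>
    els
      .map((el) => ({
        tag: el.tagName.toLowerCase(),
        handler: el.getAttribute('onfocus') ?? '',
      }))
      .filter((e) => /location|submit|open\(/.test(e.handler))
  );

  await browser.close();
  return suspects; // A match is a candidate, not a verdict.
}
```

Note how the sketch embodies the limitation described above: it sees only inline attributes and keyword patterns, with no sense of whether the resulting behavior would actually disorient a user.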
Testing via Artificial Intelligence (AI)
AI introduces contextual intelligence into the testing process. By analyzing user flows and simulating potential interactions, AI can predict where unexpected focus behaviors might confuse or disrupt users. It can highlight high-risk focus points, suggest remediation strategies, and identify complex patterns that automated scans might miss. Despite these capabilities, AI is still probabilistic. It may misinterpret highly dynamic interfaces or custom components, meaning human validation is essential to ensure accuracy.
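One way to operationalize this, sketched below under clear assumptions, is to have a language model triage the handlers an automated scan surfaces. This assumes the official openai Node SDK with an API key in the environment; the model name and prompt are placeholders, and because the output is probabilistic, a human must confirm every assessment.

```typescript
// Sketch: asking an LLM to triage focus handlers found by the automated scan.
// Model name and prompt are illustrative, not a recommendation.
import OpenAI from 'openai';

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function triageHandler(handlerSource: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      {
        role: 'system',
        content:
          'You review JavaScript focus handlers for WCAG 3.2.1 "On Focus". ' +
          'Reply with a risk level (high/medium/low) and a one-line reason.',
      },
      { role: 'user', content: handlerSource },
    ],
  });
  return response.choices[0].message.content ?? 'no assessment';
}
```

The value here is prioritization: turning a flat list of suspects into a ranked queue for human reviewers, not a final judgment.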
Testing via Manual testing
Manual testing remains the gold standard for verifying WCAG 3.2.1 compliance. Human testers can experience interactions as real users would, using keyboards, screen readers, and other assistive technologies to navigate content. This approach captures subtle or context-dependent issues that automated and AI-based tools cannot detect, providing a comprehensive understanding of focus behavior. The trade-off is that manual testing is time-intensive, resource-heavy, and reliant on tester expertise.
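Manual testing itself is done by hand with a keyboard and a screen reader, but a small script can help a tester cover ground faster. The sketch below, again assuming Playwright, presses Tab repeatedly and flags the crudest context changes (URL changes and new windows); it is an aid to the human pass, not a substitute for it.

```typescript
// Sketch: a Tab-through pass that flags gross context changes on focus.
// Assumes Playwright; the checks are deliberately simplified.
import { chromium } from 'playwright';

async function tabThrough(url: string, maxTabs = 50): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);
  const startUrl = page.url();

  // A new window opening during focus traversal is a change of context.
  page.on('popup', () => console.warn('New window opened during Tab traversal'));

  for (let i = 0; i < maxTabs; i++) {
    await page.keyboard.press('Tab');
    if (page.url() !== startUrl) {
      console.warn(`URL changed after ${i + 1} Tab presses: ${page.url()}`);
      break;
    }
  }
  await browser.close();
}
```

What this cannot capture, and what only a human can judge, is whether focus movement feels disorienting: layout shifts, reading-order disruption, or screen-reader announcements that no longer match the page.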
Which approach is best?
In practice, a hybrid approach yields the most reliable, comprehensive assessment of On Focus compliance: automated tools scan broadly, AI prioritizes findings and adds contextual insight, and manual testing validates and refines the results.
It begins with automated testing, which rapidly analyzes large volumes of content to flag obvious focus-related issues: elements that trigger page reloads, form submissions, or unexpected pop-ups when they receive focus. This stage delivers broad coverage and consistency, ensuring that technical violations are quickly identified across even the most complex sites or applications.
Building on this foundation, AI-based testing provides a layer of nuance and context. AI can simulate real user interactions, infer likely confusion points, and assess which focus behaviors pose the highest risk to accessibility. It can also generate actionable recommendations for designers and developers, highlighting areas where user experience may be compromised. This step transforms raw technical data into meaningful insights, guiding organizations toward interventions that are both effective and efficient.
Finally, manual testing anchors the process in real-world experience. Human testers navigate the interface using keyboards, screen readers, and other assistive technologies, observing firsthand how focus behaviors affect usability. This stage captures subtle, context-dependent issues that automated and AI tools cannot detect, validating not just compliance but the quality of the user experience. It ensures that focus changes are intuitive, predictable, and non-disorienting for all users, including those with motor, cognitive, or visual impairments.
By integrating automated, AI-based, and manual testing, organizations achieve both breadth and depth in their assessment. Technical violations are efficiently captured, while the real-world impact on users is rigorously validated. This hybrid methodology goes beyond mere compliance, fostering digital experiences that are not only accessible but trustworthy, inclusive, and high-quality, reflecting a genuine commitment to usability and equity in the digital landscape.
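Put together, the workflow can be as simple as the orchestration sketch below, which reuses the hypothetical functions from the earlier sketches and deliberately leaves the final stage to people.

```typescript
// Sketch: chaining the stages. scanForOnFocusHandlers and triageHandler are
// the illustrative functions defined above, not a published library.
async function hybridAudit(url: string): Promise<void> {
  // Stage 1: broad automated scan for candidate violations.
  const suspects = await scanForOnFocusHandlers(url);

  // Stage 2: probabilistic AI triage to prioritize human attention.
  const triaged = await Promise.all(
    suspects.map(async (s) => ({ ...s, risk: await triageHandler(s.handler) }))
  );

  // Stage 3 stays human: emit a worksheet for manual keyboard and
  // screen-reader testing, ordered however the team prefers.
  for (const item of triaged) {
    console.log(`[${item.risk}] <${item.tag}> onfocus="${item.handler}"`);
  }
}
```

The shape of the pipeline mirrors the argument of this section: machines supply breadth and prioritization, while the judgment about whether a focus change is genuinely disorienting remains a human one.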