Note: This article on testing WCAG 2.1.1 Keyboard was written by a human, with the assistance of artificial intelligence.
Explanation of the success criterion
WCAG 2.1.1 Keyboard is a Level A Success Criterion. It ensures that content is operable via a keyboard or alternate keyboard interface whenever possible. The only exception is when the underlying function requires input that depends on the path of the user’s movement, not just the endpoints, for example, drawing a signature.
When it comes to digital accessibility, few success criteria are as fundamental as WCAG 2.1.1 Keyboard. If a user can’t navigate your product without a mouse, it doesn’t matter how polished the design is or how innovative the features are: your experience is broken. The question isn’t whether to test for keyboard accessibility, but how to test effectively. Automated tools, AI-based platforms, and manual testing all have a role to play, but each has its limits. Understanding their strengths and weaknesses is essential if you want more than a compliance checkbox.
Who does this benefit?
- People who are blind and rely on keyboards instead of devices requiring eye-hand coordination, like a mouse
- People with low vision who may struggle to locate or follow an on-screen pointer
- People with hand tremors who often find precise mouse movements challenging and prefer using a keyboard
Testing via automation
Automated testing offers speed and consistency at scale. It can instantly scan thousands of pages and flag code-level issues, such as custom controls that are missing a tabindex or are built from non-focusable elements. Integrated into CI/CD pipelines, automation helps teams catch regressions early, saving time and effort.
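To make that concrete, here is a minimal sketch in TypeScript of the kind of code-level rule such a scanner might apply: it flags elements wired to an inline click handler that a keyboard user cannot reach. The focusable-element list and the heuristics are simplified assumptions for illustration, not the logic of any particular tool.

```typescript
// Simplified sketch of an automated keyboard-reachability check.
// Flags elements with an inline click handler that are neither natively
// focusable nor given a non-negative tabindex. Real scanners apply far
// richer rules; this only illustrates the category of issue they catch.

const NATIVELY_FOCUSABLE = new Set(['BUTTON', 'INPUT', 'SELECT', 'TEXTAREA']);

function findUnreachableClickTargets(root: ParentNode): HTMLElement[] {
  const candidates = root.querySelectorAll<HTMLElement>('[onclick]');
  const violations: HTMLElement[] = [];
  for (const el of Array.from(candidates)) {
    const isLinkWithHref = el.tagName === 'A' && el.hasAttribute('href');
    const isNative = NATIVELY_FOCUSABLE.has(el.tagName) || isLinkWithHref;
    const tabindex = el.getAttribute('tabindex');
    const reachableByTabindex = tabindex !== null && Number(tabindex) >= 0;
    if (!isNative && !reachableByTabindex) {
      violations.push(el);
    }
  }
  return violations;
}

// Example: log offenders found on the current page.
for (const el of findUnreachableClickTargets(document)) {
  console.warn('Keyboard-unreachable control:', el.outerHTML.slice(0, 80));
}
```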
Yet automation only scratches the surface. It cannot simulate real keyboard navigation, evaluate logical focus order, or detect focus traps in complex components. Worst of all, it can create a dangerous illusion of compliance: a clean report doesn’t mean a site is actually keyboard accessible.
Testing via Artificial Intelligence (AI)
AI-based testing has emerged as a bridge between automation and manual review. By simulating navigation patterns and leveraging adaptive learning, AI can uncover interaction issues traditional tools overlook. It’s particularly good at detecting patterns in dynamic applications and predicting where barriers might exist.
But let’s be clear: AI isn’t magic. Its accuracy varies, its results can be opaque, and it still can’t evaluate true usability. Marketers love to oversell it as a silver bullet, but in practice it’s only one more tool in the toolbox.
Testing via manual review
Manual testing is where the truth comes out. A skilled tester with a keyboard can uncover issues that no tool or AI model can reliably detect: illogical tab orders, inaccessible modals, focus traps, and broken interactive widgets. Manual testing also provides something machines simply can’t replicate: the context to judge not just whether an element is technically focusable, but whether it’s actually usable.
Of course, this level of testing is slower, requires expertise, and costs more to scale. But it remains the gold standard for confirming compliance with WCAG 2.1.1.
Which approach is best?
No single approach to testing WCAG 2.1.1 Keyboard is perfect. Combining the strengths of each, however, delivers far better coverage than any one approach alone.
Start with automation to quickly identify the low-hanging fruit. Automated tools can flag missing or misused tabindex values, non-focusable controls, or improper use of semantic elements. Integrate these checks into your CI/CD pipeline so regressions never make it to production. Think of automation as your early warning system: fast and consistent, but not comprehensive. The goal is to catch obvious technical issues early and at scale.
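As one example of wiring such a check into a pipeline, the sketch below assumes a Playwright test suite with the @axe-core/playwright package; the URL and the single rule selected are illustrative, not a complete configuration.

```typescript
// Sketch of an automated accessibility gate in CI, assuming Playwright
// and @axe-core/playwright. The URL and rule selection are illustrative.
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('no automatically detectable tabindex violations', async ({ page }) => {
  await page.goto('https://example.com'); // replace with your app's URL

  const results = await new AxeBuilder({ page })
    // The 'tabindex' rule flags positive tabindex values, which distort
    // the natural focus order for keyboard users.
    .withRules(['tabindex'])
    .analyze();

  // An empty violations array lets the build pass; anything else fails it,
  // keeping keyboard regressions out of production.
  expect(results.violations).toEqual([]);
});
```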
Once automation clears the surface issues, AI-based testing helps expand coverage. These tools can simulate navigation paths, detect potential focus traps, and identify interaction barriers in dynamic components. AI adds value by highlighting probable problem areas that deserve human attention, especially in modern single-page applications or complex JavaScript-driven UIs. The goal is to use predictive insights to prioritize where human testers should dig deeper.
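Whether the navigation is driven by an adaptive AI engine or a plain script, the underlying simulation looks something like the Playwright sketch below: it tabs through a page, records the focus order for later review, and flags a possible focus trap when focus stops advancing. The URL, tab limit, and trap heuristic are all simplifying assumptions.

```typescript
// Sketch of simulated keyboard navigation: tab through a page, record
// the focus order, and flag when focus bounces between the same elements
// (a possible trap). URL, limits, and heuristic are illustrative only.
import { chromium } from 'playwright';

async function traceFocusOrder(url: string, maxTabs = 50): Promise<string[]> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto(url);

  const focusOrder: string[] = [];
  for (let i = 0; i < maxTabs; i++) {
    await page.keyboard.press('Tab');
    const active = await page.evaluate(() => {
      const el = document.activeElement;
      return el ? `${el.tagName}#${el.id || '(no id)'}` : '(none)';
    });
    // Crude heuristic: focus alternating between the same two stops
    // suggests a trap that deserves a human tester's attention.
    if (focusOrder.length >= 2 && focusOrder[focusOrder.length - 2] === active) {
      console.warn(`Possible focus trap near tab stop ${i}: ${active}`);
    }
    focusOrder.push(active);
  }

  await browser.close();
  return focusOrder;
}

traceFocusOrder('https://example.com').then((order) => console.log(order));
```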
No matter how advanced the tools, only a human tester with a keyboard can validate the full user experience. This includes verifying logical focus order, ensuring modals and menus don’t trap focus, testing discoverability of interactive elements, and confirming that tasks can be completed without a mouse. Manual testing provides the context and usability judgment that machines cannot.