Note: The creation of this article on testing methods for Orientation was human-based, with the assistance of artificial intelligence.
Explanation of the success criterion
WCAG 1.3.4 Orientation is a Level AA Success Criterion. Content must support both portrait and landscape orientations unless a specific orientation is essential. “Essential” doesn’t mean the design is locked in or that the team lacks time or resources to update it. Valid exceptions typically include use cases like bank check imaging, piano apps, presentation slides, or virtual reality experiences: situations where the content truly depends on a specific orientation.
If a website or app lacks a legitimate reason to restrict orientation, it must not do so.
I once shared a commuter train with a motor-impaired passenger who used a motorized chair. His smartphone was mounted in a fixed landscape position on the armrest. What if he tried to access a site or app that refused to adapt to his device’s position? For him, and many others, flexibility in screen position isn’t a convenience; it’s a necessity.
Who does this benefit?
- Users with limited dexterity who use mounted devices can access content in their device’s fixed position.
- Users with low vision benefit from multiple positions by choosing the view that best meets their needs, such as landscape mode for larger text.
Testing orientation via Automated testing
Automated testing provides a useful first pass for detecting hard-coded orientation restrictions. It flags technical indicators like @media queries that force a specific position, restrictive viewport settings, and orientation-locking code in HTML or native apps (e.g., Android’s android:screenOrientation). Developers can integrate these tests into CI/CD pipelines to catch orientation issues early. Automated tools also ensure consistency by running the same way every time, minimizing human bias.
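To make that concrete, here is a minimal sketch of such a scan in TypeScript. It greps a stylesheet and an Android manifest for the indicators above; the file paths are assumptions, and a match is only a signal for human review, not proof of a violation.

```ts
import { readFileSync } from "node:fs";

// Patterns that often indicate a hard-coded orientation restriction.
const suspects = [
  {
    file: "styles/main.css", // hypothetical path
    pattern: /@media[^{]*\(orientation:\s*(portrait|landscape)\)/gi,
    hint: "orientation media query: adapting layout is fine, hiding content is not",
  },
  {
    file: "android/app/src/main/AndroidManifest.xml", // hypothetical path
    pattern: /android:screenOrientation\s*=\s*"(portrait|landscape)"/gi,
    hint: "activity locked to a single orientation in the manifest",
  },
];

for (const { file, pattern, hint } of suspects) {
  const source = readFileSync(file, "utf8");
  for (const match of source.matchAll(pattern)) {
    console.log(`${file}: found "${match[0]}" -> ${hint}`);
  }
}
```

A real checker would walk the whole source tree and report results in a CI-friendly format, but even this much is enough to surface most hard-coded locks for review.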
However, automated tools can’t replicate the user experience of physically rotating a device or switching between portrait and landscape on different screen sizes. They can’t judge whether an orientation lock is essential (e.g., for a piano app that requires landscape mode). They often miss visual or layout problems in one position, especially in dynamic or complex content. These tools may flag harmless code or overlook orientation issues hidden in custom frameworks. Most mainstream accessibility tools don’t test positions specifically, making it a niche area in automation.
Testing orientation via Artificial Intelligence (AI)
AI-based tools quickly scan code, especially CSS and JavaScript, for signs of enforced orientation, such as @media (orientation: landscape) queries or scripts that lock rotation. They can analyze large websites or apps at scale, flagging potential orientation issues. Advanced AI can recognize patterns where the position is often restricted (e.g., game interfaces, video players, form wizards), helping teams prioritize high-risk content. Once configured, teams can rerun these tools in CI/CD pipelines with minimal cost or effort.
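For context, a rotation-locking script on the web usually means a call to the Screen Orientation API. The sketch below shows the kind of pattern such a scan should surface; the function name is hypothetical.

```ts
// A pattern scanners should flag for human review: code that tries to
// lock the screen to landscape. Browsers only honor the lock in certain
// contexts (typically fullscreen), so it is often paired with
// requestFullscreen(), and the promise rejects when the lock is refused.
async function lockToLandscape(): Promise<void> {
  try {
    await document.documentElement.requestFullscreen();
    // Some TypeScript DOM lib versions omit lock(), hence the cast.
    await (screen.orientation as ScreenOrientation & {
      lock(orientation: string): Promise<void>;
    }).lock("landscape");
  } catch {
    // Lock refused: content keeps rotating normally.
  }
}
```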
However, AI-based testing has limitations. Tools may misinterpret orientation-related code, flagging layout adaptations (which are allowed) as violations. They can’t fully simulate user behavior on mobile devices or determine whether orientation locks are essential, such as in a digital spirit level app. AI often misses orientation restrictions enforced in native or hybrid environments, and may flag valid media queries as violations or overlook issues hidden in complex interactions.
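The distinction that trips up automated pattern-matching is worth spelling out: reacting to orientation in order to rearrange a layout is allowed, while using the same hook to block use in one orientation is what the criterion forbids. A small sketch, with an assumed CSS class name:

```ts
const landscapeQuery = window.matchMedia("(orientation: landscape)");

// Allowed: adapt the layout when the device rotates.
// "two-column" is an assumed class that rearranges, but never hides, content.
function applyAdaptiveLayout(isLandscape: boolean): void {
  document.body.classList.toggle("two-column", isLandscape);
}

applyAdaptiveLayout(landscapeQuery.matches);
landscapeQuery.addEventListener("change", (e) => applyAdaptiveLayout(e.matches));

// Not allowed (absent an essential reason): using the same query to show a
// full-screen "please rotate your device" overlay that makes portrait unusable.
```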
AI-based testing offers an efficient first pass for detecting possible issues, especially in code. But it cannot replace manual testing, which is essential for understanding context, user intent, and interactive behavior.
Testing orientation via Manual testing
Manual testing for this success criterion offers clear benefits and challenges.
By using real devices or emulators, testers can observe how interfaces respond to position changes in practical scenarios. They can assess whether layout shifts maintain usability, readability, and logical content flow. Human judgment allows testers to determine when orientation restrictions are essential, something automated tools often miss. Manual testing also uncovers UI elements that break, disappear, or overlap after rotation and ensures assistive technologies like screen readers and magnifiers work properly in both positions.
However, manual testing is time-consuming and resource-intensive. Testing multiple devices and positions doesn’t scale easily for large applications. It requires access to a range of physical devices or simulators, which may not always be available. Results can vary due to device differences or tester interpretation unless guided by strict test procedures.
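One way to ease the scaling burden is to script the rotation step in an emulator and reserve human judgment for reviewing the results. Below is a minimal sketch using Playwright’s built-in device profiles; the URL and the main landmark selector are assumptions about the page under test.

```ts
import { chromium, devices } from "playwright";

// Render the same page in portrait and landscape device profiles and
// confirm the main content is still present. A human still judges whether
// the layout remains usable and readable in each orientation.
async function checkOrientations(url: string): Promise<void> {
  for (const name of ["iPhone 13", "iPhone 13 landscape"]) {
    const browser = await chromium.launch();
    const context = await browser.newContext({ ...devices[name] });
    const page = await context.newPage();
    await page.goto(url);
    const mainVisible = await page.locator("main").isVisible();
    console.log(`${name}: <main> visible = ${mainVisible}`);
    await browser.close();
  }
}

checkOrientations("https://example.com").catch(console.error);
```

The same script can be reused as the regression check recommended in the next section.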
Which approach is best?
No single method perfectly tests 1.3.4 Orientation, but combining their strengths yields better results.
Start with automated scans to catch code-level issues. Use AI-based analysis to detect layout problems across devices. Follow with manual testing of key content and critical user flows on at least one phone and one tablet. Document and justify any orientation restrictions, ensuring they meet the “essential” exception. Include orientation checks in regression tests for new features and responsive designs.