Reframing Accessibility: AI as an Epistemological Translator
Exploring how AI can bridge the gap between creator intent and inclusive user experiences by serving as an epistemological translator for accessibility
Introduction
Accessibility is fundamentally about epistemological translation—ensuring that meaning remains legible across different ways of perceiving, operating, and understanding digital experiences. The challenge is that digital interfaces are typically designed from a dominant epistemology (such as a visual-first, mouse-driven perspective), which does not inherently align with the diverse ways users experience technology.
A Simple Example of an Epistemological Gap
For example, a designer might visually indicate a section title by making text bold and large. But for a blind user using a screen reader, this visual emphasis carries no meaning unless the text is marked up semantically as an <h1> or <h2> heading.
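The gap can be made concrete with a small sketch (the heading text and class name below are invented for illustration). Both versions look the same to a sighted user once styled, but only the second is exposed to a screen reader as a heading:

<!-- Visual-only emphasis: announced as plain text, with no structure to navigate by -->
<div class="section-title">Quarterly Results</div>

<!-- Semantic markup: announced as a level-2 heading (for example, "Quarterly Results, heading level 2") and reachable via heading navigation -->
<h2 class="section-title">Quarterly Results</h2>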
The problem is not just a missing label or tag—it’s an epistemological gap. The designer’s way of encoding importance is not automatically meaningful in the user’s way of perceiving. Accessibility, then, is not just about technical compliance but about translating intent into multiple perceptual frameworks.
This white paper proposes a shift in accessibility thinking, using AI not as a rule enforcer but as an intelligent interpreter that bridges the gap between creator intent and inclusive user experiences.
The Core Challenge: Accessibility as a Problem of Epistemological Gaps
Most developers, designers, and product owners create software based on their own cognitive models—typically a visual-first, mouse-reliant paradigm. This is not a flaw, but a natural byproduct of how they interact with technology. However, accessibility requires that digital experiences be perceivable, operable, and understandable across multiple perceptual and cognitive frameworks.
Current Accessibility Gaps:
- Creators express intent in their natural epistemology (e.g., assuming visual hierarchy is enough to convey structure).
- Assistive technologies require a structured, machine-readable representation of that intent (e.g., semantic HTML, ARIA roles, alternative text).
- Users experience interfaces in different modalities (e.g., via screen readers, keyboard navigation, voice controls, or cognitive adaptations).
- The gap between creation and consumption requires an intermediary layer of translation, as sketched below.
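As a rough illustration of that intermediary layer (the link targets below are invented), the creator's intent that "the links across the top are the site's main navigation" has to be re-expressed in a machine-readable form before assistive technologies can act on it:

<!-- Creator intent, expressed visually: a row of links styled as a top bar -->
<!-- Machine-readable translation: a labeled landmark that screen reader and keyboard users can jump to directly -->
<nav aria-label="Main">
  <ul>
    <li><a href="/">Home</a></li>
    <li><a href="/docs">Documentation</a></li>
    <li><a href="/support">Support</a></li>
  </ul>
</nav>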
Today, this translation is manual and knowledge-dependent, requiring developers to explicitly define accessibility affordances within their existing workflow. This leads to accessibility being perceived as an additional burden rather than a natural part of the creation process.
AI as the Missing Link: Automating Accessibility as a Dynamic Translation Process
Artificial Intelligence (AI) is fundamentally about symbolic transformation—mapping one representation of meaning into another. At their core, Large Language Models (LLMs) work by transforming input symbols into output symbols based on relationships learned from data. This means AI is not just processing words, but translating intent, context, and structure across different modalities.
AI as an Interactive Accessibility Guide
AI does more than just generate multiple accessibility solutions—it acts as an interactive reasoning system that helps creators refine their intent. Instead of presenting isolated accessibility fixes, AI can contextualize issues by answering questions such as:
- “How would this be perceived by a blind user?” → AI provides an immediate textual summary of how assistive technologies currently interpret the interface.
- “What’s the best way to make this accessible while preserving my intent?” → AI can offer multiple structured solutions, allowing the creator to choose or refine an approach.
- “Why does this matter?” → AI can use metaphors and reasoning tools to explain why a particular change is necessary.
This dialectical exchange ensures that accessibility is not just enforced, but clarified and internalized, allowing creators to make informed, intentional decisions.
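To make the first of those questions concrete, consider a form field that is labeled only by a visual hint (the field below is invented, and exact announcements vary by screen reader):

<!-- As authored: the placeholder is not a reliable label and disappears once the user types,
     so the field may be announced as little more than "edit text" -->
<input type="text" placeholder="Email address">

<!-- A translation that preserves the intent: announced along the lines of "Email address, edit text" -->
<label for="email">Email address</label>
<input id="email" type="text" autocomplete="email">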
Key Benefits of AI-Driven Accessibility Translation
- Empowers accessibility experts by embedding their guidance into AI-driven workflows, ensuring best practices are applied at the point of creation.
- Preserves creator intent by offering multiple AI-generated interpretations, allowing creators to refine and confirm accessibility choices.
- Makes accessibility seamless by translating creative intent into structured accessibility systems, ensuring Perceivable, Operable, Understandable, and Robust (POUR) access across all modalities.
AI’s Non-Deterministic Nature: A Key Advantage for Accessibility
One of the biggest strengths of AI in accessibility is its non-deterministic nature: it does not produce a single “right” answer, but rather multiple candidate solutions to a given accessibility problem, from which the creator can choose the one that best preserves their intent.
Addressing AI Risks: Why Hallucinations Are Minimal
A common concern with AI in accessibility is the risk of hallucination—AI generating incorrect or misleading information. However, in this workflow, hallucination risks are minimized because:
- AI is operating within strictly defined accessibility criteria (e.g., WCAG success criteria, semantic HTML principles) rather than making freeform assumptions.
- AI outputs are not automatically applied—creators confirm and refine AI-suggested interpretations before implementation.
- The AI’s role is not to replace human judgment but to provide structured prompts that facilitate accessibility-conscious decision-making.
In this way, AI functions as a focused epistemological translator, aligning accessibility guidance with creator intent without replacing critical human oversight.
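One way to picture that constraint is that each suggestion is anchored to a named success criterion rather than produced freeform; the criterion cited below is from WCAG 2.1, while the image and its description are invented for illustration:

<!-- Flagged against WCAG 2.1 SC 1.1.1 (Non-text Content): the image has no text alternative -->
<img src="chart-q3.png">

<!-- Suggested translation, shown to the creator for confirmation before it is applied -->
<img src="chart-q3.png" alt="Bar chart of Q3 revenue by region">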
AI-Enhanced Accessibility Testing and Remediation
Instead of treating AI as a simple rule-checker, we can leverage it as an active reasoning system that enhances both testing and remediation.
Example Workflow: AI-Assisted Accessibility Refinement
Scenario: A Developer Creates a Button, but AI Detects a Potential Issue
- A developer writes the following code:
  <div class="btn primary">Submit</div>
- AI analyzes the situation using multiple inputs:
  - Source code analysis: Detects that this div visually resembles a button.
  - Rendered UI analysis: Identifies button-like styling.
  - Accessibility tree analysis: Notices that it is not recognized as a button.
- AI provides feedback:
  - Current screen reader interpretation: “Submit,” announced as generic text with no button role.
  - Recommended fix: “You’re making a button. You should use <button> or add role="button" with keyboard support.”
- The developer confirms or refines AI’s interpretation:
  - “No, it’s just a container.” → AI adjusts and learns.
  - “Oh, good catch!” → AI applies the fix.
This demonstrates AI’s ability to contextualize, reason about accessibility, and adapt to human input, making it a dynamic accessibility assistant rather than just an enforcer.
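For concreteness, the recommended fix could take either of the following forms (submitForm() stands in for whatever handler the page already uses). The native <button> is the more robust choice, since it provides the role, name, focusability, and keyboard activation for free:

<!-- Preferred fix: a native button is focusable and keyboard-operable by default -->
<button type="submit" class="btn primary">Submit</button>

<!-- Alternative if the div must remain: role, focus, and key handling all have to be added by hand -->
<div class="btn primary" role="button" tabindex="0"
     onclick="submitForm()"
     onkeydown="if (event.key === 'Enter' || event.key === ' ') { event.preventDefault(); submitForm(); }">
  Submit
</div>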
The Fastest Path to Inclusion: Making Accessibility a Byproduct of Creation
One of the biggest barriers to accessibility adoption is the friction in making accessible decisions within the creative process. AI can significantly reduce this friction by embedding accessibility guidance at the point of creation, ensuring that best practices are naturally integrated rather than treated as an afterthought.
For those who want to engage more deeply, AI can provide dynamic learning opportunities—explaining its reasoning, offering insights into accessibility best practices, and adapting to the creator’s intent. This ensures that accessibility is not just enforced, but actively understood and refined in ways that align with the creator’s vision.
Conclusion: AI as an Accessibility Interpreter, Not Just an Enforcer
By reframing accessibility as an epistemological translation challenge, we unlock AI’s potential as a bridge between creative intent and structured accessibility systems. AI can facilitate meaningful, multimodal translations of digital experiences, ensuring that accessibility is not just an afterthought but an integral part of creation.
By embedding accessibility expertise at the point of design and development, AI empowers creators to build inclusively by default—reducing friction, increasing clarity, and enabling a future where accessibility is a natural, expressive part of the creative process.