Why Explainable AI Must Include Blind and Low-Vision Users
Explainable AI largely ignores the needs of blind and low-vision users. This oversight creates barriers as AI systems become more autonomous. It's time for a change.
Explainable Artificial Intelligence (XAI) is a buzzword we hear often, but the conversation largely overlooks a critical group: blind and low-vision (BLV) users. When AI systems are built without them in mind, the consequences are far from trivial. We're not just talking about inconvenience; we're talking about barriers to independence and accessibility.
Why Visual Explanations Fall Short
One glaring issue is that most XAI efforts are highly visual. Meanwhile, AI is evolving from simple query-based tools into something more autonomous, making complex decisions that impact our lives. This shift puts BLV users at a disadvantage, as they can't easily access or understand the visual cues that explain AI's decisions.
Imagine relying on AI for decision-making support but having no way to verify or question those decisions because the explanations aren't accessible to you. Without timely, accessible feedback, small mistakes can spiral into significant problems before anyone even identifies them. And yet, the broader AI community seems to be dragging its feet on addressing this.
A Call for Inclusive XAI
So, what's missing? First, we need conversational explanations designed with the BLV community in mind. It's not just about making sure the AI works; it's also about making sure its users feel empowered and informed. Research shows that BLV users often blame themselves for AI failures, a clear sign that the system is failing them.
If AI is going to serve everyone, it needs to be designed with everyone in mind. Multimodal interfaces that incorporate sound and touch can bridge the gap. Yet these solutions aren't being prioritized. Ask who funded the study, and you'll see that those calling the shots often don't have skin in the accessibility game, and standard XAI benchmarks don't capture what matters most to these users.
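To make this concrete, here is a minimal sketch in Python of what a non-visual explanation could look like: the same feature-importance information that would normally feed a bar chart is instead rendered as a spoken, plain-language summary via the open-source pyttsx3 text-to-speech library. The feature names, scores, and loan scenario are hypothetical, not drawn from any particular system.

```python
# A minimal sketch of a non-visual explanation: instead of plotting a
# feature-importance bar chart, render the same information as a spoken,
# plain-language summary. All feature names and scores are hypothetical.
import pyttsx3


def explain_aloud(decision: str, importances: dict[str, float], top_k: int = 3) -> str:
    # Rank features by the magnitude of their importance and keep the top drivers.
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = []
    for name, score in ranked[:top_k]:
        direction = "supported" if score > 0 else "counted against"
        parts.append(f"{name} {direction} the decision")
    summary = f"The model decided: {decision}. Main factors: " + "; ".join(parts) + "."

    engine = pyttsx3.init()  # offline TTS; works on Windows, macOS, and Linux
    engine.say(summary)      # queue the spoken explanation
    engine.runAndWait()      # block until speech finishes
    return summary           # the text also serves screen readers and braille displays


# Hypothetical loan-decision example:
explain_aloud(
    "application declined",
    {"debt-to-income ratio": -0.42, "credit history length": 0.18, "recent inquiries": -0.31},
)
```

The point of the sketch is that the explanation is the words, not the chart: the same string that gets spoken can be handed to a screen reader or a braille display with no extra work.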
Who Benefits from Inclusive AI?
But who benefits from an inclusive XAI system? Everyone. When we design for the most marginalized, everyone gains. Inclusive design isn't just good ethics; it's good business. AI developers need to adopt a blame-aware approach to explanations, being transparent about limitations and failures. This isn't just a courtesy; it's a necessity as AI agents increasingly act autonomously.
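What might a blame-aware explanation look like in practice? A minimal sketch, assuming a simple confidence threshold; the threshold, wording, and scenario are illustrative, not taken from the research:

```python
# A minimal sketch of a "blame-aware" explanation wrapper: when confidence is
# low, the message explicitly owns the uncertainty instead of leaving the user
# to assume they did something wrong. Threshold and wording are assumptions.
def blame_aware_message(prediction: str, confidence: float) -> str:
    if confidence >= 0.85:
        return f"Result: {prediction} (confidence {confidence:.0%})."
    return (
        f"Result: {prediction}, but my confidence is only {confidence:.0%}. "
        "This may be a mistake on my part, not yours. "
        "You can ask me to re-check, or request a human review."
    )


print(blame_aware_message("image contains a crosswalk signal", 0.62))
```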
The real question is: why isn't this standard practice already? Whose data? Whose labor? Whose benefit? These are the questions we need to ask if we're serious about making AI ethical and accessible. Tellingly, the paper buries its most important recommendation in the appendix: a call for participatory development that includes BLV users from the outset.
Ultimately, making AI explainable to BLV users isn't just a checkbox on a diversity form. It's a fundamental requirement for ethical AI. If we don't act now, we're not just leaving people behind; we're complicit in a system that ignores their needs. And that's a story about power, not just performance.
Key Terms Explained
Artificial Intelligence (AI): The science of creating machines that can perform tasks requiring human-like intelligence, such as reasoning, learning, perception, language understanding, and decision-making.
Benchmark: A standardized test used to measure and compare AI model performance.
Ethical AI: The practice of developing AI systems that are fair, transparent, accountable, and respect human rights.
Multimodal AI: Models that can understand and generate multiple types of data, including text, images, audio, and video.