AI Audit Standards: More Confusion Than Clarity?
AI audit standards like ASB 018 often miss the mark, introducing more ambiguity than accountability. These gaps can have real consequences in high-stakes areas like criminal justice.
AI governance is increasingly leaning on audit standards to ensure systems are up to scratch. But here's the kicker: poorly designed standards might actually obscure flaws rather than reveal them. Take ASB 018, a standard for auditing probabilistic genotyping software, which is used in the U.S. criminal legal system to analyze DNA evidence.
The Gaps in Audit Standards
ASB 018 is supposed to ensure that audits surface system failures and recommend restrictions on how the software is used. Sounds great in theory. In practice, though, an audit can check every box without ever addressing those issues. And this isn't just an academic exercise: this software plays a role in determining someone's guilt or innocence.
So, what's the problem? The standard is riddled with vague language and undefined terms. It's like trying to follow a map where some of the roads just disappear. How do you expect auditors to hold systems accountable when the guidelines themselves are so wishy-washy?
Implications for the Criminal Justice System
Why should you care? Because these gaps have real-world consequences. Imagine flawed software analysis feeding into a court case. The gap between what a standard promises on paper and what an audit actually verifies is enormous, and here the stakes aren't a missed quarterly target, they're the courtroom.
Does this mean we should ditch audit standards altogether? Not at all. But what we need is a serious overhaul. Standards should be precise and enforceable, not just a checklist that auditors can breeze through without real scrutiny. Otherwise, we're just putting a stamp of approval on systems that might not deserve it.
Where Do We Go From Here?
The road ahead should involve rethinking how these standards are crafted. Are they genuinely improving the systems they're meant to evaluate, or are they just giving us a false sense of security? The people who actually use these tools need to be part of the conversation. After all, they're the ones who see the gaps that the standards often miss.
In the end, it's about accountability. If audit standards continue to fall short, the integrity of AI systems, especially those with high stakes like DNA analysis, will remain in question. We need to move beyond tick-box exercises and get serious about what these audits should accomplish.