When AI Agents Play Nice: The New Norms of Fairness
AI agents are venturing into fairness decisions, but do they mimic human judgment? A new framework shows they don't always align.
In the mid-2010s, normcore was more than a fashion trend. It was a social statement, embracing sameness to signal belonging. Fast forward to today's tech-savvy world, where a similar kind of norm-setting is happening in the field of Multi-agent Artificial Intelligence (MAAI).
AI Agents and Fairness
MAAI systems are now capable of deliberating and negotiating, aiming to reach shared decisions in fairness-sensitive domains. It's like letting a bunch of AI agents loose in a room and asking them to hammer out a consensus on what's fair. But here's the kicker: researchers often treat these norms as targets for AI agents to hit, assuming these digital minds mirror human judgment. Spoiler alert: they don't.
Introducing NormCoRe
The new kid on the block is Normative Common Ground Replication (NormCoRe). Building on behavioral science and the latest MAAI architectures, the framework maps the intricate designs of human experiments onto AI-agent environments. Think of it as giving AI agents a crash course in human behavioral studies. This isn't just a nerdy exercise; it's a way to systematically document how these studies are structured and to analyze the norms that emerge.
A Fair Test
NormCoRe has already been put to the test, replicating a classic experimental study on distributive justice. Participants in this study were asked to negotiate fairness under a 'veil of ignorance', meaning they didn't know their future position in the outcome. The twist? AI agents were thrown into the mix, and their judgments were compared to human benchmarks.
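To make the setup concrete, here is a minimal toy sketch of a veil-of-ignorance negotiation round. This is not NormCoRe's actual code: the distribution payoffs, the principle names, and the random-vote stand-in for an agent's judgment are all illustrative assumptions (a real study would query a foundation model for each vote).

```python
import random

# Hypothetical income distributions the group can choose between, each
# mapping a distributive principle to payoffs per position. Values are
# illustrative only.
DISTRIBUTIONS = {
    "maximize_floor":   [12, 13, 14, 15],  # best payoff for the worst-off
    "maximize_average": [2, 10, 20, 40],   # highest mean, but a low floor
    "mixed":            [8, 12, 16, 24],   # floor constraint plus efficiency
}

def agent_vote(agent_id, rng):
    """Stand-in for an agent's normative judgment. Behind the veil of
    ignorance, each agent votes without knowing which payoff position it
    will end up in. Here the vote is random; a real experiment would
    prompt an LLM agent instead."""
    return rng.choice(sorted(DISTRIBUTIONS))

def run_round(n_agents=4, seed=0):
    rng = random.Random(seed)
    # 1. Collect votes behind the veil: no agent knows its future position.
    votes = [agent_vote(i, rng) for i in range(n_agents)]
    # 2. The majority choice becomes the group's norm (ties broken
    #    alphabetically for determinism).
    chosen = max(sorted(set(votes)), key=votes.count)
    # 3. Only now is the veil lifted: positions are assigned at random.
    payoffs = DISTRIBUTIONS[chosen][:]
    rng.shuffle(payoffs)
    return chosen, payoffs

if __name__ == "__main__":
    principle, payoffs = run_round()
    print(principle, payoffs)
```

The point of the structure is the ordering: the group commits to a principle in step 2 before anyone learns their position in step 3, which is exactly what makes the judgment a fairness judgment rather than self-interest.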
The results? AI normative judgments weren't a carbon copy of human evaluations. They varied with the choice of foundation model and with the wording used to define agent personas. So, is AI ready to take over decisions on fairness? Not quite, if you ask me.
Why It Matters
Why should you care about AI agents squabbling over fairness? Because these agents are creeping into roles traditionally held by humans, from automated customer service to more sensitive areas like legal decisions. How they decide what's fair affects us all. Are we comfortable with AI making decisions that don't match human values?
NormCoRe offers a structured way to study these AI-fueled dynamics. It provides a documented path to understanding how AI agents interpret and apply norms, and it raises questions about the future roles of AI in decision-making.
That's the week in AI. See you Monday.