Monkey Brains vs. Machine Minds: The Real Gap in Visual Processing
New research reveals a striking divergence between neural representations in monkeys and deep learning models. The gap widens as AI models improve on ImageNet-1k.
In visual processing, there's a stark divide between biological brains and artificial minds. Recent research has thrown a spotlight on how deep neural networks, trained on tasks like image classification, stack up against neural activity in the monkey visual cortex. And the results are anything but what you'd expect from industry hype.
Decision Variable Correlation: A New Metric
Forget superficial alignment between neural networks and the visual cortex. Enter Decision Variable Correlation (DVC). This approach doesn't just skim the surface; it digs into how two observers, whether biological or artificial, make decisions on an image-by-image basis. It's a more nuanced take, focusing on task-relevant information rather than general representational alignment.
According to the findings, when you compare neural models against each other, the similarity is on par with what you'd find between two monkeys. But compare monkeys against models, and there's a noticeable drop. The kicker? As these AI models improve their performance on datasets like ImageNet-1k, their similarity to monkey brains falls even further.
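To make the metric concrete, here is a minimal sketch of what a decision-variable correlation could look like. This is an illustrative reconstruction, not the paper's actual estimator: it assumes each observer (a model's readout or a decoder fit to monkey recordings) produces one scalar decision variable per image for a binary task, and that DVC is simply the correlation of those per-image variables across a shared test set.

```python
# Hypothetical sketch of Decision Variable Correlation (DVC).
# Assumption: each observer yields one scalar decision score per image;
# DVC is the Pearson correlation of those scores across images.
import numpy as np

def decision_variable_correlation(dv_a, dv_b):
    """Pearson correlation between two observers' per-image decision variables."""
    a = np.asarray(dv_a, dtype=float)
    b = np.asarray(dv_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy demo: two observers sharing a task-driven signal plus private noise.
rng = np.random.default_rng(0)
shared = rng.normal(size=500)                 # signal both observers use
obs_1 = shared + 0.5 * rng.normal(size=500)   # observer-specific noise
obs_2 = shared + 0.5 * rng.normal(size=500)
print(decision_variable_correlation(obs_1, obs_2))  # high: decisions covary image by image
```

Two observers that rely on the same image features will covary trial by trial even when their average accuracy is identical, which is exactly the distinction the metric is designed to expose.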
Adversarial Training: A False Hope?
So, does adversarial training bridge the gap? Hardly. While it boosts model-to-model similarity, it barely moves model-to-monkey similarity. Even pre-training on larger datasets doesn't close the divide. The promise that more data or clever training tricks could align AI with biological vision remains unfulfilled.
Why should this matter? If AI's trajectory is diverging from biological processing, are we really building smarter systems, or just different ones?
The Real Question
Here's the million-dollar question: if we're engineering intelligence that grows increasingly alien to our own, who sets the benchmarks for relevance? A world where AI excels at tasks yet deviates significantly from human-like processing warrants a deeper conversation. It's about aligning not just performance metrics but the very basis of understanding.
Key Terms Explained
Classification: A machine learning task where the model assigns input data to predefined categories.
Deep learning: A subset of machine learning that uses neural networks with many layers (hence 'deep') to learn complex patterns from large amounts of data.
GPU: Graphics Processing Unit.
Image classification: The task of assigning a label to an image from a set of predefined categories.