Cracking the Code: Why Large Language Models Challenge Cognitive Assumptions
Large language models (LLMs) disrupt traditional cognitive theories, questioning our understanding of representation genesis. A rethink is overdue.
Large language models (LLMs) have thrown a spanner in the works of cognitive science. They achieve impressive cognitive feats without ever passing through what has traditionally been understood as representation genesis: the transition from a non-representing system to one whose internal states guide behavior in specific, content-driven ways. In standard accounts of cognition, that transition is essential.
Why Representation Matters
In the cognitive sciences, the ability of a system to represent has long been taken as a given. The debate hasn't been about whether systems represent, but how they do so. Classical frameworks, from the Language of Thought hypothesis to teleosemantics, share what can be termed the Representation Presupposition: they assume representation already exists and build their explanations of cognitive processes on top of it.
But LLMs are different. They haven't clearly undergone this genesis, which makes it unclear what cognitive abilities they genuinely possess. Here lies the crux of the issue: if these systems never truly 'represent,' how do they perform tasks that seem to require representation?
The Implications of Skipping Genesis
The absence of representation genesis in LLMs challenges existing cognitive theories. These models, by their very success, expose a gap in our conceptual toolkit. We lack the resources to adequately explain their capabilities without invoking representation. The result is a Representation Regress, a conceptual loop where explanations depend on a process that never occurred.
It's time for a conceptual overhaul. The lack of a theory that accounts for LLMs’ capabilities without relying on established notions of representation is more than just an academic puzzle. It's a clarion call for new frameworks that can address this gap. Are we clinging to outdated paradigms?
Rethinking Cognitive Science
For cognitive scientists, LLMs aren't just a curiosity. They signal a need to rethink foundational assumptions. The overlap between AI and cognitive science is growing, and it demands our attention. If LLMs can perform tasks without traditional representation, what does this mean for theories that rely on it? Can we truly understand these systems without revisiting our core assumptions?
The collision between AI and cognitive science is underway, and we're at a crossroads. Will we expand our theoretical frameworks to accommodate these systems, or remain bound by outdated models? The stakes in this convergence are high: we must build the conceptual groundwork for the future of cognitive science.