LLMs and Human Knowledge: A Diverging Path?
Large Language Models struggle with early-stage scientific discourse, unlike humans who rely on tacit knowledge. Why this divergence matters.
In AI research, comparing how Large Language Models (LLMs) and humans construct knowledge reveals significant insights about how each gathers information. Notably, a recent paper published in Japanese highlights stark differences in how the two build their understanding of the world.
Human vs. LLM Knowledge Creation
Human knowledge often starts in closed circles of experts and relies heavily on social discourse. For instance, a 2014 study of gravitational wave physicists showed how scientists come to dismiss 'fringe science' through tacit understanding gained in those discussions. LLMs, by contrast, have no access to such early-stage discourse. They draw primarily on existing written literature, which makes their grasp of emergent knowledge precarious.
The results speak for themselves. In 2023, ChatGPT failed Colin Fraser's 'Dumb Monty Hall problem'; a mere year later, its successors answered it correctly. The change came not from improved reasoning but from an expanded corpus of human-written material discussing the puzzle.
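To see why the classic puzzle so thoroughly dominates written sources, here is a minimal simulation of the standard Monty Hall game (a sketch for illustration, not from the paper): the 'always switch' result below is the answer that saturates the literature, and it is the answer a model trained on that literature tends to echo even when a prompt variation quietly voids the premise.

```python
import random

def monty_hall(switch: bool, trials: int = 100_000) -> float:
    """Simulate the classic Monty Hall game and return the win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)    # contestant's initial choice
        # Host opens a door that is neither the pick nor the prize.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")  # ~0.333
print(f"switch: {monty_hall(switch=True):.3f}")   # ~0.667
```

Fraser's variant works precisely because it breaks an assumption this simulation hard-codes; a reader can re-derive the answer from the changed setup, while a model anchored to the dominant write-ups cannot.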
Chasing Human Alignment
Consider the recent invention of a new Monty Hall prompt. When a panel of LLMs and humans was asked to respond, their answers diverged dramatically. Yet the question arises: as written discourse about such puzzles expands, will LLMs soon mimic human-like reasoning more closely? The data suggest that with time and more 'by hand' human adjustments, alignment may not be far off.
The Overshadowing Dilemma
However, a phenomenon termed 'overshadowing' presents a challenge. When a dominant discourse overshadows minor prompt variations, LLMs falter, giving outdated responses that miss the nuances. Is the 'intelligence' truly in the LLMs, or is it merely a reflection of the humans behind the scenes?
Western coverage has largely overlooked this critical discussion. While LLMs advance rapidly, their reliance on static, written sources limits their ability to innovate like human counterparts. The implications for AI development are significant, as LLMs may never fully replicate the subtlety and depth of human reasoning without substantial intervention.