
Confucian Machines, LLMs, & the Quest for Socratic Instruments

March 5, 2026 · 7 min read

During my time in Singapore, I have found myself reading a lot about East Asian history, the intellectual traditions that shaped it, and the foundations of the region’s modern “tiger economies.” This path eventually led me deeper into the puzzle known as the Needham Question, first articulated by the historian of science Joseph Needham.

Why did modern science and industrialization emerge first in Europe rather than in China, despite China’s long history of technological sophistication?

China produced major inventions centuries before Europe, including the compass, gunpowder, and papermaking. Yet the scientific revolution later unfolded in Europe. The intellectual breakthroughs associated with superstar figures like Isaac Newton and Albert Einstein have come to symbolize this era of Western scientific ascendancy. Historians have offered many explanations: political fragmentation, competition between states, universities, markets, and scientific institutions.

Another lens lies in ancient philosophy. Philosophical traditions shape how societies think about knowledge, authority, and inquiry. Two figures often used to illustrate different traditions are Confucius and Socrates.

Knowledge begins by admitting uncertainty. Socrates’ famous claim that “I know that I know nothing” is not merely a confession but a methodological starting point. It is an invitation to expose assumptions, challenge authority, and search for truth through dialogue. This spirit of institutionalized doubt eventually scaled into the Western traditions of peer review, scientific debate, and experimental falsification.

Confucian thought, by contrast, developed around somewhat different priorities. It emphasizes learning from the wisdom of the past. Teachers, canonical texts, and traditions carry authority because they represent accumulated experience. The task of the scholar is to study, interpret, and apply this inherited knowledge.

Yet it would be a mistake to interpret these ideas merely as rigid or authoritarian. Confucian philosophy placed great value on continuity, balance, and social harmony. These principles have contributed to the formation of remarkably durable political and social institutions in the East. Rather than prioritizing intellectual disruption, Confucian thought aimed at the cultivation of ordered and flourishing communities.

Indeed, when viewed from the perspective of modern science, some of these priorities may appear newly relevant. The popular narrative of innovation often celebrates lone geniuses: singular individuals whose insights transform the world. But this image has always been somewhat mythic. Newton famously acknowledged that he stood “on the shoulders of giants,” and Einstein worked within a dense network of theoretical debates and collaborators.

Today, the collaborative nature of discovery is even more apparent. Modern scientific breakthroughs often emerge from large research teams, highly specialized expertise, and global networks of shared knowledge. Advances in fields such as particle physics, genomics, and robotics are collective achievements. Innovation rarely emerges from a single mind anymore.

In this sense, traditions that emphasize communal effort and continuity may align surprisingly well with modern science. Knowledge builds through generations, and progress often depends on shared intellectual infrastructure.

This observation grows particularly intriguing when we turn to modern artificial intelligence, where the question becomes epistemic.

Large language models are trained on vast collections of human text, learning patterns from centuries of accumulated writing. In doing so, they effectively absorb an enormous archive of human knowledge.
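The mechanics of this absorption can be sketched with a toy model. The snippet below builds a bigram table from a tiny hypothetical corpus (the sentences and the lookup-table approach are purely illustrative; real LLMs learn distributed neural representations), but it captures the core idea: what the system “knows” is the statistics of what was written before.

```python
from collections import Counter, defaultdict

# Hypothetical miniature corpus standing in for centuries of accumulated writing.
corpus = (
    "the master says learning without thought is labor lost "
    "the master says thought without learning is perilous "
    "the unexamined life is not worth living"
).split()

# Count bigram transitions: a crude stand-in for next-token statistics.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

# The most frequent continuation of "the" reflects the corpus, not reasoning.
print(transitions["the"].most_common(1))  # → [('master', 2)]
```

The point of the sketch is only that the model’s “knowledge” is inherited frequency: the archive, not the arguer, decides what comes next.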

By a loose analogy, this resembles aspects of a Confucian knowledge system. The parallels can be sketched as follows:

| Confucian knowledge model | LLM knowledge model |
| --- | --- |
| Wisdom transmitted through canonical texts | Training on massive bodies of written work |
| Scholars interpret past authorities | Models predict patterns from past writings |
| Commentary builds on commentary | Fine-tuning and reinforcement learning |
| Legitimacy tied to tradition | Output probability reflects prior usage |

In both cases, the past plays a central role in shaping present knowledge.

Of course, the analogy should not be pushed too far. Confucian scholarship involved dynamic reinterpretation and philosophical debate over centuries. Likewise, language models do not simply reproduce existing texts; they synthesize patterns across vast domains of knowledge in ways that often produce novel combinations of ideas.

Still, the structural similarity raises an intriguing epistemic question: what happens when systems built on historical text become the main interface through which humans access knowledge?

If AI systems increasingly mediate our interaction with knowledge, several subtle dynamics could emerge.

One possible effect is canon reinforcement. Models learn from the distribution of past writing, so ideas that appear frequently become easier to reproduce. Yet historically, transformative ideas often began as rare or unpopular. Early formulations of revolutionary theories, whether in physics, biology, or philosophy, initially occupied the margins of intellectual discourse. Systems optimized to reproduce dominant patterns may therefore, unintentionally, lean toward consensus rather than novelty.
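This dynamic can be made concrete with made-up counts for a dominant and a marginal continuation of the same prompt (the phrases and numbers are hypothetical, not drawn from any real model):

```python
from collections import Counter

# Hypothetical corpus counts for two competing continuations: a dominant
# consensus view and a once-heretical minority view.
continuations = Counter({"earth is the centre": 950, "sun is the centre": 50})

# Greedy decoding always emits the most frequent pattern,
# so the marginal idea is never selected.
greedy = continuations.most_common(1)[0][0]
print(greedy)  # → earth is the centre

# Even sampling in proportion to frequency, the marginal idea
# surfaces only 5% of the time.
p_marginal = continuations["sun is the centre"] / sum(continuations.values())
print(p_marginal)  # → 0.05
```

Under greedy decoding the heliocentric answer is structurally invisible, no matter how right it later proves to be; that is canon reinforcement in miniature.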

Another effect is what might be called intellectual smoothing. Language models tend to produce responses that are coherent, balanced, and moderate. This is often extremely useful, especially when synthesizing complex subjects for readers. Yet the history of thought shows that intellectual progress frequently begins with ideas that feel strange, disruptive, or uncomfortable. Radical insights do not initially appear as balanced summaries of existing knowledge; they appear as unsettling challenges to it.
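One concrete mechanism behind this smoothing is the temperature parameter in softmax sampling: lower temperatures concentrate probability mass on the most conventional continuation. A small illustration with assumed scores (the logit values are invented for the example):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert scores to probabilities; lower temperature sharpens the distribution."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores: a conventional answer vs. two stranger alternatives.
logits = [3.0, 1.0, 0.5]

for t in (1.0, 0.5):
    probs = softmax(logits, temperature=t)
    print(f"T={t}: conventional answer gets {probs[0]:.2f} of the mass")
```

At temperature 1.0 the conventional answer already takes most of the probability mass; at 0.5 it takes nearly all of it. Deployed systems often sample at reduced temperatures, which is one reason their outputs feel balanced and moderate rather than strange or disruptive.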

There is also a cultural risk. As AI tools become more ubiquitous, people may begin to treat their outputs as authoritative summaries of knowledge. Instead of consulting multiple sources or engaging in extended debate, users may increasingly rely on machine-generated explanations.

In such a scenario, the phrase “the Master says” (Zi yue, 子曰), which is the traditional opening of the Confucian Analects, might quietly reappear in a new form: “the model says.”

This would not occur because artificial intelligence demands authority. Rather, it would emerge from a human tendency toward convenience. When answers become easily accessible, the temptation to accept them uncritically grows stronger.

Yet focusing only on these risks overlooks something equally remarkable. Artificial intelligence introduces a form of intellectual interaction that has never existed before. For the first time in history, individuals can engage in dialogue with a system trained on a vast portion of humanity’s written knowledge.

This opens the possibility of treating AI not as an authority but as a tool for inquiry. A kind of Socratic instrument.

Through conversation with such systems, users can test ideas, generate counterarguments, explore alternative interpretations, and examine the assumptions underlying their own reasoning. Instead of delivering final answers, AI can help structure ongoing intellectual dialogue. When used in this way, it does not replace the Socratic method but amplifies it.

The deeper challenge of the AI age may therefore be cultural rather than technological. Modern knowledge systems are increasingly optimized for speed, convenience, and summarized answers. But the Socratic tradition depends on something quite different: patience, questioning, and intellectual humility.

At the heart of Socratic inquiry lies a willingness to admit uncertainty. To say “I might be wrong” is not weakness but the beginning of genuine understanding. Scientific progress depends on this same willingness to challenge assumptions, revise theories, and abandon ideas when evidence demands it.

In many contexts, acknowledging error can feel like a loss of face. Yet the history of science suggests that such moments of intellectual vulnerability are precisely what enable deeper discovery. In the age of artificial intelligence, the ability to reject hallucinated certainty and embrace epistemic humility may become more important than ever.

From this perspective, the future relationship between humans and AI need not be framed as a conflict between different intellectual traditions. Instead, it may represent the emergence of a new synthesis. The vast textual inheritance of humanity, the kind of continuity emphasized in Confucian traditions, can now be accessed through interactive dialogue. At the same time, the spirit of Socratic questioning can guide how we engage with these systems.

For the first time in history, humanity can interact directly with a system that reflects a large share of its intellectual past, and simply have a conversation with it.

If used thoughtfully, such tools may not diminish the Socratic tradition but extend it. Rather than marking the end of questioning, the age of artificial intelligence may create new conditions for it.

Maybe, just maybe, the “Socratic peak” has not yet been reached. In a world increasingly shaped by powerful knowledge synthesis systems, the future of intellectual life may depend less on the machines we build than on the intentions with which we use them.