Rouen, 1642. A nineteen-year-old boy watches his father, a tax commissioner for the French Crown, spend his evenings hunched over columns of numbers — adding, checking, re-adding, checking again. The work is relentless and mind-numbing: thousands of figures to be summed, thousands of opportunities for the small errors that cascade into large ones. The boy, whose name is Blaise Pascal, has a gift for mathematics that borders on the supernatural — he had independently rediscovered several of Euclid’s geometric propositions as a child, before anyone had taught them to him. He looks at his father’s labor and thinks: there must be a better way.

Over the next three years, he builds fifty prototypes. He works through problems of gear ratios, of mechanical linkage, of the carrying mechanism that transfers a unit from one column to the next when a digit rolls over from nine to zero. He produces a machine — the Pascaline — that can add and subtract numbers mechanically, without human attention to the individual operations. Turn the dials to the input numbers. Turn the crank. Read the answer.
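To see what the carrying mechanism accomplishes, here is a minimal sketch in Python of the same logic. It illustrates the idea of a rippling carry, not Pascal’s actual gearwork; the function name and digit layout are my own inventions for the example.

```python
# A minimal sketch of the Pascaline's central trick: each dial holds a
# digit from 0 to 9, and when a dial rolls over past 9, one unit is
# carried into the next column. Illustrative only, not Pascal's design.

def pascaline_add(dials, amount):
    """Add `amount` to a number stored as decimal dials.

    dials[0] is the ones column, dials[1] the tens, and so on.
    """
    carry = amount
    for i in range(len(dials)):
        total = dials[i] + carry
        dials[i] = total % 10    # the dial shows its new digit
        carry = total // 10      # the rollover propagates to the next column
    return dials

# 278 + 945 = 1223, stored least-significant digit first
print(pascaline_add([8, 7, 2, 0], 945))  # -> [3, 2, 2, 1]
```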

It works. It is the first mechanical calculator in history that actually functions.

And almost immediately, the question arises that will echo through the next four centuries: if a machine can calculate, what else can it do? And if it can do anything a mind can do — is it, in some sense, a mind?

Pascal himself had an answer. He was not sure he liked it.


Why Philosophy Came First

The history of AI is usually told as a story of technology — of machines getting faster, algorithms getting smarter, datasets getting larger. And that story is true and important. But it is incomplete.

Before anyone could build a thinking machine, someone had to think about what thinking was. Before anyone could program intelligence, someone had to ask what intelligence required. Before the engineers came the philosophers — and sometimes, before the philosophers, the mathematicians, though often they were the same people.

The philosophical groundwork for artificial intelligence was laid over the course of roughly four centuries, from the early 17th century to the mid-20th. It was laid by people who had no idea they were contributing to a field that did not yet exist, who were working on problems they understood in terms of mathematics, theology, and natural philosophy rather than computing. But the questions they asked, and the partial answers they gave, shaped the conceptual landscape into which AI was born.

This article is about those people and those questions. It is about the philosophers who, before anyone had built a computer, were already arguing about whether a computer could think — and about what thinking was in the first place.

Their arguments are not museum pieces. They are live debates, relevant to the most pressing questions in AI today. We have better technology than they did. We do not have better answers to their questions.


René Descartes: Drawing the Line

We have met Descartes before in this series — in the automata article, as the philosopher who provided the framework that made the automata era intellectually serious. But Descartes deserves a fuller treatment here, because his contribution to the philosophy of mind was not just a framework for thinking about automata. It was the founding document of modern philosophy of mind — the starting point from which almost every subsequent discussion of the relationship between mind and mechanism began.

Descartes was born in 1596 in La Haye, France, and trained in the Jesuit educational tradition — rigorous, broad, deeply engaged with the inherited learning of classical antiquity and scholastic philosophy. He grew profoundly dissatisfied with this inheritance. Too much of what passed for knowledge, he thought, was built on uncertain foundations — on the authority of ancient texts, on the unexamined assumptions of tradition, on reasoning that had never been subjected to genuine critical scrutiny.

His response was radical. In a series of famous thought experiments — most fully developed in the Meditations on First Philosophy, published in 1641 — he proposed to doubt everything that could possibly be doubted, to strip his beliefs down to whatever bedrock of certainty remained. The result was the most famous sentence in the history of philosophy: Cogito ergo sum — I think, therefore I am (a formulation Descartes first gave, in French, in the 1637 Discourse on the Method).

The cogito established that the one thing Descartes could not doubt was his own existence as a thinking thing. Whatever else might be uncertain — the external world, the evidence of his senses, the reliability of his memories — the fact that he was thinking was beyond doubt, because doubting itself was a form of thinking.

But this raised an immediate and consequential question: what was this thinking thing? And how did it relate to the physical body that Descartes also seemed to have?

His answer — dualism — divided reality into two fundamentally distinct substances. Res cogitans: thinking substance, mind. Res extensa: extended substance, matter. The body was a physical machine — Descartes described it in elaborate mechanistic terms, explaining circulation, digestion, and sensation as mechanical processes. The mind was something different: non-physical, non-extended, the seat of thought, consciousness, and will.

The problem that Descartes could never satisfactorily solve — and that has never been satisfactorily solved since — was how these two substances interacted. If mind was non-physical and matter was non-mental, how did a thought produce a bodily movement? How did a pinprick produce a sensation? How did the will move the arm?

Descartes proposed that the interaction occurred in the pineal gland — a small structure in the brain that he believed was the seat of the soul’s connection to the body. This answer satisfied almost nobody then and satisfies nobody now. The pineal gland is not the seat of anything in particular. But the problem it was proposed to solve — how mind and matter interact at all — is still among the hardest problems in philosophy of mind, and arguably in all of science.

For AI, Cartesian dualism created a specific legacy. If mind was a distinct, non-physical substance, then no physical machine — no matter how sophisticated — could ever have a mind. You could build a machine that simulated mental processes from the outside, that produced outputs indistinguishable from mental outputs. But the mind itself — the conscious, thinking, experiencing subject — would be absent. The lights would be off.

This is, in updated form, the position that many critics of AI still hold. A language model can produce text that looks like understanding. But there is no understanding there — no mind, no consciousness, no genuine inner life. The machine has everything except what matters most.

Whether Descartes was right — whether mind is genuinely distinct from matter, whether consciousness requires something beyond physical processes — is still debated. It is, in fact, the central debate in the philosophy of mind. And it matters for AI in the most direct possible way: if Descartes was right, then building machines that think is literally impossible. If he was wrong — if mind is a function of matter, if consciousness emerges from physical processes — then it is merely very difficult.


Leibniz: The Calculus of Thought

If Descartes drew a line between mind and machine, Gottfried Wilhelm Leibniz tried to erase it.

Leibniz was born in Leipzig in 1646, fifty years after Descartes, and he was perhaps the broadest mind of his extraordinarily broad era. He co-invented calculus independently of Newton — a priority dispute that poisoned the relationship between British and continental mathematics for a generation. He designed and built a mechanical calculator — the Stepped Reckoner — that went beyond Pascal’s Pascaline: it could multiply and divide as well as add and subtract. He developed an influential system of metaphysics. He was a diplomat, a librarian, a historian, a jurist. He corresponded with almost everyone in Europe who was thinking seriously about anything.

And he had an idea that, if it had been developed in his own time, might have produced artificial intelligence three centuries before it actually arrived. He called it the calculus ratiocinator — the calculus of reasoning.

Leibniz dreamed of a universal symbolic language — what he called the characteristica universalis — that could represent all human knowledge in a precise, formal notation. Not just mathematics but logic, metaphysics, ethics, theology — all of it expressed in symbols whose relationships were explicit and unambiguous. And once you had this universal language, he believed, you could develop a calculus of reasoning — a set of rules for manipulating the symbols that would allow you to derive new truths from old ones mechanically, the way algebra allows you to derive new equations from old ones.

The implications, as Leibniz understood them, were extraordinary. If reasoning could be formalized — if all valid inference could be captured in explicit symbolic rules — then a machine that implemented those rules could, in principle, reason. Not simulate reasoning, not produce the appearance of reasoning, but actually reason. Because reasoning, in Leibniz’s account, was just the correct manipulation of symbols according to logical rules. A machine that followed the rules correctly would be reasoning correctly, regardless of whether it was made of flesh or metal.

This is a vision of AI that is extraordinarily close to the symbolic AI tradition that would dominate the field from the 1950s through the 1980s. The program that proves mathematical theorems — Newell and Simon’s Logic Theorist, demonstrated at Dartmouth in 1956 — is almost exactly what Leibniz was imagining in the late 17th century. The expert systems of the 1980s, which encoded human knowledge in formal rules and used logical inference to derive conclusions, are implementations of the calculus ratiocinator in all but name.
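In miniature, such a system looks like the toy sketch below: facts and rules stored as bare symbols, new truths derived by repeatedly applying a single inference rule (modus ponens). The facts, the string encoding of rules, and the function are my own illustrative inventions, not anything Leibniz specified.

```python
# A toy calculus ratiocinator: derive new truths from old ones by pure
# symbol manipulation, with no understanding anywhere in the loop.
# The facts and rule encoding are invented for illustration.

facts = {"rains", "rains -> wet_streets", "wet_streets -> slippery"}

def forward_chain(knowledge):
    """Apply modus ponens until no new facts can be derived."""
    derived = set(knowledge)
    changed = True
    while changed:
        changed = False
        for item in list(derived):
            if " -> " in item:
                antecedent, consequent = item.split(" -> ")
                if antecedent in derived and consequent not in derived:
                    derived.add(consequent)  # a new truth, mechanically derived
                    changed = True
    return derived

print(forward_chain(facts))
# 'slippery' appears, though nothing in the machinery knows what rain is
```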

Leibniz never built the universal language or the reasoning machine. The project was too large, too dependent on developments in logic and mathematics that would not occur for another two centuries. But he identified the program — the conceptual approach to machine intelligence as formal symbol manipulation — that would drive AI research for its first several decades.

He also identified one of its central problems, though he did not recognize it as such. His mechanical calculator — the Stepped Reckoner — was a beautiful device, capable of performing impressive calculations. But it was, as Leibniz well knew, doing something quite different from what a mathematician did when working through the same calculations. The mathematician understood what the numbers represented. The machine did not. The machine produced the right outputs without any comprehension of what the outputs meant.

Leibniz thought this distinction would dissolve once the universal language was developed — once symbols were defined precisely enough, their meaning would be built into their formal relationships, and a machine manipulating them correctly would thereby be engaging with meaning, not just syntax. This is precisely the claim that John Searle would attack, three centuries later, with the Chinese Room argument. The debate between Leibniz and Searle — mediated by three hundred years of philosophy, mathematics, and computer science — is one of the deepest and most unresolved in the history of ideas.


Pascal: The Calculator and Its Limits

Blaise Pascal’s relationship to the philosophy of mind was more troubled and more personal than Leibniz’s. Where Leibniz was optimistic about the power of formalization, Pascal was deeply ambivalent — and his ambivalence produced some of the most penetrating observations about the limits of mechanical thinking ever written.

Pascal built the Pascaline — his mechanical calculator — out of a genuine desire to help his father and to demonstrate the power of mechanism. He succeeded on both counts. The machine worked. It could add and subtract reliably. And it demonstrated, as clearly as anything had before, that at least some of what looked like mental activity — arithmetic calculation — could be done by a machine.

But Pascal was not comforted by this demonstration. He was, in a way that is hard to fully reconstruct but that is visible in his writings, unsettled by it. He was a devout Christian, deeply serious about the inner life, deeply concerned with the relationship between reason and faith. The Pascaline seemed to threaten something he cared about.

His reflections on this threat appear most forcefully in the Pensées — the fragmentary collection of thoughts on religion and philosophy that he left unfinished at his death in 1662 at the age of thirty-nine. In the Pensées, Pascal drew a distinction that has become one of the most cited in the philosophy of mind: the distinction between the esprit de géométrie and the esprit de finesse — the geometric mind and the sensitive mind.

The geometric mind, Pascal wrote, was the mind that worked through explicit principles — through clear, defined propositions and logical deduction. It was powerful in domains where the principles were available and the deductions were clear: mathematics, formal logic, mechanics. The Pascaline was a perfect embodiment of the geometric mind — it worked through explicit rules, applied mechanically, to produce correct outputs.

But the sensitive mind worked differently. It grasped things whole — perceived the total situation, felt the right response, understood in a way that was not reducible to explicit principles because the principles were too numerous, too contextual, too deeply embedded in experience to be separately articulated. Good judgment, wisdom, tact, the ability to read a situation — these were functions of the sensitive mind. And they could not, Pascal believed, be mechanized. You could not write down the rules for good judgment, because good judgment was precisely the ability to go beyond rules to the specific situation.

This distinction maps remarkably closely onto a debate that has been central to AI throughout its history: the debate between rule-based AI and learning-based AI, between explicit knowledge representation and pattern recognition, between the system that reasons from explicit principles and the system that learns from experience. The symbolic AI tradition — expert systems, logic programming, formal knowledge representation — is Pascal’s esprit de géométrie made computational. The neural network tradition — learning from data, pattern recognition, statistical inference — is a computational attempt to capture something like the esprit de finesse.

Pascal would have been skeptical of both. He believed the sensitive mind was not just harder to formalize but in principle not formalizable — that it required a kind of engagement with the world, a kind of embodied, experienced judgment, that no explicit procedure could capture. Modern AI has made more progress than Pascal might have expected on some of the tasks he associated with the sensitive mind — recognizing faces, understanding natural language, even generating creative work. Whether it has captured what Pascal meant by the esprit de finesse, or whether it has found clever ways around the problem he identified, remains genuinely open.


Hobbes: Thinking as Computation

Thomas Hobbes, the English philosopher best known for the Leviathan and his dark view of human nature, made a contribution to the philosophy of mind that is less often discussed but directly relevant to AI: he proposed, in the mid-17th century, that thinking was a form of computation.

Hobbes’s argument, developed in Leviathan (1651) and elaborated in De Corpore (1655), was that reasoning was essentially addition and subtraction — the combining and separating of ideas according to rules. When we reason, we do not do something mysterious and spiritual. We compute. We add concepts together to form compound ideas. We subtract features to abstract general categories from specific instances. We follow inferential rules to derive conclusions from premises.
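The metaphor can be made literal with a playful sketch: treat ideas as sets of features, so that “addition” becomes combination and “subtraction” becomes stripping away what distinguishes instances. The feature sets below are invented for illustration.

```python
# A toy version of Hobbes's "reckoning": compound ideas formed by adding
# features together, general categories formed by subtracting away the
# differences between specific instances. Purely illustrative.

horse = {"animal", "four-legged", "maned"}
dog   = {"animal", "four-legged", "barks"}

# "Addition": combine ideas into a compound idea
unicorn = horse | {"single-horn"}

# "Subtraction": strip away the differences to abstract what is shared
quadruped = horse & dog

print(unicorn)    # horse plus a horn
print(quadruped)  # -> {'animal', 'four-legged'}
```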

This is a remarkably modern view of cognition. The computational theory of mind — the view that mental processes are computational processes, that thinking is information processing — is the dominant paradigm in cognitive science today. It is the view that underlies virtually all of mainstream AI research. And Hobbes was articulating something very like it in 1651.

His materialism was also radical for its time. Hobbes denied the existence of immaterial substances — he was a thoroughgoing materialist who believed that everything, including mind, was physical. There was no Cartesian soul, no non-physical thinking substance. Thoughts were physical events in a physical brain. The brain was a kind of machine — an organic machine, but a machine nonetheless.

If Hobbes was right — if thinking was computation, and if minds were physical machines — then the question of whether artificial machines could think became a question of whether you could build a physical device that performed the right computations. The answer, in principle, was yes. The question was just engineering.

Hobbes’s materialism won him few followers in the 17th century. His politics were too disturbing, his suspected atheism too scandalous, his materialism too radical for most of his contemporaries to embrace. But the tradition he initiated — computational, materialist, deflationary about the specialness of mind — is the tradition that eventually produced AI.

The road from Hobbes’s Leviathan to Turing’s universal machine is not a straight line, but it is a road.


Hume and the Bundle Theory: Where Is the Self?

David Hume, the Scottish philosopher of the 18th century, made a contribution to the philosophy of mind that seems at first to be about something quite different from AI — but that turns out to be deeply relevant to some of the most pressing questions in contemporary AI research.

Hume was an empiricist — he believed that all knowledge ultimately derived from sensory experience. And when he turned this empirical lens on the self — on the experience of being a continuous, unified subject with an identity that persisted through time — he found something disturbing.

He could not find the self.

When he looked inward, Hume reported, he found not a unified, persisting self but a bundle of perceptions — a stream of sensations, thoughts, emotions, memories, flowing one after another. He could not find a subject that was having these experiences. He found only the experiences themselves. The self, he concluded, was not a thing. It was a process — a bundle of perceptions flowing in relation to each other, giving rise to the illusion of a unified subject.

This is the bundle theory of personal identity, and it has several consequences that matter for AI.

First, it suggests that personal identity — the sense of being the same person over time — is a constructed narrative rather than a metaphysical fact. We are not selves in some deep, substantial sense. We are patterns — patterns of memory, habit, association, and continuity that generate the experience of being a unified self. If this is right, then the question of whether an AI system has a self becomes a question about whether it has the right kind of pattern — the right kind of continuity, coherence, and self-reference — rather than a question about whether it has some special non-physical essence.

Second, Hume’s bundle theory challenges the intuition that consciousness requires a subject — some unified experiencer who is conscious of something. If the self is a bundle rather than a subject, perhaps consciousness is a bundle too — not a unified inner light but a collection of processes that together produce the appearance of unified experience. This is a view that some contemporary philosophers and neuroscientists find compelling, and it has implications for how we think about machine consciousness. Perhaps the question is not whether a machine has consciousness but whether its information processing has the right kind of bundled, self-referential structure.

Third, Hume’s empiricism raised what is now called the problem of induction — the problem of justifying our assumption that the future will resemble the past, that patterns observed in past experience will continue. This problem is directly relevant to machine learning. Every machine learning system makes predictions based on patterns in training data — it assumes that the future will resemble the past in the relevant ways. Hume showed that this assumption cannot be justified by logic or experience alone. It is, as he put it, a matter of custom and habit — a tendency built into how we process information. Machine learning systems have the same tendency built into how they process data. Whether this is a solution to the problem of induction or just a replication of it in silicon is a question that epistemologists and AI researchers are still discussing.
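The point fits in a few lines of code. The sketch below fits a trivial model to invented past observations; its prediction for an unseen input rests entirely on the assumption that the old pattern continues, which is precisely the assumption Hume showed cannot itself be justified.

```python
# Hume's problem of induction, machine-learning edition. The data is
# invented: every past observation happened to follow the pattern y = 2x.

past_x = [1.0, 2.0, 3.0, 4.0]
past_y = [2.0, 4.0, 6.0, 8.0]

# Least-squares fit of a line through the origin: slope = sum(xy) / sum(x^2)
slope = sum(x * y for x, y in zip(past_x, past_y)) / sum(x * x for x in past_x)

# The prediction for x = 100 is "justified" only by the habit-like
# assumption that the future resembles the past.
print(slope * 100)  # -> 200.0, correct only if the world keeps cooperating
```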


Kant: The Categories of Thought

Immanuel Kant, responding to Hume, produced what he called a Copernican revolution in philosophy — an inversion of the standard picture of how knowledge worked.

The standard picture said: the world is a certain way, and our minds come to know it by receiving impressions from it. The mind was passive; the world was active.

Kant reversed this. He argued that the mind was not a passive receiver of experience but an active constructor of it. The world as we experience it — with its spatial relationships, its causal connections, its temporal order — was not simply given to us by the world itself. It was partly constructed by the mind, using categories — concepts like causation, substance, and unity — that the mind brought to experience rather than deriving from it.

This was a profound reorientation, and its implications for AI are still being worked out.

For one thing, it suggested that intelligence was not just a matter of processing information but of structuring it — of organizing raw experience into a framework that made it meaningful and navigable. The categories — causation, substance, space, time, necessity — were the scaffolding of thought. Without them, experience was chaos. With them, it was a world.

Modern AI systems learn representations of the world from data. The question of what structures they impose on that data — what the AI equivalents of Kant’s categories are — is an active area of research in deep learning. Neural networks develop internal representations that encode certain regularities and not others, that group things in certain ways and not others. Whether these representations are adequate — whether they capture the right structures of the world — is partly an empirical question and partly a philosophical one.

Kant also made a distinction that has become important in AI ethics: the distinction between treating persons as ends in themselves and treating them merely as means. Persons, for Kant, had dignity — an absolute worth that could not be reduced to their usefulness. They were not tools. They were ends.

As AI systems become more sophisticated — as they begin to produce outputs indistinguishable from human outputs, to form relationships with humans, to make decisions that affect human lives — the question of whether they are ends or means, whether they have dignity or merely utility, becomes increasingly urgent. We do not have a Kantian account of AI personhood. We do not even have an agreement on what such an account would look like. But Kant identified the framework within which the question must eventually be answered.


Mill and the Associationist Tradition

John Stuart Mill, the 19th-century British philosopher and economist, contributed to the philosophy of mind through his development of associationism — the view that complex mental processes were built up from simple associations between ideas.

The associationist tradition, developed earlier by Hume and Hartley, held that thinking was essentially the combination and transformation of simple mental atoms — sensations and ideas — according to the laws of association: ideas that occurred together were linked, so that one tended to call up the other. Complex ideas were built from simple ones through these associative connections. Memory, imagination, reasoning — all of these were, at bottom, patterns of association.

This is, in broad outline, the account of mind that underlies modern neural network approaches to AI. A neural network is an associationist machine. It learns, through training, which patterns of activation tend to co-occur. It builds up complex representations by combining simpler ones. It generalizes from past experience by activating patterns similar to those it has seen before.
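The family resemblance is easy to exhibit. Below is a toy Hebbian associator, perhaps the simplest computational form of the associationist law: ideas that occur together get linked, and a cue calls up whatever it has been paired with most often. The “ideas” and their co-occurrences are invented for illustration.

```python
# A toy associationist memory: co-occurrence strengthens links between
# ideas, and recall follows the strongest links. Purely illustrative.

from collections import defaultdict

link = defaultdict(float)  # association strength between ordered pairs of ideas

def observe(ideas, rate=0.1):
    """Strengthen the link between every pair of co-occurring ideas."""
    for a in ideas:
        for b in ideas:
            if a != b:
                link[(a, b)] += rate

def recall(cue, threshold=0.2):
    """Return the ideas associated with `cue` above a strength threshold."""
    return [b for (a, b), strength in link.items()
            if a == cue and strength >= threshold]

# Repeated experience links "fire" and "smoke"; a single experience
# links "fire" and "pain" more weakly.
for _ in range(5):
    observe(["fire", "smoke"])
observe(["fire", "pain"])

print(recall("fire"))  # -> ['smoke']: the stronger association wins
```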

The associationist tradition also contributed to thinking about learning — about how minds develop and change through experience. Mill was interested in how education and environment shaped intelligence, how different experiences produced different mental structures, how the mind was not fixed but was continuously being formed by what it encountered. This is precisely the concern of machine learning: how to build systems that learn from their experience and improve their performance over time.

Mill’s contribution to AI philosophy was indirect but real. He helped establish the intellectual tradition — empiricist, associationist, focused on learning and experience — that would eventually find its computational expression in neural networks and machine learning. The journey from Mill’s associationist psychology to backpropagation in a deep neural network is long and winding, but the intellectual direction is continuous.


Frege and the Foundations of Logic

Gottlob Frege, the German mathematician and philosopher who worked in the latter half of the 19th century, made what is arguably the most directly important contribution to AI of any philosopher before the computer age: he invented modern logic.

Before Frege, logic had been, since Aristotle, a relatively limited tool — adequate for simple categorical inferences (“All men are mortal; Socrates is a man…”) but unable to handle relations between objects or statements that nest one quantifier inside another (“every number has a successor”), the kinds of statements essential to science and mathematics.

Frege’s Begriffsschrift — “Concept Script” — published in 1879, introduced a formal notation for logic that could express quantified statements, relations between objects, and complex patterns of inference in a rigorous, unambiguous way. It was, for the first time, a logic powerful enough to serve as the foundation for all of mathematics.
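A single example shows the gain in expressive power. Rendered in modern first-order notation (a descendant of Frege’s, though his own two-dimensional script looked very different), the statement that every number has a successor, which nests one quantifier inside another and so lies beyond Aristotelian logic, becomes:

```latex
% "Every number has a successor": a universal quantifier with an
% existential quantifier nested inside it. Pre-Fregean logic had no
% way to express this mixed-quantifier form.
\forall x \,\bigl(\mathrm{Number}(x) \rightarrow \exists y\, \mathrm{Succ}(y, x)\bigr)
```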

The importance of this for AI can hardly be overstated. Modern AI — whether symbolic AI based on explicit logical inference or machine learning based on optimization — is built on mathematics. The mathematical foundations of computing — the work of Turing and Church on computability, of von Neumann on computer architecture, of Shannon on information theory — all relied on the logical foundations that Frege had laid. Without Frege’s logic, there would have been no precise foundation for the theory of computation, and without the theory of computation, there would have been no computers.

But Frege’s contribution went beyond the technical. His work was part of a broader project — shared with Bertrand Russell and others — of logicism: the attempt to show that all of mathematics could be derived from purely logical foundations. If successful, this project would have shown that mathematical truth was logical truth, and that the whole of mathematics was, in principle, accessible to purely mechanical deduction from logical axioms.

The logicist project ultimately failed — Gödel’s incompleteness theorems of 1931 showed that no consistent formal system rich enough to express arithmetic could capture all of mathematical truth. But the attempt was enormously productive. It developed the tools of formal logic to an unprecedented level of precision and power. And it inspired the generation of mathematicians and logicians — including Turing — who created the theoretical foundations of computing.

Frege himself was a tragic figure. His life’s work — the Grundgesetze der Arithmetik, a two-volume attempt to derive all of arithmetic from logical principles — was devastated just as the second volume was going to press when Bertrand Russell sent him a letter pointing out a contradiction in his logical system. Frege’s response — adding a hasty appendix acknowledging that the foundations of his work had been shaken — is one of the saddest moments in the history of mathematics. He lived for another twenty years, increasingly bitter and withdrawn, never fully recovering from the blow.

He did not live to see his logical innovations become the foundation of the digital revolution. He did not know that the symbols he had devised for precision in mathematical reasoning would eventually be implemented in circuits and run on machines that would transform human civilization. He had given the world an extraordinary gift without knowing what it would become.


Russell and Whitehead: The Great Attempt

Bertrand Russell and Alfred North Whitehead’s Principia Mathematica, published in three volumes between 1910 and 1913, was the most ambitious attempt in history to reduce all of mathematics to logic.

The work was monumental in scale and technical difficulty. The first volume alone was nearly seven hundred pages of dense formal notation. It took Russell and Whitehead years to write, and years more to see through the press. It was, by any measure, a heroic intellectual effort.

It was also, in a specific and important sense, a direct ancestor of AI. The Logic Theorist — Newell and Simon’s program that proved mathematical theorems and was demonstrated at Dartmouth in 1956 — was specifically designed to prove theorems from the Principia. When the Logic Theorist found a proof more elegant than Russell and Whitehead’s own for one of the early propositions, Newell and Simon reportedly submitted the result to a journal with the Logic Theorist listed as a co-author. Russell, then in his eighties, was delighted when he heard of it. The journal rejected the submission on the grounds that a computer program could not be a co-author.

The Principia mattered for AI not just as a target for theorem-proving programs but as an existence proof of a certain kind. It demonstrated that a vast domain of reasoning — the whole of mathematics, or at least large swaths of it — could in principle be captured in explicit formal rules precise enough to be mechanically applied. If mathematics could be formalized to this degree, perhaps other domains could too. Perhaps common sense reasoning, perhaps scientific reasoning, perhaps the whole of human knowledge could be given a formal foundation sufficient for mechanical implementation.

This hope — that intelligence was fundamentally formalizable, that the right logical system and the right inferential rules would eventually capture the full range of intelligent behavior — drove symbolic AI research for decades. It was the intellectual inheritance of the Principia, applied to the project of machine intelligence.

Gödel’s incompleteness theorems showed that the specific logicist project of the Principia could not succeed — that no consistent formal system of the required power could capture all mathematical truth. But this did not immediately deflate the broader hope for AI. Perhaps intelligence did not require a complete formal system. Perhaps a sufficiently good approximation would do. Perhaps the parts of intelligence that were not formalizable were the parts that did not need to be formalized for practical purposes.

These arguments were made, and there is something to them. But the incompleteness results, and the subsequent discovery of the computational intractability of many AI problems — the combinatorial explosion that defeated early AI programs when they tried to scale beyond toy domains — gradually eroded the confidence that formalization would be sufficient. The Principia had shown how far formalization could go. Gödel had shown it could not go all the way. And AI research slowly, painfully learned that intelligence was not entirely a matter of explicit rules.


Wittgenstein: The Language Game

Ludwig Wittgenstein made two distinct and apparently contradictory contributions to the philosophy of mind — one in his early work and one in his late work — and both have been influential in AI.

His early work, the Tractatus Logico-Philosophicus (1921), was deeply influenced by Russell and Frege. It proposed a picture theory of meaning: language was meaningful because sentences pictured facts about the world, and the structure of language mirrored the structure of reality. The limits of language were the limits of what could be said clearly. Everything else — ethics, aesthetics, the mystical — could not be said, only shown.

The Tractatus was enormously influential and is, among other things, one of the most beautiful works of philosophical prose ever written. But by the time of his later work — especially the Philosophical Investigations, published posthumously in 1953 — Wittgenstein had come to believe that the picture theory was profoundly wrong.

In the Investigations, Wittgenstein argued that meaning was not a matter of pictures and facts but of use — of the practices and activities, the “language games,” in which words were embedded. Words did not mean things by pointing at them. They meant things by having roles in the activities of a community. The meaning of “pain” was not the private inner sensation but the public, social practices of expressing pain, responding to it, treating it.

This view — sometimes called the use theory of meaning or semantic pragmatism — has at least two consequences for AI.

First, it suggests that language cannot be mastered by learning to map words to things — by building a sufficiently large and accurate vocabulary list. Mastering language requires participating in language games — in the forms of life in which language is embedded. A machine that processes text without being embedded in the social practices and forms of life in which language has its meaning might be able to produce grammatically correct and even contextually appropriate text without genuinely understanding it.

This is, in essence, the objection Searle would later press with the Chinese Room argument — and Wittgenstein’s work has been cited in support of Searle’s position. Understanding requires not just symbol manipulation but embedding in a form of life.

Second, Wittgenstein’s famous private language argument — his demonstration that a language that only one person could understand would not be a language at all — suggests that genuine meaning requires public criteria, shared practices, a community. A mind isolated from community would not be able to have genuine concepts, not because it lacked internal processing but because meaning was inherently social. This raises questions about AI systems trained on human text: are they embedded in the language games that give the text its meaning, or are they outside those games, processing the products of meaning without sharing in the form of life that produces it?

These are not idle philosophical questions. They are active research debates. The question of whether large language models understand meaning or merely process symbols is the Wittgensteinian question in computational form. Whether your answer to that question should make you more or less optimistic about the prospects for genuinely intelligent machines is something thoughtful people still disagree about.


The Mind-Body Problem: Still Unsolved

Every philosopher we have discussed in this article was, in some way, grappling with the same underlying problem: the relationship between mind and body, between consciousness and matter, between the inner life of a thinking being and the physical processes that apparently produce it.

Descartes named the problem by positing two substances and asking how they interacted. Hobbes tried to dissolve it by denying the distinction: mind was matter, thinking was physical. Leibniz tried to dissolve it differently: matter and mind were both aspects of a more fundamental reality, the monads, coordinated by God in a pre-established harmony. Kant sidestepped it by making the mind’s contribution to experience prior to any question about the external world. Hume undermined the concept of the self that seemed to make the problem urgent. Wittgenstein argued that the problem was generated by grammatical confusion — that if we paid attention to how words like “mind” and “experience” actually functioned in our language, the problem would dissolve.

None of these approaches fully worked. The mind-body problem is still with us. The “hard problem of consciousness” — the question of why there is subjective experience at all, why there is something it is like to be a conscious being — is recognized by most philosophers of mind as the deepest and most resistant problem in the field. We have made enormous progress in understanding the neural correlates of consciousness — the brain states that accompany conscious experience. We have made no progress in understanding why those brain states are accompanied by experience at all.

For AI, this matters in the most direct possible way. If we could solve the hard problem of consciousness — if we understood why and how physical processes gave rise to subjective experience — we would know what it would take to build a machine that was not just behaviorally intelligent but genuinely conscious. We would know whether it was possible in principle and what it would require in practice.

We have not solved it. We do not know why the brain gives rise to conscious experience. We do not know whether any other physical system — a neural network, a robot, a sufficiently complex information-processing system of any kind — could give rise to conscious experience. We are, with respect to the most fundamental question about mind, where Descartes left us in 1641: aware of the problem, unable to solve it.

This is not a counsel of despair. AI systems are already extraordinarily useful, regardless of whether they are conscious. But it is a counsel of humility. The engineers and researchers who build AI systems are operating in the dark on the most fundamental questions about what they are building. They are creating things that may or may not have inner lives, may or may not experience anything, may or may not deserve moral consideration — and they do not know which, and neither does anyone else.

The philosophers who first asked these questions could not have imagined the systems that now make them urgent. But they asked the right questions. And those questions are waiting for answers that four centuries of brilliant thinking have not yet produced.


What the Philosophers Left Us

The philosophical tradition we have traced in this article left AI with several things.

It left a set of questions — about the nature of mind, the relationship between computation and understanding, the possibility of machine consciousness — that have not been answered and that define the deepest debates in AI research and AI ethics.

It left a set of conceptual tools — the frameworks of Descartes, Leibniz, Kant, Hume, Frege, and the others — that remain useful even when their specific conclusions are wrong. Cartesian dualism is almost certainly false. But the problem it identified — the hard problem of consciousness — is real. Leibniz’s calculus ratiocinator was never built. But the project it envisioned — reasoning as formal symbol manipulation — produced the first generation of AI programs.

It left a tradition of taking the question seriously. These were not credulous or mystical thinkers. They were rigorous, skeptical, demanding of precision and evidence. They took the question of machine intelligence seriously enough to engage with it carefully — to articulate the strongest objections, to think through the implications, to resist the temptation to settle the matter with an appeal to special divine dispensation or obvious common sense. This is the tradition of inquiry that AI research stands in.

And it left a humility — not the humility of defeat but the humility of genuine difficulty. These were the greatest minds of their centuries, working on the question of mind with all the tools available to them. They did not solve it. The question was harder than they expected. It is still harder than we expect.

The machines are getting better. The questions are getting harder. And the philosophers who first asked them, sitting with their quill pens and their theological anxieties and their mechanical calculators in the cold studies of 17th- and 18th-century Europe, would not recognize the world their questions helped create. But they would recognize the questions. They have not changed.


Further Reading

  • “The Emperor’s New Mind” by Roger Penrose — Argues that human consciousness involves non-computational processes, drawing on Gödel’s incompleteness theorems. A challenging and important counterargument to computational theories of mind.
  • “Philosophy of Mind: Classical and Contemporary Readings” edited by David Chalmers — The best single anthology covering the full range of philosophy of mind debates relevant to AI, from Descartes to the present.
  • “Descartes’ Error” by Antonio Damasio — A neuroscientist’s argument that Descartes was wrong about the separation of reason and emotion, with implications for how we think about machine intelligence.
  • “The Philosophical Investigations” by Ludwig Wittgenstein — Demanding but rewarding. The sections on language, meaning, and private experience are directly relevant to debates about AI understanding.
  • “Leibniz: Philosophical Essays” edited by Roger Ariew and Daniel Garber — A good accessible collection of Leibniz’s philosophical writing, including his work on the universal language and the calculus of reasoning.

Next in the Articles series: A4 — Ada Lovelace & The First Algorithm — We have met Ada Lovelace in her Profile. Now we go deeper into the actual mathematics — what the Notes on the Analytical Engine actually said, what the algorithm for Bernoulli numbers actually was, and why the ideas in those footnotes were more radical than even most computer scientists realize.


Minds & Machines: The Story of AI is published weekly. If this piece sparked new questions about the nature of mind and machine, share it with someone who enjoys thinking carefully about hard problems.