Hanover, New Hampshire. Summer, 1956.
The Dartmouth College campus sits in the Connecticut River Valley, surrounded by green hills that in summer feel almost impossibly peaceful. The college itself is old by American standards — chartered in 1769, built in the colonial tradition of brick and white wood, the kind of place that projects an air of unhurried confidence, of ideas given space and time to develop.
In the summer of 1956, a small group of men gathered in one of its buildings for what was formally called the Dartmouth Summer Research Project on Artificial Intelligence. There were ten of them on the official invitation list, though the actual attendance shifted over the weeks, people coming and going, some staying for the full two months, others dropping in for a week or a few days before returning to their universities and their labs.
They were mathematicians, psychologists, computer scientists, and engineers. They were young — most of them in their twenties or early thirties, at the beginning of careers that would define a field. They argued, competed, collaborated, and occasionally talked past each other entirely. They did not solve the problems they had come to solve. They did not, in any concrete sense, change the world that summer.
But they gave something a name. And the name changed everything.
Before the Name
To understand why the Dartmouth Conference mattered, you first have to understand what the world looked like before it — what the landscape of research into thinking machines looked like in the early 1950s, when the organizers were first forming the ideas that would lead to the proposal.
The answer is: fragmented, scattered, and deeply confused about its own identity.
The intellectual ingredients for a science of artificial intelligence had been assembling for decades. Alan Turing had published his landmark paper “Computing Machinery and Intelligence” in 1950, introducing the Turing Test and arguing seriously for the possibility of machine thought. Norbert Wiener had published Cybernetics in 1948, establishing a framework for thinking about communication and control in both machines and animals. Claude Shannon had invented information theory in 1948, giving scientists a mathematical language for talking about data, signals, and the transmission of meaning. The first electronic computers — ENIAC, EDVAC, and their successors — had been built in the mid-to-late 1940s and were becoming, slowly and at enormous expense, available to researchers.
There was also a growing body of work on neural networks — mathematical models of how biological neurons might process information. In 1943, the neurophysiologist Warren McCulloch and the mathematician Walter Pitts had published a paper showing that networks of simple artificial neurons could, in principle, compute any logical function. In 1949, the psychologist Donald Hebb had proposed a learning rule — now known as Hebbian learning — describing how connections between neurons might strengthen with use, providing a possible mechanism for learning and memory in artificial systems.
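Both ideas are simple enough to sketch in a few lines of modern Python. The following is purely illustrative (the 1943 and 1949 papers were mathematics, not programs): a McCulloch-Pitts-style threshold neuron computing a logical function, and a Hebbian update that strengthens connections between co-active units.

```python
# Illustrative sketch only: the McCulloch-Pitts neuron and Hebb's rule were
# mathematical proposals, not programs. Written here in modern Python.

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# A two-input AND gate: the neuron fires only when both inputs are active.
assert mp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mp_neuron([1, 0], [1, 1], threshold=2) == 0

def hebbian_update(weights, pre, post, learning_rate=0.1):
    """Hebb's rule: strengthen each connection whose input unit is active
    at the same time as the output unit ("fire together, wire together")."""
    return [w + learning_rate * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0]
for _ in range(5):  # repeated co-activation strengthens both connections
    weights = hebbian_update(weights, pre=[1, 1], post=1)
print(weights)  # approximately [0.5, 0.5]
```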
All of this was happening. All of it was potentially part of the same conversation. But it was not, in any organized sense, a single field. Researchers working on neural networks did not necessarily read the work of researchers working on game-playing programs. People thinking about the logic of machine reasoning were often in different departments, different disciplines, different intellectual traditions from people thinking about machine perception. There was no common vocabulary, no common framework, no agreed-upon set of problems, no shared sense of what the field was trying to do.
And crucially — there was no name. Without a name, there was no field. Without a field, there was no identity, no community, no way to organize conferences, recruit students, apply for grants, or argue to university administrations that this work deserved its own department.
What happened at Dartmouth in 1956 was not primarily a scientific breakthrough. It was a naming ceremony. And naming, in the history of ideas, is not a trivial act.
John McCarthy: The Man with the Idea
The moving force behind the Dartmouth Conference was a twenty-eight-year-old assistant professor of mathematics at Dartmouth College named John McCarthy.
McCarthy was born in Boston in 1927, the son of Irish and Lithuanian immigrant parents. His father was a labor organizer; his mother was an activist in her own right. McCarthy grew up in a household that took ideas seriously and expected its children to engage with the world intellectually and politically. He was, by all accounts, one of those children who simply could not be given enough problems to solve — mathematics, physics, anything with a puzzle at its center held his attention completely.
He enrolled at Caltech at sixteen, completed his undergraduate degree, and went on to a PhD in mathematics at Princeton, finishing in 1951. By the time he arrived at Dartmouth as a young faculty member, he had already been thinking seriously for several years about the question of machine intelligence.
What drove McCarthy was not just the theoretical question of whether machines could think — though that interested him enormously — but a more practical frustration. He believed that intelligent behavior could be understood as information processing, that the rules and procedures underlying human reasoning could in principle be made explicit, and that if you made them explicit you could implement them in a computer program. He wanted to build things. He wanted to see whether the ideas worked.
But the field he wanted to work in did not quite exist yet. The researchers whose work was closest to his interests were spread across mathematics, psychology, electrical engineering, and philosophy departments. They did not think of themselves as working on a common enterprise. Some of them had not read each other’s work.
McCarthy’s solution was characteristically direct: organize a conference, bring the relevant people together, give the enterprise a name, and see what happened.
He enlisted three co-organizers: Marvin Minsky, a young mathematician and neuroscientist at Harvard who would become one of the towering figures of early AI; Nathaniel Rochester, the lead designer of the IBM 701, IBM’s first commercially available scientific computer and at the time one of the most powerful computing machines in existence; and Claude Shannon — yes, the same Claude Shannon whose information theory had helped lay the mathematical groundwork for computing — who was then working at Bell Labs and was already one of the most celebrated scientists in America.
With these three names attached to the proposal, McCarthy had a document with genuine weight behind it. He wrote the proposal in 1955, and in it he made a statement that has echoed through the history of AI ever since.
The Proposal and Its Famous Sentence
The full title of the document McCarthy circulated was “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence.” It was addressed to the Rockefeller Foundation, whose funding McCarthy and his co-organizers were seeking to cover the conference expenses.
The proposal was a remarkable document — simultaneously visionary and pragmatic, ambitious and carefully hedged. It described a two-month summer research project in which ten carefully selected researchers would work together on the problem of machine intelligence. It listed the specific topics they intended to address: automatic computers, how computers could be programmed to use language, neural networks, the theory of the size of computation, self-improvement, abstraction, and randomness and creativity.
But the sentence that everyone remembers, the sentence that gave the field its name and its founding premise, appeared near the beginning:
“We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
There it was. Artificial intelligence. Two words, deployed as the name of a field, for what appears to be the first time in a formal document.
The choice of the term was McCarthy’s, and it was deliberate. He considered and rejected other options. “Automata studies” was too narrow. “Complex information processing” was too vague. “Cybernetics” was already taken by Wiener, whose conception of the field was somewhat different from what McCarthy had in mind. McCarthy wanted a name that was direct, that pointed clearly at the central ambition, and that staked a claim. “Artificial intelligence” did all of those things.
The conjecture embedded in the proposal — that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it — was equally deliberate, equally bold, and considerably more controversial.
This is a strong claim. It asserts that intelligence — all of it, every feature, learning included — is in principle formalizable. That it can be made explicit, described precisely enough, broken down into rules and procedures clear enough, that a machine could replicate it. This was not a universally accepted view in 1955. It is not a universally accepted view today. Many philosophers, cognitive scientists, and AI researchers have argued, at various points, that human intelligence involves things that cannot be formalized — embodied experience, emotion, consciousness, intuition — that resist reduction to rules and procedures.
But McCarthy was not writing a philosophy paper. He was writing a funding proposal. He was staking a claim, drawing a line, saying: this is the conjecture, this is what we are going to work on, this is the bet we are making. And having made it, he organized a conference to start working out whether the bet paid off.
The Rockefeller Foundation awarded $7,500 — roughly $85,000 in today’s money. Not a large sum even by the standards of 1956. But enough.
The Attendees: A Room That Made History
The ten people on the original Dartmouth invitation list represented a remarkable concentration of talent — people who would, over the following decades, shape the field of AI in fundamental ways. Understanding who they were and what they were already working on helps explain both the excitement of the conference and its limitations.
John McCarthy himself was working on what would become one of the most important contributions of his career: a programming language called LISP, which he began developing in 1958 and described in a landmark 1960 paper. LISP — List Processing — was designed specifically for AI research, built around the manipulation of symbolic lists rather than numbers. It would become the dominant language of AI research for the next thirty years and would influence the design of virtually every programming language that followed.
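The core idea, computation over nested symbolic lists rather than numbers, is easy to gesture at. Here is a minimal sketch of it in Python rather than LISP itself, purely illustrative and not McCarthy's code: a symbolic expression represented as nested lists, and a tiny evaluator that walks it.

```python
# Illustrative only: the LISP idea of computing over nested symbolic lists,
# sketched in Python. In real LISP, programs themselves are such lists.

def evaluate(expr):
    """Evaluate a nested symbolic expression like ('plus', 1, ('times', 2, 3))."""
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    values = [evaluate(arg) for arg in args]
    if op == 'plus':
        return sum(values)
    if op == 'times':
        product = 1
        for v in values:
            product *= v
        return product
    raise ValueError(f"unknown operator: {op}")

print(evaluate(('plus', 1, ('times', 2, 3))))  # 7
```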
Marvin Minsky was working on neural networks and the theory of computation. He had built, in 1951, one of the first neural network machines — a device called SNARC, which used some three thousand vacuum tubes to simulate a network of forty artificial neurons learning to navigate a maze. Minsky would go on to co-found the AI Laboratory at MIT, write foundational works on AI and cognitive science, and become the most publicly prominent figure in early AI — celebrated, controversial, and ultimately blamed by many for the first AI winter through Perceptrons, the devastating 1969 critique of neural networks he wrote with Seymour Papert.
Nathaniel Rochester had been thinking seriously about neural networks and had already attempted, before the conference, to simulate a neural network on the IBM 701. The simulation had not worked as he hoped — the network did not learn in the way he expected — and this failure was on his mind coming into Dartmouth. Rochester was less a theorist than an engineer and systems builder, and his presence reflected the importance of people who could actually make the machines work, not just theorize about them.
Claude Shannon was, by 1956, already a legend. His 1948 paper “A Mathematical Theory of Communication” had essentially invented information theory — a framework so fundamental that it underlies not just computing but all of modern communications technology. Shannon was also the author of a 1950 paper describing how a computer could be programmed to play chess — one of the earliest serious attempts to think about machine game-playing. He was less active in AI than the others in subsequent years, but his intellectual prestige lent the conference enormous credibility.
Allen Newell and Herbert Simon arrived at Dartmouth with what was arguably the most impressive concrete result of anyone in the room: a working AI program called the Logic Theorist.
Newell was a young researcher who had worked at the RAND Corporation, the influential think tank that was doing pioneering work in computing and strategic analysis. Simon was a polymath of extraordinary range — an economist, political scientist, psychologist, and computer scientist who would eventually win the Nobel Prize in Economics. Together, they had built the Logic Theorist, a computer program that could prove mathematical theorems in symbolic logic.
Not just any theorems — theorems from Principia Mathematica, the monumental work by Alfred North Whitehead and Bertrand Russell that had attempted to establish all of mathematics on purely logical foundations. The Logic Theorist proved thirty-eight of the first fifty-two theorems in the Principia. In one case, it found a proof more elegant than Whitehead and Russell’s own.
Newell and Simon arrived at Dartmouth having already done what everyone else in the room was trying to do: build a program that performed a task requiring what looked unmistakably like reasoning. They were justifiably excited. And the reception they received from their colleagues was, by some accounts, more muted than they expected.
The other attendees included Ray Solomonoff, a young researcher working on probability theory and machine learning who would go on to develop important ideas about algorithmic probability and inductive reasoning. Oliver Selfridge, who was working on pattern recognition and would develop Pandemonium, an influential model of how the brain might recognize objects. Trenchard More, a mathematician. And Arthur Samuel, who was not on the original list but attended and was already famous in AI circles for something remarkable: a checkers-playing program he had written for IBM that could, through a process of self-play and evaluation, improve its own performance. Samuel’s program was the first demonstration of machine learning in the modern sense — a program that got better at a task through experience rather than explicit reprogramming.
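Samuel's learner scored board positions with a weighted sum of hand-chosen features and adjusted the weights so that shallow evaluations agreed better with the results of deeper lookahead. A much-simplified sketch of that idea, with hypothetical features and numbers, not Samuel's code:

```python
# A much-simplified sketch of Samuel-style evaluation learning: nudge the
# weights of a linear position evaluator toward a deeper search's estimate.
# The features and numbers are hypothetical, not Samuel's.

def evaluate(features, weights):
    """Score a position as a weighted sum of hand-chosen board features."""
    return sum(f * w for f, w in zip(features, weights))

def update(weights, features, lookahead_score, learning_rate=0.01):
    """Move the shallow evaluation toward the deeper search's score."""
    error = lookahead_score - evaluate(features, weights)
    return [w + learning_rate * error * f for w, f in zip(weights, features)]

# Hypothetical features: (piece advantage, king advantage, mobility).
weights = [0.0, 0.0, 0.0]
position = [2.0, 1.0, 3.0]
for _ in range(100):  # in self-play, every position supplies such an update
    weights = update(weights, position, lookahead_score=5.0)
print(round(evaluate(position, weights), 2))  # converges toward 5.0
```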
This was the room. Ten people — actually more, with various visitors and attendees over the full two months — who between them held most of the key ideas that would define AI research for the next several decades.
What Actually Happened
The Dartmouth Conference is often described in the history of AI as a founding moment, a watershed, the summer when everything began. And in terms of naming and identity, it was exactly that. But in terms of what actually happened during those two months in Hanover, the reality was more complicated and more human.
The conference was, by most accounts, somewhat chaotic. McCarthy had envisioned a structured collaborative research project — ten people working together, making progress on shared problems, producing results. What he got was something more like a rotating series of individual presentations and arguments, with people working on their own projects and occasionally engaging with each other’s ideas.
The attendance was inconsistent. People came and went. Shannon, one of the four co-organizers, appeared only briefly. Some attendees spent much of their time working on their own programs on the available computers rather than collaborating with others. The ten-person group never really functioned as a coherent team.
There were also significant intellectual disagreements — disagreements that would deepen over the following decades and define some of the central fault lines in AI research.
The most important was the divide between what would come to be called symbolic AI and connectionism.
Newell and Simon, with their Logic Theorist, represented the symbolic approach. In their view, intelligence was fundamentally about the manipulation of symbols according to explicit rules. You could write down the rules, implement them in a program, and achieve intelligent behavior. This approach was powerful, interpretable, and amenable to the kind of rigorous mathematical analysis that computer scientists and mathematicians were comfortable with.
Minsky and Rochester, with their interest in neural networks, represented a different tradition. They were interested in learning systems — machines that could develop their own internal representations rather than having rules explicitly specified for them. Neural networks were messier and harder to analyze, but they held the promise of learning from experience in the way biological systems did.
These two approaches — rule-based symbolic AI and learning-based neural networks — would compete for dominance in AI research for the next sixty years. At Dartmouth in 1956, both were present, both were argued for, and no consensus emerged.
There was also a deeper philosophical disagreement that surfaced at the conference and never fully went away: the question of what AI was actually for, and what success would look like.
McCarthy’s original conjecture — that every aspect of intelligence can be precisely described and simulated — implied a goal of general intelligence. The aim was to build systems that could do anything an intelligent person could do, across domains, in a flexible and adaptive way. This was the original dream: artificial general intelligence.
But the programs people were actually building — the Logic Theorist, Samuel’s checkers player, early game-playing programs — were highly specialized. They were good at one specific task and completely incapable of doing anything else. A program that proved theorems could not play checkers. A program that played checkers could not hold a conversation. This narrowness was a practical necessity — the computers of 1956 were absurdly limited by modern standards — but it raised a question that would become increasingly uncomfortable: was narrow task performance really intelligence? Or was it something else, something that looked like intelligence from a distance but was actually just a very sophisticated trick?
This question — sometimes called the “narrow vs. general” debate — is still very much alive in AI today. Modern AI systems are extraordinarily capable within their specific domains. GPT-4 can write, reason about text, answer questions, and generate code — but these are all, arguably, variations on the same underlying task of language processing. Whether any current AI system has genuinely general intelligence, or whether we are still in the era of sophisticated narrow tools, is a question on which serious researchers disagree.
Dartmouth did not resolve it. It barely even formulated it properly.
The Triumphs: What the Conference Actually Produced
Despite the organizational chaos and the unresolved disagreements, the Dartmouth Conference was not without real intellectual achievements — both direct and indirect.
The most significant direct outcome was the extended attention given to the Logic Theorist and the work that Newell, Simon, and their collaborator J.C. Shaw were developing beyond it. In the year after Dartmouth, they would create the General Problem Solver — a program designed not to solve any specific type of problem but to implement a general strategy for problem-solving that could in principle be applied to any domain. The GPS used a technique called means-ends analysis: identify the difference between the current state and the goal state, and choose actions that reduce that difference. Simple in description, powerful in application — and a direct outgrowth of the intellectual ferment that Dartmouth catalyzed.
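A minimal sketch of that strategy in Python may make it concrete. This is an illustration of means-ends analysis, not GPS itself, and the "get to work" domain and operator names are invented: to achieve a missing goal fact, find an operator that supplies it, and recursively treat the operator's unmet preconditions as subgoals.

```python
# Toy means-ends analysis: reduce the difference between state and goal by
# picking an operator that supplies a missing fact, recursively achieving
# that operator's preconditions first. An illustration, not GPS itself.

OPERATORS = {  # hypothetical operators for an invented "get to work" domain
    "stand_up":    {"pre": set(),        "add": {"standing"}},
    "walk_to_car": {"pre": {"standing"}, "add": {"at_car"}},
    "drive":       {"pre": {"at_car"},   "add": {"at_work"}},
}

def achieve(state, goal_fact, plan, depth=10):
    """Make goal_fact true, appending the operators used to plan."""
    if goal_fact in state or depth == 0:
        return state
    for name, op in OPERATORS.items():
        if goal_fact in op["add"]:          # operator reduces the difference
            for pre in op["pre"]:           # subgoal: its preconditions
                state = achieve(state, pre, plan, depth - 1)
            if op["pre"] <= state:          # preconditions now satisfied
                state = state | op["add"]
                plan.append(name)
                return state
    return state                            # stuck: no relevant operator

plan = []
achieve(set(), "at_work", plan)
print(plan)  # ['stand_up', 'walk_to_car', 'drive']
```

The real GPS was far more elaborate, with tables connecting differences to the operators relevant to them, but the recursive pattern of goals and subgoals is the same.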
Dartmouth also served as the occasion for several researchers to sharpen and formalize ideas they had been developing in isolation. McCarthy’s thinking about what would become LISP was clarified and refined through the conversations at the conference. Minsky’s thinking about the relationship between neural networks and symbolic computation was pushed by his arguments with the other attendees.
The conference also created something that did not exist before it: a community. The ten people who attended — along with the wider network of researchers who heard about it, read the papers that came from it, and subsequently entered the field — constituted the first generation of AI researchers. They knew each other. They argued with each other. They competed and collaborated and went on to train the next generation of students who would build the field’s institutions.
This community-formation aspect of Dartmouth is easy to underestimate. Science does not happen in isolation. It happens in communities — in the informal conversations at conferences, in the debates in seminar rooms, in the competitive drive that comes from knowing other people are working on the same problems and might get there first. Before Dartmouth, there was no AI community. After it, there was.
And there was a name. A name that could appear on grant applications, on department titles, on course listings, on the covers of journals. A name that meant something — that pointed clearly at a coherent set of problems and a coherent set of approaches.
Names matter. They create categories, and categories shape thought. Once “artificial intelligence” existed as a named field, people could identify themselves as working in it, could recruit students to it, could argue for its importance to funding agencies, could build careers around it. The name was not just a label. It was an organizing principle.
The Shadow of Overconfidence
But the Dartmouth Conference also cast a long shadow — and the shadow was made of hubris.
The proposal McCarthy submitted to the Rockefeller Foundation contained, alongside its naming of the field and its listing of research topics, a remarkable series of assumptions about how quickly the central problems of AI could be solved. The entire project, McCarthy wrote, was proposed on the assumption that a significant advance could be made in one or more of these problems if a carefully selected group of scientists worked on them together for a summer.
A summer. Two months. To make significant advances on the problem of creating machine intelligence.
In retrospect, this optimism seems almost comical. The problems McCarthy listed in his proposal — getting computers to use language, to form abstractions, to solve problems currently reserved for humans, to improve themselves — turned out to be among the hardest problems in the history of science. Some of them are still unsolved, nearly seventy years later. Others have seen significant progress only recently, through approaches — deep learning, massive neural networks, vast training datasets — that nobody at Dartmouth was imagining in 1956.
But McCarthy’s optimism was not unusual. It was, in a sense, the mood of the moment.
The mid-1950s were a period of almost boundless technological optimism in the United States. The war had been won. Nuclear power promised cheap, unlimited energy. Sputnik had not yet been launched — that would come in October 1957 — and the space race with the Soviet Union had not yet created the anxiety that would subsequently drive so much American science policy. Computers were new and powerful and improving rapidly. Everything seemed possible.
The specific optimism about AI was reinforced by the genuine and impressive results that had already been achieved. The Logic Theorist worked. Samuel’s checkers program worked. These were not trivial demonstrations — they were programs that did things that, until they existed, people had assumed only humans could do. If a program could prove mathematical theorems, surely it was only a matter of time — a year, maybe five years, certainly not decades — before programs could do everything else that required intelligence.
This reasoning — from an impressive specific demonstration to a confident general prediction — is a pattern that recurs throughout AI history, right up to the present day. Every major AI advance has been followed by a wave of optimism about what would come next, and that optimism has consistently been excessive. The specific demonstration works. The general capability does not follow as quickly — or in the same form — as the optimists predict.
At Dartmouth, the optimism was set at maximum. Marvin Minsky would later say, in one of the most frequently quoted predictions in AI history, that within a generation the problem of creating artificial intelligence would be substantially solved. Herbert Simon predicted in 1957 that within ten years a computer would be the world’s chess champion and would prove an important new mathematical theorem. Both predictions were wrong, by decades.
The overconfidence mattered. It set expectations that could not be met. When those expectations were not met — when the easy early successes gave way to harder and harder problems, when the programs that worked well in limited toy domains failed to scale to the messiness of the real world — the result was a collapse of confidence and funding that would send AI into its first winter. The seeds of that winter were planted at Dartmouth, in the gap between the boldness of the conjecture and the difficulty of the reality.
The Two Paths Diverge
One of the most consequential things about Dartmouth was not what it unified but what it divided.
The conference gathered, in one place, the two approaches to machine intelligence that would compete for dominance for the next six decades. On one side: the symbolic, rule-based approach championed by Newell, Simon, and McCarthy. On the other: the connectionist, learning-based approach represented by Minsky’s early neural network work and by the interests of Rochester and others.
At Dartmouth, these two approaches coexisted awkwardly. There was respect between the researchers, but also competition. And in the years immediately after Dartmouth, the symbolic approach dominated. The Logic Theorist, the General Problem Solver, McCarthy’s LISP — these were concrete programs, running on real computers, doing things that could be demonstrated and measured. The neural network approach, by contrast, was struggling to show results on the limited hardware of the late 1950s.
The dominance of symbolic AI in the years after Dartmouth was so complete that it would take until the 1980s and 1990s — and in some ways until the deep learning revolution of the 2010s — for the connectionist approach to mount a serious challenge. And it was, ultimately, the connectionist approach — vastly transformed from the simple neural networks of 1956, but recognizably descended from them — that produced the AI systems we interact with today.
This means that the symbolic tradition that dominated early AI — the tradition that Dartmouth most directly gave birth to — was, in a deep sense, a dead end. Not entirely — the ideas of symbolic AI influenced every subsequent development in the field, and there is today a renewed interest in combining symbolic and connectionist approaches. But the grand promises of symbolic AI, the claims that general intelligence could be achieved by writing explicit rules and procedures for machine reasoning, were not fulfilled.
The researchers whose approach turned out to be more fundamentally correct — Minsky’s neural network interests, the connectionist tradition — were present at Dartmouth but did not leave it triumphant. History is sometimes made by the people who don’t quite win the immediate argument.
After Dartmouth: The Field That the Conference Built
In the years immediately following Dartmouth, the field of AI grew rapidly. New programs. New institutions. New funding.
McCarthy went to MIT and then to Stanford, where he founded the Stanford AI Laboratory in 1963 — one of the two institutions, along with MIT’s AI Lab, that would dominate AI research for the next thirty years. LISP became the standard programming language of AI research. The approaches sketched at Dartmouth were refined and extended into increasingly ambitious programs.
Minsky co-founded MIT’s AI Lab in 1959 with McCarthy. He became the most publicly prominent voice of AI in the 1960s — the person journalists called when they wanted a quote about machine intelligence, the face of the field to the outside world. His predictions were characteristically bold and characteristically wrong on the specific timelines, but they kept AI in the public imagination.
Newell and Simon continued to develop their ideas about symbolic AI, eventually producing work in cognitive psychology that used computer programs as models of human thought. Simon won the Nobel Prize in Economics in 1978, partly for his work on bounded rationality — the idea that human decision-making is not perfectly rational but operates under real constraints of time and information. This work was directly inspired by his thinking about how programs solve problems.
Arthur Samuel’s checkers-playing program continued to improve and became famous — it was one of the demonstrations that made it plausible to a general audience that computers might one day do more than arithmetic. Samuel coined the phrase “machine learning” to describe what his program did — learning from experience rather than being explicitly programmed — and the phrase stuck.
The institutional consequences of Dartmouth compounded over decades. The AI Labs at MIT and Stanford became the training grounds for the researchers who would build the second and third generations of AI. The funding that flowed to AI research in the late 1950s and 1960s — primarily from the US military, above all the Advanced Research Projects Agency (ARPA, later renamed DARPA) — was justified by the promises made in the spirit of the Dartmouth conjecture. The field existed, had funding, had institutions, and was producing results.
Until it wasn’t.
The Long Reckoning
The story of Dartmouth cannot be told fully without telling the story of what came after — and what came after was a collision between the enormous promises the conference had inspired and the stubborn difficulty of the problems it had identified.
The programs that AI researchers built in the late 1950s and 1960s were impressive in narrow domains but failed to generalize. A program that played checkers could not play chess. A program that proved theorems in one formal system could not handle a slightly different system. Every attempt to move from the carefully constrained world of the laboratory — where problems were well-defined, domains were limited, and all the relevant knowledge could be explicitly specified — to the messiness of the real world produced failures.
The most fundamental problem was what researchers came to call the combinatorial explosion. When you try to solve a problem by searching through possible solutions — possible moves, possible proofs, possible interpretations — the number of possibilities grows exponentially with the size of the problem. Chess has more possible games than there are atoms in the observable universe. The search strategies that worked beautifully in simple, small problems became completely intractable when applied to larger ones.
Researchers tried various ways around this. Heuristics — rules of thumb that pruned the search space by ignoring unlikely solutions. Knowledge-based systems that supplemented general search with specific domain knowledge. Better algorithms. Faster computers. None of it was fast enough or smart enough to bridge the gap between the laboratory toy problems and the real world.
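Rough arithmetic shows both the scale of the problem and why pruning alone could not solve it. The numbers below are illustrative, using the oft-cited average of about 35 legal moves per chess position:

```python
# Back-of-envelope arithmetic for the combinatorial explosion in game-tree
# search. With branching factor b and depth d, exhaustive search visits
# roughly b**d positions. Figures are illustrative, not measurements.

branching = 35  # oft-cited average number of legal moves in chess
for depth in (2, 4, 8, 16):
    print(f"full search, depth {depth:2}: ~{branching ** depth:.1e} positions")

# A heuristic that discards 80% of moves at every level only shrinks the
# base of the exponent; the growth is still exponential in the depth.
pruned = 7  # 35 moves, keeping the most promising 20%
for depth in (2, 4, 8, 16):
    print(f"pruned search, depth {depth:2}: ~{pruned ** depth:.1e} positions")
```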
By the early 1970s, the gap between the promises made in the spirit of Dartmouth and the actual state of AI research was causing serious problems. Funding agencies that had committed millions of dollars based on predictions of rapid progress were growing impatient. A 1973 British government report — the Lighthill Report, which will have its own article in this series — delivered a devastating critique of AI research and triggered major funding cuts. The first AI winter had arrived.
The people who had been in that room at Dartmouth in 1956, or who had built their careers on the promises made there, found themselves defending a field that had not delivered what it had seemed to promise. Some retreated to more modest claims. Others doubled down. The arguments continued.
McCarthy remained a central figure in AI research for decades, working at Stanford until his death in 2011. He maintained his faith in the symbolic AI approach long after it had been largely superseded by other methods, and while some found his persistence frustrating, it reflected a genuine intellectual commitment to the ideas he had been developing since the 1950s. Minsky continued at MIT, his interests ranging across AI, cognitive science, and the philosophy of mind, becoming increasingly philosophical and less focused on building working systems. Newell and Simon continued their cognitive science work, their collaboration one of the most productive in the history of the field.
The field they had named at Dartmouth survived the winter. It always survived the winters. It came back, as it always did, with new approaches and new energy. The expert systems boom of the 1980s. The neural network revival of the 1980s and 1990s. The machine learning era of the 2000s. The deep learning revolution of the 2010s. Each cycle built on what came before, incorporated lessons from the previous failures, and pushed the frontier further.
Why Dartmouth Still Matters
The Dartmouth Conference is now nearly seventy years ago. The computers used to run the programs demonstrated there — the Logic Theorist, the early game-playing programs — had less processing power than a modern digital wristwatch. The specific approaches championed at the conference — symbolic AI, rule-based reasoning, explicit knowledge representation — have been largely supplanted by statistical learning, neural networks, and approaches that the Dartmouth attendees would barely recognize.
So why does it still matter?
It matters, first, because the conjecture is still the conjecture. “Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” This is still the fundamental bet of AI research. The approaches have changed beyond recognition. The timelines have been extended by decades — and then by more decades. But the core claim — that intelligence is a natural phenomenon that can in principle be understood, formalized, and replicated — is still the premise on which every AI researcher operates.
The debate about whether that premise is correct — whether there are aspects of human intelligence that cannot be formalized, that resist simulation, that require something beyond what any computation can provide — is still alive. It is now carried on in philosophy of mind journals and in debates about the nature of large language models, rather than in the specific terms of 1950s computing. But the debate is the same debate.
It matters, second, because the community it created is the community that built everything else. The researchers who attended Dartmouth, and the students they trained, and the students those students trained, constitute the intellectual lineage of AI. The Labs at MIT and Stanford, founded by Dartmouth alumni, were the nurseries of AI research for decades. The problems identified at Dartmouth — language, learning, perception, reasoning, problem-solving — are still the central problems of AI, even if the approaches to solving them look nothing like what McCarthy and his colleagues were imagining.
It matters, third, as a lesson in the importance and the danger of founding myths. Every field has a creation story — a moment when it officially began, a founding document, a founding meeting. Dartmouth is AI’s. And like all creation stories, it is partly true and partly a simplification. The ideas that became AI had been developing for decades before 1956. The people who were most important to the development of modern AI — Turing, von Neumann, Shannon, Wiener — were mostly not in the room at Dartmouth: Turing had died in 1954, von Neumann and Wiener never attended, and Shannon was present only briefly. And the approach to AI that turned out to be most fruitful — neural networks and learning — was represented at Dartmouth but did not emerge from it as the dominant approach.
The truth is messier and more interesting than the founding myth. It always is.
But the name stuck. “Artificial intelligence” — McCarthy’s deliberate, ambitious, slightly provocative choice — became the name of a field that would go on to change the world. That naming was not nothing. It was, in its way, an act of creation: the moment when scattered ideas and isolated researchers became a community with a shared identity, a shared vocabulary, and a shared set of problems to solve.
The summer of 1956 was not the summer AI was invented. But it was the summer AI was born.
What Dartmouth Means Today
Sitting in 2026, nearly seventy years after ten men gathered at a New Hampshire college, it is tempting to look back at Dartmouth with a mixture of amusement and awe — amusement at the optimism, awe at the prescience.
The amusement is warranted. The prediction that two months of collaborative research would make significant advances on the problem of machine intelligence was spectacularly wrong. The breezy confidence that general AI was a decade or two away — held not just at Dartmouth but throughout the 1960s and into the 1970s — was the seed of the disillusionment that would produce the AI winters.
But the awe is also warranted. The people in that room understood the problem. They understood what they were trying to do. They identified the right questions, even if they badly underestimated how hard those questions were. The problems listed in McCarthy’s proposal — language, learning, abstraction, self-improvement — are the problems that modern AI systems are actually solving. The conjecture — that intelligence can be formalized and replicated — is the conjecture that has turned out to be productive.
And there is something more personal than that. When you look at the conversations happening today about AI — the excitement, the fear, the debates about what AI systems can and cannot do, the arguments about whether they are genuinely intelligent or just very sophisticated pattern matchers, the questions about what it would mean for a machine to truly learn and create — you are looking at the same conversation that began in that room in Hanover in 1956.
The participants have changed. The technology has changed almost beyond recognition. The stakes have grown enormously. But the question is the same question: can we build a machine that thinks?
We have been trying to answer it for seventy years, since a young mathematician named John McCarthy wrote it into a proposal for a summer conference and gave it a name. We have not finished answering it yet.
Further Reading
- “Machines Who Think” by Pamela McCorduck — The first and still one of the best comprehensive histories of AI, with a vivid account of the Dartmouth Conference and the people involved.
- “The Dream Machine” by M. Mitchell Waldrop — Focused on J.C.R. Licklider, but provides essential context for the early computing world in which Dartmouth took place.
- “Hackers” by Steven Levy — A wonderful portrait of the early hacker culture at MIT that grew directly from the post-Dartmouth AI Lab.
- The original Dartmouth proposal — McCarthy’s 1955 proposal is available online in full. It is short, readable, and remarkable.
- “Rise of the Machines” by Thomas Rid — Traces the cybernetics tradition that ran in parallel to the AI tradition and sometimes intersected with it, including at Dartmouth.
Next in the Events series: E2 — The Turing Test, 1950: The Question That Still Has No Answer — Six years before Dartmouth, a mathematician published a paper that asked a deceptively simple question: can machines think? The paper, the test it proposed, and the philosophical firestorm it started — a debate that has never been resolved and may never be.
Minds & Machines: The Story of AI is published weekly. If this piece made you think differently about where AI came from, share it with someone who should know this story.