Pittsburgh, Pennsylvania. Christmas Day, 1955. Herbert Simon comes home after the holiday dinner with a present for his family unlike any other they have received. He tells them, with the characteristic directness that his colleagues found either bracing or unsettling depending on the day, that he and his collaborator Allen Newell have invented a thinking machine.

Not a program that plays games. Not a calculator that performs arithmetic. A thinking machine — a device that can reason, that can take premises and draw conclusions, that can do what previously only human minds had been able to do: prove mathematical theorems.

His family is polite. They are not entirely sure what to make of the announcement. Simon is not a man who undersells his achievements, and they have learned to calibrate his enthusiasm. But he is not exaggerating. The Logic Theorist — the program he and Newell have been building for the past year — has, in painstaking hand simulation, worked through the proof of a mathematical theorem; the following summer, running on the JOHNNIAC computer at the RAND Corporation, it will become the first program in history to prove one.

Simon goes to bed that night believing that he and Newell have solved the problem of machine intelligence. He will spend the next forty-five years discovering what the solution actually implied — what it revealed about intelligence, and what it left mysterious, and how much farther there was still to go.

This is the story of that discovery.


An Unlikely Partnership

Allen Newell and Herbert Simon were, on the surface, an improbable pair. The differences between them were striking and, in the end, exactly what made the partnership work.

Simon was twenty years older — born in 1916 to Newell’s 1927. He was a social scientist by training, holding appointments in political science, psychology, and administration when the partnership began. He had spent his career studying organisations — how bureaucracies made decisions, how individual judgment was bounded by cognitive and institutional constraints, how the rational actor model of economics failed to describe how real people actually thought and chose. He was a theorist of mind and decision, not a programmer or an engineer.

Newell was an engineer who had become a cognitive scientist by inclination. He had trained in mathematics, worked at RAND on problems of operations research and strategic analysis, and developed a fascination with human problem-solving that was rooted not in theoretical interest but in the practical desire to understand how thinking worked well enough to mechanise it. He was, at his core, a builder — a person who learned by making things.

When they met, in 1954 at Carnegie Tech, the differences were immediately apparent and immediately productive. Simon provided the theoretical framework — the ideas about bounded rationality, about satisficing rather than optimising, about the cognitive processes that produced real-world decision-making. Newell provided the technical implementation — the programming knowledge, the understanding of what computers could do, the ability to turn theoretical ideas into working programs. Neither could have done alone what they did together.

The partnership lasted, with extraordinary productivity, until Newell’s death in 1992. Over those four decades, they produced the Logic Theorist, the General Problem Solver, the information processing theory of cognition, the SOAR cognitive architecture, and the intellectual foundation of cognitive science as a discipline. They received the Turing Award together in 1975 — the highest honour in computer science — and Simon received the Nobel Prize in Economics in 1978, partly for work that grew from their collaboration. The partnership was one of the most productive in the history of science.


Herbert Simon: The Man Who Wanted to Understand Everything

Herbert Alexander Simon was born on June 15, 1916, in Milwaukee, Wisconsin, to Arthur Simon, an electrical engineer who had emigrated from Germany, and Edna Marguerite Merkel, an accomplished pianist. His childhood was intellectually rich — his father’s engineering background and his mother’s musical training created an environment that valued both analytic rigour and aesthetic sensibility, a combination that Simon would carry throughout his career.

He was, by his own account, a precocious and somewhat solitary child who found his deepest satisfactions in reading and thinking rather than in social activity. He had the kind of mind that did not rest easily — that found everything interesting, that connected ideas across apparently unrelated domains, that could not encounter a problem without wanting to understand it at its roots.

He enrolled at the University of Chicago at seventeen, where he encountered the intellectual environment that would shape his career. Chicago in the 1930s was one of the most intellectually vibrant universities in America — a place where economists, sociologists, political scientists, and psychologists were engaged in rigorous, sometimes fierce debate about the foundations of their disciplines and the proper methods of social science. Simon absorbed it all. He was particularly influenced by the political scientist Harold Lasswell, who convinced him that the study of decision-making was the key to understanding social behaviour, and by the mathematician and physicist Harold Jones, who reinforced his respect for mathematical rigour.

His PhD dissertation, completed in 1943, was on decision-making in public administration — a study of how real organisations made decisions and how the processes they used related to the ideals of rational choice. The dissertation contained, in embryonic form, the idea that would eventually win him the Nobel Prize: that human decision-making was not perfectly rational in the sense of always finding the optimal solution, but was instead bounded by cognitive limitations, by incomplete information, and by the costs of calculation — and that these bounds were not defects to be overcome but features of intelligence that any adequate theory needed to incorporate.

This idea — bounded rationality — was the theoretical foundation from which everything else followed. If human decision-making was bounded rather than optimal, then the right model of intelligence was not the ideally rational agent of classical economics but a creature that searched for good-enough solutions within its cognitive and informational constraints. And the study of how this bounded rationality worked — what heuristics it used, what information it attended to, what processes it employed — was the study of cognition itself.

Simon spent the 1940s developing these ideas in the context of organisational behaviour, producing work that would eventually earn him the Nobel Prize. But his encounter with computing in the early 1950s opened a new dimension to his project: if bounded rationality was a matter of specific cognitive processes — specific heuristics, specific search procedures, specific representations — then those processes could in principle be studied computationally, could be implemented in programs and tested against human behaviour. The computer was not just a calculating tool. It was a medium in which theories of cognition could be expressed and tested.

This is what brought him to the collaboration with Newell.


Allen Newell: The Engineer of the Mind

Allen Newell was born on March 19, 1927, in San Francisco, the son of Robert Newell, a radiologist at Stanford Medical School. He grew up in an intellectually demanding household — his father was a serious scientist, his mother a person of wide cultural interests — in the kind of upper-middle-class San Francisco milieu that produced a disproportionate share of the West Coast’s intellectual and professional elite.

Newell showed his intellectual gifts early and broadly. He was interested in mathematics, in physics, in the way complex systems worked, in the formal analysis of anything that could be formally analysed. He studied physics at Stanford as an undergraduate, found the discipline satisfying but not fully absorbing, and went to Princeton for graduate work in mathematics.

Princeton disappointed him. He was good at pure mathematics but did not find it compelling in the way that the problems he really cared about — the problems of how intelligent systems worked, of how complexity and purposive behaviour arose from simpler mechanisms — seemed to require. He left Princeton without completing his doctorate and took a position at the RAND Corporation, the influential think tank in Santa Monica that was doing some of the most interesting work in the country on strategic analysis, operations research, and computing.

RAND was the formative environment for Newell. It was a place where mathematicians, physicists, economists, psychologists, and engineers worked on hard problems without the departmental boundaries that constrained academic research. The computing facilities were excellent — RAND had access to some of the most powerful machines of the era. The intellectual culture was rigorous and interdisciplinary in exactly the way that Newell’s interests required.

At RAND, Newell encountered the problems that would define his career. He attended a lecture by Oliver Selfridge on pattern recognition — on how the brain might identify letters and words from visual input — and was captivated by the question of how to build a computational model of cognitive processes. He began reading the psychological literature on human problem-solving, trying to understand what people actually did when they solved hard problems. He began thinking about how to implement computational models of those processes.

And in 1954, he met Herbert Simon.


The Meeting and the Program

The meeting that began the partnership took place at Carnegie Tech — now Carnegie Mellon University — where Simon had been on the faculty since 1949 and where Newell had come to work as a graduate student and research associate. The connection was immediate and productive: both men were interested in the same question — how did human intelligence work? — and brought complementary capabilities to the pursuit of an answer.

Their first major collaboration was the Logic Theorist, the program they built in 1955 and 1956 that proved mathematical theorems from the Principia Mathematica. The story of how it was built, and what it demonstrated, has been told in the Events article on the Logic Theorist earlier in this series. Here the focus is on what the program meant to Newell and Simon — what it told them about their central question.

The Logic Theorist meant, first, that their basic hypothesis was correct: intelligent behaviour — specifically, the intelligent behaviour of proving mathematical theorems — could be understood as a computational process. The program did not prove theorems by magic or by intuition. It searched through a space of possible proof steps, guided by heuristics that identified promising paths and pruned unproductive ones, and eventually found valid proofs. The intelligence was in the search procedure and the evaluation of intermediate results — in exactly the kinds of processes that Simon’s theoretical work had suggested were central to bounded rationality.

The Logic Theorist also meant that their methodology was correct: you could test theories of cognition by building programs that implemented those theories and seeing whether the programs produced intelligent behaviour. A theory of how people proved theorems could be turned into a program. If the program proved theorems, that was evidence for the theory. If it failed in specific ways, the failures identified where the theory needed revision. The computer was a tool for cognitive science in a way that had not been appreciated before.

Simon’s announcement to his family on Christmas Day captured the significance accurately, even if the implications took decades to fully unfold. They had demonstrated, concretely and undeniably, that at least some of what intelligent beings did could be done by a machine following explicit rules. The question was no longer whether this was possible in principle. It was what the scope and limits of the possibility were.


The General Problem Solver: The Universal Architecture

The Logic Theorist was a specific program for a specific task — proving theorems in propositional logic. The success of the Logic Theorist immediately raised the question of how far the approach generalised. Could the same general strategy — heuristic search guided by evaluation functions — produce intelligent behaviour across a wide range of domains?

Newell and Simon’s answer to this question was the General Problem Solver, or GPS, first described in a 1959 paper and developed through the late 1950s and early 1960s. GPS was their attempt to implement a truly general problem-solving architecture — a program that could solve problems in any domain, provided the domain could be represented in the right form.

GPS was built around the concept of means-ends analysis — the strategy of identifying the difference between the current state of a problem and the desired goal state, and selecting actions that reduce that difference. The strategy was general: it applied equally to theorem proving, to chess playing, to planning a sequence of actions to achieve a physical goal, to solving a puzzle. The same architecture, with different domain-specific knowledge about states, goals, and operators, could in principle solve problems across all of these domains.
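The strategy is simple enough to sketch. The following Python toy (the state, goal, and operator representations are invented for illustration, not GPS's actual data structures) repeatedly picks an operator that reduces the current difference, subgoaling on the operator's preconditions when they are not yet satisfied:

```python
# A minimal sketch of means-ends analysis. States are frozensets of facts;
# operators have preconditions, an add list, and a delete list. These
# representations are illustrative, not GPS's actual data structures.

from typing import NamedTuple

class Operator(NamedTuple):
    name: str
    preconds: frozenset   # facts that must hold before the operator applies
    adds: frozenset       # facts the operator makes true
    deletes: frozenset    # facts the operator makes false

def apply_op(state, op):
    return (state - op.deletes) | op.adds

def means_ends(state, goal, operators, depth=10):
    """Return (plan, resulting_state), or None if no plan is found."""
    if goal <= state:                          # no difference remains
        return [], state
    if depth == 0:
        return None
    for op in operators:
        if op.adds & (goal - state):           # op reduces the difference
            # Subgoal: first achieve the operator's preconditions.
            sub = means_ends(state, op.preconds, operators, depth - 1)
            if sub is None:
                continue
            pre_plan, pre_state = sub
            mid_state = apply_op(pre_state, op)
            rest = means_ends(mid_state, goal, operators, depth - 1)
            if rest is None:
                continue
            rest_plan, end_state = rest
            return pre_plan + [op.name] + rest_plan, end_state
    return None

# Toy blocks-world fragment: get a block from the table onto a stack.
ops = [
    Operator("pick-up", frozenset({"on-table", "hand-empty"}),
             frozenset({"holding"}), frozenset({"on-table", "hand-empty"})),
    Operator("stack", frozenset({"holding"}),
             frozenset({"on-stack", "hand-empty"}), frozenset({"holding"})),
]
plan, _ = means_ends(frozenset({"on-table", "hand-empty"}),
                     frozenset({"on-stack"}), ops)
print(plan)  # -> ['pick-up', 'stack']
```

The recursive subgoaling on preconditions is the heart of the method: the solver never searches blindly, but always works on some difference between where it is and where it wants to be.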

GPS worked, in the laboratory, on problems that fit its architecture well. It could solve certain logic problems, certain transformation puzzles, certain simple planning problems. The demonstrations were impressive, and the theoretical framework underlying GPS was genuinely illuminating — it identified means-ends analysis as a fundamental cognitive strategy that appeared in human problem-solving across many domains.

But GPS had a fundamental limitation that became apparent as soon as it was applied to problems of realistic complexity: the combinatorial explosion. In any non-trivial problem, the space of possible states was enormous, the space of possible operators was large, and the search tree grew exponentially with depth. GPS’s means-ends analysis pruned the search somewhat — it focused attention on the differences between current and goal states rather than exploring all possibilities blindly — but the pruning was not sufficient to manage the exponential growth when problems became large.
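The arithmetic behind the explosion is easy to state: a uniform search tree with branching factor b and depth d contains 1 + b + b² + … + b^d nodes, so even modest branching factors become unmanageable within a dozen moves (illustrative numbers only, not measurements of GPS):

```python
def tree_size(branching, depth):
    """Total nodes in a uniform tree: 1 + b + b**2 + ... + b**depth."""
    return sum(branching ** k for k in range(depth + 1))

# Illustrative only: how quickly uniform search trees grow.
for b, d in [(2, 10), (10, 10)]:
    print(f"branching {b}, depth {d}: {tree_size(b, d):,} nodes")
```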

Newell and Simon were aware of this limitation. They did not claim that GPS was a complete solution to the problem of intelligence — they claimed that it was a plausible model of how human problem-solving worked in certain domains, and that understanding the model would advance the science of cognition. The claim was more modest, and more honest, than the public excitement surrounding AI at the time sometimes suggested.


The Information Processing Hypothesis

The theoretical framework that Newell and Simon developed around the Logic Theorist and GPS — what they called the information processing theory of cognition — was the most important and most influential contribution of their partnership.

The information processing hypothesis, as they articulated it, was that human cognition could be understood as the transformation of symbolic representations according to explicit processes. Thinking was not something mysterious or inaccessible — it was information processing, operating on symbolic structures stored in memory, applied through operations that could in principle be made explicit and implemented in a computer program.

This hypothesis had several components that were individually interesting and collectively powerful.

Symbolic representations. Cognitive processes operated on symbols — internal representations of objects, properties, relations, and situations in the world. These symbols were not identical to the things they represented, but they stood for them in ways that allowed the mind to reason about the world without directly engaging with it. The symbols were structured — they could be combined and decomposed, nested within each other, modified and manipulated.

Explicit processes. The operations that transformed these representations were explicit — they followed definite rules, could be described precisely, and were in principle mechanical. They were not random, and they were not mystical. They were rule-governed transformations of symbolic structures, mechanical in the rigorous mathematical sense.

Search. Problem-solving was essentially search — navigation through a space of possible symbolic configurations to find one that satisfied the goal. The space was generated by the possible operators, and the search was guided by evaluation functions that distinguished promising states from unproductive ones.

Heuristics. The evaluation functions that guided the search were heuristics — rules of thumb that worked well in most cases but were not guaranteed to be optimal. The heuristics encoded domain knowledge — knowledge about what states were typically productive in a given domain — in a form that could guide the search efficiently.
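These four ideas combine naturally in a single sketch. The following Python toy runs a best-first search over an invented number puzzle; the domain and the distance heuristic are made up for illustration, not taken from Newell and Simon's programs:

```python
# Best-first heuristic search: symbolic states, explicit operators, search
# through the space they generate, guided by a heuristic evaluation.
# The number puzzle and heuristic below are invented for illustration.

import heapq

def heuristic_search(start, goal_test, operators, heuristic, limit=10000):
    """Expand the state the heuristic rates most promising, until a goal."""
    frontier = [(heuristic(start), start, [])]   # (score, state, path)
    seen = {start}
    while frontier and limit > 0:
        limit -= 1
        _, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path
        for name, op in operators:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [name]))
    return None

# Toy domain: transform 2 into 25 using the operators "+3" and "*2",
# guided by distance-to-target as the heuristic.
ops = [("+3", lambda n: n + 3), ("*2", lambda n: n * 2)]
path = heuristic_search(2, lambda n: n == 25, ops,
                        heuristic=lambda n: abs(25 - n))
print(path)
```

The heuristic here is a crude rule of thumb: it sometimes overshoots and the search must back up to less promising states, which is exactly the satisficing behaviour the framework predicts rather than a guaranteed-optimal path.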

This framework was not just a theory of AI programs. It was a theory of human cognition — a hypothesis about how human minds actually worked, about what the underlying processes of thinking were. Newell and Simon believed that the Logic Theorist and GPS were not just AI programs but models of human problem-solving — that the processes they implemented were the processes that human problem-solvers used, at the level of description at which psychological theories operated.

This claim — that AI programs could serve as models of human cognition — was the founding claim of cognitive science. It was the claim that made Newell and Simon’s work relevant not just to AI but to psychology, to linguistics, to neuroscience, to philosophy of mind. And it was, in its basic outlines, correct: information processing is the right level of description for many cognitive phenomena, and the concepts that Newell and Simon developed — symbolic representation, process, search, heuristic — have been productive in cognitive science for decades.


Protocol Analysis: The Empirical Foundation

One of the most important methodological contributions of Newell and Simon’s work was the development of protocol analysis — a technique for studying human problem-solving by having people think aloud while they worked through problems and then carefully analysing the verbal records they produced.

The technique was not entirely original to Newell and Simon — introspective methods had been used in psychology since the 19th century, and think-aloud protocols had been used by earlier researchers. But Newell and Simon developed protocol analysis as a rigorous empirical methodology, pairing it with the theoretical framework of information processing in ways that made it far more powerful than earlier introspective approaches.

The idea was straightforward: if cognitive processes were information processing — explicit transformations of symbolic representations — then those processes should in principle be visible in the verbal record of a problem-solver’s thinking. When a person solving a theorem-proving problem says “Let me try applying the substitution rule here… no, that doesn’t work… maybe I should try associativity instead,” they are providing a window into the sequence of processes they are executing. The verbal protocol is a trace of the cognitive process.

Newell and Simon collected extensive verbal protocols from human problem-solvers working on chess problems, logic problems, and other tasks that their programs were also designed to solve. They then analysed these protocols to identify the specific processes that human problem-solvers used — which operators they considered, which heuristics they applied, which dead ends they encountered, how they recovered from errors.

The comparison between the verbal protocols and the traces of their AI programs was the empirical test of their cognitive models. If the program’s behaviour closely matched the human’s behaviour — if the program considered similar operators in similar sequences, applied similar heuristics, made similar errors — that was evidence that the program was a plausible model of the human’s cognitive processes. If the match was poor — if the program diverged systematically from the human — that was evidence that the model needed revision.

This methodology was genuinely productive. The protocol analyses that Newell and Simon conducted revealed specific cognitive phenomena that their programs then had to be modified to account for — the tendency to work backward from goals, the role of analogy in problem-solving, the importance of recognising familiar patterns and adapting known solutions, the specific ways in which limited working memory constrained problem-solving strategies.

Protocol analysis became a standard method in cognitive science, used by researchers across many domains to investigate the underlying processes of human cognition. The methodology’s influence extended far beyond AI — it shaped how cognitive psychologists studied problem-solving, learning, and decision-making for decades.


The Nobel Prize: Bounded Rationality Vindicated

Herbert Simon received the Nobel Prize in Economics in 1978 — an honour that recognised not just his contributions to economic theory but the broader programme of research into human decision-making that he had pursued throughout his career.

The core of the Nobel-Prize-winning work was the theory of bounded rationality — the hypothesis that human decision-making was rational in a specific, limited sense: it was goal-directed, it used available information, it applied systematic procedures. But it was bounded by cognitive limitations, by incomplete information, by the costs of computation, by the constraints of time. Humans did not optimise — they satisficed, finding solutions that were good enough given the constraints they operated under, rather than searching exhaustively for the best possible solution.

This was, at the time of its original proposal in the 1940s and 1950s, a radical departure from the dominant model in economics — the perfectly rational agent who maximised expected utility. Simon was arguing that the dominant model was wrong as a description of how humans actually behaved, and that a more realistic model would need to account for the cognitive limitations that bounded human rationality.

The Nobel committee recognised that Simon had been right. The decades of research in the intervening period — including Simon and Newell’s work on information processing models of cognition, and a growing body of psychological research on human decision-making — had confirmed that humans were not optimal rationalists and that the bounded rationality framework provided a better description of real human behaviour.

The Nobel Prize also recognised something that was less explicitly stated in the official citation: the connection between bounded rationality and AI. The theory of bounded rationality was, at its core, a computational theory of cognition — it described human decision-making in terms of specific processes, specific representations, specific constraints. The programs that Newell and Simon had built — the Logic Theorist, GPS — were implementations of bounded rationality in computational form. The Nobel Prize was, indirectly, a recognition that the AI research programme was contributing to the scientific understanding of the human mind.

Simon was pleased by the recognition, but typically did not rest on it. He continued working — on cognitive science, on AI, on education, on complexity in organisations and economics — until shortly before his death in 2001. The Nobel Prize was not the capstone of a career winding down. It was a milestone in a career that still had two decades of productive work ahead.


The Turing Award: Recognising the AI Contribution

In 1975, Newell and Simon received the ACM Turing Award together — the highest honour in computer science, named for Alan Turing and recognising the most important contributions to the field.

The award citation recognised their foundational contributions to AI: the Logic Theorist and GPS, the information processing theory of cognition, the development of the list-processing language IPL (Information Processing Language) — a direct precursor of LISP that made AI programming possible before LISP itself existed — and the broader programme of research that had established AI as a discipline and cognitive science as a field.

The Turing Award lecture that Newell and Simon delivered jointly on the occasion of the award was itself an important document. Titled “Computer Science as Empirical Inquiry: Symbols and Search,” it articulated what has come to be called the Physical Symbol System Hypothesis — the claim that a physical symbol system has the necessary and sufficient means for general intelligent action. This was the most ambitious theoretical claim the two had made: not just that symbolic AI programs could exhibit intelligent behaviour in specific domains, but that any system capable of general intelligent action must be a physical symbol system.

The Physical Symbol System Hypothesis was immediately controversial and has remained so. Critics argued that it was too strong — that it assumed too much about the necessary form of intelligence — and that it was empirically unsupported at the level of generality Newell was claiming. They pointed out that biological neural systems, which clearly achieved general intelligence, did not appear to be physical symbol systems in the sense Newell defined. The hypothesis seemed to rule out, a priori, the possibility that learning-based, subsymbolic systems could achieve general intelligence — a conclusion that subsequent decades of AI research have challenged fundamentally.

But the hypothesis was also a genuine intellectual achievement — a clear, precise, falsifiable statement of the symbolic AI programme’s central commitment. Even those who believed it was wrong found it valuable as a statement of a position that could be argued against and tested.


The ACT Framework: Understanding Human Memory

One of the most important, if less widely known, contributions to cognitive science to grow out of Newell and Simon's programme was the ACT framework of human cognition, developed by John Anderson at Carnegie Mellon in the information processing tradition they established.

ACT — Adaptive Control of Thought — was a computational theory of human memory and learning that grew from the information processing tradition Newell and Simon had established. It distinguished between declarative memory — memory for facts and events — and procedural memory — memory for how to do things, encoded as production rules. The theory described how information was learned, how it was retrieved from memory, how skills were acquired through practice, and how different kinds of knowledge interacted.
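The declarative/procedural split can be illustrated with a toy production system. This is a deliberately simplified Python sketch: the facts, rule names, and matching scheme are invented for the example, and ACT's actual retrieval and activation machinery is far richer:

```python
# Toy production system illustrating the declarative/procedural split.
# Declarative memory: a set of facts. Procedural memory: condition-action
# rules. The rules and the addition task are invented for illustration.

def run(facts, rules, max_cycles=20):
    """Fire the first rule whose conditions match, until quiescence."""
    facts = set(facts)
    trace = []
    for _ in range(max_cycles):
        for name, conds, action in rules:
            if conds <= facts:
                new = action(facts)
                if new <= facts:          # rule adds nothing new: skip it
                    continue
                facts |= new
                trace.append(name)
                break
        else:
            break                          # no rule fired: stop
    return facts, trace

rules = [
    # Procedural knowledge: retrieve a declarative fact about addition...
    ("recall-sum", frozenset({"goal:add", "a=2", "b=3"}),
     lambda f: {"sum=5"}),
    # ...then act on the retrieved fact.
    ("report", frozenset({"sum=5"}),
     lambda f: {"done"}),
]
facts, trace = run({"goal:add", "a=2", "b=3"}, rules)
print(trace)   # -> ['recall-sum', 'report']
```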

ACT, and its successors ACT* and ACT-R (Rational), became one of the most empirically productive theories in cognitive psychology. ACT-R, which John Anderson has continued to develop at Carnegie Mellon, is now one of the most widely used computational models of human cognition — a platform on which researchers have built models of language comprehension, problem-solving, skill acquisition, and memory that have been validated against human data across hundreds of experiments.

The ACT framework was not AI in the sense of trying to build intelligent machines. It was cognitive science — the scientific study of human cognition using computational methods. But it was directly descended from Newell and Simon’s AI work, implementing the information processing framework they had developed in a form that could be tested against precise empirical data about human behaviour.

The relationship between AI and cognitive science that Newell and Simon established — the idea that AI programs could serve as models of human cognition and that cognitive science could inform AI design — has remained productive for decades, producing both scientific understanding of human cognition and AI systems that are inspired by that understanding.


SOAR: The Unified Theory of Cognition

In his later career, Newell became increasingly ambitious about the scope of the cognitive theory he wanted to develop. Rather than building models of specific cognitive tasks — theorem proving, chess playing, problem-solving in specific domains — he wanted a unified theory that could account for the full range of human cognitive capability within a single computational framework.

The result was SOAR — a cognitive architecture that Newell developed through the 1980s, in collaboration with his students at Carnegie Mellon, and that he described in his final book “Unified Theories of Cognition,” published in 1990. SOAR was not just an AI program. It was a proposal about the fundamental architecture of the human mind — about what computational structure could account for the full range of human cognitive phenomena, from perception to language to problem-solving to learning.

SOAR’s central mechanism was the problem space — the representation of a problem as a state space navigated by applying operators. Every cognitive task, in SOAR, was represented as a problem in some problem space, and cognitive activity was the navigation of that space. Learning occurred through chunking — the creation of new production rules that encoded the results of problem-solving experiences, allowing future problems to be solved more efficiently by recognising familiar situations.
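Chunking can be caricatured in a few lines of Python: when an impasse forces a search, cache the result so the same situation is recognised next time rather than re-derived. This is a loose sketch under invented names, not SOAR's actual mechanism, which compiles new production rules from the dependency structure of the subgoal rather than keeping a lookup table:

```python
# Caricature of SOAR-style chunking: solve by search once, then cache the
# result as a recognition "chunk" so the identical impasse never recurs.

from itertools import product

def search(state, goal, operators, max_len=5):
    """Brute-force search for an operator sequence mapping state to goal."""
    for length in range(1, max_len + 1):
        for seq in product(operators, repeat=length):
            s = state
            for _, fn in seq:
                s = fn(s)
            if s == goal:
                return [name for name, _ in seq]
    return None

class Chunker:
    def __init__(self, operators):
        self.operators = operators
        self.chunks = {}                  # (state, goal) -> cached plan
        self.searches = 0

    def solve(self, state, goal):
        key = (state, goal)
        if key in self.chunks:            # recognition: no search needed
            return self.chunks[key]
        self.searches += 1                # impasse: fall back to search
        plan = search(state, goal, self.operators)
        if plan is not None:
            self.chunks[key] = plan       # store the result as a chunk
        return plan

ops = [("inc", lambda n: n + 1), ("double", lambda n: n * 2)]
solver = Chunker(ops)
print(solver.solve(3, 8))    # first call: solved by search
print(solver.solve(3, 8))    # second call: recognised, no new search
print(solver.searches)       # -> 1
```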

SOAR was extraordinarily ambitious. Newell wanted a theory that was as comprehensive as physical theories of matter and energy — a theory that unified the diverse phenomena of cognition under a small number of fundamental principles. He was reaching for the cognitive equivalent of Newton’s laws — a framework that was simple in its foundations but powerful enough to account for an enormous range of phenomena.

He did not fully achieve this ambition. SOAR accounts for some cognitive phenomena better than others, and the theory has been criticised for being too committed to the symbolic, production-rule framework to account adequately for the perceptual and motor aspects of cognition, for the continuous and graded nature of many cognitive processes, and for the kinds of learning that seem to require something more like neural network adaptation than rule chunking.

But SOAR was a genuine intellectual achievement — the most comprehensive attempt that had been made, and possibly that has been made since, to develop a unified computational theory of human cognition. It inspired a generation of researchers and produced a body of empirical work that has advanced the understanding of cognitive architecture, even where it has revealed the limitations of the specific SOAR framework.


The Controversy: Did They Set Back Neural Networks?

Newell and Simon’s influence on early AI was so dominant that their preference for symbolic, rule-based approaches shaped the research agenda of the field for decades. This influence had a shadow side: the symbolic AI paradigm that they championed, and the information processing framework that underpinned it, contributed to the neglect of neural network approaches during the years when those approaches were most in need of development.

Newell and Simon were not the primary architects of the anti-neural-network sentiment that dominated the 1970s — that distinction belongs more to Minsky and Papert’s Perceptrons book. But their work provided the positive alternative that made the symbolic approach seem like the obviously right direction. If you believed, as Newell and Simon’s success suggested, that intelligent behaviour could be achieved by implementing the right symbolic search procedures, you had less reason to explore the messy, hard-to-analyse alternatives that neural networks represented.

The researchers who kept working on neural networks through the 1970s — Hinton in particular — did so in conscious opposition to the dominant symbolic AI paradigm. They believed that the information processing framework, for all its successes, was missing something fundamental about how biological intelligence worked, and that neural networks — despite their lack of interpretable symbolic structures — were pointing at the right underlying mechanisms.

Hinton and his colleagues were right. The deep learning revolution that vindicated their approach drew on ideas — distributed representations, gradient-based learning, hierarchical feature extraction — that were antithetical to the symbolic AI framework. The dominance of symbolic AI, while producing real results, had delayed the exploration of these ideas.

Newell and Simon were aware of this criticism in their later years. Simon, in his autobiography published in 1991, acknowledged that the symbolic AI paradigm had been more limited in scope than they had initially believed, and that learning-based approaches deserved more attention than they had received. He was honest about the limits of what the information processing framework had achieved — honest in a way that the field’s optimists of the 1960s, himself among them, had not been.

But he also maintained that the information processing framework had been the right way to approach the central questions of cognitive science, and that the program they had pursued — building explicit computational models of cognitive processes and testing them against human data — remained the most productive methodology for understanding the mind. This claim was more defensible than his early predictions about AI timelines, and subsequent cognitive science has largely borne it out.


The Human Beings: Simon and Newell as People

Herbert Simon was, by all accounts, one of the most remarkable human beings that American academic life produced in the twentieth century. His intellectual range was genuinely extraordinary — economics, psychology, computer science, political science, organisation theory, philosophy of science, education, complexity — and in most of these fields he produced work of lasting importance.

He was also, by the accounts of those who knew him well, a person of great personal warmth and genuine humility about the limits of his own knowledge. He was direct — sometimes uncomfortably so — but not cruel. He was confident in his views but willing to change them when presented with good arguments. He took the teaching of students seriously and gave his time generously to younger researchers who came to him with genuine questions.

His autobiography, “Models of My Life,” published in 1991, is one of the most candid and self-aware intellectual autobiographies written by a major scientist. He does not shy away from his mistakes, does not over-credit his own contributions, does not pretend that his career was a smooth ascent to well-deserved recognition. It is the autobiography of a man who genuinely understood the difference between what he knew and what he hoped, and who brought the same analytical rigour to his own life that he brought to the phenomena he studied.

Allen Newell was a different kind of person. Where Simon was warm and socially adept, Newell was intense and sometimes difficult to read. He had a single-mindedness of focus that his colleagues found both inspiring and occasionally intimidating. When he was working on a problem — which was most of the time — he was entirely absorbed in it, and the people around him were expected to meet his level of engagement.

He was also, by several accounts, a person of great personal integrity — honest about his work and about himself, not inclined to take credit he did not deserve, scrupulous in his acknowledgment of the contributions of others. The relationship between him and Simon was genuinely collaborative in the deepest sense: ideas flowed between them, built on each other, were impossible to cleanly separate in the finished work. They gave each other credit with a generosity that was unusual in a field that could be competitive about attribution.

Newell died of prostate cancer on July 19, 1992, at the age of sixty-five. He had been working until close to the end — SOAR continued to be developed after his death by his students, and his final book, “Unified Theories of Cognition,” had been published only two years earlier. His death was a significant loss for cognitive science and for AI — he had decades of potential work ahead of him, and the unified theory of cognition that he had spent his career pursuing was still incomplete.

Simon survived him by almost a decade, dying on February 9, 2001, at the age of eighty-four. He published important work throughout the 1990s, remained intellectually active into his final years, and continued to be a presence in the cognitive science and AI communities that he had helped found.


The Intellectual Legacy: What They Built That Lasted

The lasting legacy of Newell and Simon’s partnership is distributed across several fields and is larger, in total, than the legacy of most scientific collaborations.

The methodology of computational cognitive science. The idea that cognitive theories should be expressed in computational form and tested against human behaviour — using protocol analysis and other empirical methods — is now the dominant methodology in cognitive science. It did not exist, as a rigorous methodology, before Newell and Simon.

The concept of bounded rationality. Simon’s theory of bounded rationality has shaped economics, psychology, management, and public policy in profound ways. The behavioural economics that has transformed economics in the past two decades — the work of Kahneman and Tversky, of Thaler and Sunstein — builds directly on the foundation that Simon established. His Nobel Prize was well-deserved, and the field he helped create has continued to grow in importance.

Cognitive architectures. SOAR and ACT-R, the two computational architectures that grew most directly from Newell and Simon’s work, are still active research programmes. They have been used to model human cognition in hundreds of experiments, to design educational software, to develop human-machine interfaces, and to understand how different cognitive processes interact. They represent the most sustained and most empirically productive effort to build unified computational theories of human cognition.

The intellectual culture of Carnegie Mellon. The culture of rigorous, interdisciplinary, empirically grounded research that Newell and Simon established at Carnegie Mellon has made that university one of the world’s leading centres for AI and cognitive science research. The researchers they trained — John Anderson, Herb Clark, David Klahr, and many others — have themselves trained generations of cognitive scientists and AI researchers. The intellectual genealogy of a significant portion of American AI and cognitive science runs through the Carnegie Mellon that Newell and Simon built.

The AI research programme. The basic programme that Newell and Simon established in the 1950s — build programs that exhibit intelligent behaviour, test them against human performance, use the comparison to advance both AI and cognitive science — has been the framework within which AI research has largely operated ever since, even as the specific approaches have changed dramatically. The current deep learning systems are not symbolic AI programs, but the methodology of building and testing systems against human performance is continuous with what Newell and Simon established.


The Honest Assessment: Right and Wrong

Any honest assessment of Newell and Simon must acknowledge both the genuine achievements and the genuine limitations.

They were right that intelligence was information processing — that cognitive processes were transformations of symbolic representations that could be made explicit and implemented computationally. This was a genuinely important insight, and it was far from obvious when they first proposed it.

They were right that heuristic search was a central mechanism of intelligence — that bounded rationality involved searching for good-enough solutions guided by evaluation heuristics. This insight has been productive in both AI and cognitive science.

They were right that AI programs could serve as models of human cognition — that the methodology of building programs and testing them against human behaviour was a productive way to advance the science of mind. Cognitive science, as it has developed, has validated this methodology even where it has revised the specific models that Newell and Simon proposed.

But they were wrong, or at least too narrow, in their commitment to the symbolic, rule-based approach as the necessary form of intelligence. The learning-based, connectionist approaches they were sceptical of turned out to be more powerful and more general than they believed. The Physical Symbol System Hypothesis was too strong — it ruled out, as implementations of intelligence, the very systems that have proved most capable of achieving it.

They were also wrong — though they were not alone in this — about the difficulty and the timeline of achieving general machine intelligence. The ambitious predictions of the 1960s, in which Newell and Simon participated enthusiastically, were wrong by decades, and the damage done by those predictions to the field’s credibility was real.

These were honest errors, made in good faith, by scientists who were genuinely trying to understand one of the hardest problems in the history of science. The field of AI and cognitive science would not look the way it does today without Newell and Simon’s contributions — both the contributions that proved correct and the contributions that proved wrong.


The Enduring Question They Posed

At the heart of Newell and Simon’s work is a question that they posed more clearly than anyone before them and that the field has been working on ever since: what is the relationship between intelligence and computation?

They believed that intelligence was a form of computation — that the processes underlying intelligent behaviour were computational processes in a rigorous and specific sense. This belief drove their programme and produced their results. It was also, in its strongest form, controversial, and the controversy has not been resolved.

Modern AI has provided evidence that something like their belief was right. The systems that exhibit the most impressive AI capabilities — large language models, deep reinforcement learning systems, protein structure predictors — are all computational in the relevant sense. They transform representations according to explicit processes. They achieve results that previously seemed to require intelligence.

But the question of whether these systems are intelligent — in the sense that Newell and Simon cared about, the sense that connected AI to cognitive science and to the understanding of the human mind — is still open. The systems are impressive. Whether they illuminate the nature of intelligence, whether they help us understand what human minds do and how they do it, is less clear.

Newell and Simon wanted to understand intelligence, not just build systems that behaved intelligently. They cared about the science as much as the engineering. The questions they were asking — what are the processes of thought? what is the architecture of the mind? how does bounded rationality work? — are still being asked, and the answers are still incomplete.

That incompleteness is not a failure. It is the condition of science working at the frontier of understanding. Newell and Simon pushed that frontier further than anyone before them. It has moved further since — beyond where they pushed it, beyond what they could see. But you can see further when you stand on the shoulders of people who reached as high as they did.


Further Reading

  • “Human Problem Solving” by Allen Newell and Herbert Simon (1972) — The definitive statement of their information processing theory of cognition. Dense, rigorous, and essential for understanding the intellectual programme they pursued.
  • “Models of My Life” by Herbert Simon (1991) — Simon’s autobiography. One of the most candid and self-aware intellectual autobiographies in the scientific literature.
  • “Unified Theories of Cognition” by Allen Newell (1990) — Newell’s final synthesis. The most ambitious attempt to develop a comprehensive computational theory of human cognition.
  • “The Sciences of the Artificial” by Herbert Simon (1969, 3rd ed. 1996) — Simon’s most accessible and most philosophical work. A remarkable exploration of the nature of artificial systems and what distinguishes them from natural systems.
  • “Models of Thought” by Herbert Simon (1979) — A collection of Simon’s papers on cognitive simulation, showing the development of the information processing framework through his career.

Next in the Profiles series: P9 — Joseph Weizenbaum: The Man Who Built ELIZA and Regretted It — How creating the world’s first chatbot turned its creator into AI’s most passionate critic. The full story of a man who understood, earlier and more clearly than almost anyone, what was dangerous about our relationship with intelligent machines — and who spent the rest of his life trying to make the world hear the warning.


Minds & Machines: The Story of AI is published weekly. If the story of Newell and Simon — the achievements, the limits, the enduring questions — illuminates something about how science works at its best and its most human, share it with someone who would find the example worth thinking about.