Cambridge, Massachusetts. 1970. A journalist from Life magazine is visiting the MIT Artificial Intelligence Laboratory to write a profile of the man who has become the public face of AI research in America. The man is Marvin Minsky — forty-three years old, already a legend, already the person that newspapers call when they want a quote about thinking machines and the future of the human mind.
The journalist asks the question that journalists always ask: when will computers be as intelligent as humans?
Minsky does not hesitate. “In from three to eight years,” he says, “we will have a machine with the general intelligence of an average human being.”
It is 1970. The machines that will eventually approach human-level intelligence on specific tasks are forty years away. The ones that will do it across a wide range of tasks are more than fifty years away. Minsky is wrong not by months but by decades — a miss of a different order, like predicting that a child born today will walk on the moon by next Thursday.
But here is the thing: Minsky is not a fool. He is, by the assessment of almost everyone who ever worked with him, one of the most intelligent people they ever encountered — a man of extraordinary breadth and depth, capable of illuminating almost any intellectual problem he turned his attention to, genuinely creative in the most demanding sense of the word. He is also, on the question of AI timelines, spectacularly, persistently, almost heroically wrong.
How do you get so much right and so much wrong simultaneously? That is the question that Marvin Minsky’s life and career pose — and the answer reveals something important not just about one man but about the entire field he helped create.
The Making of a Mind: Brooklyn to Harvard
Marvin Lee Minsky was born on August 9, 1927, in New York City, the son of Henry Minsky, an ophthalmologist, and Fannie Reiser, a Zionist activist. He grew up in a household that valued intellectual achievement, where ideas were currency and education was the primary investment a family could make in its children’s future.
Minsky showed an extraordinary range of intellectual interests from childhood. He was drawn to music — he was a serious pianist throughout his life, capable of performing at a high level, and his understanding of musical structure would influence his later theories of mind. He was drawn to mathematics — not just arithmetic but the deeper questions of logical structure and proof that most children never encounter. He was drawn to science — to the question of how things worked at a fundamental level, what the underlying mechanisms were that produced the phenomena of the natural world.
He attended the Ethical Culture Fieldston School in New York, a private school with an unusually rich intellectual environment, and then Phillips Academy in Andover, Massachusetts. Both institutions recognised and nurtured a student who was, by most accounts, in a class by himself intellectually.
He enrolled at Harvard in 1946, after a short stint in the US Navy at the end of the Second World War, initially studying physics before shifting to mathematics. At Harvard he encountered the circle of ideas about cybernetics and computation that Norbert Wiener was developing, and the mathematical models of neural function that McCulloch and Pitts had published in 1943. These ideas captivated him: the possibility that the brain was, at some level of description, a computational system — that thought could be understood mathematically — became the animating question of his intellectual life.
His undergraduate years at Harvard overlapped with an important moment in the development of computing: the late 1940s, when the first electronic computers were being built and when the theoretical foundations of computation that Turing and von Neumann had established were being given physical form. Minsky was in the right place at the right time, intellectually and institutionally, to be swept up in the excitement of a new science being born.
After Harvard, he went to Princeton for his PhD in mathematics, working on what would eventually be called computational neuroscience — the use of mathematical models to understand how neural circuits produce behaviour. His doctoral dissertation, completed in 1954, was titled “Theory of Neural-Analog Reinforcement Systems and Its Application to the Brain-Model Problem” — a title that captures both his mathematical approach and his central intellectual commitment.
At Princeton, he built the SNARC — the Stochastic Neural-Analog Reinforcement Calculator — a machine designed with fellow graduate student Dean Edmonds that used some three thousand vacuum tubes and a salvaged autopilot mechanism to simulate a network of forty neurons. The SNARC was not a computer in any conventional sense: it did not execute programs, did not perform arithmetic, did not do anything useful in any practical sense. It was a physical model of a neural network — an attempt to see whether a machine that worked like a brain could learn to navigate a maze.
The SNARC could, in fact, learn to navigate the maze. The network’s connections were adjusted based on whether the simulated animal reached the reward or not, and after sufficient experience, it reliably found the goal. This was, at the time, one of the most impressive demonstrations of machine learning that had been produced — a concrete physical implementation of the idea that neural networks could learn from experience.
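The learning principle is simple enough to sketch in a few lines of modern code. Nothing below corresponds to Minsky’s actual hardware; the maze, the weights, and the reinforcement step are invented for illustration, a minimal software cartoon of the idea that choices followed by reward get strengthened.

```python
import random

# A toy maze as a graph: node -> reachable neighbours. Invented for
# illustration; the real SNARC wired its "synapses" in analogue hardware.
MAZE = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: []}
START, GOAL = 0, 4

# One adjustable weight per (node, next-node) choice: the higher the
# weight, the more often that turn is taken.
weights = {(s, t): 1.0 for s, nbrs in MAZE.items() for t in nbrs}

def run_trial(max_steps=20):
    """Random walk biased by the weights; returns (choices made, reached goal)."""
    node, path = START, []
    for _ in range(max_steps):
        if node == GOAL:
            return path, True
        nbrs = MAZE[node]
        total = sum(weights[(node, t)] for t in nbrs)
        r, choice = random.uniform(0, total), nbrs[-1]
        for t in nbrs:
            r -= weights[(node, t)]
            if r <= 0:
                choice = t
                break
        path.append((node, choice))
        node = choice
    return path, node == GOAL

for _ in range(500):
    path, reached = run_trial()
    if reached:                      # reward arrived: strengthen recent choices
        for turn in path:
            weights[turn] += 0.5     # crude reinforcement, no credit assignment

print(sorted(weights.items(), key=lambda kv: -kv[1])[:3])  # goal-ward turns dominate
```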
The SNARC was also, in retrospect, a road not taken. Minsky had built a neural network machine as a student, demonstrated that it could learn, and then spent the next two decades as one of the most influential critics of neural networks. The turn was not immediate — it developed over years, through arguments at conferences and in papers and in the accumulating evidence that simple neural networks had real mathematical limitations. But the trajectory of his career — from neural network builder to neural network critic to eventual, late-life, partial reconciliation — is one of the most interesting and most consequential intellectual journeys in the history of AI.
The MIT AI Lab: Building the Cathedral
In 1958, Minsky joined the faculty at MIT. In 1959, he and John McCarthy co-founded the MIT Artificial Intelligence Project — the effort that grew into MIT’s Artificial Intelligence Laboratory, the institution that would become the most influential AI research centre in the world for the next three decades.
The founding of the AI Lab was an act of institutional creation as important as the intellectual ideas that the lab would subsequently produce. Before the lab existed, AI research was scattered — individual researchers at different institutions, working on related problems without the community and the critical mass that a dedicated institution could provide. The AI Lab provided that community and that critical mass.
Minsky was the lab’s intellectual heart in a way that McCarthy, who moved to Stanford in 1962, was not. McCarthy was a founder but an early departure. Minsky remained at MIT for the rest of his career — decades of extraordinary productivity, influence, and controversy. He shaped the lab’s culture, attracted its most talented students, set its research agenda, and embodied its characteristic combination of intellectual ambition and sometimes breathtaking overconfidence.
The culture Minsky created at the MIT AI Lab was distinctive and, for those who thrived in it, exhilarating. It was intensely meritocratic — what mattered was the quality of your ideas, not your credentials or your seniority. It was open to unconventional approaches — hackers who had not followed the standard academic path were as welcome as PhD students from elite programmes, if their work was interesting. It was demanding — Minsky expected the people around him to think hard, to engage seriously with difficult problems, to meet his own extraordinary standard of intellectual engagement.
It was also, by many accounts, chaotic, disorganised, and occasionally harsh. Minsky could be brutal in his assessments of work he found uninteresting or poorly done. He was not a careful administrator or a patient mentor. He was brilliant and he knew it, and the knowledge sometimes expressed itself in ways that were not kind to the people around him who were less brilliant.
The graduate students who thrived in this environment — who could hold their own in arguments with Minsky, who could produce work interesting enough to command his sustained attention — went on to shape the field. Among the people who were influenced by Minsky’s lab were Terry Winograd (SHRDLU), Gerald Sussman (Scheme programming language and debugging AI systems), Patrick Winston (visual scene analysis and learning from examples), Joel Moses (symbolic mathematics systems), and dozens of others who contributed foundationally to AI and computer science.
The lab was not just a place where research was done. It was a place where a generation of AI researchers was formed — where the field’s culture, its values, its characteristic ways of thinking about problems, were transmitted to the people who would eventually lead it.
The Perceptrons Book: The Most Consequential Error
In 1969, Minsky and his MIT colleague Seymour Papert published “Perceptrons: An Introduction to Computational Geometry.” The book was the most technically rigorous treatment of neural networks that had been written — a careful mathematical analysis of the computational properties of single-layer perceptron networks, the neural network architecture that Frank Rosenblatt had championed.
The book was also, in the judgment of many subsequent researchers, one of the most consequential intellectual errors in the history of AI.
The perceptron, as Rosenblatt had developed it, was a simple neural network: an input layer of units connected to a single output unit by adjustable weights, with a learning rule that strengthened the connections that led to correct outputs and weakened those that led to incorrect outputs. The perceptron could learn to classify inputs — to distinguish, say, images containing certain patterns from images that did not — by adjusting its weights based on experience.
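Rosenblatt’s learning rule is compact enough to state directly. The sketch below is the textbook formulation, not his hardware; the task (logical AND, which is linearly separable), the learning rate, and the epoch count are illustrative choices.

```python
# The perceptron learning rule on a linearly separable task (logical AND).
# Textbook formulation; data, learning rate, and epoch count are illustrative.
def step(z):
    return 1 if z >= 0 else 0

def train_perceptron(samples, lr=0.1, epochs=25):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            error = target - step(w1 * x1 + w2 * x2 + b)   # +1, 0, or -1
            # Strengthen connections after misses, weaken them after
            # false alarms; leave them alone when the output was right.
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return w1, w2, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(AND)
print(all(step(w1 * x1 + w2 * x2 + b) == t for (x1, x2), t in AND))  # True
```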
Rosenblatt had been enthusiastic about the perceptron’s prospects, and his enthusiasm had attracted funding and attention. The perceptron seemed to demonstrate that machines could learn — that neural network architectures could be trained to perform recognition tasks that had previously seemed to require human intelligence. Some of Rosenblatt’s claims had been extravagant, and the subsequent performance of perceptrons on real-world problems had not lived up to the most optimistic predictions.
Minsky and Papert chose to subject the perceptron to rigorous mathematical analysis. What they showed, with mathematical precision, was that single-layer perceptrons had fundamental limitations. The most famous limitation was the XOR problem: a single-layer perceptron cannot learn to compute the XOR function (which outputs 1 when exactly one of two inputs is 1, and 0 otherwise). The XOR function is not linearly separable — you cannot draw a straight line in the input space that separates the inputs that should produce output 1 from the inputs that should produce output 0. And a single-layer perceptron is exactly such a linear classifier: it can learn only linearly separable functions.
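The limitation is easy to verify directly. Rather than training and watching the rule fail to converge, the sketch below brute-forces a grid of candidate weights and thresholds and finds that none of them classifies all four XOR cases; the grid is an arbitrary illustration, since Minsky and Papert’s proof covers all real-valued weights.

```python
# Exhaustive check that no linear threshold unit computes XOR. The grid of
# candidate weights is arbitrary: the proof covers all real-valued weights;
# this merely illustrates the conclusion.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def classifies_xor(w1, w2, b):
    return all((w1 * x1 + w2 * x2 + b >= 0) == bool(t) for (x1, x2), t in XOR)

grid = [i / 4 for i in range(-20, 21)]   # candidate values from -5.0 to 5.0
print(any(classifies_xor(w1, w2, b)
          for w1 in grid for w2 in grid for b in grid))   # False, always
```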
This was mathematically correct and an important result. Single-layer perceptrons were indeed limited in the ways Minsky and Papert described. The limitation was real.
The problem was what Minsky and Papert said — or implied, or allowed readers to infer — about the implications of this result. The book included a discussion of multi-layer networks — networks with one or more hidden layers between the input and output — but concluded that while such networks might overcome the limitations of single-layer perceptrons, there was no efficient way to train them. The training problem for multi-layer networks, they suggested, was intractable.
This conclusion was wrong. Backpropagation — the algorithm that allows multi-layer neural networks to be trained efficiently — was not published in its modern form until 1986, but the mathematical ideas underlying it had been available earlier; Paul Werbos had described a version in his 1974 PhD thesis. The training of multi-layer networks was not intractable. It was merely unsolved at the time Minsky and Papert were writing.
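For contrast, here is a minimal two-layer network trained with backpropagation on the same XOR task that defeats the single-layer perceptron. The architecture, hyperparameters, and random seed are illustrative choices, not anything from the 1986 paper, and convergence from an arbitrary initialisation is typical rather than guaranteed.

```python
import numpy as np

# XOR, the function a single-layer perceptron cannot learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer of 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass through both layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradient of squared error through each sigmoid.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates for both layers.
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # close to [0, 1, 1, 0]
```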
The damage done by this wrong conclusion — or by the way it was read and interpreted — was enormous. “Perceptrons” was widely regarded as having demonstrated that neural networks were a dead end, that the approach was fundamentally limited, that the future of AI lay elsewhere. Research funding for neural networks dried up. Graduate students who might have worked on neural networks worked on something else. The neural network approach, which had been developing momentum since McCulloch-Pitts in 1943 and Rosenblatt’s perceptron in 1957, was effectively shut down for more than a decade.
The researchers who kept working on neural networks through the 1970s and early 1980s — Rumelhart, Hinton, and their colleagues, later joined by LeCun and Bengio — did so against the prevailing assessment of the field, with reduced funding and limited recognition. When Rumelhart, Hinton, and Williams published the backpropagation algorithm in 1986 and demonstrated that multi-layer networks could learn complex functions, they were overturning a conclusion that Minsky and Papert’s book had made the field’s received wisdom for fifteen years.
Minsky himself, in later years, expressed some regret about the impact of the Perceptrons book. He acknowledged that it had been read as a more definitive refutation of neural networks than he had intended, and that the research programme it had discouraged — the line of work that became deep learning — had eventually proved to be the most productive direction in AI. But he was also, at various points, defensive about the book’s conclusions and resistant to the view that it had caused significant harm.
The truth is probably that both things are right: the mathematical results in the book were correct, and the effect of the book on the field was harmful. The error was not in the mathematics but in the failure to adequately address the potential of multi-layer networks, and in allowing the book’s conclusions to be read as a definitive judgment on the whole neural network approach rather than a specific critique of a specific architecture.
The Predictions: How Wrong Was He?
Minsky’s predictions about AI timelines were, by any objective standard, spectacularly wrong. The “three to eight years” prediction to Life magazine in 1970 has already been noted. But this was not an isolated instance of overconfidence. It was part of a pattern that ran throughout his public statements about AI.
In 1967, in his book “Computation: Finite and Infinite Machines,” Minsky wrote: “Within a generation, I am convinced, few compartments of intellect will remain outside the machine’s realm.” A generation is roughly thirty years, which would put the confident prediction at approximately 1997. The statement was wrong by multiple decades for most intellectual tasks and remains partially wrong today.
In various lectures and interviews through the 1960s, Minsky made predictions about the near-term achievement of machine intelligence that, in retrospect, seem almost wilfully optimistic. He was not alone in this — as we have seen in the Dartmouth article and the articles on the first AI winter, overconfident predictions were characteristic of the early AI field. But Minsky was the most public figure in American AI for a generation, the person most often quoted, the most frequently interviewed, the face of the field to the general public. His overconfidence had more impact than most.
Why was he so wrong? Several explanations have been offered, and several are probably part of the answer.
The most sympathetic explanation is that the difficulty of the problem was genuinely not visible from where Minsky stood in the 1960s. The first AI programs had been impressive. The combinatorial explosion that would defeat rule-based AI had not yet made itself fully felt. The problems of perception, common sense, and real-world complexity that would prove so intractable had not yet been encountered in their full difficulty. From inside the early AI community, surrounded by brilliant people making rapid progress on hard problems, it was genuinely difficult to see how far there was still to go.
A less sympathetic explanation is that Minsky, like many charismatic and intellectually dominant figures, had developed a tendency to conflate his predictions with his desires — to confuse what he expected to happen with what he wanted to happen — and to underestimate the extent to which his optimism was a performance as well as a belief. His predictions about AI timelines served the purpose of attracting funding, exciting students, and maintaining the public visibility of the field. Whether he fully believed them, or believed them to the extent he implied, is less clear.
The most interesting explanation, and probably the most accurate, is that Minsky’s predictions reflected a genuine theory of intelligence — a theory that turned out to be wrong in important respects. Minsky believed that intelligence was a matter of the right representations and the right processes operating on those representations. He believed that if you got the representations right, intelligence would follow. He believed that the representations could be made explicit, could be programmed, could be understood by humans and captured in formal structures. This theory, broadly associated with the symbolic AI tradition, was wrong in its implications about what intelligence required and how difficult it was to achieve.
The representations that intelligence requires — particularly the vast, implicit, contextual knowledge that underlies common sense — could not be made explicit in the way Minsky’s theory assumed. The processes that intelligence requires — particularly the flexible, adaptive, learning-based processes of neural networks — were not the processes that Minsky’s approach championed. His predictions were optimistic because his theory was wrong, and his theory was wrong in ways that his extraordinary intelligence did not help him see.
The Society of Mind: The Most Original Idea
If the Perceptrons book was Minsky’s most consequential error, the Society of Mind was his most original contribution — the work that, more than anything else, reveals the quality of his mind and the depth of his engagement with the problems of intelligence.
The Society of Mind, published in 1986, was the culmination of years of thinking about what it would take to explain intelligence — not just to build AI, but to understand what intelligence was at a fundamental level. It was a theory of mind — a comprehensive account of how human intelligence worked — expressed in a form that was accessible to general readers, illustrated with Minsky’s own drawings, and argued with a combination of rigour and playfulness that was characteristic of his best work.
The central idea of the Society of Mind was that intelligence was not a single thing but an interaction of many things — that the mind was not a unified, coherent system but a society of many small, relatively simple agents, each capable of doing some limited thing, and that intelligence emerged from their interactions.
Each agent in the society of mind was, by itself, unintelligent — capable of performing only simple, specific operations. A “memory” agent might store and retrieve a specific kind of information. A “builder” agent might direct the construction of a spatial structure. A “recognizer” agent might identify a specific pattern in perceptual input. None of these agents, by itself, was intelligent in any interesting sense.
But when many such agents interacted — when their outputs became the inputs of other agents, when they competed and cooperated and activated and inhibited each other in complex patterns — the emergent behaviour of the whole system was intelligent. Intelligence was not in any single agent. It was in the interactions.
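Minsky never specified the theory as code, and no short program can do it justice. Still, the structural claim can be caricatured: in the sketch below, each agent is a trivial rule that reads and writes a shared blackboard, and the coordinated sequence of actions that emerges belongs to no single agent. Every name and mechanism here is invented for illustration, not drawn from the book.

```python
# A cartoon of the Society of Mind's structural claim: each agent is trivial
# and fires on a shared blackboard; whatever coordination appears lives in
# the interactions, not in any one agent. All names here are invented.

bb = {"percepts": {"block-on-table"}, "goals": {"build-tower"},
      "facts": set(), "actions": []}

def recognizer(bb):                     # notices a pattern, posts a fact
    if "block-on-table" in bb["percepts"]:
        bb["facts"].add("block-available")

def builder(bb):                        # expands a goal into subgoals
    if "build-tower" in bb["goals"] and "block-available" in bb["facts"]:
        bb["goals"] |= {"grasp-block", "stack-block"}

def grasper(bb):                        # executes one narrow subgoal
    if "grasp-block" in bb["goals"] and "close-hand" not in bb["actions"]:
        bb["actions"].append("close-hand")

def stacker(bb):                        # waits on grasper's output
    if ("stack-block" in bb["goals"] and "close-hand" in bb["actions"]
            and "place-on-tower" not in bb["actions"]):
        bb["actions"].append("place-on-tower")

# Fire the agents in rounds until the blackboard reaches a fixed point.
agents = [recognizer, builder, grasper, stacker]
snapshot = None
while snapshot != repr(bb):
    snapshot = repr(bb)
    for agent in agents:
        agent(bb)

print(bb["actions"])    # ['close-hand', 'place-on-tower']
```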
This idea — that intelligence is emergent, that it arises from the interaction of simpler parts rather than residing in any single component — was not entirely new when Minsky proposed it. Wiener had gestured toward something similar in cybernetics. McCulloch and Pitts’s neural networks embodied a similar intuition. But Minsky developed the idea with a richness and specificity that was genuinely new, and he extended it into an account of specific cognitive phenomena — memory, learning, imagination, reasoning, emotion — that was more comprehensive than anything that had been attempted before.
The Society of Mind was also, in a way that Minsky did not always make explicit, a response to the failure of symbolic AI — an acknowledgment that intelligence was harder and more distributed than the symbolic approach had assumed. If intelligence was a society of many interacting agents rather than a unified reasoning system, then looking for the single right representation or the single right inference mechanism was looking in the wrong place. The field needed to understand how simple agents could be organised to produce complex behaviour, not how to encode human knowledge in a formal structure.
This was, in some ways, closer to the neural network approach than to the symbolic AI approach that Minsky had long championed. Neural networks were precisely systems in which simple processing units interacted to produce complex behaviour. The Society of Mind was a high-level, conceptual version of the neural network idea — not specified mathematically, but pointing in the same direction.
Minsky was not willing to concede this connection fully. He remained sceptical of the specific neural network approaches that Hinton and LeCun were developing in the 1980s and 1990s, arguing that they lacked the right kinds of representations and processes to achieve general intelligence. But the Society of Mind showed that his thinking had evolved — that he had moved, however partially, toward a view of intelligence that was more consistent with the learning-based, distributed approaches that would eventually prove most productive.
The Society of Mind remains one of the most stimulating books about the theory of mind ever written. Its specific proposals have not been vindicated in detail — cognitive science and neuroscience have not confirmed that the mind works exactly as Minsky described. But its central insight — that intelligence is emergent, that it arises from the interaction of many simpler processes, that understanding the mind requires understanding how simple components organise to produce complex behaviour — has proved durable and productive.
Minsky and the Students: The Complicated Mentor
Minsky’s relationship with his students and younger colleagues was one of the most discussed and most contested aspects of his legacy. By most accounts, he was both the most inspiring and the most difficult mentor that the field produced.
He was inspiring because of the quality of his attention. When Minsky was interested in what you were doing, he gave you his full, extraordinarily capable mind. He would listen, ask sharp questions, make unexpected connections, point you toward aspects of the problem that you had not seen. His ability to find the interesting angle on any problem was remarkable, and the experience of having that ability directed at your work was, for many students, transformative.
He was inspiring also because of the breadth of his interests and the boldness of his thinking. He was not a man who stayed in his lane or confined his intellectual ambition to respectable, fundable, incrementally publishable research. He pursued the big questions — what is intelligence, what is consciousness, what is the relationship between mind and brain — with a directness and a willingness to speculate that could feel liberating to students who had been trained to be more cautious.
He was difficult because his attention was inconsistent. When Minsky became excited about a new idea — and he was frequently excited about new ideas — he could become completely absorbed in it, to the exclusion of the student or colleague he had been working with before. People who had spent months developing a research direction under Minsky’s enthusiastic guidance sometimes found that the enthusiasm had evaporated, that Minsky had moved on to something else, and that the student was left to continue or abandon the project without support.
He was difficult also because his criticism, when it came, could be brutal. He had a gift for identifying the weakest point of an argument and pressing on it mercilessly, and he was not particularly concerned about the emotional effect of this on the person whose argument he was dissecting. Students who could handle this — who found the ruthless criticism productive rather than demoralising — often produced some of their best work under his influence. Those who could not handle it sometimes had their confidence damaged in ways that took years to recover from.
The MIT AI Lab under Minsky had, in consequence, a Darwinian quality. It attracted brilliant people and selected for those who could survive in an environment of intense intellectual competition and occasional harsh criticism. The culture it produced was, in some respects, a microcosm of Minsky himself: brilliant, creative, sometimes cruel, ultimately productive in ways that the gentler cultures of other institutions were not.
The Later Years: Evolving and Unchanging
Minsky’s later career was marked by a gradual evolution of his views — an evolution that was often painful and always incomplete.
The publication of the Society of Mind in 1986 represented a genuine departure from his earlier work. The distributed, emergent conception of intelligence it proposed was more consistent with the neural network approach he had long criticised than with the symbolic AI tradition he had championed. But he did not fully acknowledge this consistency, and he continued to be critical of specific neural network approaches — criticisms that became increasingly difficult to sustain as the power of deep learning became apparent.
His 2006 book “The Emotion Machine” extended the Society of Mind framework to include emotions — proposing that emotions were not separate from cognition but were specific ways in which the society of mind organised its resources, specific modes of cognitive processing that arose in response to specific kinds of situations. This was an interesting and underappreciated contribution to the theory of emotion, but it did not have the impact of the Society of Mind.
In his final years, Minsky became increasingly interested in the question of what it would take to build a machine with human-level intelligence — not the narrow, domain-specific AI that was producing impressive results, but the genuine article: a machine that could think about anything, that could learn anything, that could understand the world in the broad, flexible, contextually sensitive way that humans did. He was disappointed with the direction the field was taking — the narrow, applications-focused, machine-learning-dominated AI of the 2000s and 2010s — and he said so publicly and frequently.
His criticism of contemporary AI focused on what he saw as its lack of ambition and its lack of genuine understanding. Machine learning systems, however powerful, were learning to perform specific tasks without any general understanding of what they were doing. They were, in his view, very sophisticated lookup tables — systems that had memorised patterns without developing the flexible, causal, representational understanding that genuine intelligence required. The field was making progress on specific metrics — accuracy on specific benchmarks, performance on specific tasks — without making progress on the central problem: building machines that understood the world.
This criticism was not entirely wrong. The deep learning systems that were dominating AI research in the 2010s were indeed impressive on specific tasks and genuinely limited in their generality and their causal understanding. Whether they represented a step toward genuine intelligence or a very impressive implementation of something categorically different is a debate that continues.
But Minsky’s criticism was also somewhat ironic, given his own history. He had criticised neural networks in 1969 for their limitations and had been proved wrong when deeper networks, trained with backpropagation, overcame those limitations. He was criticising them again in the 2010s, and it remains to be seen whether this second round of criticism will prove similarly premature.
Minsky died on January 24, 2016, at the age of eighty-eight. He had been admitted to a hospital in Boston following a brain haemorrhage and died two days later. He had been working until close to the end — still thinking, still arguing, still engaged with the problems that had defined his intellectual life.
The Assessment: What Minsky Got Right
Any honest assessment of Minsky must begin with what he got right, because the list is long and genuinely impressive.
He got right that intelligence was the right subject — that building machines that could think was a worthy and central goal for a science of mind, not a frivolous engineering exercise or an inappropriate ambition. At a time when many scientists thought the question was too vague or too ambitious to be a proper scientific topic, Minsky’s insistence that intelligence could be studied rigorously and built computationally was a genuine intellectual contribution.
He got right that intelligence was multiply realised — that it did not require biological implementation, that the relevant level of description was functional rather than physical. This was the right answer to those who argued that machines could not be intelligent because they were not made of the right stuff. Minsky, consistently and correctly, argued that intelligence was about what a system did, not what it was made of.
He got right that intelligence was complex — that understanding it required more than a simple algorithm or a clever architecture. The Society of Mind, for all its specific errors, was right that intelligence involved the interaction of many processes, that no single mechanism was sufficient, that the whole was more than the sum of its parts. This insight remains relevant as researchers try to understand why large language models can do some things brilliantly and fail at others that seem simpler.
He got right, partially and belatedly, that learning was important — that you could not hand-code all the knowledge that intelligence required and that systems needed to learn from experience. The Society of Mind was a step toward this recognition, even if Minsky never fully embraced the specific learning-based approaches that vindicated it.
He got right that the MIT AI Lab was worth building. The institution he co-founded has trained more important AI researchers than any other in the world, has produced more landmark results, has had more influence on the development of the field than any comparable institution. This is not a small thing. Building institutions is as important as doing research, and Minsky built one of the most important institutions in the history of science.
The Assessment: What Minsky Got Wrong
The list of what Minsky got wrong is also long, and the items on it are consequential.
He got wrong the timelines — spectacularly, persistently wrong. The predictions he made about when AI would achieve human-level performance were wrong by decades, and the overconfidence they expressed damaged the field’s credibility when the predictions were not met.
He got wrong the Perceptrons book — not in its mathematical results, which were correct, but in the conclusions drawn from them and in the effect those conclusions had on neural network research. The field lost a decade or more of productive work on multi-layer networks because of the way the book was read and absorbed. Minsky cannot be held entirely responsible for how readers interpreted his work. But he also did not adequately address the limitations of his analysis or prevent the damage from being done.
He got wrong, for most of his career, the relative promise of symbolic AI versus learning-based approaches. He was, for decades, more committed to the symbolic approach than the evidence warranted, and less open to the neural network approaches than the evidence warranted. The reversal that deep learning represented — the vindication of the approach he had done so much to discredit — was the most significant intellectual disappointment of his career.
He got wrong, perhaps most fundamentally, the nature of the difficulty. He believed, for too long, that the problem of intelligence was primarily a problem of representation — that if you got the knowledge structures right, the rest would follow. The evidence accumulated over decades that the problem was also a problem of learning, of perception, of embodiment, of the vast implicit knowledge that humans acquire through development and experience and that cannot be captured by any explicit representation. This was the hardest lesson, and Minsky never fully learned it.
The Paradox of the Brilliant Pessimist-Optimist
There is a paradox at the heart of Minsky’s legacy. He was simultaneously the field’s greatest optimist — endlessly confident that machine intelligence was achievable, endlessly enthusiastic about its prospects, endlessly bold in his predictions — and the field’s most effective pessimist about one of its most important approaches.
His optimism about AI in general helped build and sustain the field through periods when it would have been easy to give up. His pessimism about neural networks specifically helped shut down the approach that would eventually prove most productive. The optimism and the pessimism were not consistent with each other in any simple way — they coexisted in a single, extraordinary mind that saw some things with unusual clarity and was blind to others in ways that his brilliance did not help him see.
This is a pattern that appears in the careers of many great scientists. The qualities that make a person capable of important original contributions — strong intuitions, willingness to commit to a position against the evidence, intellectual confidence that sustains work through periods of difficulty — are also the qualities that can produce spectacular errors. The conviction that allows you to keep working on an approach that others have abandoned can be either the key to an eventual breakthrough or the source of a catastrophic misdirection. Minsky experienced both.
The lesson of Minsky’s career, if there is one, is not that extraordinary intelligence protects you from extraordinary error. It does not. The lesson is perhaps that intellectual diversity — a community of researchers with different approaches, different intuitions, different willingness to commit to specific positions — is more robust than the dominance of any single perspective, however brilliant. If Minsky’s view of neural networks had been one influential view among many rather than the consensus of the field, the cost of his error would have been lower. The field concentrated too much authority in too few minds, and paid the price when those minds were wrong.
The Human Being Behind the Legend
It would be a disservice to reduce Minsky to a catalogue of contributions and errors. He was, by all accounts, a person of remarkable warmth and playfulness alongside the intellectual intensity and the occasional harsh criticism.
He was a devoted family man — married to Gloria Rudisch, a physician, for more than sixty years, with three children who described him as an engaged and enthusiastic parent. He was a musician of real quality — his piano playing was a genuine artistic achievement, not a hobby. He was a lover of science fiction, a friend of Isaac Asimov and Arthur C. Clarke, a person who found the imaginative literature of the future genuinely inspiring rather than merely entertaining.
He was also, in his way, a deeply ethical thinker who took seriously the question of what it meant to build intelligent machines and what responsibility that placed on the builders. His concerns were different from Wiener’s — he was less focused on the social and economic disruption of automation and more focused on the philosophical question of what intelligent machines would mean for human self-understanding. But the seriousness with which he engaged with these questions was genuine.
He was, finally, irreplaceable. There has been no one quite like Marvin Minsky in the history of AI — no one who combined his breadth of vision, his intellectual courage, his willingness to engage with the hardest questions, his capacity to inspire and infuriate in roughly equal measure. The field is poorer for his absence and richer for his presence.
The brilliant optimist who got it wrong. The devastating critic who was wrong in his critique. The theorist of mind who proposed one of the most original accounts of intelligence ever developed. The institution-builder who created the environment in which a generation of AI researchers was formed.
All of these things, simultaneously. That was Marvin Minsky.
Further Reading
- “The Society of Mind” by Marvin Minsky (1986) — The most accessible and the most original of Minsky’s books. Even if the specific proposals are not all correct, the quality of thinking displayed on every page is remarkable. Essential reading.
- “Perceptrons” by Marvin Minsky and Seymour Papert (1969) — Read it to understand both the quality of the mathematics and the nature of the error. The introduction, in which Minsky and Papert explain their goals, is particularly revealing.
- “The Emotion Machine” by Marvin Minsky (2006) — His late-career extension of the Society of Mind framework. Less celebrated than its predecessor but contains important ideas about the relationship between emotion and cognition.
- “Hackers: Heroes of the Computer Revolution” by Steven Levy — Captures the culture of the MIT AI Lab that Minsky built, with vivid portraits of the people who worked there and the intellectual atmosphere he created.
- “Artificial Intelligence: A Modern Approach” by Stuart Russell and Peter Norvig — The standard AI textbook, which provides essential context for evaluating Minsky’s contributions against the broader development of the field.
Next in the Profiles series: P8 — Allen Newell & Herbert Simon: The Dynamic Duo of Early AI — The Logic Theorist, the General Problem Solver, the cognitive revolution, and the Nobel Prize. The most consequential scientific partnership in the history of AI — two men who were convinced they had found the key to human intelligence, built programs that demonstrated it, and spent the rest of their careers working out what they had actually discovered.
Minds & Machines: The Story of AI is published weekly. If Minsky’s story — the brilliance and the error, the optimism and the misdirection — raises questions about how we should assess the great figures in any field, share this with someone who would find those questions worth exploring.