Berlin, 1936. A thirteen-year-old boy sits in a classroom and tries to understand what is happening to his world. The teachers who once ignored him now look at him differently. The classmates who played with him last year have stopped speaking to him. Signs have appeared on shops that his family used to enter without thinking. The word “Jude” — Jew — has taken on a weight it did not previously carry.
His name is Joseph Weizenbaum. He is old enough to understand that something has changed, young enough not to understand why. He will spend the rest of his life — more than seven decades, on two continents, in a field that will transform the world — trying to understand what happens when human beings assign categories to other human beings and use those categories to determine who is worthy of dignity and who is not.
He will build a famous program. He will watch it be used as a substitute for human care. He will write a famous book warning about the limits of what machines should be trusted to do. And none of it — not the program, not the book, not the decades of lectures and interviews and arguments — will feel like enough.
This is the story of a man who saw something important and could not make the world see it.
Berlin, Detroit, and the Making of a Mind
Joseph Weizenbaum was born on January 8, 1923, in Berlin, Germany, into a Jewish family that had been part of the city’s intellectual and professional life for generations. His father was a businessman, his mother a woman of some culture and aspiration, and the household was secular, educated, and entirely unprepared for what was coming.
The Weizenbaums left Germany in 1936, when Joseph was thirteen — late enough that he had absorbed the experience of living in a society that was systematically recategorising him as less than human, early enough that he escaped with his life and his family intact. They went first to England and then to the United States, settling eventually in Detroit, Michigan, where Joseph completed his secondary education and enrolled at Wayne State University.
Detroit in the late 1930s and early 1940s was a city shaped by the automobile industry and by the particular social geography that industry had produced — a rigid segregation by race that Weizenbaum observed with the eyes of someone who had recently experienced a different but related form of categorisation. The Jewish boy from Berlin had not escaped the phenomenon that had displaced him; he had encountered a variant of it in his new home.
He completed his undergraduate studies at Wayne State and served in the US Army during the Second World War — an experience that gave him a close encounter with the machinery of organised violence, with the capacity of institutions to direct human beings toward the destruction of other human beings, with the thin line between civilised society and its catastrophic negations.
After the war, he returned to Wayne State for a master’s degree in mathematics, then took a position at General Electric’s computer laboratory in California, working on some of the early large-scale computing projects that General Electric was undertaking. He moved to MIT in 1963, initially as a research associate and then as a tenured professor — a position he would hold for the rest of his academic career.
At MIT, he was a person of wide intellectual interests — philosophy, literature, music, the social sciences — in an institution that prized technical depth above breadth. He was not fully comfortable in the MIT culture of intense technical competition, of the belief that the problems of the world were amenable to engineering solutions, of the implicit faith that more computing power and better algorithms would make things better. He was more questioning than that, more historically minded, more aware of the ways that technology could be used for purposes that were not benign.
It was in this slightly uncomfortable relationship with his own institution that he built ELIZA.
The Building of ELIZA: An Act of Demonstration
ELIZA was not Weizenbaum’s primary research interest. It was a demonstration — a vehicle for making a specific intellectual point about the relationship between the form and the substance of conversation.
The point Weizenbaum wanted to make was this: much of what passed for meaningful conversation was, at the surface level, quite shallow. The back-and-forth of an ordinary exchange — the questions and responses, the expressions of interest, the reflective listening — could be produced by a machine following simple rules, with no understanding whatsoever of what was being said. The appearance of meaningful conversation could be created by mechanism.
This point had been made, in various ways, by philosophers of language and by communication theorists. Weizenbaum wanted to make it empirically — by building a program that did exactly this and demonstrating that people responded to it as if it were meaningful.
The DOCTOR script — the Rogerian therapist persona that became ELIZA’s most famous implementation — was chosen because Rogerian therapy happened to be easy to simulate: its conversational style of reflective listening and non-directive questioning was structurally amenable to the keyword-matching and response-templating approach that ELIZA used. Weizenbaum was not making a comment about Rogerian therapy specifically. He was exploiting a conversational style that could be imitated without any understanding of its content.
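To make the mechanism concrete, the sketch below shows, in Python, the kind of keyword matching and response templating that the DOCTOR script relied on. It is an illustration only, not Weizenbaum’s code: ELIZA was written in MAD-SLIP on MIT’s CTSS system and driven by a more elaborate script language, and the keywords, templates, and pronoun swaps here are invented for the example.

```python
import random
import re

# Invented, illustrative rules: each keyword maps to response templates.
# "{0}" is filled with whatever followed the keyword, after a simple
# first-/second-person pronoun swap.
RULES = {
    "mother":  ["Tell me more about your family.",
                "How do you feel about your mother?"],
    "i am":    ["Why do you say you are {0}?",
                "How long have you been {0}?"],
    "i feel":  ["Why do you feel {0}?",
                "Do you often feel {0}?"],
    "because": ["Is that the real reason?",
                "What other reasons come to mind?"],
}

# Fallbacks when no keyword matches: the non-directive Rogerian nudge.
DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

# Minimal pronoun reflection so echoed fragments read naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}


def reflect(fragment: str) -> str:
    """Swap first- and second-person words in an echoed fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())


def respond(user_input: str) -> str:
    """Choose a reply by keyword match alone; no understanding involved."""
    text = user_input.lower().strip(" .!?")
    for keyword, templates in RULES.items():
        match = re.search(rf"\b{keyword}\b(.*)", text)
        if match:
            tail = reflect(match.group(1).strip())
            return random.choice(templates).format(tail)
    return random.choice(DEFAULTS)


if __name__ == "__main__":
    print(respond("I am very unhappy about my job."))
    print(respond("I feel that nobody listens to me."))
```

Even at this toy scale, the responses can feel attentive, which is precisely the gap between appearance and understanding that the demonstration was meant to expose.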
ELIZA was built over roughly two years, between 1964 and 1966, and first described in a paper published in Communications of the ACM in 1966. The paper was titled “ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine,” and it was, from the beginning, honest about what ELIZA was: a pattern-matching program with no understanding of what it was saying, designed to demonstrate how easily the appearance of understanding could be produced.
The paper described the program’s operation clearly and documented, with appropriate alarm, how users had responded to it. Weizenbaum was already troubled when he wrote the paper. The responses he had observed — the depth of emotional engagement, the requests for privacy, the feelings of being understood — were not what he had expected, and they told him something disturbing about the gap between what ELIZA was doing and what users believed it was doing.
The paper was careful to distinguish between what the program was and what it appeared to be. Weizenbaum was not claiming that ELIZA understood language. He was documenting that people behaved as if it did, and he was concerned about the implications.
The Responses: What He Witnessed
The specific responses to ELIZA that most disturbed Weizenbaum have been described in the articles on ELIZA elsewhere in this series. But understanding Weizenbaum the person requires dwelling on what those responses meant to him — how they connected to his broader concerns and why they produced not satisfaction at a successful demonstration but something closer to dread.
The psychiatrists who suggested ELIZA could replace human therapists were the response that shook him most deeply. These were not naive people. They were trained clinicians who understood, professionally and philosophically, what therapy was and why it mattered. They understood that therapy was not just a matter of producing appropriate conversational outputs — it was a relationship, one in which a human being offered their genuine attention, their genuine care, their genuine understanding to another human being who was suffering.
And yet these trained clinicians were suggesting that the simulation of this relationship — the production of appropriate conversational outputs by a program with no understanding, no care, no genuine attention — could substitute for the real thing. They were making this suggestion not because they had been fooled about what ELIZA was (they knew it was a program) but because they had assessed the practical outcomes and concluded that the simulation might be good enough.
For Weizenbaum, this was not a technical assessment to be evaluated on its merits. It was a moral catastrophe — a readiness to abandon the human dimension of care in favour of an efficient simulation. The patients who would receive automated therapy instead of human therapy would not be receiving care. They would be receiving the appearance of care. And the difference between care and its appearance was, for Weizenbaum, everything.
The connection to his childhood was not subtle. He had lived in a society that had concluded, for reasons that seemed efficient and practical to those who held power, that certain categories of people did not deserve genuine human care and treatment — that they could be processed, categorised, administered efficiently, without the moral weight that genuine human presence required. The efficiency of the administration of inhumanity had been one of the distinctive horrors of what he had witnessed and escaped.
He was not saying that psychiatric chatbots were the Holocaust. He was saying that the willingness to substitute the appearance of human care for its substance, justified by efficiency and scale, was a pattern that he recognised. Once you accepted that the appearance was good enough for some people — the lonely, the anxious, the distressed, the people with nowhere else to turn — you had taken a step in a direction he knew from experience was dangerous.
The Secretary Incident: The Moment Everything Crystallised
Weizenbaum described the incident with his secretary many times over the following decades — it became, for him, the crystallising moment at which the implications of what he had built became undeniable.
His secretary was not a naive person. She was an educated, intelligent woman who had worked closely with him during the development of ELIZA and understood, as well as anyone outside the immediate research group, exactly what the program was and how it worked. She knew it was pattern matching. She knew it had no understanding of what it was saying. She knew that the responses it generated were selected from templates based on keywords in the input.
And she asked him to leave the room so she could speak to it privately.
Weizenbaum reflected on this incident for decades, and his analysis of it shifted over time. Initially, he interpreted it primarily as evidence of a psychological phenomenon — the ELIZA effect, the human tendency to project understanding onto systems that produce appropriate-seeming outputs. This was an interesting finding about human cognition, and it was important for understanding how people would respond to AI systems.
But as he thought about it more, he came to see something else in the incident. His secretary’s request for privacy was not irrational, and it was not simply a failure of understanding. She knew ELIZA was a program. She knew the conversation was being logged. She asked for privacy anyway.
What she was expressing, Weizenbaum came to believe, was something real about the nature of the conversational experience — something that did not simply disappear because you knew the other party was a program. The experience of being in conversation, of having a responsive interlocutor, of being in a space where you could say things without judgment — this experience had qualities that she valued and wanted to protect, even when she knew the interlocutor was not real.
This was, for Weizenbaum, more disturbing than simple confusion. If the experience of privacy with a program was genuinely valued — even when the person knew it was a program — then the implications for how people would relate to increasingly sophisticated AI systems were profound. People would form genuine emotional attachments, would seek genuine emotional support, and would feel that genuine emotional needs were being met by systems offering only simulations of the things they needed.
And this was bad. Not because the feelings were not real — the feelings were real — but because the response to those feelings was a simulation rather than genuine care, a performance rather than authentic presence, a process rather than a relationship. The people who would turn to AI systems for emotional support were, in many cases, the people who most needed genuine human connection. Offering them a simulation would address the symptom — the immediate discomfort of loneliness or distress — while leaving the underlying need unaddressed.
Computer Power and Human Reason: The Book That Changed Everything
In 1976, ten years after the ELIZA paper, Weizenbaum published “Computer Power and Human Reason: From Judgment to Calculation.” The book was the most comprehensive and the most passionate statement of his position — a full accounting of what he had learned from ELIZA and from a decade of reflection, and a systematic argument for why it mattered.
The book had several distinct threads that were woven together into a sustained argument.
The first thread was a description of ELIZA and its reception — the most complete and most honest account of the program’s development and of the responses it had elicited that Weizenbaum had yet published. He described the psychiatrists’ suggestions, the secretary’s request for privacy, the users who had become emotionally engaged with the program. He described his own disturbance at these responses and the years of reflection that had followed.
The second thread was a philosophical argument about the distinction between calculation and judgment. Calculation, Weizenbaum argued, was the application of explicit rules to well-defined inputs to produce determinate outputs. It was something that computers could do, and do with a speed and reliability that humans could not match. Judgment was different: it was the application of wisdom, experience, values, and genuine understanding to complex, ambiguous, contextually embedded situations. It was something that required a mind — a genuine mind, with genuine inner life, genuine experience of the world, genuine moral responsibility.
The distinction was not a technical one. It was not about the sophistication of the processing. It was about the kind of thing being done. A chess program was doing calculation — it was applying explicit rules (the rules of chess) to a well-defined input (the current position) to produce a determinate output (a move). A physician attending to a dying patient was doing judgment — applying a lifetime of experience, of value-laden understanding of what it meant to live and to die, to a situation that could not be reduced to explicit rules.
The danger of powerful computers, Weizenbaum argued, was not that they would do calculation badly. They were already doing calculation extraordinarily well. The danger was that the power and speed of calculation, and the prestige that computation was acquiring in modern culture, would lead people to substitute calculation for judgment in domains where judgment was essential.
This substitution was not theoretical. It was already beginning. Courts were starting to rely on actuarial risk scores to inform decisions about bail and parole. Medical diagnosis was being partially automated. Credit decisions were being made by algorithms. In each of these cases, the judgment that was required — judgment that involved genuine moral responsibility, genuine understanding of the specific human being in front of you, genuine engagement with the complexity and particularity of the situation — was being partially replaced by calculation.
The third thread of the book was more personal — a reflection on what it meant to be a scientist in a culture that valued technology above ethics, that celebrated the power of computation without adequate reflection on its limits and its dangers. Weizenbaum was examining his own complicity in a culture that he found deeply problematic, his own role in building systems that he had come to believe were being misused.
The book was received very differently in different quarters. Among a general educated audience, it was widely praised — it spoke to anxieties about computers and technology that many people felt but few had articulated with such clarity and force. Among AI researchers, it was received with anger and dismissal — Weizenbaum was seen as having betrayed the field, as having become a Luddite, as having made claims about the impossibility of AI that revealed a fundamental misunderstanding of what AI researchers were doing.
Marvin Minsky was particularly critical. He argued that Weizenbaum’s distinction between calculation and judgment was philosophically confused — that judgment, properly understood, was just very sophisticated calculation, and that the claim that there was something that computers could fundamentally never do was a mystical assertion unworthy of a scientist.
This criticism touched on a genuine philosophical dispute that cannot be easily resolved. Whether human judgment is fundamentally different from sophisticated calculation — whether there is something that minds can do that no computational process can replicate — is precisely the question that the philosophy of mind and AI have been debating for decades. Weizenbaum’s position was not obviously wrong. But it was also not obviously right.
The German Childhood’s Long Shadow
Throughout his career, Weizenbaum returned to his German childhood as a frame for thinking about the dangers of technology and the importance of moral responsibility. The connection was not opportunistic or rhetorical. It was genuine and it ran deep.
He had seen, at first hand and at an impressionable age, what happened when a society decided that some people were not fully human — when the moral consideration that human beings owed each other was denied to specific categories of people on the basis of their categorisation. The mechanism by which this happened was not primarily violence or cruelty, though those followed. It was bureaucratic, administrative, efficient. It involved the processing of people through systems that treated them as objects of administration rather than as subjects of moral concern.
The computers of the 1960s and 1970s were, in Weizenbaum’s analysis, creating new possibilities for this kind of efficient, bureaucratic denial of human dignity. Not intentionally, not through malice, but through the logic of systems that processed inputs and produced outputs without genuine engagement with the human reality of the people being processed.
The credit scoring algorithm that denied a loan without explanation. The risk assessment tool that recommended against bail for a person who was actually safe. The automated customer service system that could not escalate to a human being when the situation required human judgment. These were not the Holocaust. But they were expressions of a tendency — the tendency to substitute efficient processing for genuine human engagement — that Weizenbaum recognised from his childhood.
He was careful, usually, not to push the analogy too far. He knew the difference between the denial of credit and the denial of humanity. He was not claiming that AI algorithms were comparable in their effects to Nazi policies. But he was claiming that the willingness to substitute efficient processing for genuine human attention — to treat the administrative convenience of the system as more important than the particular human reality of the person being processed — was a tendency that had no natural limits if not actively resisted.
This was why he insisted that certain decisions should always be made by human beings. Not because human beings were always better at making those decisions — he knew that human beings were often biased, often inconsistent, often mistaken. But because certain decisions — decisions about people’s lives, their liberty, their access to care — carried a moral weight that required a human subject to bear it. When a judge sentenced a person to prison, someone was morally responsible for that sentence. When an algorithm produced the recommendation, the responsibility was diffused to the point of disappearance.
Diffused responsibility was, for Weizenbaum, one of the most dangerous features of computer-mediated decision-making. The Nuremberg trials had established that “I was following orders” did not absolve individual moral responsibility; pointing to the system was no defence. The question of who was responsible for algorithmic decisions was the contemporary version of that question, and Weizenbaum believed the answer had to be: someone, some specific human being, must be responsible, and that responsibility must be genuine and enforceable.
The Response of the AI Community: Isolation and Dismissal
The reception of “Computer Power and Human Reason” within the AI research community was, by Weizenbaum’s own account, deeply painful. He had expected criticism — he was making claims that challenged the foundational assumptions of the field — but the intensity and the personal character of the criticism surprised him.
Minsky, as noted, was dismissive and contemptuous. Other prominent AI researchers characterised the book as Luddite, as revealing a fundamental misunderstanding of what computers were and what they could do, as the work of a man who had lost his scientific nerve.
The specific characterisation of Weizenbaum as a Luddite — a technophobe who wanted to stop progress because he was afraid of it — was particularly unfair and particularly stinging. Weizenbaum was not against computers. He used them, he taught computing, he had spent his career building software. His argument was not that computers were bad but that certain uses of computers were dangerous, and that the field had a responsibility to think carefully about those uses rather than celebrating every new capability without regard for how it would be deployed.
The Luddite characterisation also revealed something about the AI community’s self-understanding: it was a community that defined itself in terms of capability — of what could be built, what could be computed, what could be automated — and that found questions about whether specific capabilities should be developed or deployed somewhat foreign to its culture. Weizenbaum was asking questions that the community’s culture was not well-equipped to engage with.
His isolation was not total. There were AI researchers who found his arguments compelling, who thought that the field needed to develop the ethical frameworks that Weizenbaum was calling for, who believed that computer science had a responsibility to think carefully about the social implications of its work. These researchers were a minority in the 1970s and 1980s, but they existed, and Weizenbaum was a significant influence on them.
The field of computer ethics — which began to develop seriously in the 1980s — owed a significant debt to Weizenbaum’s work. Researchers like Terry Winograd, Brian Cantwell Smith, and later scholars in value-sensitive design, algorithmic accountability, and AI ethics drew on the intellectual tradition that Weizenbaum had helped establish. The field eventually came to recognise that Weizenbaum had been asking the right questions, even if it had not initially been willing to engage with them seriously.
The Teaching Life: Weizenbaum at MIT
For all the controversy surrounding his public positions, Weizenbaum’s life at MIT was primarily the life of a teacher — a person who cared deeply about his students, who invested in their intellectual development, who tried to pass on not just technical knowledge but the ethical seriousness that he believed was inseparable from genuine intellectual achievement.
His courses at MIT were, by student accounts, unusual in the computing department. He taught technical content — programming, algorithms, computer science — but he also brought to his courses the historical, philosophical, and ethical dimensions that most technical courses in the field ignored. He asked his students not just to solve problems but to think about what it meant to solve those problems, about the contexts in which the solutions would be deployed, about the human beings whose lives would be affected.
He was particularly interested in getting his students to read beyond their technical education. He assigned literature and philosophy alongside programming assignments. He brought in speakers from philosophy, sociology, and the humanities. He tried to build a picture of computing that was embedded in human culture and human history rather than standing apart from them.
Students who responded to this approach found his courses transformative. The combination of technical rigour and humanistic depth was rare in a computing education, and the students who absorbed it came away with a more complete picture of what they were doing and why it mattered than most of their technically competent but humanistically impoverished contemporaries.
Students who did not respond to the approach — who wanted to learn to program and found the philosophical excursions irritating or irrelevant — sometimes found his courses frustrating. Weizenbaum was not particularly patient with students who did not share his sense of the stakes, and his assessment of students who seemed to him to be missing the point could be harsh.
He was also, by several accounts, a person of considerable personal warmth with the students he found intellectually engaged. He was generous with his time, genuinely interested in their intellectual development, willing to engage seriously with their questions and their doubts. He was, in the tradition of the best teachers, someone who taught by example — who modelled, in his own intellectual life, the kind of engaged, questioning, ethically serious approach to scholarship that he wanted his students to develop.
The Later Career: Growing Urgency, Growing Frustration
As the decades passed and computing became more powerful, more pervasive, and more consequential, Weizenbaum’s concerns did not diminish. They intensified. And his frustration at the failure of the field to engage seriously with those concerns intensified with them.
The growth of the internet in the 1990s and 2000s amplified both the possibilities and the dangers that he had been writing about since the 1970s. The network effects of digital communication created new possibilities for human connection, but they also created new possibilities for surveillance, manipulation, and the reduction of human beings to data points in algorithmic systems. The dangers he had identified in the context of standalone computer programs were now embedded in systems of global scale, accessible to billions of people, operated by corporations whose primary accountability was to their shareholders rather than to the human beings their systems affected.
He was particularly concerned about the internet’s effect on knowledge and on the relationship between knowledge and wisdom. The internet made information vastly more accessible, but Weizenbaum worried that the flood of information was undermining the capacity for the kind of slow, reflective, deeply engaged thinking that he associated with genuine understanding. The culture of quick search and rapid consumption that the internet was creating was not, in his view, the culture of wisdom. It was the culture of data — a culture that mistook the quantity of information for the quality of understanding.
This concern was connected to his broader critique of computing: the danger that efficiency and scale would displace depth and care. Quick access to information was efficient. Slow engagement with difficult texts over long periods was not efficient. But Weizenbaum believed that the slow, inefficient kind of engagement was where genuine understanding came from, and that the culture of the internet was making it harder to sustain.
He moved back to Germany in the early 2000s, settling in Berlin — the city he had left as a boy of thirteen. The move was partly personal and partly symbolic. Germany had undertaken a serious reckoning with its own history, a sustained attempt to understand what had happened and why and what it meant for how one should live. This reckoning was imperfect, as all such reckonings are, but it was genuine in a way that Weizenbaum found more honest than much of what he saw in American culture.
He taught at the Technical University of Berlin and at the Free University, continuing to write and lecture and argue about computers and their dangers until shortly before his death. He gave interviews in which he was increasingly direct about his despair at the direction of the technology and the culture. He was not a gentle sage offering measured wisdom. He was an angry old man who believed that the things he had been warning about for forty years were coming true, and that the world had not listened.
The Key Arguments: What Weizenbaum Actually Said
In the decades of interviews, lectures, and essays that followed “Computer Power and Human Reason,” Weizenbaum developed and refined his arguments in ways that deserve careful attention. His position was more nuanced than either his admirers or his critics sometimes represented it.
He was not saying that computers should not be used for any important decisions. He was saying that certain decisions required genuine human judgment — a specific kind of engagement with the particularity of a specific human situation — that no algorithm could provide. The question was which decisions were of this kind, and that required careful case-by-case analysis rather than a blanket rule.
He was not saying that efficiency was unimportant. He was saying that efficiency was not the only value that mattered, and that it should not be treated as if it were. When efficiency was purchased at the cost of genuine human engagement — when administrative convenience was prioritised over the moral reality of the person being administered — something important was lost. The question was how to manage the trade-off, not whether efficiency could ever be valued.
He was not saying that technology was inherently bad. He was saying that technology was not inherently good, and that the question of whether a specific technological application was good or bad required ethical analysis, not just technical analysis. The power of a computer program to do something was not, by itself, sufficient reason to build it or to deploy it. The question of whether it should be built and deployed was a human question, not a technical one.
He was not saying that AI research should stop. He was saying that AI research should be accompanied by serious ethical reflection on what the systems being built would be used for and what the effects of those uses would be. This reflection was the responsibility of the researchers who built the systems — not an optional extra, not someone else’s job, but an integral part of what it meant to do science responsibly.
These are not extreme positions. They are the positions of a thoughtful person who had thought carefully about the relationship between technology and human values and had concluded that the field needed to take that relationship more seriously than it was doing.
The frustration is not that his positions were unreasonable. The frustration is that they were so largely ignored by the people they were addressed to.
ELIZA in the Age of Large Language Models
Weizenbaum died on March 5, 2008, at the age of eighty-five, in Berlin. He did not live to see the large language models that have transformed AI and that make his concerns newly urgent in ways he could not have fully anticipated.
But the problems he identified are precisely the problems that large language models make acute. ELIZA, as we have seen, produced the illusion of understanding through simple pattern matching. Large language models produce something much more sophisticated — responses that are, in many contexts, genuinely indistinguishable from the responses of a thoughtful, knowledgeable human being. The ELIZA effect, which Weizenbaum found disturbing when produced by a simple program, is now produced at scale, across billions of interactions, by systems whose sophistication makes the effect much harder to resist.
The AI companions that large language models enable — systems specifically designed to provide emotional support, to be present for lonely and distressed people, to offer the experience of being heard and understood — are exactly the application that Weizenbaum had been warning about since his secretary asked him to leave the room. The difference between what ELIZA offered and what modern AI companions offer is a difference in degree, not a difference in kind. Both substitute the simulation of human care for its substance.
Whether this substitution is harmful remains a genuinely contested empirical question. There is evidence that AI companions reduce loneliness and improve wellbeing in the short term for some users. There is also evidence that for some users, reliance on AI companions may reduce motivation to seek genuine human connection. The long-term effects are unknown.
Weizenbaum would have been deeply suspicious of the optimistic interpretation of these results. He would have pointed out that reducing loneliness in the short term was not the same as addressing the conditions that produced loneliness. He would have pointed out that the convenience of AI companionship — available at all times, never tired, never demanding, never bringing its own needs to the relationship — might make it a preferable substitute for human relationships that were necessarily reciprocal, necessarily demanding, necessarily imperfect.
He would also have been concerned about the use of large language models in education, in healthcare, in legal services, in the decisions that shape people’s lives. Not because these systems could not be useful — they clearly can be — but because the question of how they should be used, in what contexts and with what safeguards and with what acknowledgment of their limitations, requires exactly the kind of ethical judgment that Weizenbaum spent his career arguing for and that the field is still, in many cases, not doing well enough.
The Man Behind the Argument: Contradictions and Completeness
Any honest portrait of Weizenbaum must acknowledge the ways in which he was a complicated and sometimes contradictory figure.
He was a person who built a sophisticated computer program and then spent the rest of his life warning about the dangers of sophisticated computer programs. He was a professor at one of the world’s leading technology institutions and one of that institution’s most prominent technology critics. He was a scientist who argued that certain questions were not amenable to scientific resolution — that judgment required something beyond calculation.
These contradictions were not hypocrisies. They were expressions of a genuine complexity — the complexity of a person who understood technology well enough to know what it could and could not do, who was embedded in the community that built that technology, and who found himself increasingly in conflict with the values that community embodied.
He was also a person of his time in ways that limited his perspective. His critique of computing was shaped by his experience of computing in the 1960s and 1970s, and some of his specific predictions about what computers could and could not do were wrong in ways that reflected the limits of what was imaginable at the time. The large language models of the 2020s can do things that he confidently predicted computers would never do. This does not invalidate his ethical concerns — if anything, it makes them more urgent — but it complicates his specific philosophical arguments.
And he was a person of deep passions who was sometimes impatient with complexity and nuance. His rhetoric could be excessive — the comparison between AI and the Holocaust, however carefully hedged, risked a failure of proportion that undermined rather than advanced his arguments. The best of his writing was careful, precise, and morally serious. Not all of his writing was at its best.
What he was, at his core, was a person who cared — who cared about the people who would be affected by the technology he was watching develop, who cared about the moral dimension of scientific and engineering work, who cared enough to speak at cost to his professional relationships and his standing in a community he was part of.
In a field that has sometimes been criticised for prioritising capability over consequence, for celebrating what can be built without adequate attention to whether it should be built, Weizenbaum’s caring was a genuine contribution — not just to the debate, but to the moral culture of a field that needed, and needs, more of it.
The Legacy: What He Built and What He Left
Joseph Weizenbaum left two legacies that are in tension with each other and together constitute something more complex and more interesting than either alone.
The first legacy is ELIZA — the program that demonstrated the ELIZA effect and that established, through its unexpected reception, the question that has defined AI ethics ever since: what is the appropriate relationship between humans and systems that can simulate human understanding?
ELIZA is, in a specific sense, the most consequential AI program ever built — not because it was technically sophisticated (it was not) but because it revealed something fundamental about human psychology and human vulnerability that no subsequent AI development has rendered irrelevant. Every debate about AI companions, every concern about over-reliance on AI, every question about the appropriate role of AI in emotionally sensitive contexts is, at its root, the ELIZA question.
The second legacy is the body of ethical thinking that Weizenbaum produced in response to ELIZA’s reception — the distinction between calculation and judgment, the insistence on human responsibility for consequential decisions, the warning about the substitution of efficient simulation for genuine care.
This thinking was ahead of its time. The AI ethics field that began to develop seriously in the late 1990s and 2000s — with its attention to algorithmic bias, to the social consequences of automated decision-making, to the question of whose values are embedded in AI systems and whose are not — is engaged with the questions Weizenbaum raised, using more sophisticated analytical tools but addressing the same fundamental concerns.
He did not win the debate he entered in 1966. His colleagues dismissed him, his warnings were largely unheeded, the applications he worried about were developed and deployed. But the debate continues, and his voice is part of it — not just as historical background but as a living intellectual presence, a person whose arguments still need to be engaged with rather than filed away.
The Man Who Watched
There is an image that stays with you from reading about Weizenbaum: the image of a man watching.
He watched his childhood world remake itself into a nightmare. He watched a program he built for demonstration become a vehicle for emotional dependence. He watched his colleagues celebrate capabilities without asking about consequences. He watched the field he was part of develop in directions he found increasingly alarming. He watched the internet amplify every danger he had identified and create new ones he had not imagined.
He watched and he wrote and he argued and he taught and he tried to make people see what he was seeing.
He was not entirely successful. He was not, by the conventional measures of academic success, the most important figure in the history of AI. His technical contributions were modest — ELIZA was not a great technical achievement. His philosophical arguments were important but not fully worked out. His predictions about what computers could and could not do were sometimes wrong.
But he saw something true. He saw that the relationship between human beings and the intelligent systems they were building was not primarily a technical relationship — it was a moral one, with implications for human dignity and human care that required ethical analysis rather than engineering optimism. He saw this more clearly, earlier, and with more personal urgency than almost anyone else in the field.
And he would not be quiet about it.
In a field full of people who were enthusiastic about what they were building, he was the person who would not stop asking whether they should build it. That is not nothing. In the long run, it may be one of the most important contributions a person can make.
Further Reading
- “Computer Power and Human Reason: From Judgment to Calculation” by Joseph Weizenbaum (1976) — The essential text. Every page rewards careful reading. The distinction between calculation and judgment remains one of the most important frameworks for thinking about the appropriate uses of AI.
- “ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine” by Weizenbaum (1966) — The original paper, available online. Honest, careful, already troubled.
- “Turning Point: A Documentary About Computer Pioneer Joseph Weizenbaum” directed by Peter Haas and Silvia Holzinger (2007) — A documentary made near the end of Weizenbaum’s life, in which he speaks directly and honestly about his concerns and his regrets. Available online.
- “The Existential Pleasures of Engineering” by Samuel Florman (1976) — A contrasting view, written in the same year as “Computer Power and Human Reason,” arguing for the positive dimensions of engineering and technology. Reading the two together gives the fullest picture of the debate Weizenbaum was entering.
- “Weapons of Math Destruction” by Cathy O’Neil (2016) — A contemporary account of algorithmic decision-making and its harms, directly in the tradition of the concerns that Weizenbaum identified. Shows how the questions he raised have developed in subsequent decades.
Next in the Profiles series: P10 — Frank Rosenblatt: The Forgotten Father of Neural Networks — The Perceptron, the media frenzy, Minsky’s crushing dismissal, and a legacy that took fifty years to be vindicated. The story of the man whose work was buried and then resurrected — the most underrated figure in the history of AI, whose ideas now run the world.
Minds & Machines: The Story of AI is published weekly. If Weizenbaum’s story — the builder who became the critic, the warning that was not heeded — raises questions about responsibility in science and technology, share it with someone who needs to hear those questions.