There is a thought experiment that philosophers sometimes use to test intuitions about personal identity. Suppose you could take the mind of one person from history — their knowledge, their methods, their characteristic way of approaching problems — and drop it into the present. Which mind, encountering the world as it is today, would be most at home? Which mind would find the most that was familiar, most that connected directly to what they had spent their life working on?

The answer, by many reckonings, is Alan Turing.

Not Einstein, whose work is fundamental but whose domain — relativistic physics — has remained somewhat separate from the digital revolution. Not von Neumann, whose computer architecture is everywhere but whose deepest intellectual loves were mathematics and physics rather than computation itself. Not any of the great programmers or systems builders who came after, because they were building on a foundation that Turing laid.

Turing. Because Turing’s questions are our questions. The questions he asked — what can be computed? what does it mean for a machine to think? how does complexity emerge from simple rules? what is the relationship between information and life? — are the questions that define the intellectual landscape of our era. He did not just contribute to the foundations of computing. He imagined the entire intellectual project, from first principles, before the technology existed to pursue it.

This article is not a biography. The Profile in this series covers Turing’s life — his childhood, his mathematics, Bletchley Park, his prosecution and death. This is a thematic synthesis: an attempt to trace the single unified vision that ran through all of Turing’s seemingly disparate work, and to understand why that vision was so extraordinarily prescient.

The short version: Alan Turing was not just a mathematician who happened to work on computing. He was a philosopher of mind who used mathematics as his primary tool, and whose philosophical vision was so far ahead of its time that it is only now, seventy years after his death, beginning to be fully appreciated.


The Unified Vision: What Turing Was Really Trying to Do

When you survey Alan Turing’s published work — the 1936 paper on computable numbers, the wartime work on cryptanalysis, the 1950 paper on machine intelligence, the 1952 paper on morphogenesis — it can appear scattered. A young mathematician solves a foundational problem in mathematical logic. A wartime codebreaker develops statistical methods for breaking enemy ciphers. A philosopher proposes a test for machine intelligence. A biologist develops a mathematical theory of pattern formation. What connects these?

The connection is a single, fundamental question that ran through everything Turing did: how does complex, purposive behaviour arise from simple, mechanical rules?

This is the question of our age. It is the question that computing poses — how does a machine following simple binary rules produce the complex, flexible behaviour of useful software? It is the question that AI poses — how does a neural network adjusting numerical weights produce something that looks like understanding, creativity, and reasoning? It is the question that biology poses — how does a genome following simple chemical rules produce the extraordinary complexity of a living organism? And it is the question that neuroscience poses — how does a network of neurons firing according to electrochemical rules produce the richness of conscious experience?

Turing posed this question first, and most clearly, and in the most general terms. He did not have the vocabulary of modern cognitive science or AI — he was working with the mathematics available to him in the 1930s and 1940s. But the question was his, and all his work was, in different ways, an exploration of it.

Understanding this unified vision is the key to understanding why Turing mattered — and why he matters now more than ever.


The 1936 Paper: Drawing the Map of What Is Possible

Everything begins with “On Computable Numbers, with an Application to the Entscheidungsproblem,” published in 1936 when Turing was twenty-three years old. We have discussed this paper in the Profile and in earlier articles in this series. But its significance for the unified vision deserves special attention here.

The paper solved a foundational problem in mathematics — whether there could exist a general procedure for deciding the provability of any mathematical statement. The answer was no: some problems were undecidable, and no algorithm could solve them. This was a result about the limits of mathematics, and it was important in that domain.

But the method Turing used to prove this result was, in retrospect, the more important contribution. To prove the result, Turing needed a precise definition of what it meant to follow a rule mechanically — what it meant to compute something. And the definition he gave was the Turing Machine: an abstract device that could read symbols from a tape, write symbols, and move the tape, following a finite set of rules.

The Turing Machine was not proposed as a practical computing device. It was proposed as a mathematical model of mechanical computation — a precise definition of what “computing” meant, stripped of all inessential features. A Turing Machine was the minimal thing that could meaningfully be called computing.
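The definition can be made concrete in a few lines. The sketch below (a toy in Python, not Turing's 1936 notation; all names are invented here) encodes a machine as a finite rule table driving a read/write head over a sparse, unbounded tape. This particular table increments a binary number written least-significant-bit first.

```python
# A minimal Turing Machine: a finite rule table, a read/write head, an
# unbounded tape. This machine increments a binary number whose
# least-significant bit sits at the head's starting position.

BLANK = "_"

# (state, read_symbol) -> (write_symbol, head_move, next_state)
RULES = {
    ("carry", "1"): ("0", +1, "carry"),   # 1 + carry -> 0, carry onward
    ("carry", "0"): ("1", 0, "halt"),     # 0 + carry -> 1, done
    ("carry", BLANK): ("1", 0, "halt"),   # ran past the number: new digit
}

def run(tape_str, state="carry", head=0, max_steps=1000):
    tape = dict(enumerate(tape_str))      # sparse, unbounded tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, BLANK)
        write, move, state = RULES[(state, symbol)]
        tape[head] = write
        head += move
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1))

# Little-endian binary: "111" is 7; incrementing yields 8, i.e. "0001".
print(run("111"))   # -> 0001
```

Everything about the machine's behaviour lives in the rule table; the `run` loop itself is trivially mechanical, which is exactly the point of the formalism.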

What Turing then showed — through what is now called the Church-Turing thesis, developed in parallel with the logician Alonzo Church — was that this minimal model was also maximal. Any computation that could be performed by any mechanical means could be performed by a Turing Machine. There was nothing that a more powerful machine could do that a Turing Machine could not, in principle, do too.

This result drew a map. It divided all possible problems into two categories: those that could be solved by algorithm — by Turing Machine — and those that could not. It established the boundary of computation, precisely and rigorously, for the first time.

For AI, this map is foundational. Any AI system — any system for machine intelligence, regardless of its architecture or implementation — is, at the level of mathematical analysis, a form of computation. It operates within the boundary that Turing drew. It can do anything that is computable. It cannot do anything that is not.

This means that Turing’s 1936 paper sets the outer limits of what AI can aspire to. Not in a discouraging way — the class of computable problems is vast, and includes essentially all the problems that matter for practical intelligence. But in a clarifying way: it tells us what kind of thing AI is. It is a computational process, bounded by the limits of computation, powerful within those limits.

Understanding this is essential for evaluating the grand claims and grand fears about AI. Some things AI cannot do, not because the technology is not advanced enough but because they are not computable — they fall outside Turing’s boundary. Other things AI can in principle do, even if current systems cannot — they are within the boundary, waiting for the right algorithm and sufficient computational resources. Turing’s map distinguishes these cases with mathematical precision.
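One side of that boundary can even be exhibited mechanically. The halting problem, the classic problem outside the boundary, is undecidable but semi-decidable: running a program can confirm that it halts, but no amount of running certifies that it loops forever. In the toy sketch below (an assumption of this example: Python generator functions stand in for "programs"), a step budget makes that asymmetry concrete.

```python
# Halting is semi-decidable: "HALTS" is a real answer obtained by
# running the program; "UNKNOWN" is not evidence of non-termination,
# no matter how large the budget.

def observe(program, budget):
    """Run `program` for up to `budget` steps."""
    gen = program()
    for _ in range(budget):
        try:
            next(gen)
        except StopIteration:
            return "HALTS"
    return "UNKNOWN"

def countdown():          # halts after 10 steps
    for i in range(10):
        yield i

def loop_forever():       # never halts
    while True:
        yield

print(observe(countdown, 100))     # -> HALTS
print(observe(loop_forever, 100))  # -> UNKNOWN
```

Turing's theorem says this one-sidedness is permanent: no cleverness converts `observe` into a decider that always answers correctly in finite time.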


The Universal Machine: The Idea That Made Software Possible

Buried within the 1936 paper, almost as a technical lemma in the proof of the undecidability result, is an idea of even greater practical importance than the result it was used to prove. Turing showed that there existed a universal Turing Machine — a single machine that could simulate any other Turing Machine, given an appropriate description of that machine as input.

The universal Turing Machine is the conceptual foundation of software.

Before the universal machine, computing devices were special-purpose. A mechanical calculator could do arithmetic. A Jacquard loom controlled by punched cards could weave patterns. A Difference Engine could compute polynomial functions. Each device was built to do a specific thing, and doing a different thing required building a different device.

The universal Turing Machine dissolved this constraint. A single device, given different inputs — different programs — could compute anything that was computable. The same machine could add numbers, simulate weather patterns, play chess, generate text, recognise faces. Not simultaneously — the universal machine would need to load different programs for different tasks. But it was the same machine, the same hardware, performing different computations depending on what program it was running.
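The idea is easy to exhibit in miniature. Below is a toy interpreter (its PUSH/ADD/MUL/DUP instruction set is invented for this sketch, not any historical machine): one fixed piece of "hardware" whose behaviour is determined entirely by the program it is handed.

```python
# One machine, many computations: the interpreter is fixed; only the
# program (data) changes.

def execute(program, inputs):
    stack = list(inputs)
    for op, *args in program:
        if op == "PUSH":
            stack.append(args[0])
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "DUP":
            stack.append(stack[-1])
    return stack[-1]

# Same hardware (`execute`), different software, different behaviour:
adder   = [("ADD",)]                         # x + y
squarer = [("DUP",), ("MUL",)]               # x * x
affine  = [("MUL",), ("PUSH", 7), ("ADD",)]  # x * y + 7

print(execute(adder, [2, 3]))     # -> 5
print(execute(squarer, [4]))      # -> 16
print(execute(affine, [2, 3]))    # -> 13
```

Notice that the programs are ordinary data structures, lists of tuples: the defining trick of the universal machine is precisely that programs are just another kind of input.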

This is, in conceptual essence, the stored-program computer. Von Neumann’s architecture, which made this concept physically concrete in electronic hardware, drew heavily on the theoretical foundation Turing had laid. The operating system, the software ecosystem, the entire infrastructure of modern computing — the fact that a single device in your pocket can be a camera, a telephone, a map, a library, a notebook, a bank, a game console — all of this flows from the universal machine.

And for AI, the universal machine concept is what makes the project coherent. There is not a special “intelligence machine” that needs to be built from scratch for each cognitive task. There is one kind of machine — the computer, the universal machine — and the question is what programs to run on it. The quest for AI is the quest for programs that produce intelligent behaviour when run on universal machines. Turing, in 1936, created the theoretical framework within which that quest takes place.


Bletchley Park: Intelligence Applied to Intelligence

The years between 1939 and 1945 took Turing from pure mathematics to the most urgent applied problem imaginable: breaking the codes that were directing a war that threatened the survival of the world he knew.

The work at Bletchley Park — the codebreaking operation whose full story was not publicly known until decades after Turing’s death — was, in the terms of this synthesis, a massive applied exercise in the question that had always driven him. How do you extract information from noise? How do you find the hidden pattern in apparent randomness? How do you build a machine that can, systematically and at speed, search the space of possible solutions to find the right one?

These were, at bottom, the same questions as the 1936 paper, applied to a concrete domain. The Enigma cipher was a complex rule-governed system. Breaking it required finding the specific rule — the key — that had been used to encode a particular message. The space of possible keys was enormous — hundreds of quintillions of possibilities. Exhaustive search was impossible. The challenge was to find methods that constrained the search, that eliminated impossible keys, that guided the search toward the right answer.

Turing’s key contributions — the sequential statistical method he called Banburismus and the crib-driven logical search embodied in the Bombe machine — were forms of what computer scientists now call heuristic search. Rather than searching all possibilities, he exploited structural features of the Enigma cipher and of German operating practice to dramatically reduce the search space: known patterns in German military language, known weaknesses in how the cipher was used, and statistical analysis of intercepted traffic to identify the most promising regions of the key space.

This approach — using domain knowledge and statistical analysis to guide search — is one of the central techniques of AI. It is how chess engines reduce the astronomical space of possible chess positions to something manageable. It is how machine learning systems find patterns in high-dimensional data. It is how language models generate plausible continuations of text. The specific techniques are different from Turing’s Bombe, but the fundamental strategy — use what you know about the problem to constrain the search — is the same.
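The strategy can be illustrated with a deliberately tiny stand-in for Enigma: a Vigenère cipher (far weaker than Enigma; every name and parameter here is invented for the sketch). A known plaintext fragment, a "crib", collapses an exponential key search into direct deduction, which is the essence of how cribs powered the Bombe.

```python
# Crib-guided key recovery for a toy Vigenere cipher. Exhaustive search
# over a 3-letter key would try 26**3 = 17,576 candidates; a crib pins
# each key letter directly and prunes inconsistent placements.

def encrypt(plain, key):
    return "".join(
        chr((ord(p) - 65 + ord(key[i % len(key)]) - 65) % 26 + 65)
        for i, p in enumerate(plain)
    )

def recover_key(cipher, crib, crib_pos, key_len):
    """Deduce the key from a crib instead of searching all keys."""
    key = [None] * key_len
    for i, c in enumerate(crib):
        slot = (crib_pos + i) % key_len
        k = (ord(cipher[crib_pos + i]) - ord(c)) % 26
        if key[slot] is not None and key[slot] != k:
            return None  # inconsistent crib placement: prune this guess
        key[slot] = k
    if None in key:
        return None      # crib too short to pin every key letter
    return "".join(chr(65 + k) for k in key)

message = "WEATHERREPORTFOLLOWSATDAWN"
cipher = encrypt(message, "KEY")
# German operators opened forecasts predictably; here the analyst is
# assumed to know the first nine plaintext letters.
print(recover_key(cipher, "WEATHERRE", crib_pos=0, key_len=3))  # -> KEY
```

The consistency check is the instructive part: a wrongly positioned crib contradicts itself and is discarded immediately, exactly the logic by which the Bombe eliminated rotor settings wholesale.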

Turing was also, at Bletchley, deeply engaged with the question of what it meant for a machine to do what a human expert does. The cryptanalysts at Bletchley were not just applying rules — they were exercising judgment, drawing on experience, making intuitive leaps that they could not always articulate. Turing was trying to capture enough of this expert judgment in mechanical form to make the Bombe effective. He was doing, in the domain of cryptanalysis, what Newell and Simon would later try to do in the domain of theorem proving: mechanising human expertise.

The limitations he encountered at Bletchley — the ways in which mechanical systems fell short of the flexible, contextual judgment of experienced human analysts — fed directly into his post-war thinking about what machine intelligence would and would not be able to do.


The 1950 Paper: The Most Important Question in AI

We have discussed “Computing Machinery and Intelligence” in the Turing Profile and in the article on the Turing Test. But in the context of the unified vision, the paper deserves another look — because what it was doing philosophically is as important as what it said about AI specifically.

The paper was, at its deepest level, an argument about the right way to ask questions about mind. Turing recognised that the question “Can machines think?” was not well-formed — it was entangled in definitional disputes that would generate heat without light. So he proposed to replace it with a better question: a question that was precise, empirically tractable, and cut to the heart of what we actually cared about when we asked whether machines could think.

The Imitation Game — the precise question with which Turing replaced the vague one — was designed to test for the ability to behave as if intelligent, in the context of the kind of behaviour that most clearly required intelligence: flexible, contextually appropriate, natural language conversation.

But there was a deeper philosophical move embedded in this proposal. Turing was arguing that the right way to investigate questions about mind was empirically — by looking at behaviour — rather than by armchair philosophical analysis of concepts. The question of whether a machine really thought, in some deep metaphysical sense, was unanswerable because “really thought” was too poorly defined. But the question of whether a machine could do what thinking beings do, in observable ways, was answerable. And the latter question was the one that mattered.

This move — from metaphysical questions to empirical questions, from “what is intelligence?” to “what does intelligence do?” — was enormously productive for AI. It gave the field a program: build systems that behave intelligently, test them against behavioural criteria, use the failures to identify what capabilities are missing. The repeated cycle of demonstration, failure analysis, and improved design that has driven AI progress for seventy years follows directly from Turing’s empirical approach.

The 1950 paper also, in its nine objections and responses, identified the major philosophical challenges to machine intelligence with a comprehensiveness and precision that has not been substantially improved upon since. Every major objection that philosophers have raised against AI — consciousness, intentionality, creativity, the limits of formal systems — was anticipated and addressed in Turing’s paper. He did not resolve all of them. But he articulated them more clearly than anyone before him, and his responses defined the terms in which subsequent debates were conducted.

The paper is, in this sense, as much a work of philosophy as of computer science. Turing was doing something that pure philosophers rarely do: proposing a precise, testable criterion for resolving a philosophical dispute, and then engaging seriously with every major objection to the criterion. The scientific approach applied to philosophical questions. The empirical method brought to bear on the nature of mind.


Between the 1950 Paper and Morphogenesis: The Unbuilt Bridge

There is a period in Turing’s intellectual life — the early 1950s, between the publication of the 1950 paper and the 1952 morphogenesis paper — that is less well documented than it should be, partly because of the disruption caused by his prosecution in 1952 and partly because much of his thinking from this period survives only in fragmentary form.

What we know suggests that Turing was, in this period, moving toward a synthesis of his different areas of work that would have been remarkable if he had lived to complete it. He was thinking about the relationship between computation, learning, and biological development. He was thinking about how brains developed — how the embryological processes that built a nervous system were related to the computational processes that the nervous system subsequently performed. He was thinking about the origins of intelligence in development, not just its operation in the adult.

He was also, in correspondence and in conversation with colleagues, developing what he called the “unorganised machine” — a kind of computing system that was not pre-programmed but was initially unstructured, capable of being trained through experience into a specific organisation. This is, in modern terms, a learning machine: an anticipation of trainable neural networks, decades before the mathematics of backpropagation had been developed and before anyone had demonstrated that such systems could learn useful functions.

The unorganised machine concept appears in a 1948 report titled “Intelligent Machinery” — one of the most extraordinary documents in the history of AI, written for the National Physical Laboratory and largely ignored during Turing’s lifetime. In the report, Turing distinguished between two approaches to machine intelligence: the programmed machine, whose behaviour was fixed by explicit instructions, and the unorganised machine, whose behaviour would emerge through a process of training and experience.

He was clearly more interested in the second approach. He saw the programmed machine as fundamentally limited — it could only do what it was told, and what it was told was limited by what its programmers understood. The unorganised machine, by contrast, could in principle develop capabilities that its designers had not explicitly specified, could learn from experience in ways that produced genuinely new behaviour, could — in the language of his response to Lady Lovelace’s Objection — genuinely surprise its creators.

This distinction — between programmed and learning systems, between explicit knowledge and learned representations, between symbolic AI and neural networks — became the central fault line in AI research for the next sixty years. Turing identified it in 1948, correctly located the more promising approach, and did not live to see that approach vindicated.

The vindication came, in stages, with the connectionist revival of the 1980s, the development of deep learning in the 2000s, and the large language models of the 2020s. All of these — all of the systems that have come closest to achieving something like flexible machine intelligence — are, in the terms of Turing’s 1948 distinction, unorganised machines: systems whose behaviour emerges from training rather than from explicit programming.
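The distinction can be made concrete with the smallest possible learning machine: a perceptron (a 1950s device, not a reconstruction of Turing's own B-type networks, so this is an analogy). It begins with no useful structure, zero weights, and is organised entirely by examples.

```python
# A toy "unorganised machine": nobody writes down the rule for OR;
# the training loop finds it from examples alone.

def train_perceptron(examples, epochs=20, lr=0.5):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            w[0] += lr * err * x1     # classic perceptron update rule
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Behaviour specified only by examples, never by explicit rules:
or_examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
or_gate = train_perceptron(or_examples)
print([or_gate(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# -> [0, 1, 1, 1]
```

The designer chooses the architecture, the learning rate, and the examples; the capability itself, the weights that implement OR, emerges from training.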

Turing was right. He was right sixty years before the technology caught up.


The Morphogenesis Paper: The Third Turing

In 1952, two years after “Computing Machinery and Intelligence” and the same year as his prosecution for gross indecency, Turing published “The Chemical Basis of Morphogenesis” — a paper that had nothing to do with computing, artificial intelligence, or cryptography. It was a paper about biology: specifically, about how the pattern of spots on a leopard, the stripes on a zebra, and the spiral arrangement of seeds in a sunflower could arise from simple chemical processes during embryonic development.

The paper is, by most accounts, one of the most beautiful and most original papers in the history of mathematical biology. It is also, in the context of the unified vision, perfectly continuous with everything else Turing did.

The question of morphogenesis is: how does a complex, structured pattern — a spotted coat, a striped skin, a spiral arrangement — arise from an initially undifferentiated collection of cells? At the moment of fertilisation, the embryo is essentially uniform. By the time the organism is born, it has an intricate, species-specific pattern that is reproduced with remarkable consistency across individuals. Where does the pattern come from?

Turing’s answer was elegant and now famous: the pattern could arise spontaneously from the interaction of chemical processes — specifically, from the diffusion and reaction of chemical signals (which he called morphogens) that were initially distributed uniformly across the developing tissue. Under the right conditions, small random fluctuations in the initial distribution would be amplified by the interaction of the chemical processes, producing stable spatial patterns — spots, stripes, spirals — without any template or external instruction.

This is the mechanism of spontaneous symmetry breaking: a uniform, symmetric initial state developing spontaneously into a patterned, asymmetric final state, driven by the internal dynamics of the system rather than by external specification.
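The mechanism can be sketched numerically. The toy below simulates a linearised two-morphogen system on a ring of cells (the coefficients are chosen by the author to satisfy the standard Turing-instability conditions; they are not taken from the 1952 paper): a slow-diffusing activator and a fast-diffusing inhibitor amplify near-invisible random noise into macroscopic spatial structure.

```python
# Spontaneous symmetry breaking in a linearised reaction-diffusion
# system. The field starts almost uniform; reaction terms plus
# *unequal* diffusion rates amplify tiny fluctuations into pattern.

import random

N, DT, STEPS = 100, 0.1, 200
DU, DV = 0.01, 1.0          # activator diffuses slowly, inhibitor fast
FU, FV = 1.0, -2.0          # du/dt = FU*u + FV*v + DU * laplacian(u)
GU, GV = 1.0, -1.5          # dv/dt = GU*u + GV*v + DV * laplacian(v)

random.seed(0)
u = [random.uniform(-1e-3, 1e-3) for _ in range(N)]  # near-uniform start
v = [random.uniform(-1e-3, 1e-3) for _ in range(N)]

def laplacian(f, i):
    return f[i - 1] - 2 * f[i] + f[(i + 1) % N]   # periodic boundary

for _ in range(STEPS):
    u_new = [u[i] + DT * (FU * u[i] + FV * v[i] + DU * laplacian(u, i))
             for i in range(N)]
    v_new = [v[i] + DT * (GU * u[i] + GV * v[i] + DV * laplacian(v, i))
             for i in range(N)]
    u, v = u_new, v_new

def std(f):
    m = sum(f) / len(f)
    return (sum((x - m) ** 2 for x in f) / len(f)) ** 0.5

# The initially ~uniform field now carries a spatial pattern:
print(f"pattern amplitude grew to std(u) = {std(u):.4f}")
```

Being linear, this sketch grows without bound; Turing's full model includes nonlinear reaction terms that saturate the growth and fix the final pattern, but the symmetry-breaking step is the one shown here.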

The connection to Turing’s other work is immediate. The morphogenesis paper was asking, in biological terms, the same question that the 1936 paper and the 1950 paper were asking in mathematical and cognitive terms: how does complex, structured behaviour arise from simple, rule-governed processes? In 1936, the processes were the mechanical operations of a Turing Machine. In 1950, they were the information-processing operations of a brain or a computer. In 1952, they were the chemical reactions of developing tissue.

The answer, in each case, was the same: the complexity was not pre-specified. It was not encoded in the initial state. It emerged from the interaction of simple processes, operating over time, in ways that were in principle predictable from the rules but not in practice predictable without running the process.

Emergence — the arising of complex, high-level properties from simple, low-level interactions — is the connecting theme of all of Turing’s work. The computability of mathematics from simple symbol manipulation rules. The intelligence of minds from simple neural firing rules. The patterns of biological form from simple chemical reaction rules. In each domain, the complexity that seems to require a special explanation — seems to require design, or intention, or a special vital principle — turns out to be a natural consequence of the dynamics of simple systems.

This is Turing’s deepest contribution: the demonstration, across multiple domains, that complexity is natural. That it does not require a designer. That it emerges.


What Morphogenesis Means for AI

The connection between Turing’s morphogenesis paper and modern AI is not merely historical or metaphorical. It is direct and technical.

The mechanisms Turing described — reaction-diffusion systems, spontaneous pattern formation, the emergence of order from undifferentiated initial conditions — have become an important framework for understanding how biological neural networks develop their organisation. The brain does not start out organised. A newborn brain is a relatively undifferentiated mass of neurons. The specific organisation of the adult brain — the visual cortex, the language areas, the motor strips — emerges through a process of development and learning that has significant parallels to the morphogenetic processes Turing described.

The question of how neural organisation emerges — how the brain develops from an undifferentiated initial state to a highly structured, functionally specialised adult state — is one of the central questions of neuroscience. And the mathematical tools Turing developed for morphogenesis — the analysis of reaction-diffusion systems, the study of how uniform states become patterned — have been directly applied to this question.

For AI, the morphogenesis paper reinforces the insight of the “unorganised machine” concept. If biological intelligence emerges through a developmental process — through the interaction of genetic instructions with environmental experience, in a process that is more like Turing’s morphogenetic pattern formation than like programming — then artificial intelligence might best be achieved by a similar approach. Not programming the intelligence in, but creating conditions under which intelligence can emerge.

This is, in broad outline, the approach that has worked. Deep learning systems are not programmed with knowledge. They are exposed to data, and their internal organisation — the weights and connections that encode their capabilities — emerges through the training process in ways that are not directly specified by their designers. The designers set the conditions: the architecture, the training objective, the dataset. The capabilities emerge.

Turing’s morphogenesis paper, read in this light, is not a detour from the AI work. It is a generalisation of it — an exploration of the same emergence phenomenon in a different domain, contributing to the understanding of how intelligence could be grown rather than built.


The Man Who Imagined Everything: A Portrait

Having traced the unified vision through Turing’s work, it is worth stepping back to consider what kind of mind was capable of it.

Turing was not, by the accounts of those who knew him, an obvious prophet. He was not a charismatic visionary who inspired followers with the force of his personality and the grandeur of his claims. He was awkward, direct, sometimes tactless, not interested in the social performance of importance. He held up his trousers with string because he kept forgetting to buy a belt. He padlocked his tea mug to a radiator at Bletchley. He ran long distances in earnest: his marathon times approached Olympic standard.

He thought about hard problems the way other people breathe — continuously, naturally, without apparent effort. His colleagues at Bletchley described the experience of working with him as occasionally frustrating (he was not always good at explaining his reasoning) and frequently illuminating (he had a habit of cutting through complexity to the essential point in ways that could make other people feel they had been thinking about the problem wrong all along).

He was, according to multiple accounts, possessed of an unusual combination of mathematical rigour and willingness to speculate. He did not confine his thinking to what he could prove. He entertained possibilities freely, followed them wherever they led, and then asked what it would take to demonstrate them. The 1950 paper is a perfect example: it is full of speculation — predictions about what machines will be able to do, arguments about the nature of mind, challenges to received opinion — but the speculation is always disciplined, always connected to things that could in principle be tested, always distinguished from what was merely possible and what was probable.

He was also — and this is sometimes underemphasised in accounts that focus on his mathematical genius — a deeply moral thinker. The 1950 paper’s argument against using consciousness as a criterion for intelligence — that we have no direct evidence of consciousness in any other being, and that our attribution of minds to other humans rests on behavioural evidence alone — is not just a technical move in an argument about AI. It is, implicitly, a moral argument. It argues against a double standard: one that accepts behavioural evidence for human minds but demands something more of machine minds.

In the context of Turing’s life — in the context of a man who was told, by his society, that his own inner life was criminal, that the way he was made was deviant, that his experience did not deserve the same moral consideration as others’ — this argument carries a personal weight that goes beyond the technical. He was arguing for empirical standards, against prejudice. He was arguing that what mattered was what beings could do and experience, not what category they fell into. He was arguing, in the most abstract possible terms, for fairness.


The Legacy: What Turing Built Without Knowing It

When Turing died in 1954, the world he had done so much to create barely existed. There were a handful of working computers. There was no software industry. There was no AI research community — the Dartmouth Conference that would name the field was still two years away. The word “computer” still referred, primarily, to a person who performed calculations, and “cybernetics”, not “artificial intelligence”, was the usual term for the study of intelligent machines.

And yet, in the seven years between the end of the war and his death, Turing had:

  • Designed the detailed specifications for the Automatic Computing Engine, one of the first stored-program computer designs
  • Worked at the Computing Machine Laboratory in Manchester, where some of the first programs were run on some of the first working computers
  • Written “Computing Machinery and Intelligence” — the paper that gave AI its central question and its central research program
  • Written “Intelligent Machinery” — the internal report that identified the distinction between programmed and learning machines and correctly predicted that the latter was the more promising approach
  • Written “The Chemical Basis of Morphogenesis” — the paper that helped found mathematical biology and contributed to the understanding of how complex structures emerge from simple processes

In seven years, the last two of them spent under the shadow of his prosecution and the hormone treatment imposed as a condition of his probation, Turing produced the foundational documents of computing, AI, and mathematical biology.

The waste of it — the loss represented by his death at forty-one, with decades of potential work unwritten — is one of the great tragedies in the history of science. Not just because of the personal tragedy, though that is real and terrible. But because the thread he was pulling — the unified theory of emergence, complexity, and intelligence across biological and computational domains — was not pulled further by him, and took decades to be picked up by others working without the benefit of his vision.

What Turing might have done in another twenty or thirty years of work is one of the great counterfactuals of intellectual history. He was, in 1954, at exactly the age when scientists often do their deepest and most integrative work — when the early technical contributions have been made, the broader significance has become clear, and the mind that made those contributions turns to synthesis and to the hardest remaining questions. He was at the beginning of what might have been the most important period of his career.

Instead: the apple, the cyanide, the inquest, the verdict of suicide.


The Questions He Left

Turing left behind four great questions — questions he posed with precision, made progress on, but did not answer. They are the four questions that define modern AI research.

Can machines compute everything that is computable? The Church-Turing thesis says yes: any effectively computable function can be computed by a Turing Machine, and therefore by any modern computer. This is widely believed but not proved — it is a thesis, not a theorem, because “effectively computable” is an informal notion rather than a mathematical one. Quantum computers, despite their promised speedups, are not believed to change the answer: they compute exactly the class of functions a classical Turing Machine can, only sometimes faster. Whether any physical process could exceed that class (so-called hypercomputation) remains a speculative question with significant implications for AI.

Can machines think? The Turing Test question. Still unanswered. Modern language models pass versions of the test in many contexts. Whether they genuinely think — whether there is understanding behind the outputs, experience inside the process — is still as open as it was in 1950. The question has become more urgent, not less, as the outputs have become more convincing.

Can intelligence emerge from training rather than programming? The unorganised machine question. The answer is clearly yes — deep learning has demonstrated that. But the deeper question — what kinds of intelligence can emerge from what kinds of training, on what kinds of architecture — is still being worked out. We are still discovering the limits and the possibilities of the emergence approach that Turing identified as the right one.
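The training-over-programming principle can be shown with the simplest possible trained machine. The perceptron below is a later invention, used here only as a stand-in for Turing's unorganised-machine idea: no rule for the target function appears anywhere in the code — the behaviour is induced entirely from examples.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn a linear threshold unit from labelled examples alone.
    Returns the learned function as a callable."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out            # nudge weights toward the
            w[0] += lr * err * x1         # examples, nothing more
            w[1] += lr * err * x2
            b += lr * err
    return lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The machine is never told "this is AND"; it only sees examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
learned_and = train_perceptron(data)
print([learned_and(x1, x2) for (x1, x2), _ in data])  # -> [0, 0, 0, 1]
```

Modern deep learning is this idea scaled up by many orders of magnitude: more units, more layers, more data — but the same inversion of programming into training that Turing proposed in 1948.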

How does biological complexity emerge from simple rules? The morphogenesis question. Turing’s specific mechanism — reaction-diffusion systems — has been confirmed in many biological systems but is not the complete answer. The question of how the full complexity of biological form and function emerges from genetic and developmental processes is one of the central questions of contemporary biology. And the related question — how the complexity of brain function emerges from the development and organisation of neural circuits — is the central question of neuroscience.
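The mathematical core of the 1952 paper — diffusion-driven instability — can be sketched numerically. The kinetic coefficients below are illustrative values chosen to satisfy Turing's conditions, not numbers from the paper: a chemical equilibrium that is stable when well mixed becomes unstable at a band of spatial wavelengths once the two substances diffuse at sufficiently different rates, and that band is where pattern forms.

```python
import math

def growth_rate(k, fu, fv, gu, gv, Du, Dv):
    """Real part of the fastest-growing eigenvalue of the linearised
    two-chemical reaction-diffusion system at spatial wavenumber k
    (the dispersion relation)."""
    a = fu - Du * k * k          # reaction Jacobian entries, shifted
    d = gv - Dv * k * k          # by diffusion at this wavenumber
    tr, det = a + d, a * d - fv * gu
    disc = tr * tr - 4 * det
    if disc >= 0:
        return (tr + math.sqrt(disc)) / 2
    return tr / 2                # complex pair: real part governs growth

# Illustrative kinetics: stable without diffusion (k = 0), but the
# inhibitor (v) diffuses ten times faster than the activator (u).
params = dict(fu=1.0, fv=-1.0, gu=2.0, gv=-1.5, Du=1.0, Dv=10.0)

rates = {k / 50: growth_rate(k / 50, **params) for k in range(0, 101)}
print("uniform mode (k=0):", rates[0.0])   # negative: well-mixed state is stable
print("peak growth rate:", max(rates.values()))  # positive: a patterned mode grows
```

The result is Turing's counterintuitive punchline: diffusion, which we think of as a smoothing force, is precisely what destabilises the uniform state and picks out a characteristic wavelength — the spacing of the stripes and spots.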

These four questions are connected. They are different aspects of the single question that drove all of Turing’s work: how does complexity emerge from simplicity? How does flexible, purposive, intelligent behaviour arise from mechanical, rule-following processes?

That question is not answered. We have made extraordinary progress on the AI aspects of it — we have built systems that exhibit something that looks very much like flexible, purposive intelligence, even if we are uncertain whether it is genuine. But the deeper questions — of consciousness, of understanding, of what is actually happening inside these systems — are still open.

Turing knew they would be. He was too honest and too rigorous to pretend that his answers were complete. He proposed a test, not a solution. He identified a research program, not its outcome. He drew the map of the territory and marked the hard questions with care, knowing that the mapping would take longer than his lifetime.


Why Turing Is More Relevant Now Than When He Was Alive

There is a paradox in Turing’s historical position. He is celebrated as a founder of computer science and AI — and he is — but he is often celebrated in a way that places him firmly in the past, as a historical figure whose work was important for establishing the foundations of a field that has since moved far beyond what he imagined.

This reading is exactly wrong.

Turing’s work is not foundational in the sense of being superseded. It is foundational in the sense of being the deepest level of analysis of the things we are now doing and debating. The questions he asked are not historical curiosities. They are the live questions — the questions that the AI field is actively trying to answer, that philosophers of mind are actively debating, that AI safety researchers are actively wrestling with.

Can machines think? More urgent now than in 1950, because the machines are much better and the answer is much less clear.

Can intelligence emerge from training? Not just yes in principle — yes, spectacularly, in the systems we are now deploying at scale. But the questions about what kind of intelligence has emerged, how robust it is, how aligned it is with human values — these are Turing’s questions in contemporary form.

How does complexity emerge from simple rules? The deep learning revolution has demonstrated emergence at a scale Turing could not have imagined. But we understand it poorly — the interpretability problem, the challenge of understanding what is happening inside these systems, is the morphogenesis problem applied to artificial neural networks.

What can be computed, and what cannot? As AI systems become more powerful, the question of their fundamental limits — what they can and cannot do, not just in practice but in principle — becomes more important, not less. Turing’s theory of computability is still the right framework for asking this question.

Turing was not ahead of his time in the sense of being irrelevant to it. He was ahead of his time in the sense of seeing, clearly, what the important questions were — and seeing that they would still be the important questions far into a future he could not fully envision. He was right about that. The questions he identified are still the questions. The vision he had — of computation, intelligence, and emergence as a unified intellectual project — is the vision that the 21st century is working out.


The Debt That Cannot Be Repaid

Alan Turing was destroyed by the country he had saved. This is not a metaphor or a simplification. The codebreaking work at Bletchley — work that Turing did more than any other individual to make possible — shortened the Second World War and saved millions of lives. The British government that prosecuted him for his sexuality was the direct beneficiary of that work. The country that chemically castrated him had survived partly because of his genius and his sacrifice.

This cannot be adequately addressed by posthumous pardons, or by putting his face on banknotes, or by naming awards and programming languages after him. These are gestures, and gestures are inadequate to the scale of what was done.

What is adequate — what honours Turing’s memory in the way that would actually matter — is taking his questions seriously. Engaging with the depth and the difficulty of what he was working on. Not reducing him to an icon or a symbol but engaging with the actual work: the mathematics, the philosophy, the biology, the engineering. Understanding what he was trying to do and why it mattered and why it still matters.

His questions are our questions. His vision is the vision we are pursuing. His methods — mathematical rigour applied to the biggest problems, empirical testing applied to philosophical questions, willingness to speculate and to be wrong — are the methods we need.

He did not finish the work. We are finishing it. We do not know when it will be finished, or what the finished version will look like. But we know, because Turing told us, what the important questions are. We know, because Turing showed us, that the right approach is empirical rather than metaphysical, precise rather than vague, willing to follow the argument wherever it leads.

He was a man who imagined everything. We are living in the world he imagined. The least we can do — the very least — is to keep imagining, to keep asking the questions he left us, and to remember who asked them first.


Further Reading

  • “Alan Turing: The Enigma” by Andrew Hodges — The definitive biography, covering the life and the work in depth. Essential reading for anyone who wants to understand Turing fully.
  • “The Annotated Turing” by Charles Petzold — A line-by-line guide to the 1936 paper, making Turing’s most technical contribution accessible to non-mathematicians.
  • “Computing Machinery and Intelligence” (1950) — The original paper, freely available online. Read it. Its clarity and depth have not dated.
  • “Intelligent Machinery” (1948) — Turing’s internal report proposing the unorganised machine concept. Less widely known than the 1950 paper, but arguably more important for understanding where AI was going to go.
  • “The Chemical Basis of Morphogenesis” (1952) — Available online. More technical than the 1950 paper, but accessible if you are willing to follow the mathematics at a high level. One of the most beautiful scientific papers ever written.

Next in the Articles series: A6 — The Summer That Named AI — The 1956 Dartmouth Conference in full narrative detail: who was in the room, what they argued about in the evenings, what they agreed on, what they were wrong about, and how a single summer’s conversation gave a field its name, its mission, and the overconfidence that would eventually send it into its first winter.


Minds & Machines: The Story of AI is published weekly. If this synthesis of Turing’s vision opened up the work in new ways for you, share it with someone who thinks they already know who Turing was.