Versailles, 1738. The court of Louis XV is assembled in a gallery of the palace, dressed in silk and powder, gathered around something that has drawn murmurs and whispers across Europe for weeks. On a low table sits a duck — or what appears to be a duck. It is made of gilded copper, roughly the size of a real bird, and it sits on a small platform surrounded by a curtained cabinet that conceals whatever mechanism drives it.
Its creator, a thirty-year-old French engineer named Jacques de Vaucanson, makes a small adjustment. The duck comes to life.
It moves its head. It stretches its neck. It turns toward a small dish of grain, dips its beak, and appears to eat. Its wings flutter. Its tail wags. And then — to the astonished delight and faint disgust of the assembled courtiers — it produces, from its rear end, what appears to be digested waste.
The court erupts. Voltaire, who writes about the sensation, declares that without Vaucanson's duck there would be nothing to remind us of the glory of France. The duck tours Europe. It is discussed in scientific journals. It forces a question that nobody can quite answer: if a machine can eat, digest, and excrete — if it can replicate the most basic biological processes of a living creature — what, exactly, is the difference between a machine and an animal?
What, for that matter, is the difference between a machine and a human being?
The World Before Electricity
To fully appreciate the automata era, you need to hold in your mind a world without electricity. Not just without computers or smartphones — without electric light, without telegraphs, without any of the technologies that have made the harnessing of electrical energy so fundamental to modern life that we barely notice it.
In the world of the 17th and 18th centuries, the most sophisticated energy sources available were water, wind, and the tension of coiled metal springs. The most precise manufactured objects were clocks — mechanical timepieces whose intricate arrangements of gears, escapements, and springs had been refined over centuries into instruments of extraordinary accuracy. The most advanced manufacturing was watchmaking, a craft that had developed in Switzerland, France, and Germany to the point where skilled craftsmen could work with components measured in fractions of a millimeter, assembled under magnification, requiring years of apprenticeship to master.
This was the technological foundation of the automata era. The automata builders — the men who created the mechanical marvels that astonished 18th century Europe — were, almost without exception, trained clockmakers. They took the techniques developed for making timepieces and applied them to a different and more ambitious problem: building machines that could replicate the behavior of living things.
The result was an extraordinary flowering of mechanical ingenuity that lasted roughly from the late 17th century through the early 19th century — a period in which craftsmen across Europe competed to produce increasingly sophisticated, increasingly lifelike mechanical beings, and in doing so forced the philosophers and scientists of the age to confront questions about the nature of life, mind, and mechanism that had never before been so urgently posed.
These questions are our questions. The automata are our ancestors.
The Clockwork Tradition: Where It Came From
The story of automata in early modern Europe cannot be understood without understanding the clock — specifically, the mechanical clock as it developed in Europe from the 13th century onward.
The mechanical clock was one of the great inventions of the medieval world. Before it, time was measured by sun, shadow, and water — methods that worked well enough for most purposes but could not provide the kind of precise, continuous measurement that an increasingly complex commercial and religious life required. The mechanical clock, driven by a falling weight and regulated by an escapement mechanism that released the energy in controlled increments, changed this. For the first time, time could be measured mechanically, automatically, without ongoing human attention.
What made the clock philosophically interesting — what made it, in the words of many historians, the first true machine of the modern era — was not just its function but its character. A clock was a mechanism that ran on its own. It did not need to be constantly attended, constantly operated. Once wound, it went. It measured time not because someone was watching it but because it was built to do so. It was, in a meaningful sense, autonomous.
This autonomy was philosophically startling. For most of human history, the things that moved on their own were either alive or divine. The stars moved, but they were heavenly bodies. Animals moved, but they were alive. The wind moved, but it was a force of nature. A clock was none of these things. It was made by human hands, from metal and springs, and it moved on its own.
Once the clock existed, it was almost inevitable that craftsmen would start asking: what else could be made to move on its own? What other behaviors of living things could be replicated by mechanisms of gears and springs?
The first answers were modest. Cathedral clocks in the 13th and 14th centuries incorporated moving figures — knights that struck bells, processions of saints that emerged on the hour, figures that bowed or turned. These were simple, repetitive, driven by the same mechanism as the clock itself. But they were, in embryo, automata. They were mechanical figures that moved in apparently purposeful ways.
Over the following centuries, the mechanisms grew more sophisticated, the movements more complex, the ambitions more daring. By the 17th century, European craftsmen were building automata that could perform extended sequences of actions — that could write, play instruments, perform acrobatic tricks. The age of the great automata was beginning.
Descartes and the Mechanical Animal
Before we meet the great automata builders themselves, we need to pause with René Descartes — because Descartes provided the philosophical framework that made the automata era not just a curiosity but a serious intellectual enterprise.
Descartes, the French philosopher and mathematician who gave us “I think therefore I am,” was fascinated by the relationship between machines and living things. He had observed dissections, studied anatomy, and was deeply impressed by the complexity of the human body’s mechanisms — the way blood circulated, the way muscles contracted, the way the nervous system transmitted sensation and generated movement.
His conclusion, controversial then and controversial now, was that animals were essentially machines. Biological machines of extraordinary complexity, but machines nonetheless — their behavior fully explicable in terms of physical mechanisms, without any need to invoke an immaterial soul or vital principle. A dog running from a fire was not making a conscious decision. It was executing a mechanical response to a mechanical stimulus, the same way a clock’s hands move when the spring unwinds.
This was Cartesian mechanism, and it was one of the most influential and most debated ideas of the 17th century. If animals were machines, then in principle a sufficiently skilled craftsman could build a machine that behaved exactly like an animal — that moved, reacted, produced the same outputs — without any of the mysterious inner life that we tend to attribute to living creatures.
Descartes thought he could tell the difference between a real animal and a perfect mechanical replica by applying two tests. First, language: a machine might be made to utter words in response to specific stimuli, but it could never use language flexibly, in the varied and contextually appropriate way that even a simple conversation requires. Second, general reasoning: a machine might do one particular thing very well, but it could not apply reason across different domains the way an intelligent being can.
For animals — dogs, horses, birds — Descartes believed these tests were not even necessary. Animals simply did not use language or general reason. They were mechanisms. But for humans, the tests mattered. Humans, he argued, could not be explained by mechanism alone, because no mechanism could pass his two tests. Humans had rational souls.
What is remarkable about Descartes’s framework, from the perspective of AI history, is not just what he argued but what he left open. He drew a line: mechanisms on one side, rational souls on the other. Animals were on the mechanism side. Humans were on the soul side. But the line was drawn on the basis of observed capabilities — language use, general reasoning — not on any direct evidence about what was or was not inside the machine. And he offered specific tests.
Alan Turing, three hundred years later, would design exactly such a test. He would take Descartes’s framework, strip out the reference to souls, and propose that if a machine could pass the language test — if it could converse in a way indistinguishable from a human — then we would have no reasonable grounds to deny it intelligence. This is the Turing Test, and its intellectual roots go directly back to Descartes’s 1637 argument.
The automata builders of the 18th century were, consciously or not, doing experimental philosophy. Each automaton they built was an attempt to see how far mechanism could go — how much of living behavior could be replicated without recourse to anything beyond gears, springs, and the ingenuity of the craftsman. They were testing Descartes’s line, pushing against it, seeing where it would hold and where it would give way.
Vaucanson: The Genius Who Built a Duck
Jacques de Vaucanson was born in Grenoble in 1709, the tenth child of a glove-maker. He showed an early aptitude for mechanical things — at eleven, he reportedly built mechanical figures to entertain himself while waiting for his mother during confession — and eventually made his way to Paris, where he absorbed the latest developments in anatomy, physiology, and mechanics.
By his mid-twenties he had conceived an audacious project: to build a series of mechanical figures that would not merely imitate the outward appearance of life but would replicate its underlying processes. He was not interested in pretty tricks. He was interested in mechanism. He wanted to understand how living things worked by building machines that worked the same way.
His first great automaton, completed around 1737, was a life-sized figure of a flute player. This was not a figure that merely appeared to play a flute — it actually played a flute, producing real musical notes through a real flute held in its mechanical hands. The figure could play twelve different melodies. Its fingers moved to cover and uncover the flute’s holes. Its lips changed shape to alter the embouchure — the mouth position that controls the character of the sound. It breathed — or rather, a bellows mechanism in its chest pushed air through the instrument with appropriate timing and pressure.
The Flute Player was extraordinary. Previous musical automata had produced sound through internal mechanisms — hidden pipes or strings — while the external figure merely mimed playing. Vaucanson’s figure actually played, using the same physical mechanism that a human flautist uses. He had not just imitated playing. He had mechanized it.
He followed the Flute Player with a Pipe-and-Tabor Player — a figure that fingered a three-holed pipe with one hand while beating a drum with the other, demonstrating that the mechanisms could be adapted to new instruments — and then, most famously, with the Digesting Duck.
The duck was the most ambitious and the most philosophically provocative of his creations. It was built at roughly the scale of a living duck, and it contained, according to Vaucanson's own description, over four hundred moving parts in each wing alone. The full mechanism ran to thousands of components.
What it could do was extraordinary. It could move its head and neck with the fluid, naturalistic motion of a real bird. It could drink water, with the water appearing to flow into its body through a visible swallowing motion. It could eat grain, taking the grain from the dish with its beak in a convincingly birdlike manner. Its wings could spread and fold. It could, apparently, digest the grain and produce waste material from its other end.
This last capability — the digestion — was the feature that most excited and disturbed observers. Vaucanson claimed that the duck actually digested the grain it ate, dissolving it through chemical action in a miniature artificial stomach and producing a material that resembled the real thing. He described it in terms that suggested a genuine replication of the digestive process, not just a theatrical effect.
In fact, the duck was cheating. Later analysis revealed that the digestion was a trick — the grain went into one compartment, and pre-prepared waste material was stored in another, released separately when the duck performed its most dramatic trick. The digestion was simulated, not real.
But here is the interesting thing: Vaucanson apparently knew this was not truly satisfying. His ambition had been to replicate the digestive process mechanically — to build a genuine artificial stomach that could dissolve food as a biological stomach does. He had not achieved this. The duck’s digestion was theater, not mechanism. And he was not at peace with the gap between what he had made and what he had intended to make.
This gap — between the appearance of a biological process and its actual mechanical replication — is a tension that runs through the entire history of AI. When a language model produces text that sounds exactly like a thoughtful human being, is it actually doing what a thoughtful human being does? Or is it, like the duck, producing the output through a different and in some ways more limited process that happens to achieve the same external appearance?
The duck forced the question in 1738. We are still arguing about the answer.
The Reception and Its Meaning
The response to Vaucanson’s automata was enormous and illuminating. They were discussed, debated, and analyzed across Europe. Philosophers wrote about them. Scientists examined them. Poets celebrated them. Kings summoned them. The public paid to see them.
The Académie des Sciences in Paris — the most prestigious scientific institution in France — formally admitted Vaucanson as a member in recognition of his work, an extraordinary honor for a craftsman from the provinces. The crown appointed him inspector of the silk-weaving industry, a post that eventually led him to build an automated loom whose pattern was encoded on a perforated cylinder — a scheme that Joseph-Marie Jacquard refined, a generation later, into the punched-card loom, and that from there was incorporated into Charles Babbage's Analytical Engine and, eventually, the first computers.

That chain of influence is worth pausing on. Vaucanson's duck → Vaucanson's perforated-cylinder loom → Jacquard's punched cards → Babbage's Analytical Engine → Ada Lovelace's programs → modern computing. The thread from a gilded mechanical bird defecating before the court of Louis XV to the computer you are reading this on is unbroken, if winding. The automata era was not a curious dead end. It was the beginning of something.
But the most interesting responses to Vaucanson’s duck were the philosophical ones. The duck forced people to ask, with new urgency: what is the difference between a machine and an animal?
Descartes had argued there was no important difference — animals were machines. But now here was a machine that was acting like an animal with uncanny precision. If there was no important difference between animals and machines, and this machine could replicate animal behavior so convincingly, then what did that say about animals? What did it say about the nature of life?
The Enlightenment philosopher Julien Offray de La Mettrie went further than Descartes in his 1747 book L’Homme Machine — “Man a Machine.” La Mettrie argued that not just animals but humans were machines. The soul, he claimed, was not an immaterial substance distinct from the body. It was a function of the body — specifically of the nervous system and the brain. Take away the body and there was no soul. The mind was what the brain did.
This was a radical and deeply controversial claim. It was also, from the perspective of modern neuroscience and AI, a remarkably prescient one. The dominant view in cognitive science and AI research today is something very like La Mettrie’s position: that mind is a function of physical processes in the brain, and that if you can replicate those physical processes in a different substrate — silicon rather than neurons — you can in principle replicate the mind.
La Mettrie was writing in the shadow of Vaucanson’s duck. The duck made his argument feel less like speculation and more like a demonstrated possibility. If a machine could digest, could it not also think?
The Jaquet-Droz Family: Three Marvels in a Box
While Vaucanson was the most philosophically provocative of the great automata builders, the most technically accomplished were arguably a Swiss father and son: Pierre Jaquet-Droz and his son Henri-Louis.
Pierre Jaquet-Droz was born in 1721 in the Swiss watchmaking town of La Chaux-de-Fonds, and he absorbed the precision watchmaking tradition of his region as completely as anyone of his generation. By the 1770s, he had produced what many historians consider the pinnacle of the automata art: three mechanical figures so sophisticated that they still function today, nearly two hundred and fifty years after they were made, and are now on permanent display in the Museum of Art and History in Neuchâtel, Switzerland.
The three figures — known as the Draughtsman, the Musician, and the Writer — represent three different domains of human creative activity: visual art, music, and writing. Each is a figure roughly the size of a small child, seated at a desk or before a keyboard, built with extraordinary anatomical detail.
The Draughtsman can draw four different pictures — a portrait of Louis XV, a royal couple, a dog, and a cupid driving a chariot drawn by a butterfly. The mechanism inside the figure — a cam system, a set of programmed wheels that encode the path of each drawing — controls the movement of the arm with enough precision to produce recognizable images. The Draughtsman periodically lifts his pencil, blows across the paper to clear away pencil dust, and tilts his head to examine his work. These details — the blowing, the head tilt — serve no mechanical purpose. They are pure theater, pure illusion of life.
The Musician actually plays music on a real organ keyboard. Her fingers press the keys. The music is not produced internally — it comes from the keys she presses. She breathes — her chest rises and falls — and her head and eyes move to follow her fingers, then glance at the audience as if seeking approval. When she finishes playing, she takes a bow. The mechanism inside her encodes the sequence of key presses needed for five different musical pieces, changeable by swapping components.
But the Writer is the most remarkable of the three — and the most directly relevant to the history of computing.
The Writer is a small boy, seated at a desk, holding a quill pen. He can be programmed — and that word is used deliberately — to write any text up to forty characters long. Inside his body is a set of cams, one for each character, whose profiles encode the movements required to write that character. The cams are interchangeable and can be arranged in any sequence, allowing any message up to the maximum length to be written.
The Writer can write in French or English. He dips his pen in the inkwell. He follows the letters of the text with his eyes as he writes. When he reaches the end of a line, he pauses, moves his arm to the beginning of the next line, and continues.
What the Writer has is, in a meaningful sense, programmability. Not in the full modern sense — he cannot perform logical operations, cannot branch based on conditions, cannot be made to do anything other than write characters in sequence. But the separation of the mechanism from the specific content it will produce — the fact that the Writer can be made to write different messages by changing the configuration of its internal components without rebuilding the machine — is the essential insight of programmability.
This is the same insight that runs from the Jacquard loom’s punched cards through Babbage’s Analytical Engine to the stored-program computers of the 1940s and to every piece of software ever written: that a machine can be general-purpose, its specific behavior determined not by its fixed construction but by a separable, changeable program.
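The idea can be made concrete with a toy sketch. Everything here — the names, the stroke data — is invented for illustration and is not a description of the real cam profiles; the point is only the separation the Writer embodies: a fixed mechanism whose behavior is determined entirely by a swappable sequence of "cams."

```python
# A toy analogy for the Writer's key insight: the machine (run_writer) is
# fixed, and its output is determined by a separable, changeable program
# (the cam sequence). All stroke data below is invented for illustration.

# Each "cam" encodes the pen strokes for one character as
# (dx, dy, pen_down) steps -- crude stand-ins for real cam profiles.
CAM_LIBRARY = {
    "H": [(0, 1, True), (0, -0.5, False), (1, 0, True),
          (0, 0.5, False), (0, -1, True)],
    "I": [(0, 1, True)],
}

def run_writer(program, max_length=40):
    """Fixed mechanism: trace whatever cam sequence it is loaded with."""
    if len(program) > max_length:
        raise ValueError("the Writer holds at most 40 cams")
    strokes = []
    for cam in program:
        strokes.extend(CAM_LIBRARY[cam])
    return strokes

# Swapping the program changes the output without rebuilding the machine.
print(len(run_writer(["H", "I"])))  # prints 6
```

Changing what the machine writes means rearranging the cams, not rebuilding the figure — which is exactly the property that makes the Writer programmable in the limited, pre-logical sense described above.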
Pierre Jaquet-Droz built this insight into a mechanical boy in the 1770s. It would take another century and a half for the full implications of the idea to be worked out in computing.
The Turk Revisited: Intelligence as Performance
No discussion of the automata era is complete without returning to the machine we encountered in the previous article: Wolfgang von Kempelen’s Chess-Playing Turk.
We know, as we discussed, that the Turk was a hoax — a human chess master hidden inside the cabinet, controlling the mechanical arm through a system of levers and magnets. But the Turk deserves more attention here, because in the context of the automata era it raises questions that are more subtle and more interesting than simply “was it real or fake?”
Von Kempelen built the Turk in 1770, in the same decade that the Jaquet-Droz figures were being completed. He was a Hungarian inventor and engineer, a man of genuine mechanical skill who had built other impressive devices — a speaking machine that could produce intelligible syllables and simple words. He was not primarily a fraud. He was a performer, and a philosopher of performance.
What von Kempelen understood, and what the Turk demonstrated with uncomfortable clarity, was that intelligence is to a large extent a performance — that what we perceive as intelligent behavior is the product of inputs and outputs, of observed actions and their apparent appropriateness to context, rather than of any direct observation of the underlying mechanism.
When observers watched the Turk play chess, they did not see the mechanism of intelligence. They saw its product: appropriate moves, strategic responses, the behavior of a skilled player. And from those products, they inferred intelligence. They could not help it. We are wired to infer minds from behavior — it is one of the most basic cognitive tendencies of our species, essential for navigating a social world populated by other minds.
The Turk exploited this tendency ruthlessly. The hidden chess master’s intelligence was genuine — great players like Johann Allgaier and William Lewis sat inside the cabinet at various points in the Turk’s long career — but it was attributed to the mechanism, not the man. The audiences were not wrong that something intelligent was happening. They were wrong about what was doing it.
This is the central philosophical puzzle of the automata era, and it is the central philosophical puzzle of AI today. When we observe a machine producing intelligent-seeming behavior, how do we determine whether there is genuine intelligence underlying the behavior, or whether the appearance of intelligence is being produced by something that has no inner life, no understanding, no genuine mind?
The Turk’s audiences could not open the machine and inspect it — or when they could, they were shown the complex arrangement of gears that concealed the human operator. The deception was maintained through careful design of what was visible and what was hidden.
When we interact with a modern AI system, we face a similar epistemological problem. We can observe the outputs. We cannot directly observe the process that produced them. We can study the architecture of the neural network, examine the training data, analyze the weights — but whether any of this amounts to genuine understanding, genuine intelligence, genuine inner experience, is a question that cannot be answered by inspection of the mechanism alone.
The Turk did not answer this question. It sharpened it. And it taught anyone willing to think carefully about it a lesson that remains essential: do not confuse the appearance of intelligence with its presence. Ask not just what it does, but how and why.
Mechanical Animals and the Question of Life
Beyond the great showpiece automata of Vaucanson and Jaquet-Droz, the 18th century saw an explosion of smaller, cheaper, more widely distributed mechanical figures — animals and humans made to move by clockwork, sold as toys for the wealthy, displayed at fairs for the public, given as diplomatic gifts between courts.
These lesser automata were not philosophically serious in the way that the Digesting Duck or the Writer were. They were entertainment, novelties, demonstrations of craft. A mechanical bird that sang in a golden cage. A dancing figure on a music box. A tiny coach driven by a tiny coachman, turning in circles on a tabletop.
But their very proliferation had a cultural effect that was perhaps more important than the effect of the great showpieces. They made mechanical life familiar. They made it comfortable. They accustomed people to the idea that machines could move in lifelike ways, could sing, could seem to breathe and gesture. They normalized the category of the artificial living thing.
This normalization was preparation. It was preparing the European mind, over the course of the 18th century, for a world in which the boundary between the living and the mechanical was not fixed and clear but permeable and shifting. By the time the Industrial Revolution arrived, with its machines that could not just replicate individual human movements but perform sustained, complex labor — weaving fabric, pumping water, printing text — the conceptual framework for thinking about mechanical behavior as a substitute for biological behavior was already in place.
The automata era was, among other things, a cultural education. It taught the 18th century to take seriously the possibility that mechanism could go further than anyone had previously imagined.
The Speaking Machine: Toward Language
One of the most intriguing threads in the automata era is the persistent attempt to build machines that could speak — that could produce human language through mechanical means.
This ambition was ancient. There are medieval legends of brazen heads — bronze mechanical heads that could answer questions and prophesy — attributed to figures like Albertus Magnus and Roger Bacon. These were almost certainly myths, or at most crude novelties, but the persistence of the legend reveals the deep human fascination with the idea of a speaking machine.
In the 18th century, serious attempts were made. Von Kempelen — the builder of the Turk — spent years developing a speaking machine that could produce intelligible syllables and words. His machine used a bellows to force air through a vibrating reed, with tubes and chambers that could be manipulated to modify the sound into vowels and consonants. It was limited — only certain sounds could be produced, and the speech was imperfect and difficult to understand — but it worked. It could produce recognizable words.
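In modern terms, von Kempelen's design is a source-filter system: the reed supplies a periodic sound source, and the tubes and chambers act as resonant filters that shape it into vowel-like tones. A minimal sketch of that idea, using only the standard library — the specific numbers are illustrative, not measurements of the original device:

```python
import math

# Source-filter sketch of the principle behind von Kempelen's machine:
# a vibrating reed (periodic pulse source) feeds a resonating chamber
# (a two-pole filter). All constants are illustrative assumptions.

RATE = 8000        # samples per second
PITCH = 100        # reed vibration frequency, Hz
FORMANT = 700      # resonance of the "mouth" cavity, Hz (roughly /a/-like)
BANDWIDTH = 100    # resonance bandwidth, Hz

def reed_source(n):
    """Periodic pulse train standing in for the vibrating reed."""
    period = RATE // PITCH
    return [1.0 if i % period == 0 else 0.0 for i in range(n)]

def chamber_filter(signal):
    """Two-pole resonator: a crude stand-in for a vocal-tract cavity."""
    r = math.exp(-math.pi * BANDWIDTH / RATE)      # pole radius (< 1: stable)
    theta = 2 * math.pi * FORMANT / RATE           # pole angle = formant freq
    a1, a2 = 2 * r * math.cos(theta), -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + a1 * y1 + a2 * y2
        out.append(y)
        y1, y2 = y, y1
    return out

vowel = chamber_filter(reed_source(RATE // 10))    # 0.1 s of sound
print(len(vowel))  # prints 800
```

Changing the resonance (the chamber's shape, in the mechanical machine) changes which vowel-like sound comes out, while the source stays the same — which is why von Kempelen could get different vowels by manipulating the chambers by hand.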
Von Kempelen published a detailed description of his speaking machine in 1791, and his work was extended and improved by subsequent researchers. Charles Wheatstone later built a working reproduction of the machine, and a young Alexander Graham Bell, after being shown Wheatstone's replica, was inspired to begin his own experiments with mechanical speech. In the 1840s, Joseph Faber exhibited a speaking machine he called the Euphonia that could produce connected sentences in several languages, operated by a keyboard that controlled the different vocal elements.
What drove these attempts? Partly the same impulse that drove all automata building — the desire to see how far mechanism could go, to test the limits of what craft and ingenuity could replicate. But partly something more specific: speech was the boundary that Descartes had drawn. He had argued that no machine could use language flexibly and appropriately. The speaking machine builders were, consciously or not, trying to cross that boundary.
They did not, of course. Their machines could produce sounds that resembled words, but they could not use language — could not understand, respond, converse. The gap between producing speech sounds and using language meaningfully was not mechanical. It was something else entirely: a matter of knowledge, of meaning, of the ability to connect words to the world and to other words in a web of understanding.
But the attempt was important. By trying to build speaking machines, the automata era helped clarify what speech actually required — helped reveal that the hard part was not the sound production but the meaning, not the mechanism of the voice but the intelligence that guided it. Every failure to build a machine that truly used language was a lesson in what language actually was.
This is a pattern that recurs throughout the history of AI. The attempt to build something often teaches you more about what you are trying to build than any amount of purely theoretical analysis. You find out where the hard parts are by trying and failing. The automata era was full of useful failures.
The Industrial Revolution and the End of an Era
The automata era ended — not suddenly, but gradually — with the Industrial Revolution. This seems counterintuitive. Surely an era of increasing mechanization would have been good for the makers of mechanical figures? In fact, the opposite was true.
The Industrial Revolution brought machines that were powerful, practical, and productive. Machines that wove fabric, pumped water from mines, transported goods across distances. These machines did not look like living things. They did not pretend to be alive. They were openly, proudly mechanical — engines of production rather than demonstrations of craft.
In this context, the automata — the singing birds, the writing boys, the digesting ducks — began to seem irrelevant. They were curiosities, anachronisms, items for private collections and fading exhibitions. The great craftsmen who had built them were dying, and the skills required to replicate their work were not being transmitted to a new generation. There was no economic case for investing years of precise labor in a mechanical figure that would demonstrate nothing more practically useful than human ingenuity.
The Jaquet-Droz figures were eventually sold to a collector, then to a museum. Vaucanson’s duck was lost — its whereabouts became uncertain in the early 19th century, and though it was reportedly still in existence for some years, it eventually disappeared from the historical record. The Turk was destroyed in a fire in 1854. The speaking machines were exhibited, discussed, and then shelved.
But the automata era left behind something more durable than any of its individual creations. It left a set of ideas — ideas about mechanism, about the relationship between living and non-living things, about what it meant for a machine to appear to think and feel — that would prove enormously influential in the century to come.
It left Vaucanson’s punched card loom, which became the technological ancestor of the programmable computer. It left the concept of the programmable machine — the insight that behavior could be separated from mechanism, that the same device could do different things depending on a changeable configuration. It left a philosophical tradition — running from Descartes through La Mettrie and beyond — that took seriously the possibility of mechanical intelligence and worked to understand what such intelligence would require.
And it left a cultural inheritance: the idea, normalized over a century of gilded birds and writing boys, that the boundary between the living and the mechanical was not absolute — that human craft could, in principle, approach and perhaps cross it.
Charles Babbage Meets the Silver Dancer
There is a story — well documented, not apocryphal — about Charles Babbage and an automaton that brings the automata era and the computing era into direct, personal contact.
As a boy, Babbage visited John Merlin’s Mechanical Museum in London and saw a mechanical dancer, about a foot tall, made of silver, that danced and moved with what observers described as remarkable grace. Babbage was mesmerized. Decades later, in 1834, he bought the figure at auction, restored it, and kept it on display in his home for the rest of his life.
He reportedly said that the dancing silver figure inspired in him the vision of what the Analytical Engine could be. The automata had shown that complex behavior could emerge from mechanism — from the right arrangement of cams and gears and springs. His engine would show that complex computation could emerge from a different kind of mechanism — from the right arrangement of gears encoding arithmetic operations. The principle was the same.
The story is a neat emblem of the transition from the automata era to the computing era. Babbage — trained in the mathematical tradition of the early 19th century, deeply aware of the automata tradition of the 18th — saw in the mechanical dancer not just a curiosity but a proof of concept. Mechanism could produce behavior of extraordinary complexity and apparent purposiveness. If you chose the right domain — not dancing but calculating — and built the mechanism with sufficient precision and scale, there was no obvious limit to what could be achieved.
Ada Lovelace saw the Analytical Engine and went further: the mechanism was not just a calculator but a general-purpose symbol manipulator. Not just arithmetic but anything that could be formalized. Not just calculating Bernoulli numbers but, as she imagined, composing music.
From Vaucanson’s duck to Babbage’s engine to Lovelace’s programs is a surprisingly short intellectual journey. The automata era was not a detour in the history of computing. It was the first part of the road.
Why the Automata Still Matter
We live in a world of digital automata. Every chatbot, every recommendation algorithm, every self-driving car, every AI-generated image is a kind of automaton — a device that produces behavior that appears purposive, that mimics in some dimension the behavior of a thinking being.
The questions the 18th-century automata raised are the questions we face today, wearing new clothes.
When the Digesting Duck appeared to eat and digest, observers asked: is this really digesting? Does the mechanism understand what it is doing? Or is it producing the outputs of digestion through a process that has no inner relationship to what digestion actually is? We ask the same question about language models: when GPT-4 writes a coherent, moving paragraph, is it understanding anything? Or is it producing the outputs of understanding through a process — statistical pattern matching at enormous scale — that has no inner relationship to what understanding actually is?
When the Jaquet-Droz Writer stored its text in interchangeable cams, it demonstrated the separation of content from mechanism — the insight that the same device could produce different outputs depending on a changeable configuration. This is the insight of software. The automata era gave it mechanical form; the computing era gave it electronic form; the AI era is working out its implications for mind and intelligence.
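The Writer's principle — a fixed mechanism whose output is determined entirely by a changeable configuration — can be sketched in a few lines of modern code. This is only an illustrative toy, not a model of the actual clockwork: the function names and the representation of "cams" as plain data are inventions for the example.

```python
def run_mechanism(cams):
    """The fixed 'mechanism': step through each cam in order and emit its letter.

    The mechanism never changes; only the cam stack it reads does.
    """
    return "".join(cam["letter"] for cam in cams)

# Two interchangeable "cam stacks" -- the only thing that differs between runs.
greeting_cams = [{"letter": c} for c in "BONJOUR"]
signature_cams = [{"letter": c} for c in "JAQUET-DROZ"]

print(run_mechanism(greeting_cams))   # BONJOUR
print(run_mechanism(signature_cams))  # JAQUET-DROZ
```

Swapping `greeting_cams` for `signature_cams` reprograms the machine without touching `run_mechanism` at all — content separated from mechanism, which is exactly the relationship between a program and the hardware that executes it.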
And when the Turk played chess — when a hidden intelligence was attributed to a mechanical figure because its outputs were intelligent — it posed the problem that Alan Turing would formalize as the Turing Test. How do we know if a machine is intelligent? We observe its behavior. We cannot directly observe its inner workings, or if we can, we may not understand what we see. We infer mind from behavior, and behavior can deceive.
The automata era was an era of beautiful deceptions and genuine insights. It was an era when human ingenuity pushed against the boundary between mechanism and life, found that the boundary was not where everyone had assumed, and pushed again. The boundary moved. It has been moving ever since.
The craftsmen who built the digesting duck, the writing boy, the chess-playing Turk were not building AI. They did not have the concepts, the mathematics, the electronics, or the theory. But they were asking the AI question — the question of what it would take for a machine to be intelligent, or to appear to be, or to compel us to ask whether appearance and reality were, in this domain, even meaningfully different.
They were the dreamers before the scientists. And without the dream, there is no science.
Further Reading
- “Edison’s Eve: A Magical History of the Quest for Mechanical Life” by Gaby Wood — A beautifully written history of automata from the 18th century to the early 20th, including a wonderful account of the Jaquet-Droz figures.
- “The Turk: The Life and Times of the Famous Eighteenth-Century Chess-Playing Machine” by Tom Standage — The definitive popular account of the chess automaton, placing it in its full historical and philosophical context.
- “Vaucanson and His Contemporaries” — various academic papers available online — For the historically curious, the scholarly literature on Vaucanson is rich and accessible.
- Museum of Art and History, Neuchâtel, Switzerland — The Jaquet-Droz figures are on permanent display and still demonstrated regularly. If you are ever in Switzerland, this is worth a visit.
- “Descartes: Discourse on the Method” (1637) — The philosophical foundation for the automata era’s central questions, surprisingly readable for a 17th-century philosophical text.
Next in the Articles series: A3 — The Philosophers Who Asked “Can Machines Think?” — Before the engineers came the philosophers. Leibniz dreamed of a calculus of thought. Pascal built the first mechanical calculator. Descartes asked whether mechanism had limits. The thinkers who laid the conceptual groundwork for everything that followed — and the questions they left unanswered that we are still wrestling with today.
Minds & Machines: The Story of AI is published weekly. If this piece made you see the AI conversation differently, share it with someone who would appreciate it.