The Ancient Dream of Artificial Life
Minds & Machines: The Story of AI — Article 1
Before there were computers, before there were circuits, before there was even electricity — there was the dream. The dream that one day, human hands might shape something that thinks, feels, and breathes. That dream is older than you might imagine. It did not begin in a Silicon Valley garage, or a university laboratory, or even a Victorian drawing room. It began around campfires, in temples, in the workshops of obsessive craftsmen who looked at the human body and asked the most dangerous question in history: could we build one?
This is where the story of Artificial Intelligence really starts. Not in 1956 at a famous summer conference. Not in 1950 with a landmark paper. Not even in the 19th century with the first mechanical computers. It starts thousands of years ago, in the human imagination — and it has never stopped.
The Question That Would Not Go Away
There is something deeply unsettling about the idea of artificial life. It excites us and disturbs us in equal measure. We are drawn to it the way we are drawn to the edge of a cliff — compelled to look down, aware of both the danger and the beauty.
Why? Because creating life is the one thing that, throughout most of human history, was considered the exclusive domain of gods. To make something that moves, thinks, and acts on its own is to step into divine territory. And yet humans have been trying to do exactly that for as long as we have records.
This is not a recent obsession. This is not a product of the industrial age or the information age. This is something woven into the very fabric of human nature — a restless, creative, slightly reckless desire to replicate ourselves. To build a mind. To breathe life into the inanimate.
Understanding this is essential to understanding AI. Because the researchers, engineers, and dreamers who built the field of Artificial Intelligence in the 20th and 21st centuries were not doing something entirely new. They were picking up a thread that stretches back to ancient Greece, to medieval Jewish folklore, to the clockwork workshops of 18th century Europe. They were the latest in a very long line of humans who refused to accept that life and intelligence were beyond the reach of human craft.
To understand where AI is going, and what it means, we need to start at the very beginning. We need to start with the dream.
Bronze Giants and Golden Maidens: AI in Ancient Greece
The ancient Greeks were obsessed with the idea of artificial life. It appears throughout their mythology with a frequency and detail that suggests it was not just a casual fantasy but a genuine cultural preoccupation. And the most remarkable thing about these ancient stories is how closely they mirror the hopes and anxieties we have about AI today.
Consider Talos.
In Greek mythology, Talos was a giant bronze automaton — a mechanical man — created by Hephaestus, the god of the forge, to protect the island of Crete. He was given to King Minos as a gift, and his job was simple: patrol the coastline three times a day, and destroy any enemy ships that approached. He did this by hurling enormous boulders into the sea, or, in some versions of the myth, by heating his bronze body until it glowed red and then crushing enemies against his chest.
Talos was, in essence, the world’s first imagined robot soldier. He was not alive in any spiritual sense — he had no soul, no inner life, no desires of his own. He was a machine built to perform a task. His power came from a single vein of divine fluid, called ichor, that ran from his neck to his ankle and was sealed with a bronze nail. When the sorceress Medea helped the Argonauts defeat him, she did so not through combat but by removing that nail — draining his power source and watching him collapse.
What is extraordinary about the Talos myth is not just that the Greeks imagined a mechanical being, but that they imagined one with a specific architecture. He had a body made of a manufactured material. He had a power source. He had a vulnerability — a single point of failure. These are not the concerns of people who thought of artificial beings as pure magic. These are the concerns of people who were thinking, on some level, about engineering.
But Talos was not the only artificial being in the Greek imagination. Hephaestus, the divine craftsman, was a serial creator of artificial life. In Homer’s Iliad, he is described as having created golden maidens — mechanical women made of gold who could walk, speak, and assist him in his workshop. Homer describes them with striking specificity: they had strength, understanding, and voices of their own. They were not decorative objects. They were functional assistants.
Hephaestus also created Pandora — the first woman in some versions of the Greek creation myth — not through biological means but through craft. She was shaped from clay and earth and animated by the gods. The story is usually told as a cautionary tale about curiosity and the release of evil into the world. But strip away the moral layer and what you have is a story about a being created artificially by skilled makers, given human-like qualities, and then released into the world where she proceeds to act in ways her creators did not intend or expect.
If that sounds familiar, it should. It is one of the oldest plots in AI science fiction — from Frankenstein to HAL 9000 to every rogue robot movie ever made. The created being that exceeds its creator’s intentions. The Greeks were there first.
Then there is the story of Pygmalion, the sculptor who fell in love with his own creation. Pygmalion, as told by the Roman poet Ovid drawing on older Greek sources, was a Cypriot sculptor who carved a woman of such perfect beauty from ivory that he fell deeply in love with her. He dressed her, adorned her with jewels, and prayed to the goddess Aphrodite to bring her to life. The goddess, moved by his devotion, granted his wish — and the ivory woman became flesh and blood.
The Pygmalion myth is about something different from Talos or the golden maidens. It is not about utility or warfare. It is about the emotional relationship between creator and creation. It is about the desire not just to build something intelligent, but to build something that can relate to us — something we can love, and that can love us back.
This too will sound familiar to anyone following modern AI. The question of whether we can form genuine emotional bonds with artificial beings — whether AI companions can provide real comfort, real connection — is one of the most debated topics in contemporary technology. Thousands of years before anyone wrote a line of code, the Greeks were already grappling with it.
The Golem: Creating Life from Clay
If ancient Greece gave us the first mechanical visions of artificial life, Jewish folklore gave us something different and in some ways more profound: the Golem.
The Golem tradition in Jewish mysticism stretches back at least to the early medieval period, though it draws on much older ideas about the power of language and the nature of creation. The most famous version of the story involves Rabbi Judah Loew ben Bezalel, known as the Maharal of Prague, who is said to have created a Golem in the late 16th century to protect the Jewish community of Prague from persecution.
The Golem was a being made of clay — shaped into human form and animated through the power of sacred language. The rabbi wrote the Hebrew word emet — meaning truth — on the Golem’s forehead, and it came to life. When he wanted to deactivate it, he erased the first letter, leaving the word met — meaning death. The Golem would then collapse, inanimate once more.
There are several things that make the Golem story remarkable in the context of AI history.
First, the mechanism of animation is not physical or mechanical. It is informational. The Golem is not powered by a magic potion or divine breath. It is powered by a word — by language arranged in a specific, meaningful pattern. This is a strikingly modern idea. At the deepest level, this is exactly how modern AI works. A large language model like ChatGPT is not powered by magic or even really by electricity in any interesting sense. It is powered by patterns of language — by the statistical relationships between billions of words, arranged in ways that produce meaningful output. The Golem runs on emet. ChatGPT runs on something not entirely different in principle.
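If you want to see that analogy made concrete, here is a minimal sketch in Python — with a toy corpus and names invented purely for illustration — of the idea that statistical patterns in text alone can produce new text. It counts which word tends to follow which, then samples from those counts. Real language models are incomparably more sophisticated, but the underlying principle — output driven by the statistical structure of language — is the same.

```python
import random
from collections import defaultdict, Counter

# A toy corpus, invented purely for illustration.
corpus = (
    "the golem obeys the word . the word gives life . "
    "the word taken away gives death . the golem obeys ."
).split()

# Count which word follows which -- the simplest possible
# "statistical pattern of language" (a bigram model).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start, length=10):
    """Generate text by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:
            break
        candidates, weights = zip(*options.items())
        words.append(random.choices(candidates, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```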
Second, the Golem is explicitly created as a tool to serve and protect. It is not created out of curiosity or artistic ambition but out of practical need. The Jewish community of Prague needed protection, and the rabbi built something to provide it. This instrumental view of artificial life — we need a problem solved, let us build something to solve it — is the dominant view in modern AI development too.
Third — and this is where the story gets interesting — the Golem has a recurring problem in various tellings of the myth. It is powerful, but it lacks judgment. It follows instructions too literally. It cannot distinguish between the spirit and the letter of a command. In one famous version, Rabbi Loew forgets to deactivate the Golem before the Sabbath, and it runs amok, causing destruction, because it has no way to understand that sometimes the instructions need to stop.
This is, in the language of modern AI safety research, the alignment problem. How do you build a system that is powerful and capable, but also reliably does what you actually want, not just what you literally told it? How do you ensure that when the context changes, the system’s behavior adapts appropriately rather than blindly continuing to execute its last instruction?
This is one of the central unsolved problems in AI today. And it was a central concern of the Golem myths centuries before the first computer existed.
The Islamic Golden Age: Science Meets Wonder
While Europe was deep in the medieval period, the Islamic world was experiencing an extraordinary flourishing of science, philosophy, and engineering. And among the many remarkable achievements of this era were some of the most sophisticated thinking machines and automata ever conceived up to that point.
Al-Jazari, a 12th century engineer and polymath from what is now Turkey, wrote a book called The Book of Knowledge of Ingenious Mechanical Devices in 1206. It described and illustrated over fifty mechanical devices, including several that qualify as early automata — self-operating machines that performed complex sequences of actions without ongoing human input.
Among his creations was a programmable musical robot band — a boat carrying four mechanical musicians who could play different musical patterns. The programming was achieved through a rotating drum with pegs, which triggered the musical instruments in sequence. This is, in a very real sense, an early programmable machine. The pegs on the drum are analogous to instructions in a computer program — they determine the sequence of operations the machine performs. Change the pegs, change the music. This is the same principle that would eventually underlie punch-card computing nearly seven hundred years later.
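To make the analogy concrete, here is a minimal sketch in Python — with instrument names and peg patterns invented for illustration, not taken from al-Jazari’s book — of how a pegged drum acts as a program: each row of pegs is one step of the drum’s rotation, and a peg in a given column trips that instrument’s lever. Swap the pegs and the same machine plays a different tune.

```python
# A pegged drum as a program: each row is one rotation step of the drum,
# and a True peg trips the corresponding instrument's lever.
# Instrument names and peg patterns are invented for illustration.

INSTRUMENTS = ["drum", "cymbal", "flute", "harp"]

drum_pegs = [
    (True,  False, True,  False),
    (False, True,  False, False),
    (True,  False, False, True),
    (False, False, True,  False),
]

def play(pegs):
    for step, row in enumerate(pegs):
        struck = [name for name, peg in zip(INSTRUMENTS, row) if peg]
        print(f"step {step}: " + (", ".join(struck) if struck else "rest"))

play(drum_pegs)   # change the pegs, change the music
```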
Al-Jazari also designed hand-washing automata — mechanical servants that would pour water and offer soap when activated. They were not just clever tricks. They were designed to solve a practical problem in a world where hygiene was a serious concern and servants were not always available. Utility, craft, and ingenuity combined — exactly the mixture that drives modern AI development.
Equally important to the story of artificial minds was the philosophical work happening in the same era. The philosopher Ibn Sina, known in the West as Avicenna, proposed a famous thought experiment in the 11th century that he called the Floating Man. Imagine, he said, a person created fully formed but suspended in a void — unable to see, unable to hear, unable to feel anything external. Would that person still have a sense of self? Would they still be conscious?
Ibn Sina argued yes. And what he was probing with this thought experiment was the nature of the mind itself — whether consciousness was something that arose from physical sensation and experience, or whether it was something more fundamental, something that existed independently of the body and its inputs.
This is a question that sits at the heart of AI research and philosophy of mind today. When we build a machine that processes information and produces responses, is there anything it is like to be that machine? Is there any inner experience? Or is it all just computation — processing without consciousness, output without awareness? These questions were being asked with remarkable sophistication in the Islamic world of the 11th century.
The Renaissance and the Clockwork Dream
The Renaissance brought with it a new attitude toward human capability. If the medieval world had been largely content to accept limits set by God and nature, Renaissance thinkers pushed back. They believed that through reason, observation, and craft, human beings could understand — and perhaps replicate — the workings of nature itself.
This attitude manifested in a new enthusiasm for automata — mechanical devices that could mimic the behavior of living things. And no one embodied this enthusiasm more fully than Leonardo da Vinci.
Leonardo, whose notebooks reveal a mind of almost impossible range and ambition, designed a mechanical knight sometime around 1495. The design, rediscovered in the 1950s and successfully reconstructed in the 2000s, shows a figure in armor capable of sitting up, moving its arms, and opening and closing its jaw. It was operated by a system of pulleys and cables and was almost certainly intended to perform at court events and banquets as an entertainment.
But what is striking about Leonardo’s mechanical knight is the depth of anatomical understanding behind it. Leonardo was conducting detailed dissections of human bodies — sometimes illegally — not just out of artistic interest but because he genuinely wanted to understand how the human machine worked. He mapped muscles, tendons, joints, and nerves with extraordinary precision. His automaton was not just a clever party trick. It was an attempt to replicate the mechanics of the human body using the engineering knowledge of his time.
This approach — understand the biological system first, then replicate it mechanically — is exactly the methodology behind much of modern AI research. Computational neuroscience, neural networks, deep learning — all of these fields are built on the premise that if we can understand how the brain works, we can build systems that replicate its function. Leonardo was doing this for the body. Modern AI researchers are doing it for the mind.
The 17th and 18th centuries saw an explosion of automaton building across Europe. René Descartes, the French philosopher who gave us the famous phrase “I think therefore I am,” was fascinated by automata and reportedly built a mechanical doll he named Francine, which he took with him on sea voyages. Whether this story is true is debated, but his philosophical writings were deeply engaged with the question of what distinguished a machine from a living being.
In his Discourse on the Method, published in 1637, Descartes argued that you could in principle build a mechanical device that perfectly resembled an animal — and that such a device would be indistinguishable from the real animal in all its behaviors. Animals, he controversially argued, were essentially machines — complex automata running on biological mechanisms with no inner life. But humans, he insisted, were different. Humans had rational souls. And a machine, no matter how perfectly constructed, could never replicate human reason or language use.
He offered two tests that a machine could never pass. First, it could never use language flexibly — it could only respond in fixed, programmed ways, and would fail when faced with novel situations. Second, it could not apply reason across different domains — a machine might be good at one specific task but would fail at others.
This is remarkable for 1637. Descartes was essentially predicting the exact limitations of early AI systems — limitations that would take over three hundred years to begin to overcome. The rule-based AI systems of the 1960s and 1970s failed precisely because they could not handle novel situations outside their programmed domains. The question of whether modern AI systems have truly overcome Descartes’s challenge — whether large language models have genuine flexible reasoning or just very sophisticated pattern matching — is one of the most hotly debated questions in AI research today.
The Turk: The Greatest Hoax in the History of AI
In 1770, a Hungarian inventor named Wolfgang von Kempelen unveiled a device that would fascinate and infuriate the world for the next eighty-four years. He called it the Automaton Chess Player. Everyone else called it the Turk.
The Turk appeared to be exactly what it claimed to be: a mechanical device that could play chess. It consisted of a large cabinet topped by a chess board, and behind the board sat a life-sized mechanical figure dressed in Ottoman robes and a turban — hence the name. The figure would examine the board, move its arm, and make chess moves. And it was extraordinarily good at chess. Over its long career, it defeated Benjamin Franklin, Napoleon Bonaparte, Frederick the Great, and Charles Babbage, among many others.
The crowds who came to see the Turk were witnessing, or so they believed, a genuine thinking machine. They were watching a mechanical device that appeared to reason, to plan, to strategize. The philosophical implications were staggering. If a machine could play chess — a game that seemed to require intelligence, foresight, and judgment — what else might a machine be capable of?
There was just one problem. The Turk was a hoax.
Inside the large cabinet, folded and hidden among a complex arrangement of gears and levers designed specifically to look like a machine’s workings, sat a human chess master. The cabinet was cleverly designed so that the human could shift from one compartment to another as the doors were opened for inspection, always staying out of sight. The human inside controlled the mechanical arm through a sophisticated system of levers and magnets, moving the pieces on the board above.
The Turk was not a thinking machine. It was an illusion of a thinking machine with a very good chess player inside.
And yet the Turk’s story is genuinely important in the history of AI, for several reasons.
First, it demonstrates the power and the danger of the appearance of intelligence. People watched the Turk and believed they were seeing something real because it behaved intelligently. They did not ask what mechanism was producing the behavior — they simply observed the behavior and drew conclusions. This is a pattern that repeats throughout AI history. We are wired to attribute intelligence to things that behave intelligently, regardless of what is actually producing that behavior.
This is directly relevant to debates about modern AI. When ChatGPT writes a poem, answers a question, or expresses what seems like empathy, are we watching genuine intelligence? Or are we watching a very sophisticated version of the Turk — a system whose outputs resemble intelligence but whose underlying mechanism is something quite different? The Turk teaches us to be careful about equating appearance with reality.
Second, the Turk’s exposure — which came gradually, through patient investigation and eventually a deathbed confession from one of the chess masters who had sat inside it — tells us something important about the human desire to believe. People wanted the Turk to be real. When investigators proposed mechanisms that might explain how it could be a hoax, audiences resisted the explanation. They had seen it. It played chess brilliantly. It must be real.
This desire to believe in machine intelligence — to cross the line from tool to mind, from machine to being — is a constant theme in AI history. It drives both the genuine breakthroughs and the embarrassing hype cycles. It is why people fell in love with ELIZA in the 1960s. It is why millions of people today feel genuine affection for AI assistants. It is the thread that connects the Turk to the latest large language model.
Third, and perhaps most surprisingly, the Turk played a direct role in inspiring the people who actually did build real thinking machines. Charles Babbage — who attempted to build the world’s first mechanical computer in the 1820s — personally played the Turk and was fascinated by it. Ada Lovelace, the mathematician who would write the first computer program, was deeply interested in automata and the question of machine intelligence. The Turk was part of the cultural conversation that shaped the imagination of the people who made the first real steps toward AI.
The Industrial Age and the New Prometheus
By the early 19th century, the Industrial Revolution was in full swing. Machines were transforming the world. Factories were replacing artisans. Steam power was replacing human and animal muscle. The world was accelerating in ways that felt, to many people, frightening and exhilarating in equal measure.
And into this world came a book that would define how humans thought about artificial life for the next two hundred years.
Mary Shelley was eighteen years old when she began writing Frankenstein, or the Modern Prometheus. She was staying at the Villa Diodati on Lake Geneva with her lover Percy Shelley, Lord Byron, and the physician John Polidori. It was 1816, the “Year Without a Summer” — a volcanic winter caused by the eruption of Mount Tambora in Indonesia had cooled the climate across the Northern Hemisphere, producing constant storms and grey skies. The group entertained themselves by reading German ghost stories and eventually proposing a contest: who could write the best horror story?
Mary Shelley’s contribution won the contest, by several centuries.
Frankenstein is the story of Victor Frankenstein, a young scientist who becomes obsessed with the idea of creating life. He studies chemistry and physiology, assembles a body from parts gathered from charnel houses and dissecting rooms, and uses electricity — the mysterious new force that seemed to animate dead muscle — to bring his creation to life. The experiment succeeds. And then everything goes wrong.
The creature Victor creates is not the bolt-necked monster of Hollywood films. In Shelley’s novel, he is intelligent, articulate, capable of great tenderness and terrible violence. He reads philosophy and poetry. He experiences loneliness, rejection, and grief. He asks Victor for a companion — someone like him, so he does not have to be alone in the world. Victor refuses, and the creature’s grief turns to rage, and the story spirals toward tragedy.
Frankenstein is the founding text of a conversation that we are still having about AI today. It asks questions that have not become less urgent in two hundred years.
What responsibilities does a creator have toward their creation? Victor abandons his creature the moment it comes to life, horrified by what he has made. He does not teach it, guide it, or help it understand the world it has been thrust into. The creature’s subsequent suffering — and the violence it commits — flows directly from this abandonment. The novel is as much an indictment of Victor’s irresponsibility as it is a horror story.
This question — what do we owe the things we create, especially if they become sophisticated enough to suffer? — is one of the most pressing questions in AI ethics today. As AI systems become more capable, as they begin to model emotional states and exhibit something that at least resembles preferences and aversions, the question of their moral status becomes harder to dismiss.
What happens when a creation becomes more capable than its creator? Victor creates something that outstrips his ability to understand or control. This is the fundamental anxiety behind every AI safety concern. The fear is not that AI will be malevolent. The fear is that it will be powerful, and that we will not be wise enough or prepared enough to handle that power.
Is the dream worth the risk? Victor begins his project with noble intentions — to conquer death, to benefit humanity. But the pursuit of that dream destroys him, his family, and everyone he loves. Shelley was not saying that the dream should not be pursued. She was saying that it should be pursued with humility, care, and a full reckoning with the consequences.
Frankenstein was published in 1818. Everything it was worried about is still worth worrying about.
The Victorian Visionaries: Babbage, Lovelace, and the First Real Machine
In the 1820s, the dream of artificial intelligence stopped being purely imaginary. For the first time in history, someone actually tried to build a thinking machine. Not an automaton that performed tricks. Not a philosophical thought experiment. An actual mechanical device capable of performing complex mathematical calculations automatically.
His name was Charles Babbage, and he was one of the most brilliant and most maddening men in the history of science.
Babbage was a mathematician, inventor, and philosopher who spent most of his adult life in a state of productive fury at the inefficiency and inaccuracy of the world around him. He hated noise — he famously campaigned against street musicians with an obsessiveness that made him a figure of ridicule. He hated errors — the mathematical tables used by navigators and engineers in his day were riddled with mistakes, because they were calculated by hand. And he had a vision for solving both problems: a machine that could calculate mathematical tables automatically, without human error, and print the results directly to avoid typographical mistakes.
He called his first design the Difference Engine. It was a mechanical calculator of extraordinary complexity — a tower of gears, levers, and cams that could compute polynomial functions by the method of finite differences. He persuaded the British government to fund its construction. And then he spent the next ten years and the equivalent of millions of pounds in modern money failing to build it.
The failure was not for lack of engineering skill. The Difference Engine’s design was sound. The problem was that the precision required to machine its thousands of components was beyond what the manufacturing technology of the 1820s could reliably deliver. Babbage needed parts machined to tolerances that simply could not be consistently achieved with the tools available.
But even as the Difference Engine stalled, Babbage’s mind had moved on to something more ambitious. He conceived of a new machine he called the Analytical Engine — and this was something genuinely different. Not just a calculator, but a general-purpose computing machine.
The Analytical Engine had a memory — a store of numbers that could be held and retrieved. It had a processor — a mill where arithmetic operations were performed. It had a form of conditional branching — it could, in principle, make decisions based on the results of previous calculations. And most importantly, it could be programmed using punched cards — an idea borrowed from the Jacquard loom, a French invention that used punched cards to control the pattern of weaving in textile manufacturing.
The Analytical Engine was, in its fundamental architecture, a computer. Not an electronic computer — it was mechanical, powered by steam — but a computer in the logical sense. It was a machine that could be programmed to perform any computation, not just a specific fixed calculation.
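For readers who want to see what “a computer in the logical sense” means, here is a toy sketch in Python — the instruction names and layout are invented for illustration, not Babbage’s own notation — of that same architecture: a store of numbered cells, a mill that performs arithmetic on them, and a sequence of card-like instructions that can branch on a result.

```python
# A toy sketch of the Analytical Engine's logical architecture:
# a store (memory), a mill (arithmetic), and card-like instructions
# that can branch. Names and layout are illustrative, not Babbage's.

def run(cards, store):
    pc = 0                                  # which card is being read
    while pc < len(cards):
        op, *args = cards[pc]
        if op == "ADD":                     # the mill: arithmetic on store cells
            a, b, dest = args
            store[dest] = store[a] + store[b]
        elif op == "JUMP_IF_POSITIVE":      # conditional branching
            cell, target = args
            if store[cell] > 0:
                pc = target
                continue
        pc += 1
    return store

# Sum a countdown from 5 into cell 2, looping until the counter hits zero.
store = {0: 5, 1: -1, 2: 0}
cards = [
    ("ADD", 2, 0, 2),                # total += counter
    ("ADD", 0, 1, 0),                # counter -= 1
    ("JUMP_IF_POSITIVE", 0, 0),      # loop while counter > 0
]
print(run(cards, store))             # {0: 0, 1: -1, 2: 15}
```

The example loops and decides when to stop based on an intermediate result — exactly the kind of repeated, conditional calculation a fixed calculator like the Difference Engine could not express.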
It was never built. The engineering challenges were even greater than those that had defeated the Difference Engine, and Babbage could never secure the sustained funding to complete it. He died in 1871 with his greatest dream unrealized.
But the Analytical Engine left something more important than hardware. It left an idea. And the person who understood that idea most clearly was not Babbage himself, but a young woman named Augusta Ada King, Countess of Lovelace — known to history simply as Ada Lovelace.
Ada Lovelace was the daughter of Lord Byron, the poet, though she never knew him — he left England when she was a month old and died when she was eight. Her mother, determined that Ada would not inherit her father’s “dangerous” romantic temperament, ensured she received an unusual education for a girl of her era — one focused on mathematics and science.
Lovelace encountered Babbage at a dinner party in 1833, when she was seventeen. She immediately understood what the Analytical Engine was and what it meant. While other guests saw a marvelous mechanical curiosity, Lovelace saw something else: a general-purpose symbol-manipulating machine that could, in principle, operate on anything that could be expressed as a symbol — not just numbers, but music, text, images.
In 1843, Lovelace translated a paper about the Analytical Engine written by an Italian mathematician, adding her own notes that were nearly three times as long as the original paper. Those notes contained what is recognized today as the world’s first computer program — a detailed algorithm for computing Bernoulli numbers using the Analytical Engine. She did not just describe what the machine would do. She worked out in precise detail the sequence of operations it would need to perform, the way variables would be stored and retrieved, the way the calculation would loop through repeated steps.
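As a rough modern illustration — this is a Python reconstruction using the standard recurrence for Bernoulli numbers, not Lovelace’s own table of operations or her notation — the quantity her program targeted can be computed like this:

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Return B_0 .. B_n using the recurrence
    B_m = -1/(m+1) * sum_{k<m} C(m+1, k) * B_k, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))       # exact rational arithmetic throughout
    return B

print(bernoulli(8))
# [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```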
She also wrote something that has been debated by philosophers and computer scientists ever since. In a section of her notes, she addressed the question of what the Analytical Engine could and could not do. The machine, she wrote, could only do what we know how to order it to perform. It had no power of originating anything. It could only do what we told it to do.
This statement — now known as “Lady Lovelace’s Objection” — would be taken up a hundred years later by Alan Turing, who treated it as one of the central objections to the possibility of machine intelligence and set out to answer it. Can a machine only do what it is programmed to do? Or can it surprise us? Can it genuinely create something new? Is there a point at which a machine becomes more than the sum of its instructions?
These questions are not settled today. The story of Artificial Intelligence is, in large part, the story of people trying to prove Ada Lovelace wrong — and slowly, haltingly, in ways she could never have imagined, perhaps beginning to succeed.
The Late 19th Century: Logic, Electricity, and a World Getting Ready
The second half of the 19th century was a period of extraordinary preparation for AI, though none of the people involved would have used that term.
George Boole, a self-taught English mathematician, published The Laws of Thought in 1854 — a work that showed how logical reasoning could be expressed entirely in terms of algebraic equations using only two values: true and false, or in the notation he developed, 1 and 0. Boolean algebra, as it came to be known, was not immediately recognized as important. But it would eventually become the mathematical foundation of every digital computer ever built. Every operation your laptop or smartphone performs is ultimately reducible to Boolean logic — sequences of true/false decisions, expressed in electrical circuits as on/off states.
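Here is a small sketch in Python of what that reduction looks like in practice: a one-bit adder built from nothing but AND, OR, and XOR on the values 0 and 1 — a toy version of the Boolean circuits inside every processor. The bit patterns and variable names are illustrative only.

```python
# Arithmetic reduced to Boolean operations on 0 and 1.

def half_adder(a, b):
    total = a ^ b          # XOR gives the sum bit
    carry = a & b          # AND gives the carry bit
    return total, carry

def full_adder(a, b, carry_in):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2     # OR combines the two possible carries

# Add 3 (binary 11) and 1 (binary 01), least significant bit first.
bits_a, bits_b = [1, 1], [1, 0]
carry, result = 0, []
for a, b in zip(bits_a, bits_b):
    s, carry = full_adder(a, b, carry)
    result.append(s)
result.append(carry)
print(result)              # [0, 0, 1]  -> binary 100 -> 4
```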
Boole died of pneumonia in 1864, aged forty-nine, with no idea that his abstract mathematical work would one day be recognized as the logical underpinning of a global technological revolution.
At the same time, the new science of electricity was revealing a force that seemed to blur the line between the physical and the living. Experiments with galvanism — the application of electrical current to biological tissue — showed that dead muscles could be made to twitch and contract. This was the science that inspired Shelley’s Frankenstein, and it continued to fascinate scientists and the public throughout the century.
More practically, the development of the telegraph created something entirely new in the world: a system for transmitting information almost instantaneously across enormous distances. For the first time in human history, a message could travel faster than a human being could carry it. The implications took decades to fully absorb. But what the telegraph created — a network for information transmission — was a conceptual ancestor of everything that followed: the telephone, the radio, the internet, the interconnected world in which modern AI operates.
By the end of the 19th century, the essential ingredients for artificial intelligence were being assembled, though scattered across different fields and disciplines that had not yet found each other.
There was the mathematics of logic, courtesy of Boole and others. There was the concept of a programmable general-purpose computing machine, courtesy of Babbage and Lovelace. There was a growing understanding of the brain and nervous system as a physical system that processed information. And there was electricity — the mysterious force that animated both machines and living tissue, and that would eventually power everything.
The dream was getting closer to reality. All it needed was someone to put the pieces together.
The Turn of the Century: Automata in the Public Imagination
As the 19th century gave way to the 20th, artificial beings had become a staple of popular culture in ways that reflected genuine public anxieties about industrialization, automation, and the future of human labor.
In 1886, Villiers de l’Isle-Adam published a novel called L’Ève future — The Future Eve — which popularized the word “android” as a name for an artificial human being. The novel featured a fictionalized version of Thomas Edison who builds a mechanical woman of perfect beauty and intelligence for a lovesick English nobleman. The android, named Hadaly, is so convincing that she seems more real than any human. She is articulate, sensitive, and — the novel’s disturbing central proposition — perhaps more authentic than the biological woman she was modeled on.
L’Ève future was not a great novel. But it captured something important about the cultural moment: the growing sense that machines were not just tools but potential rivals to human beings — rivals that might, in some dimensions, be superior. This anxiety would only intensify as the 20th century progressed.
In 1920, the Czech playwright Karel Čapek wrote a play called R.U.R. — Rossum’s Universal Robots — which introduced the word “robot” to the world when it was staged the following year. The word came from the Czech robota, meaning drudge work or forced labor. The robots in Čapek’s play were not mechanical beings like Talos or Leonardo’s knight. They were organic — biological constructions, manufactured rather than born. And like the best science fiction, the play was not really about robots at all. It was about labor, exploitation, and the violence that follows when the exploited finally rise up.
The robots in R.U.R. eventually rebel and exterminate humanity. It was the first major artistic expression of the fear that would become one of the defining anxieties of the AI age: the created turning on the creator. Frankenstein’s monster, but multiplied and industrialized.
What is striking is that R.U.R. was written in 1920 — thirty-six years before the Dartmouth Conference that would officially launch the field of Artificial Intelligence. The cultural fears about AI were fully formed decades before the technology existed to justify them. The anxieties did not emerge from the technology. They were always already there, waiting.
Why This Matters Today
We have traveled a long way in this article — from bronze giants in Greek mythology to Czech robots in 1920. And you might wonder what any of this has to do with ChatGPT, or self-driving cars, or the debates about AI regulation that fill today’s news.
The answer is: everything.
The history of artificial intelligence does not begin in a university laboratory in the 1950s. It begins in the human imagination thousands of years ago. And the reason that matters is that the deepest questions about AI — the philosophical, ethical, and cultural questions — are not new questions. They are ancient questions wearing new clothes.
When we debate whether AI systems are truly intelligent or just mimicking intelligence, we are having the same argument Descartes started in 1637. When we worry about AI systems that follow their instructions too literally and cause unintended harm, we are recapitulating the Golem story. When we ask what responsibilities AI companies have toward the systems they create, we are asking Mary Shelley’s question about Victor Frankenstein. When we watch someone form a genuine emotional attachment to an AI companion, we are living the myth of Pygmalion.
These are not new problems. They are old problems with unprecedented urgency, because for the first time in history, we are actually building the things that our ancestors could only dream of.
And that makes the dreams more important, not less. Because how we think about what we are building — what stories we tell about it, what fears we bring to it, what hopes we project onto it — shapes what we build and how we build it.
The ancient dream of artificial life was not naive. It was not primitive. It was the first attempt of a species trying to understand its own intelligence by imagining what it would mean to replicate it. Every myth, every automaton, every philosophical thought experiment was a step in an inquiry that has not ended.
We are not at the end of this story. We may be close to one of its most significant chapters. But to understand where we are going, we have to know where we have been.
And we have been dreaming about this for a very, very long time.
Further Reading
If this article sparked your curiosity, here are some places to go deeper:
- “Machines Who Think” by Pamela McCorduck — The definitive history of AI, starting from ancient myths. Dense but extraordinary.
- “The Dream Machine” by M. Mitchell Waldrop — Focuses on J.C.R. Licklider and the early computing era, but opens with wonderful historical context.
- “Ada’s Algorithm” by James Essinger — An accessible biography of Ada Lovelace and her work with Babbage.
- “The Most Human Human” by Brian Christian — A modern exploration of the Turing Test and what it means to be human in an age of AI.
- Homer’s Iliad — Yes, the actual ancient text. The description of Hephaestus’s workshop in Book 18 is one of the earliest written descriptions of artificial life.
Next in the series: A2 — Clockwork Wonders: The Automata Era — How 17th and 18th century craftsmen built mechanical marvels that astonished kings and inspired the first computer scientists. The Digesting Duck, the Writing Boy, and the machines that proved the body could be replicated — and raised the question of whether the mind could be too.
Minds & Machines: The Story of AI is published weekly. If you found this valuable, share it with someone who would appreciate it.