Ask for “the first ever AI” and you run into a trap almost immediately. Most people asking the question picture something like a chatbot, a digital assistant, or a modern model trained on huge volumes of text and images. That picture has almost nothing to do with the earliest history of AI. The first serious contenders were theorem provers, game-playing programs, neural abstractions, and research agendas built by people who were still trying to decide what machine intelligence should even mean.
If you want the cleanest single answer, Logic Theorist is the strongest one. Allen Newell, Herbert A. Simon, and Cliff Shaw built it in 1955–56 to discover proofs in symbolic logic. It did not talk, paint, translate, or generate prose. It reasoned through formal problems, used heuristics, and helped define a style of AI that would dominate the field’s early years. Yet even that answer needs a footnote. Turing framed the problem earlier. Dartmouth named the field. Christopher Strachey and Arthur Samuel built intelligent game programs. McCulloch, Pitts, and Rosenblatt opened the neural route that now feels the most familiar.
The better question, then, is not “What was the first AI?” as if history owes us one neat artifact. The better question is which early system first deserves the label by the standards we now care about. That is where the story gets interesting, and where the usual popular answers start to wobble.
The question breaks as soon as you ask it
The phrase first AI sounds precise, but it hides at least four different arguments. You might mean the first time someone proposed that machines could think. You might mean the first time the term artificial intelligence appeared in print. You might mean the first runnable program that showed behavior people were willing to call intelligent. Or you might mean the first system that resembles the way current AI works, which usually pushes readers toward neural networks and learning systems. Those are not the same milestone, and they do not point to the same inventor or machine.
John McCarthy later defined AI as “the science and engineering of making intelligent machines,” a definition broad enough to include symbolic reasoning, game playing, machine learning, perception, and language systems. That breadth matters because it explains why the origin story never stays tidy. Early AI was not one invention but a cluster of programs and theories arriving from different directions at roughly the same time. Some researchers cared about formal reasoning. Some cared about learning from experience. Some cared about perception. Some wanted machines to imitate human thinking. Others were satisfied with performance, whether or not the machine thought like a person.
That split still shapes the history. If you think intelligence is best shown by solving formal problems, Logic Theorist looks like the first real AI. If you think intelligence starts when a machine can improve through experience, Arthur Samuel’s checkers work becomes far more important. If you think the heart of modern AI lies in neural computation, then McCulloch and Pitts in 1943, and Rosenblatt’s perceptron in the late 1950s, move much closer to the front of the line.
There is also a public-history problem. People often remember the first AI as the first system that felt eerie or human-like. By that standard, ELIZA gets far more attention than it deserves in origin stories. It mattered enormously for public imagination, but it arrived a decade after the field had already been named and after key AI programs had already demonstrated reasoning, game strategy, and learning. ELIZA changed what people thought AI looked like. It did not start AI.
So before picking a winner, the honest move is to separate the milestones. The first serious intellectual manifesto is one thing. The first use of the term is another. The first convincing program is another still. Once you do that, the field stops looking like a fairy tale with one founder and one machine. It looks more like what it was: a restless argument about whether intelligence could be formalized, mechanized, and reproduced.
Turing planted the idea before AI had a name
Alan Turing did not coin the phrase artificial intelligence, and he did not build the program most historians now call the first AI. He did something earlier and in some ways harder: he made the question unavoidable. In his 1950 paper Computing Machinery and Intelligence, Turing opened with the problem “Can machines think?” and then refused to waste the entire debate on vague definitions. He replaced the metaphysical argument with an operational one, the imitation game, which later became the Turing test. That move gave later researchers a workable public frame for machine intelligence.
Turing’s importance runs deeper than the famous test. Oxford’s historical material on his work points out that even before the 1950 paper, he had already been sketching programs and architectures for machine intelligence in the late 1940s. With hindsight, those writings read like an early manifesto for AI, including thoughts about learning systems and artificial neural structures. He was asking what intelligence in a machine would require before the field had the language to organize itself around that question.
That matters because later histories often flatten Turing into a single thought experiment. He was not just the man who proposed a test. He was one of the first people to treat machine intelligence as an engineering project rather than a philosophical curiosity. The Computer History Museum’s timeline places his work as a prelude to later AI milestones for exactly that reason: he framed intelligence as something computers might display, not merely calculate around.
Still, calling Turing’s 1950 paper the first AI would stretch the term beyond usefulness. A paper is not a runnable system. A conceptual framework is not yet a demonstrated program. Turing deserves credit for shifting the problem from speculation to design, but not for producing the clearest early machine that historians can point to and say, yes, this was AI in action. That distinction is small on social media and huge in serious history.
There is a second reason Turing should not simply be crowned and the case closed. The standard most historians use for the first AI is not “who first imagined it,” but “who first built something that instantiated a recognizable AI method.” On that measure, Turing is the intellectual ancestor, not the final answer. He made later work legible. He did not settle the race.
Dartmouth turned a hunch into a field
The summer workshop at Dartmouth in 1956 is often described as the birth of AI, and that description is fair as long as it is read carefully. Dartmouth was the founding event of the field, not necessarily the moment the first AI program came into existence. John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon proposed the project in 1955, and the proposal appears to mark the first published use of the phrase artificial intelligence.
The proposal matters because it did more than name the field. It laid down an audacious research claim: intelligence, learning, language, abstraction, and self-improvement might all be described precisely enough that a machine could simulate them. You can feel the boldness of mid-century computing in that wager. It was not modest. It was not cautious. It was a declaration that mental activity belonged inside engineering.
That act of naming had real consequences. Before Dartmouth, the intellectual terrain was scattered across cybernetics, automata theory, information processing, formal logic, neuroscience, and operations research. After Dartmouth, those lines did not suddenly merge, but they gained a shared banner. A field exists differently once it has a name, a research program, and a circle of people who believe they are working on the same problem. Dartmouth supplied all three.
Yet Dartmouth is often misremembered as if nothing intelligent had happened before that summer. That is wrong. Turing’s work was already on the table. McCulloch and Pitts had already published their neural model. Strachey and Samuel were already making computers play games. What Dartmouth did was gather the ambitions into a coherent discipline and give later historians a symbolic starting line. It is the right answer to “When did AI become a field?” It is not automatically the right answer to “What was the first AI?”
That distinction also helps explain why Logic Theorist sits so comfortably in the origin story. It appeared at almost the exact moment the new field got its name. Historians love that convergence because it makes the narrative cleaner: the field is christened, and a program arrives that seems to justify the christening. History is rarely that cooperative, but in this case it was cooperative enough to leave a durable legend.
Logic Theorist made the strongest early case
If you strip away the mythology and ask for the first convincing AI program, Logic Theorist is still the best answer. Newell and Simon’s 1956 RAND paper describes “the logic theory machine” as a complex information-processing system capable of discovering proofs for theorems in symbolic logic, and it says explicitly that the system relied on heuristic methods similar to those observed in human problem solving. That sentence alone tells you why historians keep returning to it. This was not a glorified calculator. It was designed to emulate a mental process.
The program’s achievements were not cosmetic. According to later historical accounts and summaries grounded in the early record, Logic Theorist proved 38 of the first 52 theorems in a section of Principia Mathematica. That mattered for two reasons. First, theorem proving was intellectually prestigious terrain. Second, the program was not brute-forcing every path blindly. It used heuristics, search, intermediate goals, and feedback. It looked like a machine reasoning under constraints rather than merely grinding through arithmetic.
That word heuristics deserves attention. One of the decisive moves in early symbolic AI was the recognition that intelligence is not just formal correctness. It is selective search. It is knowing where not to look. Herbert Simon’s ACM Turing Award page makes the point cleanly: heuristics are rules of thumb that do not guarantee success, but often get to useful answers far faster. Logic Theorist embodied that shift. It brought intelligence closer to strategy than to certainty.
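To make the shift concrete, here is a minimal sketch of heuristic best-first search in Python. It is not Logic Theorist’s actual algorithm: the grid world, the `neighbors` function, and the Manhattan-distance rule of thumb are all invented for illustration. The point is the shape of the idea, an inexpensive estimate ordering the frontier so the program explores promising states first instead of grinding through every path.

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """Greedy best-first search: always expand the state the heuristic
    likes most. The heuristic is a rule of thumb; it guarantees neither
    the shortest route nor success, but it usually avoids exploring
    everything.
    """
    frontier = [(heuristic(start), start, [start])]  # (score, state, path)
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None  # heuristics can fail; exhaustive certainty was never the point

# Toy example: walk a grid toward (3, 3) using Manhattan distance as the
# rule of thumb. All names here are illustrative, not historical.
def neighbors(p):
    x, y = p
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

manhattan = lambda p: abs(p[0] - 3) + abs(p[1] - 3)
print(best_first_search((0, 0), (3, 3), neighbors, manhattan))
```

Swap in an uninformative heuristic and the same loop degenerates toward blind search, which is exactly why the choice of heuristics, not the search machinery itself, was the interesting part.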
There is also a human side to the story that reveals how radical the project felt at the time. Simon later recalled the excitement around the program in terms that now sound almost theatrical, announcing that he and Newell had invented “a machine that thinks.” That line survives because it captured the mood perfectly. They were not claiming to have built an electronic brain in the strong sci-fi sense. They were claiming that a machine could carry out a process that respectable adults had long reserved for trained minds.
Logic Theorist also sits at the hinge between computer science and cognitive science. Newell and Simon were not only interested in getting correct proofs. They wanted to model how human beings solve problems. That is why the program keeps showing up in histories of AI and in histories of cognitive modeling. Its significance was not just that it worked, but that it worked in a way that looked psychologically meaningful to its creators.
That last point is why I would not replace Logic Theorist with a slightly earlier game player unless the definition changes. There were earlier or contemporaneous attempts at intelligent-seeming behavior. But Logic Theorist was unusually explicit about its aim, unusually influential in method, and unusually well positioned inside the newly named field. It was a program, not just a proposal. It was about reasoning, not just entertainment. And it shaped the symbolic tradition that dominated AI for years.
None of this makes the case uncontested. It makes it durable. Histories keep returning to Logic Theorist because it satisfies more of the modern checklist than its rivals do. It was built to imitate intelligent reasoning. It ran. It solved hard formal problems. It introduced key ideas like heuristic search. It arrived at the founding moment of the discipline. That combination is why, if someone forces you to give one name, this is the one to give.
Game-playing machines kept the origin story unsettled
The clean symbolic story begins to wobble once you look at game-playing programs. An Oxford history chapter on AI’s emergence states flatly that Christopher Strachey completed the first artificially intelligent computer program in 1952, a checkers program, and that Arthur Samuel later improved on that line of work by adding machine learning. That is not a fringe view. It is a serious reminder that intelligence in machines did not begin only with theorem proving.
Strachey’s case is attractive because a game-playing program feels immediately intelligible as AI. A machine evaluates positions, chooses moves, and competes against a human. That is behavior people intuitively read as intelligent. The problem is that the historical category is fuzzier here. Was Strachey’s program the first artificially intelligent program in a broad sense? Some scholars say yes. Was it the first program that founded AI as a research method? That is harder to defend. It did not anchor the field in the way Logic Theorist did.
Arthur Samuel’s checkers work is even more important because it brings learning into the picture. IBM’s historical material credits Samuel with building a program that improved its play over time and with pioneering what became known as machine learning. IBM’s machine learning overview traces the origin of the term to Samuel’s 1959 paper and quotes his aim clearly: the computer should learn to play a better game than the person who wrote it. That idea feels strikingly modern because it shifts intelligence from handcrafted rules toward performance shaped by experience.
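For flavor, here is a deliberately tiny sketch of that idea in Python. It is not Samuel’s actual method, which combined minimax lookahead with rote learning and generalization learning over board features; this toy keeps only the kernel of learning from outcomes, and the feature names, sample games, and learning rate are all invented.

```python
def evaluate(weights, features):
    """Score a position as a weighted sum of hand-chosen board features."""
    return sum(w * f for w, f in zip(weights, features))

def update(weights, features, outcome, prediction, lr=0.01):
    """Nudge each weight so the evaluation tracks actual game outcomes.

    This is the learning-from-experience kernel: competence shifts from
    the programmer's fixed rules toward feedback from play.
    """
    error = outcome - prediction
    return [w + lr * error * f for w, f in zip(weights, features)]

# Illustrative features for a position: piece advantage, king advantage,
# center control. Outcome is +1 for an eventual win, -1 for a loss.
weights = [0.0, 0.0, 0.0]
games = [([2, 0, 3], +1), ([-1, -1, 1], -1), ([1, 1, 2], +1)]

for _ in range(200):  # replay experience until the evaluation stabilizes
    for features, outcome in games:
        weights = update(weights, features, outcome, evaluate(weights, features))

print([round(w, 3) for w in weights])
```

The design point survives translation: nothing in the loop encodes what a good checkers position looks like. The weights drift toward whatever the outcomes reward, which is the sense in which the program could come to play better than its author specified.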
This is where the answer depends on the era of AI you care about. Symbolic AI historians gravitate toward Logic Theorist because it launched heuristic reasoning and problem-solving architectures. Machine learning historians often give Samuel heavier weight because he pushed the idea that a program’s competence could emerge through improvement, not only through design. Both views are defensible. They are defending different bloodlines.
Game-playing systems also remind us that early AI researchers liked tightly bounded worlds. Board games were perfect because the rules were clear, the search space was large but formal, and success was visible. A theorem proof and a winning move look different on the surface, yet both let researchers ask the same question: can a machine choose intelligently within a structured domain? That is why game programs never sat at the edge of AI. They were central to it.
Still, if the original question is the plain-language one — the first ever AI — game programs complicate the story more than they settle it. They prove that Logic Theorist was not the only plausible first. They do not, by themselves, dislodge it as the strongest single answer. What they do is force you to say which property you are rewarding: strategic play, learning, formal reasoning, or disciplinary influence.
Neural AI started on a parallel track
Anyone trying to trace a straight line from early AI to modern deep learning has to leave the symbolic story for a moment. The neural line starts earlier than many people realize. In 1943, Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity, one of the foundational texts in computational neuroscience and the history of neural networks. Their paper argued that neural events and their relations could be treated in logical terms, effectively sketching an abstract model of artificial neurons.
That paper did not create a modern neural network in the engineering sense, and it certainly did not create something like a present-day deep learning system. But it mattered because it showed that cognition might be modeled through networks of simple units rather than only through symbolic rule manipulation. Two of AI’s later great traditions — symbolic reasoning and connectionist learning — were already diverging before the field had fully named itself.
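What “neural events treated in logical terms” means is easy to show: a unit that fires when the weighted sum of its binary inputs crosses a threshold can realize Boolean connectives. The sketch below is a modern paraphrase, not code from the 1943 paper, and the particular weights and thresholds are illustrative choices.

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts-style unit: binary inputs, fixed weights,
    fires (returns 1) when the weighted sum reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# With fixed weights, single units realize logical connectives:
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```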
Frank Rosenblatt pushed the neural line into hardware and learning. The Smithsonian’s record of the Mark I Perceptron notes that Rosenblatt described the perceptron in 1957 and that, by the following year, he and his colleagues had constructed the Mark I as a physical embodiment of the idea. IBM’s historical overview places the perceptron in 1957 as an early artificial neural network for pattern recognition, while Cornell’s retrospective makes clear how ambitious Rosenblatt’s claims were and how much his work later came to resemble the foundations of modern AI.
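Rosenblatt’s decisive addition was a procedure for finding the weights instead of fixing them by hand. Here is a minimal sketch of the classic perceptron update rule on an invented, linearly separable toy problem; the real Mark I adjusted motor-driven potentiometers rather than Python floats, so treat this as the idea, not the machine.

```python
def train_perceptron(samples, lr=0.1, epochs=20):
    """Classic perceptron rule: when the unit misclassifies an input,
    move the weights toward (or away from) that input."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:  # target is 0 or 1
            pred = int(sum(wi * xi for wi, xi in zip(w, x)) + b >= 0)
            err = target - pred    # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Invented, linearly separable data: label points by whether x + y > 1.
data = [((0, 0), 0), ((1, 0), 0), ((0, 1), 0), ((1, 1), 1), ((2, 1), 1)]
w, b = train_perceptron(data)
print(w, b)
```

For separable data like this, the update provably converges, which is part of why Rosenblatt’s claims sounded so strong at the time and why the later discovery of the rule’s limits on non-separable problems hit so hard.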
This is where origin stories tend to cheat. People who want a neat symbolic beginning talk as if neural AI came much later. People who want a neat deep-learning prehistory talk as if Rosenblatt made symbolic AI irrelevant. Neither move helps. The truth is that early AI was plural from the start. Symbolic theorem proving, game learning, and neural perception were not sequential replacements. They were overlapping bets on what intelligence might turn out to be.
That pluralism matters for the original question because it changes what first can mean. If you mean the first AI in the lineage that leads most directly to present-day neural systems, Rosenblatt’s perceptron and the McCulloch-Pitts model become hard to ignore. If you mean the first program widely recognized by historians as AI within the field’s own self-understanding, Logic Theorist still holds the stronger claim. Those are different lineages, and they should not be forced into one family tree with one founding child.
ELIZA captured the public long after the first breakthrough
ELIZA tends to hijack casual conversations about firsts because it feels uncannily familiar. Joseph Weizenbaum’s 1966 paper describes a program that made certain kinds of natural language conversation between human and computer possible through keyword-triggered decomposition and reassembly rules. In plain terms, ELIZA looked conversational enough to make users feel as if the machine understood them, even though the mechanism was shallow pattern handling rather than genuine comprehension.
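In code, that shallow pattern handling looks something like the sketch below: find a keyword pattern, pull a fragment out of the user’s sentence, flip its pronouns, and drop it into a canned template. The rules and pronoun table here are invented for illustration and are far cruder than Weizenbaum’s actual script.

```python
import random
import re

# Tiny illustrative rules in the decompose/reassemble style: a regex
# captures part of the user's sentence, and the template reuses it.
RULES = [
    (re.compile(r"i need (.*)", re.I), ["Why do you need {0}?",
                                        "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.*)", re.I),   ["How long have you been {0}?",
                                        "Why do you say you are {0}?"]),
    (re.compile(r"my (.*)", re.I),     ["Tell me more about your {0}."]),
]
PRONOUNS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment):
    """Swap first-person words for second-person before reassembly."""
    return " ".join(PRONOUNS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, templates in RULES:
        m = pattern.search(sentence)
        if m:
            return random.choice(templates).format(reflect(m.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I need my old job back"))
```

Notice that nothing in the sketch represents meaning. The program never models what a job is or why losing one hurts, yet the reflected fragment makes the reply feel attentive. That gap between mechanism and impression is the whole ELIZA lesson.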
What made ELIZA historic was not that it was the first AI. It was that it exposed how easily people project mind onto language. Weizenbaum himself wrote with a kind of controlled irritation about the way a program’s apparent intelligence dissolves once its procedures are explained. That observation still stings because it keeps recurring. Each new generation discovers a system that feels intelligent, and each generation has to confront the gap between performance and understanding.
Recent archival work highlighted by MIT Press shows that ELIZA was more sophisticated than the simplified legend suggests, and that its development belongs in a richer history of early AI than the usual “primitive chatbot” summary allows. That matters. But it does not alter the chronology. By the time ELIZA appeared publicly, AI already had a name, a workshop mythology, theorem provers, game-playing systems, learning programs, and neural machines. ELIZA belongs to the maturation of AI, not to its first moment.
The reason ELIZA keeps sneaking into origin stories is emotional rather than historical. It gives readers a recognizable ancestor for present-day chat systems. Logic Theorist does not. A theorem prover working through symbolic logic feels distant from today’s interfaces. A program that talks back does not. That makes ELIZA a powerful cultural ancestor even though it is a weak candidate for “first AI.”
So ELIZA deserves a precise label. It was an early and hugely influential chatbot, a landmark in human-computer conversation, and a warning about anthropomorphism. It was not the first AI unless you reduce AI history to the history of conversational illusion, which would erase too much of what the field actually was in its early decades.
The most honest answer and why it still matters
By this point the clean answer and the honest answer are close, but not identical. The clean answer is easy to state: Logic Theorist is usually the best single answer to “what was the first AI?” The honest answer is slightly longer: it depends on whether you mean the first AI concept, the first use of the term, the first strong symbolic AI program, the first intelligent game player, the first learning system, or the first neural machine. Once those categories are separated, the argument becomes much less confused.
A quick map of the competing firsts
| Candidate | Date | Strongest claim | Why it matters |
|---|---|---|---|
| McCulloch and Pitts model | 1943 | Early theoretical foundation for artificial neural networks | Opened the connectionist path |
| Alan Turing’s early AI writings and 1950 paper | 1948–1950 | First major intellectual framework for machine intelligence | Turned machine thinking into a serious research question |
| Strachey and Samuel game programs | 1952 onward | Early intelligent play and early machine learning | Showed strategic play and learning from experience |
| Dartmouth proposal and workshop | 1955–1956 | First published use of artificial intelligence and founding event of the field | Named the discipline and stated its ambition |
| Logic Theorist | 1955–1956 | First strong symbolic AI program | Used heuristics and proved formal theorems |
| Rosenblatt’s perceptron | 1957–1958 | Early neural learning machine | Linked perception, classification, and adaptive weights |
| ELIZA | 1964–1966 | First famous chatbot | Changed public imagination of what AI feels like |
The table works because the rival claims are not nonsense. Each of these systems or events really was first at something important. History gets distorted when one of those “firsts” is smuggled in as if it covered all the others. The symbolic tradition, the neural tradition, the learning tradition, and the public-facing chatbot tradition did not begin on the same day or with the same machine.
That is also why the question still matters. It is not just trivia. The machine you choose as the “first AI” reveals what you believe intelligence is. Choose Turing, and AI begins as a conceptual challenge. Choose Dartmouth, and it begins as a field-building act. Choose Logic Theorist, and AI begins with reasoning and heuristics. Choose Samuel, and it begins with learning. Choose Rosenblatt, and it begins with neural adaptation. Choose ELIZA, and you are telling a story about language, performance, and human projection.
My own answer, if forced to use one sentence, would be this: the first ever AI was probably Logic Theorist, but only after you admit that “first ever AI” is an argument about definitions, not a settled physical object sitting alone at the start of history. That answer is less tidy than the headline version, but it is truer to the record.
And that truth has a contemporary echo. Every generation of AI likes to imagine that it has finally found the real road to intelligence and that earlier detours were quaint. Early AI history says otherwise. The field began as a set of competing intuitions about what minds do: reason, search, learn, classify, speak, adapt. We still have not escaped that argument. We are still living inside it.
[Photo: Allen Newell, Herbert A. Simon, and Cliff Shaw]

FAQ
**Was Logic Theorist really the first AI?**
Usually, yes — if you mean the first widely recognized AI program in the field’s own history. It was built in 1955–56, used heuristic search, and proved theorems from Principia Mathematica. That combination is why many historians and reference works treat it as the first strong AI program.

**Who coined the term artificial intelligence, and when?**
John McCarthy coined the term in 1955 in the proposal for the Dartmouth Summer Research Project on Artificial Intelligence, which later became the field’s founding workshop in 1956.

**Did Alan Turing build the first AI?**
Turing is better described as a founding thinker of AI than as the builder of the first accepted AI program. His 1950 paper framed the central question of machine intelligence and gave later researchers a lasting test and vocabulary, but the strongest “first program” claim usually goes elsewhere.

**Was ELIZA the first AI?**
ELIZA is widely regarded as an early and foundational chatbot, not the first AI overall. It appeared in the mid-1960s, after AI had already been named as a field and after symbolic reasoning, game playing, learning, and neural systems were already underway.

**Where does machine learning enter the story?**
Machine learning enters the early AI story most clearly through Arthur Samuel’s checkers work. IBM traces the term’s origin to Samuel’s 1959 paper, and his program is important because it aimed to improve through experience rather than rely only on fixed hand-coded competence.

**Why do some people say the first AI was a neural network?**
Because modern AI feels much closer to the neural lineage than to mid-century symbolic theorem proving. McCulloch and Pitts laid down an early theoretical neural model in 1943, and Rosenblatt’s perceptron turned that line into a learning machine in the late 1950s. If your definition of AI centers on neural learning, those milestones move much closer to the front.
Author:
Jan Bielik
CEO & Founder of Webiano Digital & Marketing Agency
This article is an original analysis supported by the sources cited below.
Sources

- *Computing Machinery and Intelligence*. Alan Turing’s 1950 paper that framed the machine intelligence question and introduced the imitation game.
- *A Logical Calculus of the Ideas Immanent in Nervous Activity*. The classic 1943 paper by Warren McCulloch and Walter Pitts that laid foundational neural-network theory.
- *A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence*. The text of the 1955 proposal that introduced the phrase artificial intelligence and stated the field’s early research wager.
- *Artificial Intelligence (AI) Coined at Dartmouth*. Dartmouth’s institutional history of the 1956 workshop and its role in the birth of AI as a field.
- *What Is Artificial Intelligence (AI)?* Stanford HAI’s concise definition of AI and note on John McCarthy’s role in naming the field.
- *Stanford’s John McCarthy, Seminal Figure of Artificial Intelligence, Is Dead at 84*. Stanford’s obituary-style profile summarizing McCarthy’s coinage of the term and the Dartmouth proposal.
- *The Logic Theory Machine*. The 1956 RAND report by Allen Newell and Herbert A. Simon describing the logic theory machine and its heuristic approach.
- *AI and Robotics: Timeline of Computer History*. Computer History Museum timeline entry summarizing Logic Theorist’s achievements and early AI context.
- *Herbert A. Simon, A.M. Turing Award Laureate*. ACM’s historical profile of Simon, including Logic Theorist’s role in heuristic problem solving and early AI.
- *Artificial Intelligence Emerges*. Oxford Academic chapter that highlights Strachey’s checkers program, Samuel’s learning program, and Logic Theorist’s proof record.
- *The Games That Helped AI Evolve*. IBM history feature on Arthur Samuel’s checkers work and the practical importance of early game-playing systems.
- *Some Studies in Machine Learning Using the Game of Checkers*. Arthur Samuel’s seminal 1959 paper on learning systems built through checkers.
- *What Is Machine Learning?* IBM overview tracing the term machine learning back to Samuel’s 1959 paper.
- *Electronic Neural Network, Mark I Perceptron*. Smithsonian object record for Rosenblatt’s Mark I Perceptron and its place in neural-network history.
- *Professor’s Perceptron Paved the Way for AI — 60 Years Too Soon*. Cornell’s historical retrospective on Frank Rosenblatt and the perceptron’s long-term significance.
- *ELIZA — A Computer Program for the Study of Natural Language Communication Between Man and Machine*. Joseph Weizenbaum’s 1966 paper describing ELIZA’s design and the illusion of conversational understanding.
- *Inventing ELIZA*. MIT Press description of archival work that reconstructs ELIZA’s deeper historical and technical context.
- *The Origins of Artificial Intelligence: The 1956 Dartmouth Workshop and Its Immediate Consequences*. Computer History Museum summary of the Dartmouth proposal and its role in formalizing AI as a discipline.
- *Artificial Intelligence*. Stephanie Dick’s historical essay in Harvard Data Science Review on the standard origin narrative of AI.



