Timeline of artificial intelligence


This is a timeline of artificial intelligence, also known as synthetic intelligence.

Antiquity, Classical and Medieval eras

1600–1900

20th century

1901–1950

1910–1913: Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which showed that all of elementary mathematics could be reduced to mechanical reasoning in formal logic.
1912–1914: Leonardo Torres Quevedo built El Ajedrecista, an automaton that played chess endgames. He has been called "the 20th century's first AI pioneer". In his Essays on Automatics, Torres speculated about thinking machines and automata and introduced the idea of floating-point arithmetic.
1923: Karel Čapek's play R.U.R. opened in London. This was the first use of the word "robot" in English.
1920–1925: Wilhelm Lenz and Ernst Ising created and analyzed the Ising model, which can be viewed as the first artificial recurrent neural network, consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive.
1920s and 1930s: Ludwig Wittgenstein's Tractatus Logico-Philosophicus inspired Rudolf Carnap and the logical positivists of the Vienna Circle to use formal logic as the foundation of philosophy. However, Wittgenstein's later work in the 1940s argued that context-free symbolic logic is incoherent without human interpretation.
1931: Kurt Gödel encoded mathematical statements and proofs as integers and showed that there are true theorems unprovable by any consistent theorem-proving machine. Thus, "he identified fundamental limits of algorithmic theorem proving, computing, and any type of computation-based AI", laying the foundations of theoretical computer science and AI theory.
1935: Alonzo Church extended Gödel's proof and showed that the decision problem of computer science has no general solution. He also developed the lambda calculus, which would become fundamental to the theory of programming languages (a Church-numeral sketch follows this table).
1936: Konrad Zuse filed his patent application for a program-controlled computer.
1937: Alan Turing published "On Computable Numbers", which laid the foundations of the modern theory of computation by introducing the Turing machine, an abstract machine that gives a precise meaning to "computability". He used it to extend Gödel's results, proving that the halting problem is undecidable.
1940: Edward Condon displayed Nimatron, a digital machine that played Nim perfectly (a sketch of the perfect-play strategy follows this table).
1941: Konrad Zuse built the Z3, the first working program-controlled general-purpose computer.
1943: Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity", the first mathematical description of an artificial neural network.
1943: Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coined the term "cybernetics". Wiener's popular book by that name was published in 1948.
1944: Game theory, which would prove invaluable in the progress of AI, was introduced with the book Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
1945: Vannevar Bush published "As We May Think", a prescient vision of the future in which computers assist humans in many activities.
1948: Alan Turing produced the report "Intelligent Machinery", regarded as the first manifesto of artificial intelligence. It introduced many concepts, including the logic-based approach to problem-solving, the idea that intellectual activity consists mainly of various kinds of search, and a discussion of machine learning that anticipated the connectionist approach to AI.
1948: John von Neumann, in response to a comment at a lecture that it was impossible for a machine to think, replied: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Von Neumann was presumably alluding to the Church–Turing thesis, which states that any effective procedure can be simulated by a computer.
1949: Donald O. Hebb developed Hebbian theory, a possible learning rule for neural networks (a sketch of the rule follows this table).
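
The lambda calculus in the 1935 entry can be made concrete with Church numerals, the standard lambda-calculus encoding of the natural numbers. The Python rendering below is a minimal sketch of that encoding, not anything from Church's paper.

```python
# Church numerals: the number n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))               # n + 1
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))   # m + n

def to_int(n):
    """Decode a Church numeral by counting how many times it applies f."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
one = succ(zero)
print(to_int(add(two)(one)))  # -> 3
```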
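
The "perfect play" in the 1940 Nimatron entry has a compact mathematical description, Bouton's theorem: a Nim position is lost for the player to move exactly when the bitwise XOR of the heap sizes (the nim-sum) is zero. The sketch below illustrates that strategy; it is our illustration, not a description of Nimatron's circuitry.

```python
from functools import reduce
from operator import xor

def nim_move(heaps):
    """Return a winning move (heap_index, new_size), or None from a lost position."""
    nim_sum = reduce(xor, heaps, 0)
    if nim_sum == 0:
        return None  # every move hands the opponent a winning position
    for i, h in enumerate(heaps):
        target = h ^ nim_sum
        if target < h:
            return (i, target)  # shrink heap i so the nim-sum becomes zero

print(nim_move([3, 4, 5]))  # -> (0, 1), since 1 ^ 4 ^ 5 == 0
```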
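
Hebb's 1949 postulate, often summarized as "cells that fire together wire together", corresponds to the update rule Δw_i = η · x_i · y. A minimal sketch; the vectorized form and the numbers are our illustration:

```python
import numpy as np

def hebbian_update(w, x, y, eta=0.1):
    """One Hebbian step: delta_w_i = eta * x_i * y, so a connection
    strengthens when pre-synaptic activity x_i and post-synaptic
    activity y occur together."""
    return w + eta * x * y

w = np.zeros(3)
for _ in range(10):
    w = hebbian_update(w, x=np.array([1.0, 0.0, 1.0]), y=1.0)
print(w)  # -> [1. 0. 1.]: only the two co-active inputs were reinforced
```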

1950s

1960s

1960s: Ray Solomonoff laid the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960"Man–Computer Symbiosis" by J.C.R. Licklider.
1961: James Slagle wrote the first symbolic integration program, SAINT, which solved calculus problems at the college-freshman level.
1961: In "Minds, Machines and Gödel", John Lucas denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's 1931 result: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems that are unprovable by any theorem-proving AI that derives all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
1961: Unimation's industrial robot Unimate worked on a General Motors automobile assembly line.
1963: Thomas Evans' program ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
1963: Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.
1963: Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine-learning programs that could adaptively acquire and modify features, thereby overcoming the limitations of Rosenblatt's simple perceptrons.
1964: Danny Bobrow's MIT dissertation (on the STUDENT program) showed that computers can understand natural language well enough to solve algebra word problems correctly.
1964: In his essay collection Summa Technologiae, Stanisław Lem discussed "intellectronics".
1964: Bertram Raphael's MIT dissertation on the SIR program demonstrated the power of a logical representation of knowledge for question-answering systems.
1965: In the Soviet Union, Alexey Ivakhnenko and Valentin Lapa developed the first deep-learning algorithm for multilayer perceptrons.
1965: Lotfi A. Zadeh at U.C. Berkeley published "Fuzzy Sets", his first paper introducing fuzzy logic (a sketch of fuzzy set operations follows this table).
1965: J. Alan Robinson invented the resolution method, a mechanical proof procedure that allows programs to work efficiently with formal logic as a representation language (a propositional sketch follows this table).
1965: Joseph Weizenbaum built ELIZA, an interactive program that carries on a dialogue in English on any topic. It became a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.
1965: Edward Feigenbaum initiated Dendral, a ten-year effort to develop software to deduce the molecular structure of organic compounds from scientific-instrument data. It was the first expert system.
1966: Ross Quillian demonstrated semantic nets.
1966: The Machine Intelligence workshop was held at Edinburgh, the first of an influential annual series organized by Donald Michie and others.
1966: A negative report on machine translation (the ALPAC report) killed much work in natural language processing for many years.
1966: The Dendral program was demonstrated interpreting the mass spectra of organic chemical compounds, the first successful knowledge-based program for scientific reasoning.
1967: Shun'ichi Amari became the first to use stochastic gradient descent for deep learning in multilayer perceptrons. In computer experiments conducted by his student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify non-linearly separable pattern classes (a minimal MLP sketch follows this table).
1968: Joel Moses demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics.
1968: Richard Greenblatt at MIT built a knowledge-based chess-playing program, Mac Hack, that was good enough to achieve a class-C rating in tournament play.
1968: Wallace and Boulton's program Snob, for unsupervised classification, used the Bayesian minimum message length criterion, a mathematical realisation of Occam's razor.
1969: Stanford Research Institute demonstrated Shakey the robot, combining locomotion, perception and problem-solving.
1969: Roger Schank defined the conceptual dependency model for natural language understanding. It was later developed for use in story understanding by Robert Wilensky and Wendy Lehnert, and for use in understanding memory by Janet Kolodner.
1969: Yorick Wilks at Stanford developed Preference Semantics, a semantic coherence view of language embodied in the first semantically driven machine-translation program, and the basis of many PhD dissertations since.
1969: Marvin Minsky and Seymour Papert published Perceptrons, demonstrating previously unrecognized limits of this feed-forward, two-layered structure. The book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. However, by the time the book came out, methods for training multilayer perceptrons by deep learning were already known, and significant progress in the field continued (the MLP sketch after this table shows the XOR case).
1969: John McCarthy and Patrick Hayes started the discussion about the frame problem with their essay "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
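
Zadeh's 1965 paper replaces crisp set membership with a degree in [0, 1] and defines intersection and union by min and max. A minimal sketch; the "tall" membership function is an invented example, not one from the paper:

```python
def mu_tall(height_cm):
    """Graded membership in the fuzzy set "tall":
    0 below 160 cm, 1 above 190 cm, linear in between."""
    return min(1.0, max(0.0, (height_cm - 160.0) / 30.0))

def fuzzy_and(a, b):  # Zadeh's intersection: min
    return min(a, b)

def fuzzy_or(a, b):   # Zadeh's union: max
    return max(a, b)

print(mu_tall(175))                           # 0.5: partly a member of "tall"
print(fuzzy_and(mu_tall(175), mu_tall(188)))  # 0.5
```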
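
Robinson's 1965 resolution method proves a formula by refuting its negation: resolve pairs of clauses on complementary literals until the empty clause (a contradiction) appears. The sketch below covers only the propositional case; Robinson's full method also handles first-order logic via unification, omitted here.

```python
from itertools import combinations

def resolve(c1, c2):
    """All resolvents of two clauses; a clause is a frozenset of integer
    literals, with -p denoting the negation of p."""
    return [frozenset((c1 - {lit}) | (c2 - {-lit}))
            for lit in c1 if -lit in c2]

def unsatisfiable(clauses):
    """Saturate the clause set under resolution; deriving the empty
    clause proves the set is contradictory."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # the empty clause: contradiction found
                    return True
                new.add(r)
        if new <= clauses:         # nothing new: saturated, satisfiable
            return False
        clauses |= new

# Prove q from p and (p -> q) by refuting {p, p -> q, not q}:
print(unsatisfiable([frozenset({1}), frozenset({-1, 2}), frozenset({-2})]))  # True
```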
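
The 1967 and 1969 entries meet at the XOR function: a single-layer perceptron cannot represent it, while a multilayer perceptron trained by stochastic gradient descent learns it easily. The sketch below is a minimal modern illustration, not Amari's original construction; the architecture and hyperparameters are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])            # XOR is not linearly separable

# One hidden layer of 4 sigmoid units and a sigmoid output unit.
W1, b1 = rng.normal(0.0, 1.0, (2, 4)), np.zeros(4)
W2, b2 = rng.normal(0.0, 1.0, 4), 0.0
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(20000):                 # plain stochastic gradient descent
    i = rng.integers(4)                   # one randomly chosen example per step
    h = sig(X[i] @ W1 + b1)               # hidden activations
    p = sig(h @ W2 + b2)                  # network output
    # Gradients of the squared error 0.5 * (p - y)**2 via the chain rule.
    dp = (p - y[i]) * p * (1.0 - p)
    dh = dp * W2 * h * (1.0 - h)
    W2 -= 0.5 * dp * h
    b2 -= 0.5 * dp
    W1 -= 0.5 * np.outer(X[i], dh)
    b1 -= 0.5 * dh

# Typically prints [0. 1. 1. 0.]; SGD can occasionally stall in a local minimum.
print(np.round(sig(sig(X @ W1 + b1) @ W2 + b2)))
```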