History of artificial intelligence


The history of artificial intelligence began in antiquity, with myths, stories, and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The study of logic and formal reasoning from antiquity to the present led directly to the invention of the programmable digital computer in the 1940s, a machine based on abstract mathematical reasoning. This device and the ideas behind it inspired scientists to begin discussing the possibility of building an electronic brain.
The field of AI research was founded at a workshop held on the campus of Dartmouth College in 1956. Attendees of the workshop became the leaders of AI research for decades. Many of them predicted that machines as intelligent as humans would exist within a generation. The U.S. government provided millions of dollars with the hope of making this vision come true.
Eventually, it became obvious that researchers had grossly underestimated the difficulty of this feat. In 1974, criticism from James Lighthill and pressure from the U.S. Congress led the U.S. and British governments to stop funding undirected research into artificial intelligence. Seven years later, a visionary initiative by the Japanese government and the success of expert systems reinvigorated investment in AI, and by the late 1980s, the industry had grown into a billion-dollar enterprise. However, investors' enthusiasm waned in the 1990s, and the field was criticized in the press and avoided by industry. Nevertheless, research and funding continued to grow under other names.
In the early 2000s, machine learning was applied to a wide range of problems in academia and industry. The success was due to the availability of powerful computer hardware, the collection of immense data sets, and the application of solid mathematical methods. Soon after, deep learning proved to be a breakthrough technology, eclipsing all other methods. The transformer architecture debuted in 2017 and was used to produce impressive generative AI applications, among other uses.
Investment in AI boomed in the 2020s. This boom, set off by the development of the transformer architecture, led to the rapid scaling and public release of large language models such as ChatGPT. These models exhibit human-like traits of knowledge, attention, and creativity, and have been integrated into various sectors, fueling further heavy investment in AI. However, concerns about the potential risks and ethical implications of advanced AI have also emerged, prompting debate about the future of AI and its impact on society.

Precursors

Myth, folklore, and fiction

Mythology and folklore contain depictions of automatons and similar human-like artificial life.
In Greek mythology, Talos was a creature made of bronze who acted as a guardian for the island of Crete.
Alchemists in the Islamic Golden Age, such as Jabir ibn Hayyan, attempted Takwin, the artificial creation of life, including human life, although this may have been metaphorical.
In Jewish folklore of the Middle Ages, a golem was a clay figure said to come to life when a piece of paper bearing one of God's names was placed in its mouth. The 16th-century Swiss alchemist Paracelsus described a procedure he claimed would fabricate a homunculus, or artificial man. Brazen heads were a recurring motif in late medieval and early modern folklore.
By the 19th century, ideas about artificial men and thinking machines had become a popular theme in fiction, in notable works such as Mary Shelley's Frankenstein, Johann Wolfgang von Goethe's Faust, Part Two, and Karel Čapek's R.U.R.
Speculative essays such as Samuel Butler's "Darwin among the Machines" and Edgar Allan Poe's "Maelzel's Chess Player" reflected society's growing interest in machines with artificial intelligence.

Automata

Realistic humanoid automata were built by craftsmen from many civilizations, including Yan Shi, Hero of Alexandria, Al-Jazari, Haroun al-Rashid, Jacques de Vaucanson, Leonardo Torres y Quevedo, Pierre Jaquet-Droz and Wolfgang von Kempelen.
The oldest known automata were sacred statues of ancient Egypt and Greece. The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion—Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it". English scholar Alexander Neckham asserted that the ancient Roman poet Virgil had built a palace with automaton statues.

Formal reasoning

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. Philosophers had developed structured methods of formal deduction by the first millennium BCE.
Spanish philosopher Ramon Llull developed several logical machines devoted to the production of knowledge by logical means; Llull described his machines as mechanical entities that could combine basic and undeniable truths through simple logical operations, carried out by the machine by mechanical means, in such a way as to produce all possible knowledge. Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.
In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry. Hobbes wrote in Leviathan: "For reason... is nothing but reckoning, that is adding and subtracting". Leibniz described a universal language of reasoning, the characteristica universalis, which would reduce argumentation to calculation so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other: Let us calculate."
The study of mathematical logic, advanced by works such as Boole's The Laws of Thought and Frege's Begriffsschrift, allowed for the scientific study of artificial intelligence. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in the Principia Mathematica in 1913. Following Russell, David Hilbert challenged the mathematicians of the 1920s and 30s to formalize all mathematical reasoning. His challenge was answered by Gödel's incompleteness proof, Turing's machine and Church's lambda calculus. Their answers showed that there were limits to what mathematical logic could accomplish, but also suggested that, within those limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine, a simple theoretical construct that captured the essence of abstract symbol manipulation.
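
The idea of computation as the shuffling of simple symbols can be made concrete with a short sketch. The following Python fragment simulates a minimal Turing-style machine; its state names, tape alphabet, and transition table are purely illustrative assumptions and are not taken from Turing's 1936 paper. It scans a tape of 0s and 1s and inverts each symbol until it reaches a blank.

    # A minimal, illustrative Turing-style machine: read a symbol, consult a
    # transition table, write a symbol, move the head, and change state.
    def run_turing_machine(tape, transitions, state="invert", blank="_"):
        tape = list(tape)
        head = 0
        while state != "halt":
            symbol = tape[head] if head < len(tape) else blank
            new_symbol, move, state = transitions[(state, symbol)]
            if head < len(tape):
                tape[head] = new_symbol
            else:
                tape.append(new_symbol)
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Transition table: (state, symbol read) -> (symbol to write, head move, next state)
    transitions = {
        ("invert", "0"): ("1", "R", "invert"),
        ("invert", "1"): ("0", "R", "invert"),
        ("invert", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("10110_", transitions))  # prints "01001_"

Every step here is a purely mechanical table lookup and rewrite, which is the sense in which the Church-Turing thesis treats simple symbol manipulation as sufficient for any process of mathematical deduction.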

Neuroscience

In the 18th and 19th centuries, Luigi Galvani, Emil du Bois-Reymond, Hermann von Helmholtz and others demonstrated that nerves carried electrical signals, and Robert Bentley Todd correctly speculated in 1828 that the brain was an electrical network. Camillo Golgi's staining techniques enabled Santiago Ramón y Cajal to provide evidence for the neuron theory: "The truly amazing conclusion is that a collection of simple cells can lead to thought, action, and consciousness".
Donald Hebb was a Canadian psychologist whose work laid the foundation for modern neuroscience, particularly in understanding learning, memory, and neural plasticity. His most influential book, The Organization of Behavior, introduced the concept of Hebbian learning, often summarized as "cells that fire together wire together."
Hebb began formulating the foundational ideas for this book in the early 1940s, particularly during his time at the Yerkes Laboratories of Primate Biology from 1942 to 1947. He made extensive notes between June 1944 and March 1945 and sent a complete draft to his mentor Karl Lashley in 1946. The Organization of Behavior itself was not published until 1949; the delay was due to various factors, including World War II and shifts in academic focus. By the time it appeared, several of his peers had already published related ideas, making Hebb's work seem less groundbreaking at first glance. However, his synthesis of psychological and neurophysiological principles became a cornerstone of neuroscience and machine learning.
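
The Hebbian rule mentioned above is often stated as a weight change proportional to the product of the activities of the two connected cells. The short Python sketch below illustrates that idea; the learning rate, activity values, and function names are illustrative assumptions rather than anything drawn from The Organization of Behavior.

    # Illustrative Hebbian update: the connection between two units strengthens
    # in proportion to how often they are active together.
    learning_rate = 0.1  # arbitrary value chosen for demonstration

    def hebbian_update(weight, pre_activity, post_activity):
        # delta_w = learning_rate * pre_activity * post_activity
        return weight + learning_rate * pre_activity * post_activity

    w = 0.0
    for pre, post in [(1, 1), (1, 1), (0, 1), (1, 0)]:
        w = hebbian_update(w, pre, post)
    print(w)  # 0.2: only the two trials where both units fired strengthened the weight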

Computer science

Calculating machines were designed or built in antiquity and throughout history by many people, including Gottfried Leibniz, Joseph Marie Jacquard, Charles Babbage, Percy Ludgate, Leonardo Torres Quevedo, Vannevar Bush, and others. Ada Lovelace speculated that Babbage's machine was "a thinking or... reasoning machine", but warned "It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers" of the machine.
The first modern computers were the massive machines of the Second World War. Among them, ENIAC, built on the theoretical foundation laid by Alan Turing and developed by John von Neumann, proved to be the most influential.

Birth of artificial intelligence (1941–1956)

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals. Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an "electronic brain".
In the 1940s and 50s, a handful of scientists from a variety of fields explored several research directions that would be vital to later AI research. Alan Turing was among the first people to seriously investigate the theoretical possibility of "machine intelligence". The field of "artificial intelligence research" was founded as an academic discipline in 1956.