Timeline of artificial intelligence


This is a timeline of artificial intelligence, also known as synthetic intelligence.

20th century

1901–1950

1910–1913: Bertrand Russell and Alfred North Whitehead published Principia Mathematica, which showed that all of elementary mathematics could be reduced to mechanical reasoning in formal logic.
1912–1914: Leonardo Torres Quevedo built El Ajedrecista, an automaton that played chess endgames. He has been called "the 20th century's first AI pioneer". In his Essays on Automatics, Torres published speculation about thinking machines and automata and introduced the idea of floating-point arithmetic.
1923: Karel Čapek's play R.U.R. opened in London; this was the first use of the word "robot" in English.
1920–1925: Wilhelm Lenz and Ernst Ising created and analyzed the Ising model, which can be viewed as the first artificial recurrent neural network, consisting of neuron-like threshold elements. In 1972, Shun'ichi Amari made this architecture adaptive.
1920s and 1930s: Ludwig Wittgenstein's Tractatus Logico-Philosophicus inspires Rudolf Carnap and the logical positivists of the Vienna Circle to use formal logic as the foundation of philosophy. However, Wittgenstein's later work in the 1940s argues that context-free symbolic logic is incoherent without human interpretation.
1931: Kurt Gödel encoded mathematical statements and proofs as integers and showed that there are true theorems unprovable by any consistent theorem-proving machine. Thus "he identified fundamental limits of algorithmic theorem proving, computing, and any type of computation-based AI", laying the foundations of theoretical computer science and AI theory.
1935: Alonzo Church extended Gödel's proof and showed that the decision problem of computer science does not have a general solution. He developed the lambda calculus, which would eventually become fundamental to the theory of programming languages.
1936: Konrad Zuse filed his patent application for a program-controlled computer.
1937: Alan Turing published "On Computable Numbers", which laid the foundations of the modern theory of computation by introducing the Turing machine, an abstract model of "computability". He used it to confirm Gödel's findings by proving that the halting problem is undecidable.
1940: Edward Condon displayed Nimatron, a digital machine that played Nim perfectly.
1941: Konrad Zuse built the first working program-controlled general-purpose computer.
1943: Warren Sturgis McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity", the first mathematical description of an artificial neural network.
1943: Arturo Rosenblueth, Norbert Wiener and Julian Bigelow coin the term "cybernetics". Wiener's popular book by that name was published in 1948.
1944: Game theory, which would prove invaluable in the progress of AI, was introduced with the book Theory of Games and Economic Behavior by mathematician John von Neumann and economist Oskar Morgenstern.
1945: Vannevar Bush published "As We May Think", a prescient vision of a future in which computers assist humans in many activities.
1948: Alan Turing produces the report "Intelligent Machinery", regarded as the first manifesto of artificial intelligence. It introduces many concepts, including the logic-based approach to problem-solving, the idea that intellectual activity consists mainly of various kinds of search, and a discussion of machine learning that anticipates the connectionist approach to AI.
1948: John von Neumann, responding to a comment at a lecture that it was impossible for a machine to think, said: "You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!" Von Neumann was presumably alluding to the Church–Turing thesis, which states that any effective procedure can be simulated by a computer.
1949: Donald O. Hebb develops Hebbian theory, a possible algorithm for learning in neural networks.
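Hebb's rule above can be made concrete with a small sketch (an illustrative modern rendering, not Hebb's own notation): a connection weight is strengthened in proportion to the product of presynaptic and postsynaptic activity.

```python
# Illustrative sketch of Hebbian learning ("cells that fire together wire
# together"): each weight grows in proportion to the product of the
# presynaptic input x_i and the postsynaptic output y.
def hebbian_update(w, x, y, lr=0.1):
    """One Hebbian step: w_i <- w_i + lr * y * x_i."""
    return [wi + lr * y * xi for wi, xi in zip(w, x)]

# Starting from zero weights, only the active inputs are strengthened.
w = hebbian_update([0.0, 0.0, 0.0], x=[1.0, 0.0, 1.0], y=1.0)
```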

1960s

1960s: Ray Solomonoff lays the foundations of a mathematical theory of AI, introducing universal Bayesian methods for inductive inference and prediction.
1960: J.C.R. Licklider publishes "Man–Computer Symbiosis".
1961: James Slagle wrote the first symbolic integration program, SAINT, which solved calculus problems at the college-freshman level.
1961: In "Minds, Machines and Gödel", John Lucas denied the possibility of machine intelligence on logical or philosophical grounds. He referred to Kurt Gödel's result of 1931: sufficiently powerful formal systems are either inconsistent or allow for formulating true theorems unprovable by any theorem-proving AI that derives all provable theorems from the axioms. Since humans are able to "see" the truth of such theorems, machines were deemed inferior.
1961: Unimation's industrial robot Unimate worked on a General Motors automobile assembly line.
1963: Thomas Evans' program ANALOGY, written as part of his PhD work at MIT, demonstrated that computers can solve the same analogy problems as are given on IQ tests.
1963: Edward Feigenbaum and Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence.
1963: Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators", which described one of the first machine-learning programs, able to adaptively acquire and modify features and thereby overcome the limitations of Rosenblatt's simple perceptrons.
1964: Danny Bobrow's dissertation at MIT (on the STUDENT program) shows that computers can understand natural language well enough to solve algebra word problems correctly.
1964: In Stanisław Lem's essay collection Summa Technologiae, Lem discusses "intellectronics".
1964: Bertram Raphael's MIT dissertation on the SIR program demonstrates the power of a logical representation of knowledge for question-answering systems.
1965: In the Soviet Union, Alexey Ivakhnenko and Valentin Lapa develop the first deep-learning algorithm for multilayer perceptrons.
1965: Lotfi A. Zadeh at U.C. Berkeley publishes "Fuzzy Sets", his first paper introducing fuzzy logic.
1965: J. Alan Robinson invents a mechanical proof procedure, the resolution method, which allows programs to work efficiently with formal logic as a representation language.
1965: Joseph Weizenbaum builds ELIZA, an interactive program that carries on a dialogue in English on any topic. It became a popular toy at AI centers on the ARPANET when a version that "simulated" the dialogue of a psychotherapist was programmed.
1965: Edward Feigenbaum initiates Dendral, a ten-year effort to develop software that deduces the molecular structure of organic compounds from scientific-instrument data. It was the first expert system.
1966: Ross Quillian demonstrates semantic nets.
1966: The Machine Intelligence workshop is held at Edinburgh, the first of an influential annual series organized by Donald Michie and others.
1966: A negative report on machine translation (the ALPAC report) kills much work in natural language processing for many years.
1966: The Dendral program is demonstrated interpreting mass spectra of organic chemical compounds, the first successful knowledge-based program for scientific reasoning.
1967: Shun'ichi Amari becomes the first to use stochastic gradient descent for deep learning in multilayer perceptrons. In computer experiments conducted by his student Saito, a five-layer MLP with two modifiable layers learned useful internal representations to classify non-linearly separable pattern classes.
1968: Joel Moses demonstrates the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics.
1968: Richard Greenblatt at MIT builds a knowledge-based chess-playing program, Mac Hack, that was good enough to achieve a class-C rating in tournament play.
1968: Wallace and Boulton's program Snob, for unsupervised classification, uses the Bayesian minimum message length criterion, a mathematical realisation of Occam's razor.
1969: Shakey the robot at Stanford Research Institute demonstrates the combination of locomotion, perception, and problem-solving.
1969: Roger Schank defines the conceptual dependency model for natural-language understanding. It was later developed for story understanding by Robert Wilensky and Wendy Lehnert, and for modeling memory by Janet Kolodner.
1969: Yorick Wilks at Stanford develops a semantics-driven machine-translation program, the basis of many PhD dissertations since.
1969: Marvin Minsky and Seymour Papert publish Perceptrons, demonstrating previously unrecognized limits of this feed-forward, two-layered structure. The book is considered by some to mark the beginning of the AI winter of the 1970s, a failure of confidence and funding for AI. However, by the time the book came out, methods for training multilayer perceptrons by deep learning were already known, and significant progress in the field continued.
1969: McCarthy and Hayes start the discussion about the frame problem with their essay "Some Philosophical Problems from the Standpoint of Artificial Intelligence".
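The limitation at the heart of Perceptrons can be illustrated with a small sketch (not Minsky and Papert's proof): the classic perceptron learning rule finds a separating line for AND, which is linearly separable, but no single-layer perceptron can represent XOR.

```python
from itertools import product

def train_perceptron(targets, epochs=50):
    """Classic perceptron rule on the four 2-bit inputs; returns the
    number of points still misclassified after training."""
    inputs = [(x1, x2, 1.0) for x1, x2 in product([0.0, 1.0], repeat=2)]  # last term = bias
    w = [0.0, 0.0, 0.0]
    predict = lambda xi: 1 if sum(a * b for a, b in zip(xi, w)) > 0 else 0
    for _ in range(epochs):
        for xi, t in zip(inputs, targets):
            err = t - predict(xi)  # 0, +1, or -1
            w = [wi + err * a for wi, a in zip(w, xi)]
    return sum(predict(xi) != t for xi, t in zip(inputs, targets))

and_errors = train_perceptron([0, 0, 0, 1])  # linearly separable: converges
xor_errors = train_perceptron([0, 1, 1, 0])  # not separable: errors remain
```

Multilayer networks with nonlinear hidden units, trained by the deep-learning methods mentioned in the entries above, do not share this restriction.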

1970s

Early 1970s: Jane Robinson and Don Walker established an influential natural-language-processing group at SRI.
1970: Seppo Linnainmaa publishes the reverse mode of automatic differentiation. This method later became known as backpropagation and is heavily used to train artificial neural networks.
1970: Jaime Carbonell developed SCHOLAR, an interactive program for computer-assisted instruction based on semantic nets as the representation of knowledge.
1970: Bill Woods described augmented transition networks as a representation for natural-language understanding.
1970: Patrick Winston's PhD program ARCH, at MIT, learned concepts from examples in the world of children's blocks.
1971: Terry Winograd's PhD thesis demonstrated the ability of computers to understand English sentences in a restricted world of children's blocks, coupling his language-understanding program, SHRDLU, with a robot arm that carried out instructions typed in English.
1971: Work on the Boyer–Moore theorem prover started in Edinburgh.
1972: The Prolog programming language is developed by Alain Colmerauer.
1972: Earl Sacerdoti developed one of the first hierarchical planning programs, ABSTRIPS.
1973: The Assembly Robotics Group at the University of Edinburgh builds Freddy Robot, capable of using visual perception to locate and assemble models.
1973: The Lighthill report gives a largely negative verdict on AI research in Great Britain and forms the basis for the British government's decision to discontinue support for AI research at all but two universities.
1974: Ted Shortliffe's PhD dissertation on the MYCIN program demonstrated a very practical rule-based approach to medical diagnosis, even in the presence of uncertainty. While it borrowed from DENDRAL, its own contributions strongly influenced the future of expert-system development, especially commercial systems.
1975: Earl Sacerdoti developed techniques of partial-order planning in his NOAH system, replacing the previous paradigm of search among state-space descriptions. NOAH was applied at SRI International to interactively diagnose and repair electromechanical systems.
1975: Austin Tate developed the Nonlin hierarchical planning system, able to search a space of partial plans characterised as alternative approaches to the underlying goal structure of the plan.
1975: Marvin Minsky published his widely read and influential article on frames as a representation of knowledge, in which many ideas about schemas and semantic links are brought together.
1975: The Meta-Dendral learning program produced new results in chemistry, the first scientific discoveries by a computer to be published in a refereed journal.
Mid-1970s: Barbara Grosz established limits to traditional AI approaches to discourse modeling. Subsequent work by Grosz, Bonnie Webber and Candace Sidner developed the notion of "centering", used in establishing the focus of discourse and anaphoric references in natural language processing.
Mid-1970s: David Marr and MIT colleagues describe the "primal sketch" and its role in visual perception.
1976: Douglas Lenat's AM program demonstrated the discovery model, a loosely guided search for interesting conjectures.
1976: Randall Davis demonstrated the power of meta-level reasoning in his PhD dissertation at Stanford.
1976: Stevo Bozinovski and Ante Fulgosi introduced the transfer-learning method in artificial intelligence, based on the psychology of learning.
1978: Tom Mitchell, at Stanford, invented the concept of version spaces for describing the search space of a concept-formation program.
1978: Herbert A. Simon wins the Nobel Prize in Economics for his theory of bounded rationality, with its notion of "satisficing", one of the cornerstones of AI.
1978: The MOLGEN program, written at Stanford by Mark Stefik and Peter Friedland, demonstrated that an object-oriented representation of knowledge can be used to plan gene-cloning experiments.
1979: Bill VanMelle's PhD dissertation at Stanford demonstrated the generality of MYCIN's representation of knowledge and style of reasoning in his EMYCIN program, the model for many commercial expert-system "shells".
1979: Jack Myers and Harry Pople at the University of Pittsburgh developed INTERNIST, a knowledge-based medical diagnosis program based on Dr. Myers' clinical knowledge.
1979: Cordell Green, David Barstow, Elaine Kant and others at Stanford demonstrated the CHI system for automatic programming.
1979: The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab.
1979: BKG, a backgammon program written by Hans Berliner at CMU, defeats the reigning world champion.
1979: Drew McDermott and Jon Doyle at MIT, and John McCarthy at Stanford, begin publishing work on non-monotonic logics and formal aspects of truth maintenance.
Late 1970s: Stanford's SUMEX-AIM resource, headed by Ed Feigenbaum and Joshua Lederberg, demonstrates the power of the ARPAnet for scientific collaboration.
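The reverse mode of automatic differentiation noted above (1970) can be sketched in a few lines (an illustrative toy, far simpler than Linnainmaa's general formulation): each operation records its local derivatives, and a backward pass accumulates gradients via the chain rule.

```python
# Toy scalar reverse-mode automatic differentiation, the idea later called
# backpropagation. Each operation records local derivatives of its output
# with respect to its inputs; backward() accumulates gradients by the chain rule.
class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x, y = Var(2.0), Var(3.0)
z = x * y + x          # z = x*y + x
z.backward()           # dz/dx = y + 1, dz/dy = x
```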

1980s

1980s: Lisp machines are developed and marketed. The first expert-system shells and commercial applications appear.
1980: The first National Conference of the American Association for Artificial Intelligence (AAAI) is held at Stanford.
1981: Danny Hillis designs the Connection Machine, which utilizes parallel computing to bring new power to AI and to computation in general.
1981: Stevo Bozinovski and Charles Anderson carried out the first concurrent programming in neural-network research: a program, "CAA Controller", written and executed by Bozinovski, interacted with the program "Inverted Pendulum Dynamics", written and executed by Anderson, using VAX/VMS mailboxes for inter-program communication. The CAA controller learned to balance the simulated inverted pendulum.
1982: Japan's Ministry of International Trade and Industry begins the Fifth Generation Computer Systems project, an initiative to create a "fifth generation computer" that would perform many calculations using massive parallelism.
1983: John Laird and Paul Rosenbloom, working with Allen Newell, complete CMU dissertations on Soar.
1983: James F. Allen invents the interval calculus, the first widely used formalization of temporal events.
Mid-1980s: Neural networks become widely used with the backpropagation algorithm, the reverse mode of automatic differentiation published by Seppo Linnainmaa in 1970 and applied to neural networks by Paul Werbos.
1985: The autonomous drawing program AARON, created by Harold Cohen, is demonstrated at the AAAI National Conference.
1986: The team of Ernst Dickmanns at Bundeswehr University Munich builds the first robot cars, driving up to 55 mph on empty streets.
1986: Barbara Grosz and Candace Sidner create the first computational model of discourse, establishing the field of research.
1987: Marvin Minsky publishes The Society of Mind, a theoretical description of the mind as a collection of cooperating agents. He had been lecturing on the idea for years before the book came out.
1987: Rodney Brooks introduces the subsumption architecture and behavior-based robotics as a more minimalist modular model of natural intelligence; the approach is dubbed Nouvelle AI.
1987: Commercial launch of generation 2.0 of Alacrity by Alacritous Inc./Allstar Advice Inc., Toronto, the first commercial strategic and managerial advisory system. The system was based on a self-developed, forward-chaining expert system with 3,000 rules about the evolution of markets and competitive strategies, co-authored by Alistair Davidson and Mary Chung, founders of the firm, with the underlying engine developed by Paul Tarvydas. The Alacrity system also included a small financial expert system that interpreted financial statements and models.
1989: The development of metal–oxide–semiconductor very-large-scale integration (VLSI), in the form of complementary MOS technology, enabled practical artificial-neural-network hardware in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.
1989: Dean Pomerleau at CMU creates ALVINN, a neural-network vehicle controller used in the Navlab program.

1990s

1990s: Major advances in all areas of AI, with significant demonstrations in machine learning, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, data mining, natural language understanding and translation, vision, virtual reality, games, and other topics.
Early 1990s: TD-Gammon, a backgammon program written by Gerry Tesauro, demonstrates that reinforcement learning is powerful enough to create a championship-level game-playing program by competing favorably with world-class players.
1991: The DART scheduling application, deployed in the first Gulf War, paid back DARPA's 30 years of investment in AI research.
1992: Carol Stoker and the NASA Ames robotics team explore marine life in Antarctica with an undersea robot, a telepresence ROV, operated from the ice near McMurdo Bay, Antarctica, and remotely via satellite link from Moffett Field, California.
1993: Ian Horswill extended behavior-based robotics by creating Polly, the first robot to navigate using vision and operate at animal-like speeds.
1993: Rodney Brooks, Lynn Andrea Stein and Cynthia Breazeal started the widely publicized MIT Cog project with numerous collaborators, in an attempt to build a humanoid robot child in just five years.
1993: The ISX corporation wins "DARPA contractor of the year" for the Dynamic Analysis and Replanning Tool, which reportedly repaid the US government's entire investment in AI research since the 1950s.
1994: Lotfi A. Zadeh at U.C. Berkeley creates "soft computing" and builds a world network of research fusing neural science and neural-net systems, fuzzy set theory and fuzzy systems, evolutionary algorithms, genetic programming, and chaos theory and chaotic systems.
1994: With passengers on board, the twin robot cars VaMP and VITA-2 of Ernst Dickmanns and Daimler-Benz drive more than one thousand kilometers on a Paris three-lane highway in standard heavy traffic at speeds up to 130 km/h. They demonstrate autonomous driving in free lanes, convoy driving, and lane changes left and right with autonomous passing of other cars.
1994: English-draughts world champion Marion Tinsley resigned a match against the computer program Chinook. Chinook defeated the second-highest-rated player, Lafferty, and won the USA National Tournament by the widest margin ever.
1995: "No Hands Across America": a semi-autonomous car drove coast-to-coast across the United States with computer-controlled steering for most of the distance; the throttle and brakes were controlled by a human driver.
1995: One of Ernst Dickmanns' robot cars drove more than 1000 miles from Munich to Copenhagen and back, in traffic, at up to 120 mph, occasionally executing maneuvers to pass other cars. Active vision was used to deal with rapidly changing street scenes.
1996: Steve Grand, roboticist and computer scientist, develops and releases Creatures, a popular simulation of artificial life-forms with simulated biochemistry, neurology with learning algorithms, and inheritable digital DNA.
1997: The Deep Blue chess machine defeats the world chess champion, Garry Kasparov.
1997: The first official RoboCup football match is held, featuring table-top matches with 40 teams of interacting robots and over 5,000 spectators.
1997: The computer Othello program Logistello defeated the world champion, Takeshi Murakami, with a score of 6–0.
1997: Long short-term memory (LSTM) is published in Neural Computation by Sepp Hochreiter and Jürgen Schmidhuber.
1998: Tiger Electronics' Furby is released and becomes the first successful attempt at producing a type of AI for a domestic environment.
1998: Tim Berners-Lee publishes his Semantic Web Road Map paper.
1998: Ulises Cortés and Miquel Sànchez-Marrè organize the first Environment and AI workshop in Europe at ECAI, "Binding Environmental Sciences and Artificial Intelligence".
1998: Leslie P. Kaelbling, Michael L. Littman, and Anthony Cassandra introduce POMDPs and a scalable method for solving them to the AI community, jumpstarting widespread use in robotics and automated planning and scheduling.
1999: Sony introduces the AIBO, a domestic robot in the spirit of the Furby; it becomes one of the first artificially intelligent "pets" that is also autonomous.
Late 1990s: Web crawlers and other AI-based information-extraction programs become essential to the widespread use of the World Wide Web.
Late 1990s: Demonstration of an intelligent room and emotional agents at MIT's AI Lab.
Late 1990s: Initiation of work on the Oxygen architecture, which connects mobile and stationary computers in an adaptive network.

21st century

2000s

2000: Interactive robopets become commercially available, realizing the vision of the 18th-century novelty toy makers.
2000: Cynthia Breazeal at MIT publishes her dissertation on sociable machines, describing Kismet, a robot with a face that expresses emotions.
2000: The Nomad robot explores remote regions of Antarctica looking for meteorite samples.
2002: iRobot's Roomba autonomously vacuums the floor while navigating and avoiding obstacles.
2002: Artificial intelligent agents based on XML with a distributed ontology are described.
2004: The OWL Web Ontology Language becomes a W3C Recommendation.
2004: DARPA introduces the DARPA Grand Challenge, requiring competitors to produce autonomous vehicles for prize money.
2004: NASA's robotic exploration rovers Spirit and Opportunity autonomously navigate the surface of Mars.
2005: Honda's ASIMO robot, an artificially intelligent humanoid robot, can walk as fast as a human, delivering trays to customers in restaurant settings.
2005: Recommendation technology based on tracking web activity or media usage brings AI to marketing. See TiVo Suggestions.
2005: The Blue Brain project is launched, aiming to simulate the brain at molecular detail.
2006: The Dartmouth Artificial Intelligence Conference: The Next 50 Years is held.
2007: Philosophical Transactions of the Royal Society B, one of the world's oldest scientific journals, publishes a special issue on using AI to understand biological intelligence, titled "Models of Natural Action Selection".
2007: Checkers is solved by a team of researchers at the University of Alberta.
2007: DARPA launches the Urban Challenge, requiring autonomous cars to obey traffic rules and operate in an urban environment.
2008: Cynthia Mason at Stanford presents her idea of artificial compassionate intelligence in her paper "Giving Robots Compassion".
2009: An LSTM trained by connectionist temporal classification becomes the first recurrent neural network to win pattern-recognition contests, winning three competitions in connected handwriting recognition.
2009: Google begins building an autonomous car.

2020s

2020: In February 2020, Microsoft introduces its Turing Natural Language Generation (T-NLG), the "largest language model ever published at 17 billion parameters".
2020: In November 2020, AlphaFold 2 by DeepMind, a model that predicts protein structure, wins the CASP competition.
2020: OpenAI introduces GPT-3, a state-of-the-art autoregressive language model that uses deep learning to produce computer code, poetry, and other text that is often nearly indistinguishable from that written by humans. Its capacity was ten times greater than that of T-NLG. It was introduced in May 2020 and was in beta testing by June 2020.
2022: ChatGPT, an AI chatbot developed by OpenAI, debuts in November 2022. It is initially built on top of the GPT-3.5 large language model. While it gains considerable praise for the breadth of its knowledge base, deductive abilities, and the human-like fluidity of its natural-language responses, it also garners criticism for, among other things, its tendency to "hallucinate", a phenomenon in which an AI responds with factually incorrect answers with high confidence. The release triggers widespread public discussion of artificial intelligence and its potential impact on society.
2022: A November 2022 class-action lawsuit against Microsoft, GitHub, and OpenAI alleges that GitHub Copilot, an AI-powered code-editing tool trained on public GitHub repositories, violates the copyrights of the repositories' authors, noting that the tool can generate source code that matches its training data verbatim, without providing attribution.
2023: By January 2023, ChatGPT has more than 100 million users, making it the fastest-growing consumer application to date.
2023: On January 16, 2023, three artists, Sarah Andersen, Kelly McKernan, and Karla Ortiz, file a class-action copyright-infringement lawsuit against Stability AI, Midjourney, and DeviantArt, claiming that these companies have infringed the rights of millions of artists by training AI tools on five billion images scraped from the web without the consent of the original artists.
2023: On January 17, 2023, Stability AI is sued in London by Getty Images for using its images in training data without purchasing a license.
2023: Getty files another suit against Stability AI in a US district court in Delaware on February 6, 2023. In the suit, Getty again alleges copyright infringement for the use of its images in the training of Stable Diffusion, and further argues that the model infringes Getty's trademark by generating images with Getty's watermark.
2023: OpenAI's GPT-4 model is released in March 2023 and is regarded as an impressive improvement over its predecessor, with the caveat that GPT-4 retains many of the same problems of the earlier iteration. Unlike previous iterations, GPT-4 is multimodal, accepting image input as well as text. GPT-4 is integrated into ChatGPT as a subscriber service. OpenAI claims that in its own testing the model received a score of 1410 on the SAT, 163 on the LSAT, and 298 on the Uniform Bar Exam.
2023: On March 7, 2023, Nature Biomedical Engineering writes that "it is no longer possible to accurately distinguish" human-written text from text created by large language models, and that "It is all but certain that general-purpose large language models will rapidly proliferate... It is a rather safe bet that they will change many industries over time."
2023: In response to ChatGPT, Google releases its chatbot Google Bard in a limited capacity in March 2023, based on the LaMDA and PaLM large language models.
2023: On March 29, 2023, an open letter signed by Elon Musk, Steve Wozniak, and more than 1,000 other tech leaders calls for a six-month halt to what the letter describes as "an out-of-control race" producing AI systems that their creators cannot "understand, predict, or reliably control".
2023: In May 2023, Google announces Bard's transition from LaMDA to PaLM 2, a significantly more advanced language model.
2023: In the last week of May 2023, a Statement on AI Risk is signed by Geoffrey Hinton, Sam Altman, Bill Gates, and many other prominent AI researchers and tech leaders, with the following succinct message: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
2023: On July 9, 2023, Sarah Silverman files a class-action lawsuit against Meta and OpenAI for copyright infringement, for training their large language models on millions of authors' copyrighted works without permission.
2023: In August 2023, The New York Times, CNN, Reuters, the Chicago Tribune, the Australian Broadcasting Corporation, and other news companies block OpenAI's GPTBot web crawler from accessing their content, while The New York Times also updates its terms of service to disallow the use of its content in large language models.
2023: On September 13, 2023, in a serious response to growing anxiety about the dangers of AI, the US Senate holds the inaugural bipartisan "AI Insight Forum", bringing together senators, CEOs, civil-rights leaders, and other industry representatives, to further familiarize senators with the nature of AI and its risks, and to discuss needed safeguards and legislation. The event is organized by Senate Majority Leader Chuck Schumer and chaired by U.S. Senator Martin Heinrich, founder and co-chair of the Senate AI Caucus. Reflecting the importance of the meeting, the forum is attended by over 60 senators, as well as Elon Musk, Mark Zuckerberg, Sam Altman, Sundar Pichai, Bill Gates, Satya Nadella, Jensen Huang, Arvind Krishna, Alex Karp, Charles Rivkin, Meredith Stiehm, Liz Shuler, and Maya Wiley, among others.
2023: In October 2023, AlpineGate AI Technologies Inc. CEO John Godel announced the launch of the company's AI suite, AGImageAI, along with its proprietary GPT model, AlbertAGPT.
2023: On October 30, 2023, US President Biden signs the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.
2023: In November 2023, the first global AI Safety Summit is held at Bletchley Park in the UK to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks. 28 countries, including the United States, China, and the European Union, issue a declaration at the start of the summit calling for international co-operation to manage the challenges and risks of artificial intelligence.
2023: On December 6, Google announces Gemini 1.0 in Ultra, Pro, and Nano sizes.
2024: On February 15, 2024, Google releases Gemini 1.5 in limited beta, capable of context lengths up to 1 million tokens.
2024: Also on February 15, 2024, OpenAI publicly announces Sora, a text-to-video model for generating videos up to a minute long.
2024: Google DeepMind unveils AlphaFold 3, which extends structure prediction to DNA and other biomolecules, aiding research into cancer and genetic diseases.
2024: On February 22, Stability AI announces Stable Diffusion 3, which uses an architecture similar to Sora's.
2024: On May 14, Google adds an "AI overview" to Google searches.
2024: On June 10, Apple announces "Apple Intelligence", which incorporates ChatGPT into new iPhones and Siri.
2024: On October 9, Google DeepMind and Isomorphic Labs co-founder and CEO Sir Demis Hassabis and Google DeepMind director Dr. John Jumper are co-awarded the 2024 Nobel Prize in Chemistry for their work developing AlphaFold, a groundbreaking AI system that predicts the 3D structure of proteins from their amino acid sequences.
2024: At the Seoul Summit, leaders from the G7, the European Union, and major tech companies adopt the "Seoul Declaration for Safe, Innovative and Inclusive AI", committing to international cooperation on AI safety, standards, and innovation.
2025: On February 6, Mistral AI releases Le Chat, an AI assistant able to generate up to 1,000 words per second.
2025: Stargate UAE invests in building Europe's largest AI data center in France.
2025: Amazon prepares to train humanoid robots to deliver packages.
2025: In July 2025, John Gödel, President and CEO of AlpineGate AI Technologies Inc., introduced Gödel's Scaffolded Cognitive Prompting, a structured prompt framework for intent resolution in AI assistants.
2025: NLWeb, Project Mariner, and Google Flow launch.
2025: On February 10 and 11, France hosts the Artificial Intelligence Action Summit. 61 countries, including China, India, Japan, France, and Canada, sign a declaration on "inclusive and sustainable" AI, which the UK and US decline to sign.
2025: Pope Leo XIV calls on technologists to build AI machines that embody love, justice, and the sacred dignity of every human life at their core.