Artificial general intelligence
Artificial general intelligence (AGI) is a hypothetical type of artificial intelligence that would match or surpass human capabilities across virtually all cognitive tasks.
Beyond AGI, artificial superintelligence would outperform the best human abilities across every domain by a wide margin. Unlike artificial narrow intelligence, whose competence is confined to well‑defined tasks, an AGI system can generalise knowledge, transfer skills between domains, and solve novel problems without task‑specific reprogramming. The concept does not, in principle, require the system to be an autonomous agent; a static model—such as a highly capable large language model—or an embodied robot could both satisfy the definition so long as human‑level breadth and proficiency are achieved.
Creating AGI is a stated goal of AI technology companies such as OpenAI, Google, xAI, and Meta. A 2020 survey identified 72 active AGI research and development projects across 37 countries.
AGI is a common topic in science fiction and futures studies.
Whether AGI represents an existential risk is contested. Some AI experts and industry figures have stated that mitigating the risk of human extinction posed by AGI should be a global priority. Others consider AGI development too far from realization to present such a risk.
Terminology
AGI is also known as strong AI, full AI, human-level AI, human-level intelligent AI, or general intelligent action. Some academic sources reserve the term "strong AI" for computer programs that experience sentience or consciousness. In contrast, weak AI can solve one specific problem but lacks general cognitive abilities. Some academic sources use "weak AI" to refer more broadly to any programs that neither experience consciousness nor have a mind in the same sense as humans.
Related concepts include artificial superintelligence and transformative AI. An artificial superintelligence is a hypothetical type of AGI that is much more generally intelligent than humans, while transformative AI refers to AI that has a large impact on society, comparable, for example, to the agricultural or industrial revolution.
A framework for classifying AGI was proposed in 2023 by Google DeepMind researchers. They define five performance levels of AGI: emerging, competent, expert, virtuoso, and superhuman. For example, a competent AGI is defined as an AI that outperforms 50% of skilled adults in a wide range of non-physical tasks, and a superhuman AGI is similarly defined but with a threshold of 100%. They consider large language models like ChatGPT or LLaMA 2 to be instances of emerging AGI. Regarding the autonomy of AGI and associated risks, they define five levels: tool, consultant, collaborator, expert, and agent.
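As an informal illustration, the performance dimension of this framework can be sketched as a simple threshold lookup on the share of skilled adults an AI outperforms at non-physical tasks. Only the 50% (competent) and 100% (superhuman) thresholds are stated above; the 90% and 99% values used for expert and virtuoso below are assumptions added for the sake of the example, and the sketch is hypothetical rather than anything published by DeepMind:

# Hypothetical sketch of the DeepMind performance levels as thresholds on the
# percentage of skilled adults an AI outperforms at non-physical tasks.
# 50% (competent) and 100% (superhuman) come from the text above;
# 90% (expert) and 99% (virtuoso) are assumed for illustration.
PERFORMANCE_LEVELS = [
    ("superhuman", 100.0),
    ("virtuoso", 99.0),   # assumed threshold
    ("expert", 90.0),     # assumed threshold
    ("competent", 50.0),
    ("emerging", 0.0),    # comparable to or somewhat better than an unskilled human
]

def classify_performance(percent_outperformed: float) -> str:
    """Return the highest performance level whose threshold is met."""
    for level, threshold in PERFORMANCE_LEVELS:
        if percent_outperformed >= threshold:
            return level
    return "below emerging"

# Example: a system that outperforms 55% of skilled adults would be "competent".
print(classify_performance(55.0))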
Characteristics
There is no single agreed-upon definition of intelligence as applied to computers. Computer scientist John McCarthy wrote in 2007: "We cannot yet characterize in general what kinds of computational procedures we want to call intelligent."
Intelligence traits
Researchers generally hold that a system is required to do all of the following to be regarded as an AGI:
- reason, use strategy, solve puzzles, and make judgments under uncertainty,
- represent knowledge, including common sense knowledge,
- plan,
- learn,
- communicate in natural language,
- if necessary, integrate these skills in the completion of any given goal.
Computer-based systems that exhibit many of these capabilities exist. There is debate about whether modern AI systems possess them to an adequate degree.
Physical traits
Other capabilities are considered desirable in intelligent systems, as they may affect intelligence or aid in its expression. These include:
- the ability to sense, and
- the ability to act.
Tests for human-level AGI
Several tests meant to confirm human-level AGI have been considered, including:
- The Turing Test
- The Robot College Student Test
- The Employment Test
- The Ikea test
- The Coffee Test
- The Modern Turing Test
- The General Video-Game Learning Test
AI-complete problems
A problem is informally called "AI-complete" or "AI-hard" if it is believed that AGI would be needed to solve it, because the solution is beyond the capabilities of a purpose-specific algorithm.
Many problems have been conjectured to require general intelligence to solve. Examples include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Even a specific task like translation requires a machine to read and write in both languages, follow the author's argument, understand the context, and faithfully reproduce the author's original intent. All of these problems need to be solved simultaneously in order to reach human-level machine performance.
However, many of these tasks can now be performed by modern large language models. According to Stanford University's 2024 AI Index, AI has reached human-level performance on many benchmarks for reading comprehension and visual reasoning.
History
Classical AI
Modern AI research began in the mid-1950s. The first generation of AI researchers were convinced that artificial general intelligence was possible and that it would exist in just a few decades. AI pioneer Herbert A. Simon wrote in 1965: "machines will be capable, within twenty years, of doing any work a man can do."
Their predictions were the inspiration for Stanley Kubrick and Arthur C. Clarke's fictional character HAL 9000, who embodied what AI researchers believed they could create by the year 2001. AI pioneer Marvin Minsky was a consultant on the project of making HAL 9000 as realistic as possible according to the consensus predictions of the time. He said in 1967, "Within a generation... the problem of creating 'artificial intelligence' will substantially be solved".
Several classical AI projects, such as Doug Lenat's Cyc project and Allen Newell's Soar project, were directed at AGI.
However, in the early 1970s, it became obvious that researchers had grossly underestimated the difficulty of the project. Funding agencies became skeptical of AGI and put researchers under increasing pressure to produce useful "applied AI". In the early 1980s, Japan's Fifth Generation Computer Project revived interest in AGI, setting out a ten-year timeline that included AGI goals like "carry on a casual conversation". In response to this and the success of expert systems, both industry and government pumped money into the field. However, confidence in AI spectacularly collapsed in the late 1980s, and the goals of the Fifth Generation Computer Project were never fulfilled. For the second time in 20 years, AI researchers who predicted the imminent achievement of AGI had been mistaken. By the 1990s, AI researchers had a reputation for making vain promises. They became reluctant to make predictions at all and avoided mention of "human level" artificial intelligence for fear of being labeled "wild-eyed dreamers".
Narrow AI research
In the 1990s and early 21st century, mainstream AI achieved commercial success and academic respectability by focusing on specific sub-problems where AI can produce verifiable results and commercial applications, such as speech recognition and recommendation algorithms. These "applied AI" systems are now used extensively throughout the technology industry, and research in this vein is heavily funded in both academia and industry. For a time, development in this field was considered an emerging trend, with a mature stage expected to be reached in more than 10 years.
At the turn of the century, many mainstream AI researchers hoped that strong AI could be developed by combining programs that solve various sub-problems. Hans Moravec wrote in 1988:
I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than halfway, ready to provide the real-world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven, uniting the two efforts.
However, even at the time, this was disputed. For example, Stevan Harnad of Princeton University concluded his 1990 paper on the symbol grounding hypothesis by stating:
The expectation has often been voiced that "top-down" approaches to modeling cognition will somehow meet "bottom-up" approaches somewhere in between. If the grounding considerations in this paper are valid, then this expectation is hopelessly modular and there is really only one viable route from sense to symbols: from the ground up. A free-floating symbolic level like the software level of a computer will never be reached by this route – nor is it clear why we should even try to reach such a level, since it looks as if getting there would just amount to uprooting our symbols from their intrinsic meanings.
Modern artificial general intelligence research
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed by Marcus Hutter in 2000. Named AIXI, the proposed AGI agent maximizes "the ability to satisfy goals in a wide range of environments". This type of AGI, characterized by the ability to maximize a mathematical definition of intelligence rather than exhibit human-like behaviour, was also called universal artificial intelligence.The term AGI was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. AGI research activity in 2006 was described by Pei Wang and Ben Goertzel as "producing publications and preliminary results". The first summer school on AGI was organized in Xiamen, China in 2009 by the Xiamen university's Artificial Brain Laboratory and OpenCog. The first university course was given in 2010 and 2011 at Plovdiv University, Bulgaria by Todor Arnaudov. The Massachusetts Institute of Technology presented a course on AGI in 2018, organized by Lex Fridman and featuring a number of guest lecturers.