Technological singularity
The technological singularity, often simply called the singularity, is a hypothetical event in which technological growth accelerates beyond human control, producing unpredictable changes in human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's intelligence explosion model of 1965, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles; more intelligent generations would appear more and more rapidly, causing an explosive increase in intelligence that culminates in a powerful superintelligence, far surpassing human intelligence.
Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence could result in human extinction. The consequences of a technological singularity and its potential benefit or harm to the human species have been intensely debated.
Prominent technologists and academics dispute the plausibility of a technological singularity and associated artificial intelligence "explosion", including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, Gordon Moore, and Roger Penrose. One claim is that artificial intelligence growth is likely to run into decreasing returns instead of accelerating ones. Stuart J. Russell and Peter Norvig observe that in the history of technology, improvement in a particular area tends to follow an S curve: it begins with accelerating improvement, then levels off without continuing upward into a hyperbolic singularity.
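For illustration, a standard mathematical form of such an S curve is the logistic function f(t) = L / (1 + e^(-k(t - t0))), which grows roughly exponentially while t is well below the midpoint t0 and then flattens out as it approaches the ceiling L; the symbols here are generic parameters rather than values taken from Russell and Norvig.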
History
Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for contemporary discourse on the technological singularity. His pivotal 1950 paper "Computing Machinery and Intelligence" argued that a machine could, in theory, exhibit intelligent behavior equivalent to or indistinguishable from that of a human. But a technological singularity is not required for machines that can perform at or beyond a human level on certain tasks to be developed, nor does their existence imply the possibility of such an occurrence, as demonstrated by events such as the 1997 victory of IBM's Deep Blue supercomputer in a chess match with grandmaster Garry Kasparov.

The Hungarian–American mathematician John von Neumann is the first person known to have discussed a "singularity" in technological progress. Stanislaw Ulam reported in 1958 that an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". Subsequent authors echoed this viewpoint.
In 1965, I. J. Good speculated that superhuman intelligence might bring about an "intelligence explosion", in which an ultraintelligent machine capable of designing even better machines would leave human intelligence far behind.
The concept and the term "singularity" were popularized by Vernor Vinge: first in 1983, in an article that claimed that, once humans create intelligences greater than their own, there will be a technological and social transition similar in some sense to "the knotted space-time at the center of a black hole"; and then in his 1993 essay "The Coming Technological Singularity", in which he wrote that it would signal the end of the human era, as the new superintelligence would continue to upgrade itself and advance technologically at an incomprehensible rate, and he would be surprised if it occurred before 2005 or after 2030.
Another significant contribution to wider circulation of the notion was Ray Kurzweil's 2005 book The Singularity Is Near, predicting the singularity by 2045.
Intelligence explosion
Although technological progress has been accelerating in most areas, it has been limited by the basic intelligence of the human brain, which has not, according to Paul R. Ehrlich, changed significantly for millennia. But with the increasing power of computers and other technologies, it might eventually be possible to build a machine significantly more intelligent than humans.

If superhuman intelligence is invented—through either the amplification of human intelligence or artificial intelligence—it will, in theory, vastly surpass human problem-solving and inventive skill. Such an AI is often called a seed AI because if an AI is created with engineering capabilities that match or surpass those of its creators, it could autonomously improve its own software and hardware to design an even more capable machine, which could repeat the process in turn. This recursive self-improvement could accelerate, potentially allowing enormous qualitative change before reaching any limits imposed by the laws of physics or theoretical computation. It is speculated that over many iterations, such an AI would far surpass human cognitive abilities.
Emergence of superintelligence
A superintelligence, hyperintelligence, or superhuman intelligence is a hypothetical agent that possesses intelligence far surpassing that of even the brightest and most gifted humans. "Superintelligence" may also refer to the form or degree of intelligence possessed by such an agent. I. J. Good, Vernor Vinge, and Ray Kurzweil define the concept in terms of the technological creation of superintelligence, arguing that it is difficult or impossible for present-day humans to predict what human beings' lives would be like in a post-singularity world.

The related concept of "speed superintelligence" describes an artificial intelligence that can function like a human mind but much faster. For example, given a millionfold increase in the speed of information processing relative to that of humans, a subjective year would pass in about 30 physical seconds. Such an increase in information processing speed could result in or significantly contribute to the singularity.
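As a rough check of the arithmetic behind that example: a year is about 365.25 × 24 × 3,600 ≈ 3.16 × 10^7 seconds, so at a 10^6-fold speedup a subjective year elapses in roughly 3.16 × 10^7 / 10^6 ≈ 32 physical seconds, consistent with the figure of about 30 seconds.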
Technology forecasters and researchers disagree about when, or whether, human intelligence will be surpassed. Some argue that advances in artificial intelligence may result in general reasoning systems that bypass human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence. A number of futures studies focus on scenarios that combine these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification. Robin Hanson's 2016 book The Age of Em describes a future in which human brains are scanned and digitized, creating "uploads" or digital versions of human consciousness. In this future, the development of these uploads may precede or coincide with the emergence of superintelligent AI.
Variations
Non-AI singularity
Some writers use "the singularity" in a broader way, to refer to any radical changes in society brought about by new technology, although Vinge and other writers say that without superintelligence, such changes would not be a true singularity.

Predictions
Numerous dates have been predicted for the attainment of the singularity.

In 1965, Good wrote that it was more probable than not that an ultra-intelligent machine would be built in the 20th century.
In 1988, Moravec predicted that computing capabilities sufficient for human-level AI would be available in supercomputers before 2010, assuming that the then-current rate of improvement continued.
In 1993, Vinge predicted the attainment of greater-than-human intelligence between 2005 and 2030.
In 2005, Kurzweil predicted human-level AI around 2029 and the singularity in 2045; he reaffirmed these predictions in 2024 in The Singularity Is Nearer.
In 1998, Moravec revised his earlier prediction, forecasting human-level AI by 2040 and intelligence far beyond human by 2050.
Four informal polls of AI researchers, conducted in 2012 and 2013 by Bostrom and Müller, yielded a median estimate of 50% confidence that human-level AI would be developed by 2040–2050.
In September 2025, a review of surveys of scientists and industry experts from the previous 15 years found that most agreed that artificial general intelligence, a level well below the technological singularity, will occur by 2100. A more recent analysis by AIMultiple reported, "Current surveys of AI researchers are predicting AGI around 2040".
Plausibility
Prominent technologists and academics who dispute the plausibility of a technological singularity include Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, and Gordon Moore, whose law is often cited in support of the concept.

Proposed methods for creating superhuman or transhuman minds typically fall into two categories: intelligence amplification of human brains and artificial intelligence. The many speculated ways to augment human intelligence include bioengineering, genetic engineering, nootropic drugs, AI assistants, direct brain–computer interfaces, and mind uploading.

Robin Hanson has expressed skepticism of human intelligence augmentation, writing that once the "low-hanging fruit" of easy methods for increasing human intelligence have been exhausted, further improvements will become increasingly difficult.
In conversation about human-level artificial intelligence with cognitive scientist Gary Marcus, computer scientist Grady Booch skeptically said the singularity is "sufficiently imprecise, filled with emotional and historic baggage, and touches some of humanity's deepest hopes and fears that it's hard to have a rational discussion therein". Later in the conversation, Marcus, while more optimistic about the progress of AI, agreed that any major advances would not happen as a single event, but rather as a slow and gradual increase in reliability and usefulness.
The possibility of an intelligence explosion depends on three factors. The first, accelerating factor is the new intelligence enhancements made possible by each previous improvement; working against it, as intelligences become more advanced, further advances will become more and more complicated, possibly outweighing the advantage of increased intelligence. The second is that each improvement must generate at least one more improvement, on average, for movement toward the singularity to continue. Finally, the laws of physics may eventually prevent further improvement.
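The second condition can be illustrated with a simple branching-process toy model (a hypothetical sketch, not drawn from the cited authors): if each improvement spawns, on average, fewer than one further improvement, the cascade tends to die out, while an average above one tends to grow without bound.

```python
import math
import random

def poisson(rng, lam):
    """Draw from a Poisson distribution via Knuth's method (adequate for small lam)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while p > threshold:
        k += 1
        p *= rng.random()
    return k - 1

def simulate_improvements(mean_offspring, generations=50, seed=0):
    """Toy branching-process model of recursive self-improvement.

    Each active improvement spawns a Poisson-distributed number of follow-on
    improvements with the given mean. A mean below 1 tends to fizzle out;
    a mean above 1 tends toward unbounded growth (the "explosion" regime).
    """
    rng = random.Random(seed)
    active = 1                      # start from a single initial improvement
    history = [active]
    for _ in range(generations):
        active = sum(poisson(rng, mean_offspring) for _ in range(active))
        history.append(active)
        if active == 0 or active > 10**6:
            break                   # extinct, or clearly exploding
    return history

print(simulate_improvements(0.8))   # subcritical: the cascade usually dies out
print(simulate_improvements(1.2))   # supercritical: the cascade often keeps growing
```

In this toy setting the threshold sits exactly at a mean of one follow-on improvement per improvement, mirroring the condition stated above; it says nothing about whether real AI development has such a structure.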
There are two logically independent, but mutually reinforcing, causes of intelligence improvements: increases in the speed of computation and improvements to the algorithms used. The former is predicted by Moore's Law and the forecasted improvements in hardware, and is broadly comparable to previous technological advances. "Most experts believe that Moore's law is coming to an end during this decade", the AIMultiple report reads, but "quantum computing can be used to efficiently train neural networks", potentially working around any end to Moore's Law. But Schulman and Sandberg argue that software will present more complex challenges than simply operating on hardware capable of running at human intelligence levels or beyond.
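As a schematic illustration of the hardware trend (with a generic doubling time, not a figure from the cited sources): capacity growing with a fixed doubling time T follows C(t) = C(0) × 2^(t/T), so a two-year doubling time implies roughly 2^5 = 32-fold growth per decade, whereas algorithmic progress offers no comparably simple trend line.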
A 2017 email survey of authors with publications at the 2015 NeurIPS and ICML machine learning conferences asked about the chance that "the intelligence explosion argument is broadly correct". Of the respondents, 12% said it was "quite likely", 17% said it was "likely", 21% said it was "about even", 24% said it was "unlikely", and 26% said it was "quite unlikely".