AI winter


In the history of artificial intelligence, an AI winter is a period of reduced funding and interest in AI research. The field has experienced several hype cycles, followed by disappointment and criticism, followed by funding cuts, followed by renewed interest years or even decades later.
The term first appeared in 1984 as the topic of a public debate at the annual meeting of AAAI. Roger Schank and Marvin Minsky—two leading AI researchers who experienced the "winter" of the 1970s—warned the business community that enthusiasm for AI had spiraled out of control in the 1980s and that disappointment would certainly follow. They described a chain reaction, similar to a "nuclear winter", that would begin with pessimism in the AI community, followed by pessimism in the press, followed by a severe cutback in funding, followed by the end of serious research. Three years later the billion-dollar AI industry began to collapse.
There were two major "winters", approximately 1974–1980 and 1987–2000, along with several smaller episodes.
Enthusiasm and optimism about AI have generally increased since the field's low point in the early 1990s. Beginning about 2012, interest in artificial intelligence from the research and corporate communities led to a dramatic increase in funding and investment, which produced the current AI boom.

Early episodes

Machine translation and the ALPAC report of 1966

Natural language processing (NLP) research has its roots in the early 1930s and began with work on machine translation. However, significant advances and applications only began to emerge after Warren Weaver published his influential memorandum on machine translation in 1949. The memorandum generated great excitement within the research community. In the following years, notable events unfolded: IBM embarked on the development of the first machine translation system, MIT appointed its first full-time professor of machine translation, and several conferences dedicated to MT took place. The culmination came with the 1954 public demonstration of the Georgetown–IBM machine, which garnered widespread attention in respected newspapers.
As with later AI booms and the winters that followed them, the media tended to exaggerate the significance of these developments. Headlines about the Georgetown–IBM experiment proclaimed phrases like "The bilingual machine", "Robot brain translates Russian into King's English", and "Polyglot brainchild". The actual demonstration, however, involved the translation of a curated set of only 49 Russian sentences into English, with the machine's vocabulary limited to just 250 words. To put this into perspective, a 2006 study by Paul Nation found that humans need a vocabulary of around 8,000 to 9,000 word families to comprehend written texts with 98% coverage.
During the Cold War, the US government was particularly interested in the automatic, instant translation of Russian documents and scientific reports. The government aggressively supported efforts at machine translation starting in 1954. Another factor that propelled the field of mechanical translation was the interest shown by the Central Intelligence Agency. During that period, the CIA firmly believed in the importance of developing machine translation capabilities and supported such initiatives. They also recognized that this program had implications that extended beyond the interests of the CIA and the intelligence community.
[Image: Briefing for US Vice President Gerald Ford in 1973 on the junction-grammar-based computer translation model]
At the outset, the researchers were optimistic. Noam Chomsky's new work in grammar was streamlining the translation process, and there were "many predictions of imminent 'breakthroughs'". However, researchers had underestimated the profound difficulty of word-sense disambiguation. In order to translate a sentence, a machine needed to have some idea what the sentence was about, otherwise it made mistakes. An apocryphal example is "the spirit is willing but the flesh is weak." Translated back and forth with Russian, it became "the vodka is good but the meat is rotten." Later researchers would call this the commonsense knowledge problem.
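A minimal sketch in Python of the word-sense problem these researchers ran into (the dictionary and its glosses are entirely hypothetical, chosen only for illustration): a word-for-word translator that has no idea what a sentence is about can only pick one sense of each ambiguous word, and so produces nonsense of the "vodka is good" variety.

```python
# Toy word-for-word "translator" with a hypothetical two-sense dictionary.
# It always picks the first listed sense, which is effectively what a system
# with no model of the sentence's topic does.
senses = {
    "spirit":  ["distilled alcohol", "soul/mind"],
    "willing": ["ready/eager", "bequeathing"],
    "flesh":   ["meat", "the body"],
    "weak":    ["feeble", "diluted"],
}

def translate(sentence: str) -> str:
    # Look each word up; words not in the dictionary pass through unchanged.
    return " ".join(senses.get(w, [w])[0] for w in sentence.lower().split())

print(translate("the spirit is willing but the flesh is weak"))
# -> "the distilled alcohol is ready/eager but the meat is feeble"
# Without some notion of the topic, the system cannot tell which sense applies.
```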
By 1964, the National Research Council had become concerned about the lack of progress and formed the Automatic Language Processing Advisory Committee to look into the problem. They concluded, in a famous 1966 report, that machine translation was more expensive, less accurate and slower than human translation. After spending some 20 million dollars, the NRC ended all support. Careers were destroyed and research ended.
Machine translation followed the same path as NLP more broadly, moving from rule-based approaches through statistical approaches to neural network approaches, which by 2023 had culminated in large language models.

The failure of single-layer neural networks in 1969

Simple networks or circuits of connected units, including Walter Pitts and Warren McCulloch's neural network for logic and Marvin Minsky's SNARC system, failed to deliver the promised results and were abandoned in the late 1950s. Following the success of programs such as the Logic Theorist and the General Problem Solver, algorithms for manipulating symbols seemed more promising at the time as a means of achieving logical reasoning, which was then viewed as the essence of intelligence, whether natural or artificial.
Interest in perceptrons, invented by Frank Rosenblatt, was kept alive only by the sheer force of his personality.
He optimistically predicted that the perceptron "may eventually be able to learn, make decisions, and translate languages".
Mainstream research into perceptrons ended partly because the 1969 book Perceptrons by Marvin Minsky and Seymour Papert emphasized the limits of what perceptrons could do, showing, for example, that a single-layer perceptron cannot compute a function as simple as the exclusive-or (XOR) of its inputs. While it was already known that multilayer perceptrons are not subject to this criticism, nobody in the 1960s knew how to train one. Backpropagation was still years away.
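A minimal sketch in Python of the kind of limitation Minsky and Papert emphasized (the learning rate and epoch count are arbitrary choices for illustration): a single-layer perceptron trained with Rosenblatt's learning rule never reaches perfect accuracy on XOR, because the two classes are not linearly separable.

```python
import numpy as np

# XOR truth table: no single linear threshold unit can separate these classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])

# Single-layer perceptron trained with Rosenblatt's learning rule.
w = np.zeros(2)
b = 0.0
for epoch in range(100):
    for xi, target in zip(X, y):
        pred = int(w @ xi + b > 0)      # threshold unit
        error = target - pred
        w += 0.1 * error * xi           # perceptron weight update
        b += 0.1 * error

preds = (X @ w + b > 0).astype(int)
print("predictions:", preds, "accuracy:", (preds == y).mean())
# The perceptron never reaches 100% accuracy on XOR; a network with a hidden
# layer can, but a way to train such networks (backpropagation) was not yet known.
```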
Major funding for projects exploring neural network approaches was difficult to find in the 1970s and early 1980s. Important theoretical work continued despite the lack of funding. The "winter" of the neural network approach came to an end in the mid-1980s, when the work of John Hopfield, David Rumelhart and others revived large-scale interest. Rosenblatt did not live to see this, however, as he died in a boating accident shortly after Perceptrons was published.

The setbacks of 1974

The Lighthill report

In 1973, Professor Sir James Lighthill was asked by the UK Parliament to evaluate the state of AI research in the United Kingdom. His report, now called the Lighthill report, criticized the utter failure of AI to achieve its "grandiose objectives". He concluded that nothing being done in AI could not be done in other sciences. He specifically mentioned the problem of "combinatorial explosion" or "intractability", which implied that many of AI's most successful algorithms would grind to a halt on real-world problems and were only suitable for solving "toy" versions.
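A back-of-the-envelope illustration in Python of what combinatorial explosion means in practice (the branching factors and depths below are arbitrary examples, and the chess-like figures are rough, commonly cited estimates): the number of states an exhaustive search must examine grows exponentially with problem size, so a method that works on a toy problem becomes hopeless on a realistic one.

```python
# Exhaustive search over a tree with branching factor b and depth d must
# examine on the order of b**d leaf nodes. This exponential growth is the
# "combinatorial explosion" the Lighthill report pointed to.
for b, d in [(3, 5), (10, 10), (35, 80)]:  # 35 and 80 roughly mimic chess
    print(f"branching factor {b}, depth {d}: ~{b**d:.3e} leaf nodes")
```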
The report was contested in a debate broadcast in the BBC "Controversy" series in 1973. The debate "The general purpose robot is a mirage" from the Royal Institution was Lighthill versus the team of Donald Michie, John McCarthy and Richard Gregory. McCarthy later wrote that "the combinatorial explosion problem has been recognized in AI from the beginning".
The report led to the complete dismantling of AI research in the UK. AI research continued in only a few universities. Research would not revive on a large scale until 1983, when Alvey, a British government research programme, began to fund AI again from a war chest of £350 million in response to the Japanese Fifth Generation Project. Alvey had a number of UK-only requirements which did not sit well internationally, especially with US partners, and it lost Phase 2 funding.

DARPA's early 1970s funding cuts

During the 1960s, the Defense Advanced Research Projects Agency (then known as ARPA) provided millions of dollars for AI research with few strings attached. J. C. R. Licklider, the founding director of DARPA's computing division, believed in "funding people, not projects", and he and several successors allowed AI's leaders to spend the money almost any way they liked.
This attitude changed after the passage of the Mansfield Amendment in 1969, which required DARPA to fund "mission-oriented direct research, rather than basic undirected research". Pure undirected research of the kind that had gone on in the 1960s would no longer be funded by DARPA. Researchers now had to show that their work would soon produce some useful military technology. AI research proposals were held to a very high standard. The situation was not helped when the Lighthill report and DARPA's own study suggested that most AI research was unlikely to produce anything truly useful in the foreseeable future. DARPA's money was directed at specific projects with identifiable goals, such as autonomous tanks and battle management systems. By 1974, funding for AI projects was hard to find.
AI researcher Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration. Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more." The result, Moravec claims, is that some of the staff at DARPA had lost patience with AI research. "It was literally phrased at DARPA that 'some of these people were going to be taught a lesson having their two-million-dollar-a-year contracts cut to almost nothing!'" Moravec told Daniel Crevier.
While the autonomous tank project was a failure, the battle management system (the Dynamic Analysis and Replanning Tool) proved to be enormously successful, saving billions in the first Gulf War, repaying all of DARPA's investment in AI and justifying DARPA's pragmatic policy.