Hubert Dreyfus's views on artificial intelligence
Hubert Dreyfus was a critic of artificial intelligence research. In a series of papers and books, including Alchemy and AI, What Computers Can't Do and Mind over Machine, he presented a skeptical and cautious assessment of AI's progress and a critique of the philosophical foundations of the field. Dreyfus' objections are discussed in most introductions to the philosophy of artificial intelligence, including standard AI textbooks and surveys of contemporary philosophy.
Dreyfus argued that human intelligence and expertise depend primarily on informal and unconscious processes that are not yet understood, rather than on symbolic manipulation, and that these essentially human skills cannot be fully captured in formal rules. His critique was based on the insights of modern continental philosophers such as Merleau-Ponty and Heidegger, and was directed at the first wave of AI research, which tried to reduce intelligence to high-level formal symbols.
When Dreyfus' ideas were first introduced in the mid-1960s, they were met in the AI community with ridicule and outright hostility. By the 1980s, however, some of his perspectives were rediscovered by researchers working in robotics and the new field of connectionism—approaches that were called "sub-symbolic" at the time because they eschewed early AI research's emphasis on high-level symbols.
In the 21st century, "sub-symbolic" artificial neural networks and other statistics-based approaches to machine learning were highly successful. Historian and AI researcher Daniel Crevier wrote: "time has proven the accuracy and perceptiveness of some of Dreyfus's comments." Dreyfus said in 2007, "I figure I won and it's over—they've given up."
Dreyfus' critique
The grandiose promises of artificial intelligence
In Alchemy and AI and What Computers Can't Do, Dreyfus summarized the history of artificial intelligence and ridiculed the unbridled optimism that permeated the field. For example, Herbert A. Simon, following the success of his program General Problem Solver, predicted that by 1967:
- A computer would be world champion in chess.
- A computer would discover and prove an important new mathematical theorem.
- Most theories in psychology would take the form of computer programs.
Dreyfus felt that this optimism was unwarranted and, in 1965, argued forcefully that predictions like these would not come true. He would eventually be proven right. Pamela McCorduck explains Dreyfus' position:
A great misunderstanding accounts for public confusion about thinking machines, a misunderstanding perpetrated by the unrealistic claims researchers in AI have been making, claims that thinking machines are already here, or at any rate, just around the corner.
These predictions reflected the optimism of the cognitive revolution, which promoted an "information processing" model of the mind. Newell and Simon articulated this model in their physical symbol system hypothesis, and philosophers such as Jerry Fodor and Hilary Putnam later expanded it into a philosophical position known as computationalism. In AI, the approach is now called symbolic AI or "GOFAI" (good old-fashioned AI).
Dreyfus argued that "symbolic AI" was the latest version of the ancient program of rationalism in philosophy. Rationalism had come under heavy criticism in the 20th century from philosophers like Martin Heidegger and Edmund Husserl. The mind, according to modern continental philosophy, is not "rationalist" and is nothing like a digital computer.
Cognitivism led early AI researchers to believe that they had successfully simulated the essential process of human thought, so it seemed a short step to producing fully intelligent machines. Dreyfus' last paper detailed the ongoing history of the "first step fallacy", in which AI researchers extrapolate from initial successes, treating them as promising, or even guaranteeing, far greater successes to come.
Dreyfus' four assumptions of artificial intelligence research
In Alchemy and AI and What Computers Can't Do, Dreyfus identified four philosophical assumptions underlying the field, at least one of which he argued must hold for AI to succeed. "In each case," Dreyfus writes, "the assumption is taken by workers in AI as an axiom, guaranteeing results, whereas it is, in fact, one hypothesis among others, to be tested by the success of such work." Dreyfus argues that AI would be impossible without accepting at least one of these four assumptions:
The biological assumption: The brain processes information in discrete operations by way of some biological equivalent of on/off switches.
In the early days of research into neurology, scientists found that neurons fire in all-or-nothing pulses. Several researchers, such as Walter Pitts and Warren McCulloch, speculated with great confidence that neurons functioned similarly to the way Boolean logic gates operate, and so could be imitated by electronic circuitry at the level of the neuron.
When digital computers became widely used in the early 1950s, this argument was extended to suggest that the brain was a vast physical symbol system, manipulating the binary symbols of zero and one. Dreyfus refuted the biological assumption by citing research in neurology suggesting that the action and timing of neuron firing have analog components. But Daniel Crevier observes that "few still held that belief in the early 1970s, and nobody argued against Dreyfus" about the biological assumption.
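For illustration only, the McCulloch-Pitts picture of the neuron can be sketched in a few lines of code; the function below and its hand-picked weights are hypothetical, not drawn from Dreyfus or from the original papers. A unit fires (outputs 1) exactly when a weighted sum of binary inputs reaches a threshold, which is enough to imitate Boolean logic gates:

```python
# A minimal McCulloch-Pitts-style unit: binary inputs, fixed weights,
# and an all-or-nothing output determined by a threshold.

def mcp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# AND gate: fires only when both inputs are active.
assert mcp_neuron([1, 1], [1, 1], threshold=2) == 1
assert mcp_neuron([1, 0], [1, 1], threshold=2) == 0

# OR gate: fires when either input is active.
assert mcp_neuron([0, 1], [1, 1], threshold=1) == 1

# NOT gate: an inhibitory (negative) weight inverts its input.
assert mcp_neuron([1], [-1], threshold=0) == 0
assert mcp_neuron([0], [-1], threshold=0) == 1
```

It is precisely this discrete, all-or-nothing idealization that the neurological research Dreyfus cited called into question: if the action and timing of neuron firing have analog components, the brain is not operating at the level of such binary switches.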
The psychological assumption: The mind can be viewed as a device operating on bits of information according to formal rules.
He refuted this assumption by showing that much of what we know about the world consists of complex attitudes or tendencies that make us lean towards one interpretation over another. He argued that, even when we use explicit symbols, we are using them against an unconscious and informal background including commonsense knowledge and that without this background our symbols cease to mean anything. This background, in Dreyfus' view, was not implemented in individual brains as explicit individual symbols with explicit individual meanings.
The epistemological assumption: All knowledge can be formalized.
This concerns the philosophical issue of epistemology, or the study of knowledge. Even if we agree that the psychological assumption is false, AI researchers could still argue that it is possible for a symbol processing machine to represent all knowledge, regardless of whether human beings represent knowledge the same way. Dreyfus argued that there is no justification for this assumption, since so much of human knowledge is not symbolic or even expressible using formal constructs.
The ontological assumption: The world consists of independent facts that can be represented by independent symbols.
AI researchers often assume that there is no limit to formal, scientific knowledge, because they assume that any phenomenon in the universe can be described by symbols or scientific theories. This assumes that everything that exists can be understood as objects, properties of objects, classes of objects, relations of objects, and so on: precisely those things that can be described by logic, language and mathematics. The study of being or existence is called ontology, and so Dreyfus calls this the ontological assumption. If this assumption is false, then it raises doubts about what we can ultimately know and what intelligent machines will ultimately be able to help us do.
Knowing-how vs. knowing-that: the primacy of intuition
In Mind Over Machine, written during the heyday of expert systems, Dreyfus analyzed the difference between human expertise and the programs that claimed to capture it. This expanded on ideas from What Computers Can't Do, where he had made a similar argument criticizing the "cognitive simulation" school of AI research practiced by Allen Newell and Herbert A. Simon in the 1960s. Dreyfus argued that human problem solving and expertise depend on our background sense of the context, of what is important and interesting given the situation, rather than on the process of searching through combinations of possibilities to find what we need. Dreyfus would describe it in 1986 as the difference between "knowing-that" and "knowing-how", based on Heidegger's distinction of present-at-hand and ready-to-hand.
Knowing-that is our conscious, step-by-step problem solving ability. We use these skills when we encounter a difficult problem that requires us to stop, step back, and search through ideas one at a time. At moments like this, the ideas become very precise and simple: they become context-free symbols, which we manipulate using logic and language. These are the skills that Newell and Simon had demonstrated with both psychological experiments and computer programs. Dreyfus argued that these "knowing-that" skills, the conscious, rational problem-solving techniques that Newell and Simon had modeled, were a relatively small part of intelligent behavior and were far from sufficient for human-level intelligence.
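As a purely illustrative sketch (the toy puzzle, names, and operations below are hypothetical, not taken from Newell and Simon's actual programs), this kind of explicit, step-by-step search through combinations of context-free symbols can be rendered as a simple state-space search:

```python
# "Knowing-that" as explicit search: enumerate sequences of discrete,
# symbolically represented operations until one reaches the goal.
from collections import deque

def solve(start, goal, operations, max_depth=10):
    """Breadth-first search for a sequence of operations from start to goal."""
    frontier = deque([(start, [])])   # (current state, steps taken so far)
    seen = {start}
    while frontier:
        state, steps = frontier.popleft()
        if state == goal:
            return steps
        if len(steps) >= max_depth:
            continue
        for name, op in operations:
            nxt = op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, steps + [name]))
    return None  # no plan found within max_depth

# Toy problem: turn 2 into 11 using two explicit, context-free operations.
ops = [("add 3", lambda x: x + 3), ("double", lambda x: x * 2)]
print(solve(2, 11, ops))  # -> ['add 3', 'add 3', 'add 3']
```

Every alternative here is explicitly represented and serially considered; Dreyfus's point was that everyday skillful coping, described next, does not work this way at all.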
Knowing-how, on the other hand, is the way we deal with things normally. We take actions without using conscious symbolic reasoning at all, as when we recognize a face, drive ourselves to work or find the right thing to say. We seem to simply jump to the appropriate response, without explicitly considering any alternatives. This is the essence of skill and expertise, Dreyfus argued: when our intuitions have been trained to the point that we forget the rules and simply "size up the situation" and react.
The human sense of the situation, according to Dreyfus, is based on our goals, our bodies and our culture—all of our unconscious intuitions, attitudes and knowledge of the world. This "context" or "background" is a form of knowledge that is not stored symbolically, but intuitively in some way. It affects what we notice and what we don't notice, what we expect and what possibilities we don't consider: we discriminate between what is essential and inessential. The things that are inessential are relegated to our "fringe consciousness": the millions of things we're aware of, but we're not really thinking about right now.
Dreyfus did not believe that AI programs could capture this "background" or do the kind of fast problem solving that it allows. He argued that our unconscious knowledge could never be captured symbolically. If AI could not find a way to address these issues, then it was doomed to failure, an exercise in "tree climbing with one's eyes on the moon."