Philosophy of artificial intelligence


The philosophy of artificial intelligence is a branch of the philosophy of mind and the philosophy of computer science that explores artificial intelligence and its implications for knowledge and understanding of intelligence, ethics, consciousness, epistemology, and free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people, so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence.
The philosophy of artificial intelligence attempts to answer questions such as the following:
  • Can a machine act intelligently? Can it solve any problem that a person would solve by thinking?
  • Are human intelligence and machine intelligence the same? Is the human brain essentially a computer?
  • Can a machine have a mind, mental states, and consciousness in the same sense that a human being can? Can it feel how things are?
Questions like these reflect the divergent interests of AI researchers, cognitive scientists and philosophers respectively. The scientific answers to these questions depend on the definition of "intelligence" and "consciousness" and exactly which "machines" are under discussion.
Important propositions in the philosophy of AI include some of the following:
  • Turing's "polite convention": If a machine behaves as intelligently as a human being, then it is as intelligent as a human being.
  • The Dartmouth proposal: "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
  • Allen Newell and Herbert A. Simon's physical symbol system hypothesis: "A physical symbol system has the necessary and sufficient means for general intelligent action."
  • John Searle's strong AI hypothesis: "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."
  • Hobbes' mechanism: "For 'reason' ... is nothing but 'reckoning,' that is adding and subtracting, of the consequences of general names agreed upon for the 'marking' and 'signifying' of our thoughts..."

Can a machine display general intelligence?

Is it possible to create a machine that can solve all the problems humans solve using their intelligence? This question defines the scope of what machines could do in the future and guides the direction of AI research. It concerns only the behavior of machines and ignores the issues of interest to psychologists, cognitive scientists and philosophers; to answer it, it does not matter whether a machine is really thinking, as a person thinks, or is merely producing outcomes that appear to result from thinking.
The basic position of most AI researchers is summed up in this statement, which appeared in the proposal for the Dartmouth workshop of 1956:
  • "Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Arguments against the basic premise must show that building a working AI system is impossible because there is some practical limit to the abilities of computers or that there is some special quality of the human mind that is necessary for intelligent behavior and yet cannot be duplicated by a machine. Arguments in favor of the basic premise must show that such a system is possible.
It is also possible to sidestep the connection between the two parts of the above proposal. For instance, machine learning, beginning with Turing's famous child machine proposal, essentially achieves the desired feature of intelligence without a precise design-time description of how it would work. Accounts of robot tacit knowledge eliminate the need for a precise description altogether.
The first step to answering the question is to clearly define "intelligence".

Intelligence

Turing test

Alan Turing reduced the problem of defining intelligence to a simple question about conversation. He suggests that if a machine can answer any question posed to it, using the same words that an ordinary person would, then we may call that machine intelligent. A modern version of his experimental design would use an online chat room, where one of the participants is a real person and one of the participants is a computer program. The program passes the test if no one can tell which of the two participants is human. Turing notes that no one ever asks the question "can people think?" He writes "instead of arguing continually over this point, it is usual to have a polite convention that everyone thinks". Turing's test extends this polite convention to machines:
  • If a machine acts as intelligently as a human being, then it is as intelligent as a human being.
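As an illustration only, the chat-room version of the test can be sketched in a few lines of Python. The canned replies, the judge, and the pass criterion below are invented placeholders, not part of Turing's 1950 paper; a real test would need a human judge, a human foil, and many repeated sessions.

    import random

    def human_reply(question):
        # Stand-in for the human participant typing an answer.
        return "Let me think about that for a moment."

    def machine_reply(question):
        # Stand-in for the program under test.
        return "Let me think about that for a moment."

    def run_imitation_game(questions, judge):
        # Hide which participant is which behind the anonymous labels "A" and "B".
        responders = [human_reply, machine_reply]
        random.shuffle(responders)
        participants = dict(zip("AB", responders))
        transcript = {label: [(q, reply(q)) for q in questions]
                      for label, reply in participants.items()}
        guess = judge(transcript)  # the judge names the label it believes is human
        human_label = next(l for l, r in participants.items() if r is human_reply)
        # True means the judge misidentified the human on this run, i.e. the
        # program was indistinguishable from the person for this interrogator.
        return guess != human_label

A single run proves little; the interesting question is whether, across many judges and conversations, interrogators can do better than chance.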
One criticism of the Turing test is that it only measures the "humanness" of the machine's behavior, rather than the "intelligence" of the behavior. Since human behavior and intelligent behavior are not exactly the same thing, the test fails to measure intelligence. Stuart J. Russell and Peter Norvig write that "aeronautical engineering texts do not define the goal of their field as 'making machines that fly so exactly like pigeons that they can fool other pigeons'".

Intelligence as achieving goals

Twenty-first century AI research defines intelligence in terms of goal-directed behavior. It views intelligence as a set of problems that the machine is expected to solve – the more problems it can solve, and the better its solutions are, the more intelligent the program is. AI founder John McCarthy defined intelligence as "the computational part of the ability to achieve goals in the world."
Stuart Russell and Peter Norvig formalized this definition using abstract intelligent agents. An "agent" is something which perceives and acts in an environment. A "performance measure" defines what counts as success for the agent.
  • "If an agent acts so as to maximize the expected value of a performance measure based on past experience and knowledge then it is intelligent."
Definitions like this one try to capture the essence of intelligence. They have the advantage that, unlike the Turing test, they do not also test for unintelligent human traits such as making typing mistakes.
They have the disadvantage that they can fail to differentiate between "things that think" and "things that do not". By this definition, even a thermostat has a rudimentary intelligence.
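A minimal sketch of the agent framing, with an invented environment and an invented scoring rule, shows both the appeal of the definition and the thermostat problem just noted.

    class Thermostat:
        """A minimal 'agent': it perceives a temperature and acts on it."""

        def __init__(self, target=20.0, band=1.0):
            self.target = target
            self.band = band

        def act(self, perceived_temp):
            # The agent's entire behavior: a fixed rule from percepts to actions.
            if perceived_temp < self.target - self.band:
                return "heat_on"
            if perceived_temp > self.target + self.band:
                return "heat_off"
            return "do_nothing"

    def performance_measure(room_temps, target=20.0):
        # Invented scoring rule: success means staying near the target temperature,
        # so a higher (less negative) score counts as better performance.
        return -sum(abs(t - target) for t in room_temps)

    agent = Thermostat()
    print(agent.act(17.5))                           # -> "heat_on"
    print(performance_measure([19.5, 20.2, 21.0]))   # -> roughly -1.7

Nothing in this framing requires the action rule to be more than a simple lookup, which is exactly why definitions of this kind fail to separate "things that think" from "things that do not".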

Arguments that a machine can display general intelligence

The brain can be simulated

Hubert Dreyfus describes this argument as claiming that "if the nervous system obeys the laws of physics and chemistry, which we have every reason to suppose it does, then ... we ... ought to be able to reproduce the behavior of the nervous system with some physical device". This argument, first introduced as early as 1943 and vividly described by Hans Moravec in 1988,
is now associated with futurist Ray Kurzweil, who estimates that computer power will be sufficient for a complete brain simulation by the year 2029. A non-real-time simulation of a thalamocortical model that has the size of the human brain was performed in 2005, and it took 50 days to simulate 1 second of brain dynamics on a cluster of 27 processors.
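For scale, the figures reported for that 2005 run imply a slowdown factor of roughly

    \frac{50\ \text{days}}{1\ \text{s}} = \frac{50 \times 86{,}400\ \text{s}}{1\ \text{s}} \approx 4.3 \times 10^{6},

that is, the simulation ran about four million times slower than real time on its 27-processor cluster.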
Even AI's harshest critics agree that a brain simulation is possible in theory.
However, Searle points out that, in principle, anything can be simulated by a computer; thus, bringing the definition to its breaking point leads to the conclusion that any process at all can technically be considered "computation". "What we wanted to know is what distinguishes the mind from thermostats and livers," he writes. Thus, merely simulating the functioning of a living brain would in itself be an admission of ignorance regarding intelligence and the nature of the mind, like trying to build a jet airliner by copying a living bird precisely, feather by feather, with no theoretical understanding of aeronautical engineering.

Human thinking is symbol processing

In 1963, Allen Newell and Herbert A. Simon proposed that "symbol manipulation" was the essence of both human and machine intelligence. They wrote:
  • "A physical symbol system has the necessary and sufficient means of general intelligent action."
This claim is very strong: it implies both that human thinking is a kind of symbol manipulation and that machines can be intelligent.
Another version of this position was described by philosopher Hubert Dreyfus, who called it "the psychological assumption":
  • "The mind can be viewed as a device operating on bits of information according to formal rules."
The "symbols" that Newell, Simon and Dreyfus discussed were word-like and high levelsymbols that directly correspond with objects in the world, such as and . Most AI programs written between 1956 and 1990 used this kind of symbol. Modern AI, based on statistics and mathematical optimization, does not use the high-level "symbol processing" that Newell and Simon discussed.

Arguments against symbol processing

These arguments show that human thinking does not consist solely of high-level symbol manipulation. They do not show that artificial intelligence is impossible, only that more than symbol processing is required.
Gödelian anti-mechanist arguments
In 1931, Kurt Gödel proved with his first incompleteness theorem that it is always possible to construct a "Gödel statement" that a given consistent formal system of logic (one powerful enough to encode arithmetic) could not prove. Despite being a true statement, the constructed Gödel statement is unprovable in the given system. More speculatively, Gödel conjectured that the human mind can eventually correctly determine the truth or falsity of any well-grounded mathematical statement, and that therefore the human mind's power is not reducible to a mechanism. The philosopher John Lucas and the physicist Roger Penrose have championed this philosophical anti-mechanist argument.
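In modern textbook notation (a standard summary rather than Gödel's own formulation), for any consistent, effectively axiomatized theory F strong enough to encode arithmetic, the diagonal lemma yields a sentence G_F with

    F \vdash\; G_F \leftrightarrow \neg\,\mathrm{Prov}_F(\ulcorner G_F \urcorner),

and if F is consistent then F cannot prove G_F; if F is in addition ω-consistent (or if Rosser's refinement of the construction is used), F cannot prove ¬G_F either, yet G_F is true in the standard model precisely because it is unprovable.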
Gödelian anti-mechanist arguments tend to rely on the innocuous-seeming claim that a system of human mathematicians is both consistent and believes fully in its own consistency. This is provably impossible for a Turing machine to do (by the second incompleteness theorem); therefore, the Gödelian concludes that human reasoning is too powerful to be captured by a Turing machine, and by extension, any digital mechanical device.
However, the modern consensus in the scientific and mathematical community is that actual human reasoning is inconsistent; that any consistent "idealized version" H of human reasoning would logically be forced to adopt a healthy but counter-intuitive open-minded skepticism about the consistency of H; and that Gödel's theorems do not lead to any valid argument that humans have mathematical reasoning capabilities beyond what a machine could ever duplicate. This consensus that Gödelian anti-mechanist arguments are doomed to failure is laid out strongly in Artificial Intelligence: "any attempt to utilize [Gödel's incompleteness results] to attack the computationalist thesis is bound to be illegitimate, since these results are quite consistent with the computationalist thesis."
Stuart Russell and Peter Norvig agree that Gödel's argument does not consider the nature of real-world human reasoning. It applies to what can theoretically be proved, given an infinite amount of memory and time. In practice, real machines have finite resources and will have difficulty proving many theorems. It is not necessary to be able to prove everything in order to be an intelligent person.
Less formally, Douglas Hofstadter, in his Pulitzer Prize-winning book Gödel, Escher, Bach: An Eternal Golden Braid, states that these "Gödel-statements" always refer to the system itself, drawing an analogy to the way the Epimenides paradox uses statements that refer to themselves, such as "this statement is false" or "I am lying". But, of course, the Epimenides paradox applies to anything that makes statements, whether it is a machine or a human, even Lucas himself. Consider:
  • Lucas can't assert the truth of this statement.
This statement is true but cannot be asserted by Lucas. This shows that Lucas himself is subject to the same limits that he describes for machines, as are all people, and so Lucas's argument is pointless.
After concluding that human reasoning is non-computable, Penrose went on to controversially speculate that some kind of hypothetical non-computable processes involving the collapse of quantum mechanical states give humans a special advantage over existing computers. Existing quantum computers are only capable of reducing the complexity of Turing computable tasks and are still restricted to tasks within the scope of Turing machines. By Penrose and Lucas's arguments, the fact that quantum computers are only able to complete Turing computable tasks implies that they cannot be sufficient for emulating the human mind. Therefore, Penrose seeks some other process involving new physics, for instance quantum gravity, which might manifest new physics at the scale of the Planck mass via spontaneous quantum collapse of the wave function. These states, he suggested, occur both within neurons and across more than one neuron. However, other scientists point out that there is no plausible organic mechanism in the brain for harnessing any sort of quantum computation, and furthermore that the timescale of quantum decoherence seems too fast to influence neuron firing.