Foundations of mathematics
Foundations of mathematics are the logical and mathematical frameworks that allow the development of mathematics without generating self-contradictory theories and, in particular, provide reliable concepts of theorems, proofs, algorithms, and so on. This may also include the philosophical study of the relation of this framework with reality.
The term "foundations of mathematics" was not coined before the end of the 19th century, although foundations were first established by the ancient Greek philosophers under the name of Aristotle's logic and systematically applied in Euclid's Elements. A mathematical assertion is considered as true only if it is a theorem that is proved from true premises by means of a sequence of syllogisms, the premises being either already proved theorems or self-evident assertions called axioms or postulates.
These foundations were tacitly assumed to be definitive until the introduction of infinitesimal calculus by Isaac Newton and Gottfried Wilhelm Leibniz in the 17th century. This new area of mathematics involved new methods of reasoning and new basic concepts that were not well founded, but had astonishing consequences, such as the deduction from Newton's law of gravitation that the orbits of the planets are ellipses.
During the 19th century, progress was made towards elaborating precise definitions of the basic concepts of infinitesimal calculus, notably the natural and real numbers. This led to a series of seemingly paradoxical mathematical results near the end of the 19th century that challenged the general confidence in the reliability and truth of mathematical results. This has been called the foundational crisis of mathematics.
The resolution of this crisis involved the rise of a new mathematical discipline called mathematical logic that includes set theory, model theory, proof theory, computability and computational complexity theory, and more recently, parts of computer science. Subsequent discoveries in the 20th century then stabilized the foundations of mathematics into a coherent framework valid for all mathematics. This framework is based on a systematic use of the axiomatic method and on set theory, specifically Zermelo–Fraenkel set theory with the axiom of choice. Foundations based on type theory have also gained prevalence, being commonly used in computer proof assistants.
It follows that the basic mathematical concepts, such as numbers, points, lines, and geometrical spaces, are not defined as abstractions from reality but in terms of basic properties (axioms). Their correspondence with their physical origins no longer belongs to mathematics, although their relation with reality is still used for guiding mathematical intuition: physical reality is still used by mathematicians to choose axioms, find which theorems are interesting to prove, and obtain indications of possible proofs.
Ancient Greece
Most civilisations developed some mathematics, mainly for practical purposes, such as counting, surveying, prosody, astronomy, and astrology. It seems that ancient Greek philosophers were the first to study the nature of mathematics and its relation with the real world.

Zeno of Elea produced several paradoxes he used to support his thesis that movement does not exist. These paradoxes involve mathematical infinity, a concept that was outside the mathematical foundations of that time and was not well understood before the end of the 19th century.
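For example, in Zeno's dichotomy paradox, a runner must first cover half of a distance, then half of the remainder, and so on indefinitely. In the modern theory of limits (a resolution not available to the Greeks), these infinitely many partial distances sum to a finite total:
\[
\frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; \sum_{n=1}^{\infty} \frac{1}{2^n} \;=\; 1 .
\]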
The Pythagorean school of mathematics originally insisted that the only numbers are natural numbers and ratios of natural numbers. The discovery that the ratio of the diagonal of a square to its side is not the ratio of two natural numbers was a shock that they accepted only reluctantly. A testimony of this is the modern term irrational number, which refers to a number that is not the quotient of two integers, since "irrational" originally meant "not reasonable" or "not accessible to reason".
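In modern notation (not that of the Pythagoreans), the incommensurability of the diagonal and the side can be sketched as follows: if the square root of 2 were a ratio p/q of natural numbers with no common factor, then
\[
p^2 = 2q^2 ,
\]
so p would be even, say p = 2r; substituting gives q² = 2r², so q would also be even, contradicting the assumption that p and q have no common factor.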
The problem that length ratios are not always represented by rational numbers was resolved by Eudoxus of Cnidus, a student of Plato, who reduced the comparison of two irrational ratios to comparisons of integer multiples of the magnitudes involved. His method anticipated that of Dedekind cuts in the modern definition of real numbers by Richard Dedekind.
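In modern notation, Eudoxus' criterion (Euclid's Elements, Book V, Definition 5) states that two ratios of magnitudes a : b and c : d are equal when, for all natural numbers m and n,
\[
ma > nb \iff mc > nd, \qquad ma = nb \iff mc = nd, \qquad ma < nb \iff mc < nd .
\]
The ratio a : b is thus completely determined by the set of rational numbers n/m that are smaller than it, which is essentially a Dedekind cut.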
In the Posterior Analytics, Aristotle laid down the logic for organizing a field of knowledge by means of primitive concepts, axioms, postulates, definitions, and theorems. Aristotle took a majority of his examples from arithmetic and from geometry, and his logic served as the foundation of mathematics for centuries. This method resembles the modern axiomatic method but with a big philosophical difference: axioms and postulates were supposed to be true, being either self-evident or resulting from experiments, while no truth other than the correctness of the proof is involved in the axiomatic method. So, for Aristotle, a proved theorem is true, while in the axiomatic method, the proof says only that the axioms imply the statement of the theorem.
Aristotle's logic reached its high point with Euclid's Elements, a treatise on mathematics structured with very high standards of rigor: Euclid justifies each proposition by a demonstration in the form of chains of syllogisms.
Aristotle's syllogistic logic, together with its exemplification by Euclid's Elements, is recognized as a scientific achievement of ancient Greece, and it remained the foundation of mathematics for centuries.
Before infinitesimal calculus
During the Middle Ages, Euclid's Elements stood as a perfectly solid foundation for mathematics, and philosophy of mathematics concentrated on the ontological status of mathematical concepts; the question was whether they exist independently of perception (realism), or within the mind only (conceptualism), or even whether they are simply names of collections of individual objects (nominalism).

In the Elements, the only numbers that are considered are natural numbers and ratios of lengths. This geometrical view of non-integer numbers remained dominant until the end of the Middle Ages, although the rise of algebra led to considering them independently from geometry, which implicitly implies that there are foundational primitives of mathematics. For example, the transformations of equations introduced by Al-Khwarizmi and the cubic and quartic formulas discovered in the 16th century (an example is sketched below) result from algebraic manipulations that have no geometric counterpart.
Nevertheless, this did not challenge the classical foundations of mathematics since all properties of numbers that were used can be deduced from their geometrical definition.
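For illustration (in modern notation rather than that of the 16th-century algebraists), the formula found by del Ferro and Tartaglia and published by Cardano gives a root of the depressed cubic x³ + px + q = 0 as
\[
x = \sqrt[3]{-\frac{q}{2} + \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}} \;+\; \sqrt[3]{-\frac{q}{2} - \sqrt{\frac{q^2}{4} + \frac{p^3}{27}}} ,
\]
a purely algebraic expression whose intermediate quantities, which may even involve square roots of negative numbers, have no counterpart in classical geometric construction.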
In 1637, René Descartes published La Géométrie, in which he showed that geometry can be reduced to algebra by means of coordinates, which are numbers determining the position of a point. This gave the numbers that he called real numbers a more foundational role. Descartes' book became famous after 1649 and paved the way to infinitesimal calculus.
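For instance (a modern illustration of Descartes' idea rather than his own notation), a circle of radius r centred at the origin and a straight line are described by the equations
\[
x^2 + y^2 = r^2, \qquad ax + by = c ,
\]
so that finding their intersection points, a geometric problem, reduces to solving a system of algebraic equations.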
Infinitesimal calculus
Newton in England and Leibniz in Germany independently developed the infinitesimal calculus for dealing with moving points and variable quantities. This needed the introduction of new concepts such as continuous functions, derivatives and limits. For dealing with these concepts in a logical way, they were defined in terms of infinitesimals, that is, hypothetical numbers that are infinitely close to zero. The strong implications of infinitesimal calculus for the foundations of mathematics are illustrated by a pamphlet of the Protestant philosopher George Berkeley, who wrote "[Infinitesimals] are neither finite quantities, nor quantities infinitely small, nor yet nothing. May we not call them the ghosts of departed quantities?".
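A schematic example in the spirit of early calculus (not a quotation from Newton or Leibniz): to differentiate x², one takes an infinitesimal increment dx and computes
\[
\frac{(x + dx)^2 - x^2}{dx} \;=\; \frac{2x\,dx + dx^2}{dx} \;=\; 2x + dx ,
\]
then discards dx as negligible to obtain the derivative 2x. The increment is treated as nonzero when dividing by it and as zero when discarding it, and it is precisely this double standard that Berkeley attacked.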
Also, a lack of rigor has been frequently invoked, because infinitesimals and the associated concepts were not formally defined. Real numbers, continuous functions, and derivatives were not formally defined before the 19th century, and neither was Euclidean geometry. It was only in the 20th century that a formal definition of infinitesimals was given, with a proof that the whole infinitesimal calculus can be deduced from them.
Despite its lack of firm logical foundations, infinitesimal calculus was quickly adopted by mathematicians, and validated by its numerous applications; in particular, the fact that the trajectories of the planets can be deduced from Newton's law of gravitation.
19th century
In the 19th century, mathematics developed quickly in many directions. Several of the problems that were considered led to questions on the foundations of mathematics. Frequently, the proposed solutions led to further questions that were often simultaneously of a philosophical and mathematical nature. All these questions led, at the end of the 19th century and the beginning of the 20th century, to debates which have been called the foundational crisis of mathematics. The following subsections describe the main such foundational problems revealed during the 19th century.

Real analysis
Augustin-Louis Cauchy started the project of giving rigorous bases to infinitesimal calculus. In particular, he rejected the heuristic principle that he called the generality of algebra, which consisted of applying properties of algebraic operations to infinite sequences without proper proofs. In his Cours d'Analyse, he considered very small quantities, which could presently be called "sufficiently small quantities"; that is, a sentence such as "if x is very small then f(x) is very small" must be understood as "for every natural number n there is a natural number m such that |f(x)| < 1/n whenever |x| < 1/m". In his proofs he used this in a way that predated the modern (ε, δ)-definition of limit.

The modern (ε, δ)-definition of limits and continuous functions was first developed by Bolzano in 1817, but remained relatively unknown, and Cauchy probably knew of Bolzano's work.
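For reference, the modern definition that these formulations anticipate reads: a function f has the limit L at a point a if
\[
\forall \varepsilon > 0 \;\; \exists \delta > 0 \;\; \forall x : \quad 0 < |x - a| < \delta \;\Longrightarrow\; |f(x) - L| < \varepsilon .
\]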
Karl Weierstrass formalized and popularized the (ε, δ)-definition of limits, and discovered some pathological functions that seemed paradoxical at that time, such as continuous, nowhere-differentiable functions. Indeed, such functions contradict previous conceptions of a function as a rule for computation or as a smooth graph.
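A classical example is the Weierstrass function
\[
W(x) = \sum_{n=0}^{\infty} a^n \cos(b^n \pi x), \qquad 0 < a < 1, \quad b \text{ an odd integer}, \quad ab > 1 + \tfrac{3\pi}{2} ,
\]
which Weierstrass proved to be continuous everywhere and differentiable nowhere.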
At this point, the program of arithmetization of analysis advocated by Weierstrass was essentially completed, except for two points.
Firstly, a formal definition of real numbers was still lacking. Indeed, beginning with Richard Dedekind in 1858, several mathematicians worked on the definition of the real numbers, including Hermann Hankel, Charles Méray, and Eduard Heine, but it was only in 1872 that two independent complete definitions of real numbers were published: one by Dedekind, by means of Dedekind cuts; the other one by Georg Cantor, as equivalence classes of Cauchy sequences.
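In outline, a Dedekind cut identifies a real number with a nonempty set D of rational numbers, distinct from the whole of the rationals, that contains every rational smaller than any of its elements and has no greatest element; in Cantor's construction, a real number is an equivalence class of Cauchy sequences of rational numbers, where
\[
(a_n) \sim (b_n) \iff \lim_{n\to\infty} (a_n - b_n) = 0 .
\]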
Several problems were left open by these definitions, which contributed to the foundational crisis of mathematics. Firstly, both definitions suppose that rational numbers, and thus natural numbers, are rigorously defined; this was done a few years later with the Peano axioms. Secondly, both definitions involve infinite sets, and Cantor's set theory was published several years later.
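In a modern formulation, the Peano axioms describe the natural numbers by means of a constant 0 and a successor function S satisfying
\[
S(n) \neq 0, \qquad S(m) = S(n) \Rightarrow m = n, \qquad \big(P(0) \,\wedge\, \forall n\,(P(n) \Rightarrow P(S(n)))\big) \Rightarrow \forall n\, P(n),
\]
the last requirement (induction) holding for every property P.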
The third problem is more subtle and is related to the foundations of logic: classical logic is a first-order logic; that is, quantifiers apply to variables representing individual elements, not to variables representing sets of elements. The basic property of the completeness of the real numbers that is required for defining and using real numbers involves a quantification over infinite sets. Indeed, this property may be expressed either as "for every infinite sequence of real numbers, if it is a Cauchy sequence, it has a limit that is a real number", or as "every nonempty subset of the real numbers that is bounded above has a least upper bound that is a real number". This need of quantification over infinite sets is one of the motivations for the development of higher-order logics during the first half of the 20th century.
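In symbols, the second of these formulations reads
\[
\forall S \subseteq \mathbb{R}\;\Big[\big(S \neq \varnothing \,\wedge\, \exists b\;\forall x \in S\; x \le b\big) \;\Longrightarrow\; \exists s\,\Big(\forall x \in S\; x \le s \;\wedge\; \forall b\,\big((\forall x \in S\; x \le b) \Rightarrow s \le b\big)\Big)\Big] ,
\]
where the initial quantifier ranges over all subsets of the real numbers, something that cannot be expressed with first-order quantification over individual real numbers.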