Inductive reasoning


Inductive reasoning refers to a variety of methods of reasoning in which the conclusion of an argument is supported not with deductive certainty, but at best with some degree of probability. Unlike deductive reasoning, where the conclusion is certain, given the premises are correct, inductive reasoning produces conclusions that are at best probable, given the evidence provided.

Types

The types of inductive reasoning include generalization, prediction, statistical syllogism, argument from analogy, and causal inference. There are also differences in how their results are regarded.

Inductive generalization

A generalization proceeds from premises about a sample to a conclusion about the population. The observation obtained from this sample is projected onto the broader population.
For example, suppose there are 20 balls—either black or white—in an urn. To estimate their respective numbers, a sample of four balls is drawn: three are black and one is white. An inductive generalization may be that there are 15 black and five white balls in the urn. However, this is only one of 17 possibilities as to the actual number of each color of ball in the urn; there may, of course, have been 19 black balls and just one white, or only three black balls and 17 white, or any mix in between. The probability of each possible distribution being the actual numbers of black and white balls can be estimated using techniques such as Bayesian inference, where prior assumptions about the distribution are updated with the observed sample, or maximum likelihood estimation, which identifies the distribution most likely given the observed sample.
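As a minimal sketch of the Bayesian approach (assuming a uniform prior over the urn's composition, which the example itself does not specify), the posterior over the number of black balls can be computed from the hypergeometric likelihood of the observed sample; with a uniform prior, its mode coincides with the maximum likelihood estimate:

```python
from math import comb

N, n = 20, 4              # urn size and sample size from the example
k_black, k_white = 3, 1   # observed sample: three black, one white

def likelihood(B):
    """Hypergeometric probability of the observed sample if the urn holds B black balls."""
    if B < k_black or N - B < k_white:
        return 0.0
    return comb(B, k_black) * comb(N - B, k_white) / comb(N, n)

weights = {B: likelihood(B) for B in range(N + 1)}
total = sum(weights.values())
posterior = {B: w / total for B, w in weights.items() if w > 0}

print(len(posterior))                     # 17 compositions remain possible
print(max(posterior, key=posterior.get))  # 15, i.e. 15 black and 5 white balls
```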
How much the premises support the conclusion depends upon the number in the sample group, the number in the population, and the degree to which the sample represents the population. The extent to which the sample represents the population depends on the reliability of the procedure used to obtain the individual observations, which is not always as simple as drawing a random element from a static population. The greater the sample size relative to the population and the more closely the sample represents the population, the stronger the generalization is. The hasty generalization and the biased sample are generalization fallacies.

Statistical generalization

A statistical generalization is a type of inductive argument in which a conclusion about a population is inferred using a statistically representative sample. For example: of a sizeable random sample of voters surveyed, 66% support Measure Z; therefore, approximately 66% of voters support Measure Z.
The measure is highly reliable within a well-defined margin of error provided that the selection process was genuinely random and that the numbers of items in the sample having the properties considered are large. It is readily quantifiable. Compare the preceding argument with the following. "Six of the ten people in my book club are Libertarians. Therefore, about 60% of people are Libertarians." The argument is weak because the sample is non-random and the sample size is very small.
Statistical generalizations are also called statistical projections and sample projections.
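As a rough sketch of how such a generalization is quantified (using the standard normal approximation and assuming a genuinely random sample; the poll size below is illustrative, not taken from the examples above):

```python
from math import sqrt

def margin_of_error(p_hat, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion (normal approximation)."""
    return z * sqrt(p_hat * (1 - p_hat) / n)

# A large random poll: 66% support among 1,000 respondents (illustrative size).
print(round(margin_of_error(0.66, 1000), 3))   # ~0.029, about plus or minus 3 points

# The book-club argument: 6 of 10 members; the interval is enormous, and the
# calculation cannot capture the further problem that the sample is non-random.
print(round(margin_of_error(0.6, 10), 3))      # ~0.304, about plus or minus 30 points
```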

Anecdotal generalization

An anecdotal generalization is a type of inductive argument in which a conclusion about a population is inferred using a non-statistical sample. In other words, the generalization is based on anecdotal evidence. For example: a fan's favorite baseball team has won six of its first ten games this season; therefore, it will win about 60% of its games this year.
This inference is less reliable than a statistical generalization, first, because the sample events are non-random, and second, because it is not reducible to a mathematical expression. Statistically speaking, there is simply no way to know, measure, and calculate the circumstances affecting performance that will occur in the future. On a philosophical level, the argument relies on the presupposition that the operation of future events will mirror the past. In other words, it takes for granted a uniformity of nature, an unproven principle that cannot be derived from the empirical data itself. Arguments that tacitly presuppose this uniformity are sometimes called Humean after the philosopher who was first to subject them to philosophical scrutiny.

Prediction

An inductive prediction draws a conclusion about a future, current, or past instance from a sample of other instances. Like an inductive generalization, an inductive prediction relies on a data set consisting of specific instances of a phenomenon. But rather than conclude with a general statement, the inductive prediction concludes with a specific statement about the probability that a single instance will have an attribute shared by the other instances.
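One classical way of attaching a number to such a prediction, not endorsed by every theorist, is Laplace's rule of succession, which assumes a uniform prior over the underlying frequency; a minimal sketch:

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Probability that the next instance has the attribute, given `successes`
    of `trials` observed instances and a uniform prior on the true frequency."""
    return Fraction(successes + 1, trials + 2)

# If all 100 observed instances had the attribute, the predicted probability
# that the next one will as well is 101/102, or roughly 0.99.
print(rule_of_succession(100, 100))
```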

Statistical syllogism

A statistical syllogism proceeds from a generalization about a group to a conclusion about an individual.
For example: if 90% of a school's graduates go on to attend a university, and Bob is a graduate of that school, then Bob will probably attend a university.
This is a statistical syllogism. Even though one cannot be sure that Bob will attend a university, one can be fully assured of the exact probability of this outcome (given no further information). Two dicto simpliciter fallacies can occur in statistical syllogisms: "accident" and "converse accident".

Argument from analogy

The process of analogical inference involves noting the shared properties of two or more things and from this basis inferring that they also share some further property: P and Q are similar in respect of properties a, b, and c; P has been observed to have further property x; therefore, Q probably has property x also.
Analogical reasoning is very frequent in common sense, science, philosophy, law, and the humanities, but sometimes it is accepted only as an auxiliary method. A refined approach is case-based reasoning.
This is analogical induction, according to which things alike in certain ways are more prone to be alike in other ways. This form of induction was explored in detail by philosopher John Stuart Mill in his System of Logic, where he states, "There can be no doubt that every resemblance affords some degree of probability, beyond what would otherwise exist, in favor of the conclusion." See Mill's Methods.
Some thinkers contend that analogical induction is a subcategory of inductive generalization because it assumes a pre-established uniformity governing events. Analogical induction requires an auxiliary examination of the relevancy of the characteristics cited as common to the pair. For example, if an analogy between two stones included the premise that both were mentioned in the records of early Spanish explorers, this shared attribute would be extraneous to the stones themselves and would not contribute to their probable affinity.
A pitfall of analogy is that features can be cherry-picked: while two objects may show striking similarities, they may also possess other characteristics, not identified in the analogy, that are sharply dissimilar. Thus, analogy can mislead if not all relevant comparisons are made.

Causal inference

A causal inference draws a conclusion about a possible or probable causal connection based on the conditions of the occurrence of an effect. Premises about the correlation of two things can indicate a causal relationship between them, but additional factors must be confirmed to establish the exact form of the causal relationship.
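A small simulation (not drawn from the article) shows why correlation alone underdetermines the causal relationship: here a common cause produces a strong correlation between two variables, neither of which causes the other.

```python
import random

random.seed(0)
n = 10_000

# A confounder z drives both x and y; x has no causal influence on y.
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 1) for zi in z]
y = [zi + random.gauss(0, 1) for zi in z]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    var_a = sum((ai - ma) ** 2 for ai in a) / len(a)
    var_b = sum((bi - mb) ** 2 for bi in b) / len(b)
    return cov / (var_a * var_b) ** 0.5

print(round(corr(x, y), 2))   # roughly 0.5, despite there being no causal link between x and y
```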

Methods

The two principal methods used to reach inductive generalizations are enumerative induction and eliminative induction.

Enumerative induction

Enumerative induction is an inductive method in which a generalization is constructed based on the number of instances that support it. The more supporting instances, the stronger the conclusion.
The most basic form of enumerative induction reasons from particular instances to all instances and is thus an unrestricted generalization. If one observes 100 swans, and all 100 are white, one might infer a probable universal categorical proposition of the form "All swans are white". As this reasoning form's premises, even if true, do not entail the conclusion's truth, this is a form of inductive inference. The conclusion might be true, and might be thought probably true, yet it can be false. Questions regarding the justification and form of enumerative inductions have been central in philosophy of science, as enumerative induction has a pivotal role in the traditional model of the scientific method.
Consider the argument that because all life forms so far observed are composed of cells, all life forms are composed of cells. This is enumerative induction, also known as simple induction or simple predictive induction. It is a subcategory of inductive generalization. In everyday practice, this is perhaps the most common form of induction. For this argument, the conclusion is tempting but makes a prediction well in excess of the evidence. First, it assumes that life forms observed until now can tell us how future cases will be: an appeal to uniformity. Second, the conclusion "All" is a bold assertion. A single contrary instance foils the argument. And last, quantifying the level of probability in any mathematical form is problematic. By what standard do we measure our Earthly sample of known life against all life? Suppose we do discover some new organism—such as some microorganism floating in the mesosphere or on an asteroid—and it is cellular. Does the addition of this corroborating evidence oblige us to raise our probability assessment for the subject proposition? It is generally deemed reasonable to answer this question "yes", and for a good many this "yes" is not only reasonable but incontrovertible. So then just how much should this new data change our probability assessment? Here, consensus melts away, and in its place arises a question about whether we can talk of probability coherently at all, with or without numerical quantification.
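To make the dispute concrete, here is one toy Bayesian model (an assumption of this sketch, not a position taken above): pit the universal hypothesis against an alternative on which the true frequency of the attribute is uniformly distributed. Each corroborating instance then raises the posterior probability of the universal claim, but the hundred-and-first instance raises it far less than the eleventh did.

```python
from fractions import Fraction

def posterior_universal(k, prior=Fraction(1, 2)):
    """Posterior probability of 'all F are G' after k confirming instances and no
    counterexamples. Toy model: under the alternative, the true frequency of G
    among F is uniform on [0, 1], so P(k confirmations | alternative) = 1/(k+1)."""
    like_alt = Fraction(1, k + 1)
    return prior / (prior + (1 - prior) * like_alt)

for k in (10, 100, 101):
    print(k, float(posterior_universal(k)))
# 10 0.916..., 100 0.990..., 101 0.990...: one more instance barely moves the number
```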
Consider instead the more modest conclusion that the next life form observed will be composed of cells. This is enumerative induction in its weak form. It truncates "all" to a mere single instance and, by making a far weaker claim, considerably strengthens the probability of its conclusion. Otherwise, it has the same shortcomings as the strong form: its sample population is non-random, and quantification methods are elusive.

Eliminative induction

Eliminative induction, also called variative induction, is an inductive method first put forth by Francis Bacon; in it, a generalization is constructed based on the variety of instances that support it. Unlike enumerative induction, eliminative induction reasons from the various kinds of instances that support a conclusion rather than from the number of instances that support it. As the variety of instances increases, more of the possible conclusions based on those instances can be identified as incompatible and eliminated. This, in turn, increases the strength of any conclusion that remains consistent with the various instances. In this context, confidence is a function of how many possibilities have been identified as incompatible and eliminated. This confidence is expressed as the Baconian probability i|n, where n reasons for finding a claim incompatible have been identified and i of these have been eliminated by evidence or argument.
There are three ways of attacking an argument; these ways, known as defeaters in the defeasible reasoning literature, are rebutting, undermining, and undercutting. Rebutting defeats by offering a counterexample, undermining defeats by questioning the validity of the evidence, and undercutting defeats by pointing out conditions under which the inference fails to support the conclusion even when the premises hold. By identifying such defeaters and showing them to be mistaken, this approach builds confidence in the remaining conclusion.
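A minimal bookkeeping sketch of this idea (the class and its names are hypothetical, not a standard formalism): identify potential defeaters, eliminate those that evidence or argument rules out, and report Baconian confidence as i|n rather than as a classical probability.

```python
class EliminativeCase:
    """Track identified defeaters of a claim and which ones have been eliminated."""

    def __init__(self):
        self.defeaters = {}   # defeater description -> eliminated?

    def identify(self, defeater):
        self.defeaters.setdefault(defeater, False)

    def eliminate(self, defeater):
        self.defeaters[defeater] = True

    def baconian_confidence(self):
        n = len(self.defeaters)
        i = sum(self.defeaters.values())
        return f"{i}|{n}"

case = EliminativeCase()
for d in ("rival hypothesis", "unreliable instrument", "sampling artifact"):
    case.identify(d)
case.eliminate("unreliable instrument")
print(case.baconian_confidence())   # '1|3': one of three identified defeaters eliminated
```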
This type of induction may use different methodologies such as quasi-experimentation, which tests and, where possible, eliminates rival hypotheses. Different evidential tests may also be employed to eliminate possibilities that are entertained.
Eliminative induction is crucial to the scientific method and is used to eliminate hypotheses that are inconsistent with observations and experiments. It focuses on possible causes instead of observed actual instances of causal connections.
Eliminative induction has also been criticised for its reliance on identifying all plausible rival hypotheses before they can be ruled out. Salmon notes that this requirement is rarely achievable in scientific practice, since new explanations may emerge even after existing alternatives have been discarded, which limits the certainty the method can provide. Torretti likewise argues that eliminative strategies face the broader problem of underdetermination, where several different hypotheses may remain compatible with the same body of evidence, making elimination incomplete unless the space of possibilities is already well defined.