Regula falsi
In mathematics, the regula falsi, method of false position, or false position method refers to a family of algorithms used to solve linear equations and smooth nonlinear equations for a single unknown value.
In its oldest known examples found in cuneiform and hieroglyphic writings, the method replaces simple trial and error with proportional correction of an initial guess. In modern usage, the method relies on linear interpolation based on two different guesses.
Two historical types
Two basic types of false position method can be distinguished historically: simple false position and double false position.

Simple false position is aimed at solving problems involving direct proportion, and can be thought of as an early algorithm for division. Such problems can be written algebraically in the form: determine $x$ such that
$$ax = b,$$
if $a$ and $b$ are known. The method begins by using a test input value $x'$ and finding the corresponding output value $b'$ by multiplication: $ax' = b'$. The correct answer is then found by proportional adjustment, $x = \frac{b}{b'}\, x'$.
As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of the equation $x + \frac{x}{4} = 15$ (written in modern notation). This is solved by false position. First, guess that $x = 4$ to obtain, on the left, $4 + \frac{4}{4} = 5$. This guess is a good choice since it produces an integer value. However, 4 is not the solution of the original equation, as it gives a value which is three times too small. To compensate, multiply $x$ (currently set to 4) by 3 and substitute again to get $12 + \frac{12}{4} = 15$, verifying that the solution is $x = 12$.
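This proportional correction is easy to express in code. The following is a minimal Python sketch (the function name and the worked example are my own choices); it assumes, as in the historical problems, that the left-hand side scales in direct proportion to the input:

```python
def simple_false_position(f, b, guess):
    """Solve f(x) = b when f scales in direct proportion to x
    (f(s*x) = s*f(x)), by rescaling a convenient test guess."""
    trial = f(guess)          # output produced by the false value
    return guess * b / trial  # proportional adjustment of the guess

# Rhind papyrus problem 26: x + x/4 = 15. The guess 4 yields 5,
# which is three times too small, so the answer is 3 * 4 = 12.
print(simple_false_position(lambda x: x + x / 4, 15, 4))  # 12.0
```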
Double false position is aimed at solving more difficult problems that can be written algebraically in the form: determine $x$ such that
$$f(x) = b,$$
if it is known that
$$f(x_1) = b_1, \qquad f(x_2) = b_2.$$
Double false position is mathematically equivalent to linear interpolation. By using a pair of test inputs and the corresponding pair of outputs, the result of this algorithm, given by
$$x = \frac{(b_1 - b)\,x_2 - (b_2 - b)\,x_1}{b_1 - b_2},$$
would be memorized and carried out by rote. Indeed, Robert Recorde gives the rule in verse in his Ground of Artes.
For an affine linear function,
$$f(x) = ax + c,$$
double false position provides the exact solution, while for a nonlinear function $f$ it provides an approximation that can be successively improved by iteration.
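The rule is equally short in code. Here is a minimal Python sketch of double false position (the function name and the test problem are mine); as stated above, it returns the exact solution when the underlying function is affine:

```python
def double_false_position(f, x1, x2, b):
    """Estimate x with f(x) = b from two guesses x1 and x2 by
    linearly interpolating through their errors."""
    e1 = f(x1) - b  # error of the first guess
    e2 = f(x2) - b  # error of the second guess
    return (e1 * x2 - e2 * x1) / (e1 - e2)

# Affine example: solve 3x + 1 = 10, whose exact solution is x = 3.
print(double_false_position(lambda x: 3 * x + 1, 0.0, 5.0, 10.0))  # 3.0
```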
History
The simple false position technique is found in cuneiform tablets from ancient Babylonian mathematics, and in papyri from ancient Egyptian mathematics.

Double false position arose in late antiquity as a purely arithmetical algorithm. In the ancient Chinese mathematical text called The Nine Chapters on the Mathematical Art, dated from 200 BC to AD 100, most of Chapter 7 was devoted to the algorithm. There, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call secant lines on a conic section. A more typical example is this "joint purchase" problem involving an "excess and deficit" condition:
Now an item is purchased jointly; everyone contributes 8, the excess is 3; everyone contributes 7, the deficit is 4. Tell: The number of people, the item price, what is each? Answer: 7 people, item price 53.
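In modern notation the problem is affine, so the data determine the answer exactly; a brief check (the symbols $x$ for the number of people and $p$ for the item price are mine): writing $f(c) = c\,x - p$ for the surplus when each person contributes $c$, the conditions say $f(8) = 3$ and $f(7) = -4$, so
$$x = \frac{f(8) - f(7)}{8 - 7} = \frac{3 - (-4)}{1} = 7, \qquad p = 8 \cdot 7 - 3 = 53,$$
in agreement with the stated answer.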
Between the 9th and 10th centuries, the Egyptian mathematician Abu Kamil wrote a now-lost treatise on the use of double false position, known as the Book of the Two Errors. The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa, an Arab mathematician from Baalbek, Lebanon. He justified the technique by a formal, Euclidean-style geometric proof. Within the tradition of medieval Muslim mathematics, double false position was known as hisāb al-khaṭāʾayn. It was used for centuries to solve practical problems such as commercial and juridical questions, as well as purely recreational problems. The algorithm was often memorized with the aid of mnemonics, such as a verse attributed to Ibn al-Yasamin and balance-scale diagrams explained by al-Hassar and Ibn al-Banna, all three being mathematicians of Moroccan origin.
Leonardo of Pisa devoted Chapter 13 of his book Liber Abaci to explaining and demonstrating the uses of double false position, terming the method regulis elchatayn after the al-khaṭāʾayn method that he had learned from Arab sources. In 1494, Pacioli used the term el cataym in his book Summa de arithmetica, probably taking the term from Fibonacci. Other European writers followed Pacioli and sometimes provided a translation into Latin or the vernacular. For instance, Tartaglia translates the Latinized version of Pacioli's term into the vernacular "false positions" in 1556. Pacioli's term nearly disappeared in 16th-century European works, and the technique went by various names such as "Rule of False", "Rule of Position" and "Rule of False Position". Regula Falsi appears as the Latinized version of Rule of False as early as 1690.
Several 16th-century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth; in 1568, for instance, Humphrey Baker offered such an apology.
Numerical analysis
The method of false position provides an exact solution for linear functions, but more direct algebraic techniques have supplanted its use for these functions. However, in numerical analysis, double false position became a root-finding algorithm used in iterative numerical approximation techniques.

Many equations, including most of the more complicated ones, can be solved only by iterative numerical approximation. This consists of trial and error, in which various values of the unknown quantity are tried. That trial and error may be guided by calculating, at each step of the procedure, a new estimate for the solution. There are many ways to arrive at a calculated estimate, and regula falsi provides one of these.
Given an equation, move all of its terms to one side so that it has the form $f(x) = 0$, where $f$ is some function of the unknown variable $x$. A value $x = r$ that satisfies this equation, that is, $f(r) = 0$, is called a root or zero of the function $f$ and is a solution of the original equation. If $f$ is a continuous function and there exist two points $a_0$ and $b_0$ such that $f(a_0)$ and $f(b_0)$ are of opposite signs, then, by the intermediate value theorem, the function $f$ has a root in the interval $(a_0, b_0)$.
There are many root-finding algorithms that can be used to obtain approximations to such a root. One of the most common is Newton's method, but it can fail to find a root under certain circumstances and may be computationally costly, since it requires a computation of the function's derivative. Other methods are needed, and one general class is the two-point bracketing methods. These methods proceed by producing a sequence of shrinking intervals $[a_k, b_k]$, at the $k$th step, such that $(a_k, b_k)$ contains a root of $f$.
Two-point bracketing methods
These methods start with two $x$-values, initially found by trial and error, at which $f(x)$ has opposite signs. Under the continuity assumption, a root of $f$ is guaranteed to lie between these two values, that is to say, these values "bracket" the root. A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root. If $c$ is the point selected, then the smaller interval goes from $c$ to the endpoint where $f(x)$ has the sign opposite that of $f(c)$. In the improbable case that $f(c) = 0$, a root has been found and the algorithm stops. Otherwise, the procedure is repeated as often as necessary to obtain an approximation to the root to any desired accuracy.

The point selected in any current interval can be thought of as an estimate of the solution. The different variations of this method involve different ways of calculating this solution estimate.
Preserving the bracketing and ensuring that the solution estimates lie in the interior of the bracketing intervals guarantees that the solution estimates will converge toward the solution, a guarantee not available with other root-finding methods such as Newton's method or the secant method.
The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval. That is, if at step $k$ the current bracketing interval is $[a_k, b_k]$, then the new solution estimate $c_k$ is obtained by
$$c_k = \frac{a_k + b_k}{2}.$$
This ensures that $c_k$ is between $a_k$ and $b_k$, thereby guaranteeing convergence toward the solution.
Since the bracketing interval's length is halved at each step, the bisection method's error is, on average, halved with each iteration. Hence, every 3 iterations, the method gains approximately a factor of $2^3$, i.e. roughly a decimal place, in accuracy. This is commonly referred to as first-order convergence, meaning the number of digits of precision is proportional to the number of iterations used.
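A minimal Python sketch of the bisection loop may make the procedure concrete (the function name, tolerance, and iteration cap are my own choices, not canonical):

```python
def bisect(f, a, b, tol=1e-12, max_iter=200):
    """Shrink a sign-change interval [a, b] by repeated halving."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2          # midpoint estimate
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        if fa * fc > 0:          # root lies in [c, b]
            a, fa = c, fc
        else:                    # root lies in [a, c]
            b = c
    return (a + b) / 2
```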
The ''regula falsi'' (false position) method
The convergence rate of the bisection method could possibly be improved by using a different solution estimate.

The regula falsi method calculates the new solution estimate as the $x$-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the classical double false position formula on that line segment.
More precisely, suppose that in the $k$-th iteration the bracketing interval is $(a_k, b_k)$. Construct the line through the points $(a_k, f(a_k))$ and $(b_k, f(b_k))$. This line is a secant or chord of the graph of the function $f$. In point-slope form, its equation is given by
$$y - f(b_k) = \frac{f(b_k) - f(a_k)}{b_k - a_k}\,(x - b_k).$$
Now choose $c_k$ to be the $x$-intercept of this line, that is, the value of $x$ for which $y = 0$, and substitute these values to obtain
$$f(b_k) + \frac{f(b_k) - f(a_k)}{b_k - a_k}\,(c_k - b_k) = 0.$$
Solving this equation for $c_k$ gives:
$$c_k = b_k - f(b_k)\,\frac{b_k - a_k}{f(b_k) - f(a_k)} = \frac{a_k f(b_k) - b_k f(a_k)}{f(b_k) - f(a_k)}.$$
This last symmetrical form has a computational advantage when using floating-point arithmetic: as a solution is approached, $a_k$ and $b_k$ will be very close together, and nearly always of the same sign. The subtraction $b_k - a_k$ in the first form can therefore lose precision through cancellation. Because $f(a_k)$ and $f(b_k)$ are always of opposite sign, the "subtraction" in the numerator of the improved formula is effectively an addition.
At iteration number $k$, the number $c_k$ is calculated as above, and then, if $f(a_k)$ and $f(c_k)$ have the same sign, set $a_{k+1} = c_k$ and $b_{k+1} = b_k$; otherwise set $a_{k+1} = a_k$ and $b_{k+1} = c_k$. This process is repeated until the root is approximated sufficiently well.
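Combining the symmetric formula with this bracket-update rule gives the whole method in a few lines. The following Python sketch is mine (names, tolerance, and stopping rule are illustrative choices):

```python
import math

def regula_falsi(f, a, b, tol=1e-12, max_iter=200):
    """Bracketing root finder: the next estimate is the x-intercept of
    the chord through (a, f(a)) and (b, f(b))."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)  # symmetric interpolation formula
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc > 0:          # f(a) and f(c) share a sign: move a
            a, fa = c, fc
        else:                    # otherwise move b
            b, fb = c, fc
    return c

# Example: the equation cos(x) = x, rewritten as f(x) = cos(x) - x = 0.
print(regula_falsi(lambda x: math.cos(x) - x, 0.0, 1.0))  # ~0.739085
```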
The above formula is also used in the secant method.
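For contrast, here is a sketch of the secant method under the same conventions (again with names of my choosing): it applies the identical interpolation formula but always keeps the two most recent points, so the root need not remain bracketed:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Iterate the interpolation formula on the two latest points,
    without maintaining a sign-change bracket."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1          # discard the oldest point
        x1, f1 = x2, f(x2)
    return x1
```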
For nonlinear functions, once the interval of search shrinks far enough that the second derivative of $f$ has constant sign throughout the interval, one endpoint of the search becomes fixed, while the other converges to the root. Thus, the best estimate of the solution is the last calculated value of $c_k$. However, because the interval stops shrinking, regula falsi cannot match the bisection method's guarantee of precision, and in some cases its rate of convergence can drop below that of the bisection method. Modified versions of regula falsi are generally to be preferred because they can fix these shortcomings at minimal cost.