Binary logarithm
In mathematics, the binary logarithm ($\log_2 n$) is the power to which the number $2$ must be raised to obtain the value $n$. That is, for any real number $x$,
$$x = \log_2 n \iff 2^x = n.$$
For example, the binary logarithm of $1$ is $0$, the binary logarithm of $2$ is $1$, the binary logarithm of $4$ is $2$, and the binary logarithm of $32$ is $5$.
The binary logarithm is the logarithm to the base $2$ and is the inverse function of the power of two function. There are several alternatives to the $\log_2$ notation for the binary logarithm; see the Notation section below.
Historically, the first application of binary logarithms was in music theory, by Leonhard Euler: the binary logarithm of a frequency ratio of two musical tones gives the number of octaves by which the tones differ. Binary logarithms can be used to calculate the length of the representation of a number in the binary numeral system, or the number of bits needed to encode a message in information theory. In computer science, they count the number of steps needed for binary search and related algorithms. Other areas
in which the binary logarithm is frequently used include combinatorics, bioinformatics, the design of sports tournaments, and photography.
Binary logarithms are included in the standard C mathematical functions and other mathematical software packages.
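In C, for example, the binary logarithm is provided by the log2, log2f, and log2l functions of <math.h>, standardized since C99. A minimal usage sketch:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* log2 is the binary logarithm from the C99 standard library */
    printf("log2(32) = %f\n", log2(32.0));  /* prints 5.000000 */
    printf("log2(1)  = %f\n", log2(1.0));   /* prints 0.000000 */
    return 0;
}
```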
History
The powers of two have been known since antiquity; for instance, they appear in Euclid's Elements, Props. IX.32 and IX.36. The binary logarithm of a power of two is just its position in the ordered sequence of powers of two.
On this basis, Michael Stifel has been credited with publishing the first known table of binary logarithms in 1544. His book Arithmetica Integra contains several tables that show the integers with their corresponding powers of two. Reversing the rows of these tables allows them to be interpreted as tables of binary logarithms.
Earlier than Stifel, the 8th century Jain mathematician Virasena is credited with a precursor to the binary logarithm. Virasena's concept of ardhacheda has been defined as the number of times a given number can be divided evenly by two. This definition gives rise to a function that coincides with the binary logarithm on the powers of two, but it is different for other integers, giving the 2-adic order rather than the logarithm.
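The distinction can be made concrete with a short sketch (hypothetical function name): counting how many times a number can be halved evenly gives the 2-adic order, which matches the binary logarithm only on powers of two.

```c
#include <stdio.h>

/* ardhacheda / 2-adic order: how many times n is evenly divisible by 2 */
unsigned int two_adic_order(unsigned int n) {
    unsigned int count = 0;
    while (n > 0 && n % 2 == 0) {
        n /= 2;
        count++;
    }
    return count;
}

int main(void) {
    /* On a power of two the 2-adic order equals log2: order(32) = 5. */
    printf("%u\n", two_adic_order(32));  /* prints 5 */
    /* On other integers it differs: order(12) = 2, while log2(12) is about 3.58. */
    printf("%u\n", two_adic_order(12));  /* prints 2 */
    return 0;
}
```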
The modern form of a binary logarithm, applying to any number (not just powers of two), was considered explicitly by Leonhard Euler in 1739. Euler established the application of binary logarithms to music theory, long before their applications in information theory and computer science became known. As part of his work in this area, Euler published a table of binary logarithms of the integers from 1 to 8, to seven decimal digits of accuracy.
Definition and properties
The binary logarithm function may be defined as the inverse function to the power of two function, which is a strictly increasing function over the positive real numbers and therefore has a unique inverse. Alternatively, it may be defined as $\log_2 n = \ln n / \ln 2$, where $\ln$ is the natural logarithm, defined in any of its standard ways. Using the complex logarithm in this definition allows the binary logarithm to be extended to the complex numbers.
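This change-of-base definition translates directly into code. A sketch in C, useful for example in environments where only the natural logarithm is available:

```c
#include <math.h>
#include <stdio.h>

/* Binary logarithm via the change-of-base identity log2(x) = ln(x) / ln(2) */
double log2_via_ln(double x) {
    return log(x) / log(2.0);
}

int main(void) {
    printf("%f\n", log2_via_ln(8.0));   /* prints 3.000000 */
    printf("%f\n", log2_via_ln(10.0));  /* prints ~3.321928 */
    return 0;
}
```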
As with other logarithms, the binary logarithm obeys the following equations, which can be used to simplify formulas that combine binary logarithms with multiplication or exponentiation:
$$\log_2 xy = \log_2 x + \log_2 y$$
$$\log_2 \frac{x}{y} = \log_2 x - \log_2 y$$
$$\log_2 x^y = y \log_2 x$$
For more, see list of logarithmic identities.
Notation
In mathematics, the binary logarithm of a number $n$ is often written as $\log_2 n$. However, several other notations for this function have been used or proposed, especially in application areas. Some authors write the binary logarithm as $\lg n$, the notation listed in The Chicago Manual of Style. Donald Knuth credits this notation to a suggestion of Edward Reingold, but its use in both information theory and computer science dates to before Reingold was active. The binary logarithm has also been written as $\log n$ with a prior statement that the default base for the logarithm is $2$. Another notation that is often used for the same function is $\operatorname{ld} n$, from Latin logarithmus dualis or logarithmus dyadis.
The DIN 1302, ISO 31-11 and ISO 80000-2 standards recommend yet another notation, $\operatorname{lb} n$. According to these standards, $\lg n$ should not be used for the binary logarithm, as it is instead reserved for the common logarithm $\log_{10} n$.
Applications
Information theory
The number of digits (bits) in the binary representation of a positive integer $n$ is the integral part of $1 + \log_2 n$, i.e. $\lfloor \log_2 n \rfloor + 1$. In information theory, the definition of the amount of self-information and information entropy is often expressed with the binary logarithm, corresponding to making the bit the fundamental unit of information. With these units, the Shannon–Hartley theorem expresses the information capacity of a channel as the binary logarithm of one plus its signal-to-noise ratio. However, the natural logarithm and the nat are also used in alternative notations for these definitions.
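The bit length $\lfloor \log_2 n \rfloor + 1$ can also be computed without floating point, by counting halvings. A sketch with a hypothetical helper name:

```c
#include <stdio.h>

/* Number of bits in the binary representation of n >= 1:
 * floor(log2(n)) + 1, computed by counting halvings. */
unsigned int bit_length(unsigned int n) {
    unsigned int bits = 0;
    while (n > 0) {
        n >>= 1;
        bits++;
    }
    return bits;
}

int main(void) {
    printf("%u\n", bit_length(1));    /* 1 bit:  "1" */
    printf("%u\n", bit_length(32));   /* 6 bits: "100000" */
    printf("%u\n", bit_length(255));  /* 8 bits: "11111111" */
    return 0;
}
```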
Combinatorics
Although the natural logarithm is more important than the binary logarithm in many areas of pure mathematics such as number theory and mathematical analysis, the binary logarithm has several applications in combinatorics:
- Every binary tree with $n$ leaves has height at least $\log_2 n$, with equality when $n$ is a power of two and the tree is a complete binary tree. Relatedly, the Strahler number of a river system with $n$ tributary streams is at most $\log_2 n + 1$.
- Every family of sets with $n$ different sets has at least $\log_2 n$ elements in its union, with equality when the family is a power set.
- Every partial cube with $n$ vertices has isometric dimension at least $\log_2 n$, and has at most $\tfrac{1}{2} n \log_2 n$ edges, with equality when the partial cube is a hypercube graph.
- According to Ramsey's theorem, every $n$-vertex undirected graph has either a clique or an independent set of size logarithmic in $n$. The precise size that can be guaranteed is not known, but the best bounds known on its size involve binary logarithms. In particular, all graphs have a clique or independent set of size at least $\bigl(\tfrac{1}{2} - o(1)\bigr) \log_2 n$ and almost all graphs do not have a clique or independent set of size larger than $(2 + o(1)) \log_2 n$.
- From a mathematical analysis of the Gilbert–Shannon–Reeds model of random shuffles, one can show that the number of times one needs to shuffle an $n$-card deck of cards, using riffle shuffles, to get a distribution on permutations that is close to uniformly random, is approximately $\tfrac{3}{2} \log_2 n$. This calculation forms the basis for a recommendation that 52-card decks should be shuffled seven times.
Computational complexity
The running time of an algorithm is usually expressed in big O notation, which is used to simplify expressions by omitting their constant factors and lower-order terms. Because logarithms in different bases differ from each other only by a constant factor, algorithms that run in $O(\log_2 n)$ time can also be said to run in, say, $O(\log_{13} n)$ time. The base of the logarithm in expressions such as $O(\log n)$ or $O(n \log n)$ is therefore not important and can be omitted. However, for logarithms that appear in the exponent of a time bound, the base of the logarithm cannot be omitted. For example, $O(2^{\log_2 n})$ is not the same as $O(2^{\ln n})$ because the former is equal to $O(n)$ and the latter to $O(n^{\ln 2}) \approx O(n^{0.693})$.
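To see why the latter holds, one can expand the expression using the standard identity $a^b = e^{b \ln a}$; the one-line derivation below is supplied here for clarity:
$$2^{\ln n} = e^{(\ln 2)(\ln n)} = \left(e^{\ln n}\right)^{\ln 2} = n^{\ln 2} \approx n^{0.693}.$$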
Algorithms with running time $O(n \log n)$ are sometimes called linearithmic. Some examples of algorithms with running time $O(\log n)$ or $O(n \log n)$ are:
- Average time quicksort and other comparison sort algorithms
- Searching in balanced binary search trees
- Exponentiation by squaring
- Longest increasing subsequence
Binary logarithms also occur in the exponents of the time bounds for some divide-and-conquer algorithms, such as the Karatsuba algorithm for multiplying $n$-digit numbers in time $O(n^{\log_2 3})$, and the Strassen algorithm for multiplying $n \times n$ matrices in time $O(n^{\log_2 7})$. The occurrence of binary logarithms in these running times can be explained by reference to the master theorem for divide-and-conquer recurrences.
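As an illustration of how a binary logarithm arises in a running time, the sketch below (an ordinary binary search, not code from any particular source) halves the search interval on each step, so it performs at most about $\lfloor \log_2 n \rfloor + 1$ comparisons:

```c
#include <stdio.h>

/* Binary search in a sorted array: each iteration halves the remaining
 * interval, so at most floor(log2(n)) + 1 iterations are needed.
 * Returns the index of key, or -1 if absent. */
int binary_search(const int *a, int n, int key) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;  /* avoids overflow of lo + hi */
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1;
}

int main(void) {
    int a[] = {1, 3, 7, 12, 19, 24, 31, 42};
    printf("%d\n", binary_search(a, 8, 19));  /* prints 4 */
    return 0;
}
```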
Bioinformatics
In bioinformatics, microarrays are used to measure how strongly different genes are expressed in a sample of biological material. Different rates of expression of a gene are often compared by using the binary logarithm of the ratio of expression rates: the log ratio of two expression rates is defined as the binary logarithm of the ratio of the two rates. Binary logarithms allow for a convenient comparison of expression rates: for instance, a doubled expression rate can be described by a log ratio of $1$, a halved expression rate can be described by a log ratio of $-1$, and an unchanged expression rate can be described by a log ratio of zero. Data points obtained in this way are often visualized as a scatterplot in which one or both of the coordinate axes are binary logarithms of intensity ratios, or in visualizations such as the MA plot and RA plot that rotate and scale these log ratio scatterplots.
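A minimal sketch of this log-ratio computation (hypothetical variable names, not taken from any particular bioinformatics package):

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* Log ratio (log2 fold change) of a treated vs. control expression rate */
    double control = 100.0;
    double doubled = 200.0, halved = 50.0, unchanged = 100.0;

    printf("%+.1f\n", log2(doubled / control));    /* +1.0 */
    printf("%+.1f\n", log2(halved / control));     /* -1.0 */
    printf("%+.1f\n", log2(unchanged / control));  /* +0.0 */
    return 0;
}
```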
Music theory
In music theory, the interval or perceptual difference between two tones is determined by the ratio of their frequencies. Intervals coming from rational number ratios with small numerators and denominators are perceived as particularly euphonious. The simplest and most important of these intervals is the octave, a frequency ratio of $2:1$. The number of octaves by which two tones differ is the binary logarithm of their frequency ratio. To study tuning systems and other aspects of music theory that require finer distinctions between tones, it is helpful to have a measure of the size of an interval that is finer than an octave and is additive rather than multiplicative. That is, if tones $x$, $y$, and $z$ form a rising sequence of tones, then the measure of the interval from $x$ to $y$ plus the measure of the interval from $y$ to $z$ should equal the measure of the interval from $x$ to $z$. Such a measure is given by the cent, which divides the octave into $1200$ equal intervals. Mathematically, given tones with frequencies $f_1$ and $f_2$, the number of cents in the interval from $f_1$ to $f_2$ is
$$\left| 1200 \log_2 \frac{f_1}{f_2} \right|.$$
The millioctave is defined in the same way, but with a multiplier of $1000$ instead of $1200$.
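A sketch of the cents formula in C (hypothetical function name):

```c
#include <math.h>
#include <stdio.h>

/* Size of the interval between frequencies f1 and f2, in cents:
 * |1200 * log2(f1 / f2)| */
double cents(double f1, double f2) {
    return fabs(1200.0 * log2(f1 / f2));
}

int main(void) {
    printf("%.1f\n", cents(880.0, 440.0));  /* one octave: 1200.0 */
    printf("%.1f\n", cents(440.0, 415.3));  /* about one semitone: ~100.0 */
    return 0;
}
```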