Decimal


The decimal numeral system is the standard system for denoting integer and non-integer numbers. It is the extension to non-integer numbers of the Hindu–Arabic numeral system. The way of denoting numbers in the decimal system is often referred to as decimal notation.
A decimal numeral refers generally to the notation of a number in the decimal numeral system. Decimals may sometimes be identified by a decimal separator.
Decimal may also refer specifically to the digits after the decimal separator, such as in "3.14 is the approximation of π to two decimals".
The numbers that may be represented exactly by a decimal of finite length are the decimal fractions. That is, fractions of the form a/10^n, where a is an integer and n is a non-negative integer. Decimal fractions also result from the addition of an integer and a fractional part; the resulting sum is sometimes called a fractional number.
Decimals are commonly used to approximate real numbers. By increasing the number of digits after the decimal separator, one can make the approximation errors as small as one wants, when one has a method for computing the new digits. In the sciences, the number of decimal places given generally gives an indication of the precision to which a quantity is known; for example, if a mass is given as 1.32 milligrams, it usually means there is reasonable confidence that the true mass is somewhere between 1.315 milligrams and 1.325 milligrams, whereas if it is given as 1.320 milligrams, then it is likely between 1.3195 and 1.3205 milligrams. The same holds in pure mathematics; for example, if one computes the square root of 22 to two digits past the decimal point, the answer is 4.69, whereas computing it to three digits, the answer is 4.690. The extra 0 at the end is meaningful, in spite of the fact that 4.69 and 4.690 are the same real number.
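As a minimal illustration of the point about meaningful trailing zeros, the following Python sketch uses the standard decimal module to round the square root of 22 to two and to three places after the decimal point; the choice of rounding mode here is only an example.

    from decimal import Decimal, ROUND_HALF_UP

    x = Decimal(22).sqrt()   # 4.69041575982343...
    two_places = x.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
    three_places = x.quantize(Decimal("0.001"), rounding=ROUND_HALF_UP)
    print(two_places)    # 4.69
    print(three_places)  # 4.690 -- the trailing zero records the extra precision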
In principle, the decimal expansion of any real number can be carried out as far as desired past the decimal point. If the expansion reaches a point where all remaining digits are zero, then the remainder can be omitted, and such an expansion is called a terminating decimal. A repeating decimal is an infinite decimal that, after some place, repeats indefinitely the same sequence of digits. An infinite decimal represents a rational number, the quotient of two integers, if and only if it is a repeating decimal or has a finite number of non-zero digits.

Origin

Many numeral systems of ancient civilizations use ten and its powers for representing numbers, possibly because there are ten fingers on two hands and people started counting by using their fingers. Examples are firstly the Egyptian numerals, then the Brahmi numerals, Greek numerals, Hebrew numerals, Roman numerals, and Chinese numerals. Very large numbers were difficult to represent in these old numeral systems, and only the best mathematicians were able to multiply or divide large numbers. These difficulties were completely solved with the introduction of the Hindu–Arabic numeral system for representing integers. This system has been extended to represent some non-integer numbers, called decimal fractions or decimal numbers, forming the decimal numeral system.

Decimal notation

For writing numbers, the decimal system uses ten decimal digits, a decimal mark, and, for negative numbers, a minus sign "−". The decimal digits are 0, 1, 2, 3, 4, 5, 6, 7, 8, 9; the decimal separator is the dot "." in many countries, and a comma "," in other countries.
For representing a non-negative number, a decimal numeral consists of
  • either a sequence of digits, where the entire sequence represents an integer: a_m a_{m−1} … a_1 a_0
  • or a decimal mark separating two sequences of digits: a_m a_{m−1} … a_1 a_0 . b_1 b_2 … b_n
If m > 0, that is, if the first sequence contains at least two digits, it is generally assumed that the first digit a_m is not zero. In some circumstances it may be useful to have one or more 0's on the left; this does not change the value represented by the decimal: for example, 3.14 = 03.14 = 003.14. Similarly, if the final digit on the right of the decimal mark is zero—that is, if b_n = 0—it may be removed; conversely, trailing zeros may be added after the decimal mark without changing the represented number; for example, 15 = 15.0 = 15.00 and 5.2 = 5.20 = 5.200.
For representing a negative number, a minus sign is placed before a_m.
The numeral a_m a_{m−1} … a_1 a_0 . b_1 b_2 … b_n represents the number

    a_m 10^m + a_{m−1} 10^{m−1} + … + a_1 10 + a_0 + b_1/10 + b_2/10^2 + … + b_n/10^n.
The integer part or integral part of a decimal numeral is the integer written to the left of the decimal separator. For a non-negative decimal numeral, it is the largest integer that is not greater than the decimal. The part to the right of the decimal separator is the fractional part, which equals the difference between the numeral and its integer part.
When the integral part of a numeral is zero, it may occur, typically in computing, that the integer part is not written. In normal writing, this is generally avoided, because of the risk of confusion between the decimal mark and other punctuation.
In brief, the contribution of each digit to the value of a number depends on its position in the numeral. That is, the decimal system is a positional numeral system.
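The positional rule above can be made concrete in code. The following Python sketch (the function name and the digit-list input format are choices made for this example only) evaluates a numeral given as its integer-part digits and fractional-part digits, using exact fractions so that nothing is lost to rounding.

    from fractions import Fraction

    def decimal_value(int_digits, frac_digits):
        """Evaluate a_m ... a_0 . b_1 ... b_n as an exact Fraction."""
        value = Fraction(0)
        for a in int_digits:             # each step shifts the integer part one place left
            value = value * 10 + a
        for i, b in enumerate(frac_digits, start=1):
            value += Fraction(b, 10**i)  # b_i contributes b_i / 10^i
        return value

    print(decimal_value([2, 0], [7, 0, 8]))         # 5177/250, i.e. 20.708
    print(float(decimal_value([2, 0], [7, 0, 8])))  # 20.708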

Decimal fractions

Decimal fractions are the rational numbers that may be expressed as a fraction whose denominator is a power of ten. For example, the decimal expressions 0.8, 14.89 and 0.00079 represent the fractions 8/10, 1489/100 and 79/100000, and therefore denote decimal fractions. An example of a fraction that cannot be represented by a decimal expression of finite length is 1/3, 3 not being a power of 10.
More generally, a decimal with n digits after the separator represents the fraction with denominator 10^n whose numerator is the integer obtained by removing the separator.
It follows that a number is a decimal fraction if and only if it has a finite decimal representation.
Expressed as fully reduced fractions, the decimal numbers are those whose denominator is a product of a power of 2 and a power of 5. Thus the smallest denominators of decimal numbers are 1, 2, 4, 5, 8, 10, 16, 20, 25, 32, 40, 50, 64, 80, 100, …
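This characterization suggests a simple test: reduce the fraction, strip factors of 2 and 5 from the denominator, and check whether 1 remains. A possible Python sketch (the helper name is invented for this example):

    from fractions import Fraction

    def is_decimal_fraction(q: Fraction) -> bool:
        """True if q has a finite decimal representation, i.e. its reduced
        denominator contains no prime factors other than 2 and 5."""
        d = q.denominator        # Fraction is always stored fully reduced
        for p in (2, 5):
            while d % p == 0:
                d //= p
        return d == 1

    print(is_decimal_fraction(Fraction(3, 8)))   # True  (0.375)
    print(is_decimal_fraction(Fraction(1, 3)))   # False (0.333... repeats)
    print(is_decimal_fraction(Fraction(7, 50)))  # True  (0.14)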

Approximation using decimal numbers

Decimal numerals do not allow an exact representation for all real numbers. Nevertheless, they allow approximating every real number with any desired accuracy, e.g., the decimal 3.14159 approximates π, being less than 10^−5 off; so decimals are widely used in science, engineering and everyday life.
More precisely, for every real number x and every positive integer n, there are two decimals l and u with at most n digits after the decimal mark such that l ≤ x ≤ u and u − l = 10^−n.
Numbers are very often obtained as the result of measurement. As measurements are subject to measurement uncertainty with a known upper bound, the result of a measurement is well-represented by a decimal with n digits after the decimal mark, as soon as the absolute measurement error is bounded from above by 10^−n. In practice, measurement results are often given with a certain number of digits after the decimal point, which indicate the error bounds. For example, although 0.080 and 0.08 denote the same number, the decimal numeral 0.080 suggests a measurement with an error less than 0.001, while the numeral 0.08 indicates an absolute error bounded by 0.01. In both cases, the true value of the measured quantity could be, for example, 0.0803 or 0.0796.
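The existence of the two bounding decimals can be demonstrated by truncating downward at n places and adding one unit in the last place. A Python sketch under the assumption that x is a rational number given as a Fraction (so the bounds are not distorted by binary floating point; the function name is only illustrative):

    from fractions import Fraction
    import math

    def decimal_bounds(x: Fraction, n: int):
        """Return decimals l <= x <= u with n digits after the mark and u - l = 10^-n."""
        scale = 10 ** n
        l = Fraction(math.floor(x * scale), scale)  # round down to n places
        u = l + Fraction(1, scale)                  # next n-place decimal up
        return l, u

    l, u = decimal_bounds(Fraction(1, 3), 4)
    print(f"{float(l):.4f} <= 1/3 <= {float(u):.4f}")  # 0.3333 <= 1/3 <= 0.3334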

Infinite decimal expansion

For a real number x and an integer n ≥ 0, let [x]_n denote the finite decimal expansion of the greatest number that is not greater than x and that has exactly n digits after the decimal mark. Let d_i denote the last digit of [x]_i. It is straightforward to see that [x]_n may be obtained by appending d_n to the right of [x]_{n−1}. This way one has

    [x]_n = [x]_0 . d_1 d_2 … d_{n−1} d_n,

and the difference of [x]_{n−1} and [x]_n amounts to

    [x]_n − [x]_{n−1} = d_n · 10^−n,

which is either 0, if d_n = 0, or gets arbitrarily small as n
tends to infinity. According to the definition of a limit, x is the limit of [x]_n when n tends to infinity. This is written as

    x = lim_{n→∞} [x]_n

or

    x = [x]_0 . d_1 d_2 … d_n …,

which is called an infinite decimal expansion of x.
Conversely, for any integer [x]_0 and any sequence of digits d_1, d_2, d_3, …, the expression [x]_0 . d_1 d_2 d_3 … is an infinite decimal expansion of a real number x. This expansion is unique if neither all d_n are equal to 9 nor all d_n are equal to 0 for n large enough.
If all d_n for n > N are equal to 9 and [x]_n = [x]_0 . d_1 d_2 … d_n, the limit of the sequence ([x]_n) is the decimal fraction obtained by replacing the last digit that is not a 9, i.e. d_N, by d_N + 1, and replacing all subsequent 9s by 0s.
Any such decimal fraction, i.e. one with d_n = 0 for n > N, may be converted to its equivalent infinite decimal expansion by replacing d_N by d_N − 1 and replacing all subsequent 0s by 9s.
In summary, every real number that is not a decimal fraction has a unique infinite decimal expansion. Each decimal fraction has exactly two infinite decimal expansions, one containing only 0s after some place, which is obtained by the above definition of [x]_n, and the other containing only 9s after some place, which is obtained by defining [x]_n as the greatest number that is less than x, having exactly n
digits after the decimal mark.
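The construction of [x]_n and the digits d_n can be mirrored directly in code. A small Python sketch, restricted to rational x given as a Fraction so the digits are exact (names are chosen for the example only):

    from fractions import Fraction
    import math

    def expansion_digits(x: Fraction, n: int):
        """Return [x]_0 and the digits d_1, ..., d_n of x >= 0."""
        integer_part = math.floor(x)
        frac = x - integer_part
        digits = []
        for _ in range(n):
            frac *= 10
            d = math.floor(frac)  # d_i is the next digit of the truncation [x]_i
            digits.append(d)
            frac -= d
        return integer_part, digits

    print(expansion_digits(Fraction(1, 3), 6))   # (0, [3, 3, 3, 3, 3, 3])
    print(expansion_digits(Fraction(22, 7), 6))  # (3, [1, 4, 2, 8, 5, 7])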

Rational numbers

Long division allows computing the infinite decimal expansion of a rational number. If the rational number is a decimal fraction, the division stops eventually, producing a decimal numeral, which may be extended into an infinite expansion by adding infinitely many zeros. If the rational number is not a decimal fraction, the division may continue indefinitely. However, as all successive remainders are less than the divisor, there are only a finite number of possible remainders, and after some place, the same sequence of digits must be repeated indefinitely in the quotient. That is, one has a repeating decimal. For example, 1/81 = 0.012345679012…, with the group 012345679 repeating indefinitely.
The converse is also true: if, at some point in the decimal representation of a number, the same string of digits starts repeating indefinitely, the number is rational. For example, if x = 0.4156156156…, then 10,000x = 4156.156156… and 10x = 4.156156…, so 10,000x − 10x = 9990x = 4152; hence x = 4152/9990,
or, dividing both numerator and denominator by 6, x = 692/1665.
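The long-division argument translates directly into code: track the remainders, and the first repeated remainder marks the start of the repeating group. A possible Python sketch (function and variable names are invented for this illustration):

    def repeating_decimal(p: int, q: int) -> str:
        """Decimal expansion of p/q (p, q > 0) with the repetend in parentheses."""
        integer_part, remainder = divmod(p, q)
        digits, seen = [], {}           # seen maps a remainder to its digit position
        while remainder and remainder not in seen:
            seen[remainder] = len(digits)
            remainder *= 10
            digit, remainder = divmod(remainder, q)
            digits.append(str(digit))
        if remainder == 0:              # terminating decimal (a decimal fraction)
            frac = "".join(digits)
            return f"{integer_part}.{frac}" if frac else str(integer_part)
        start = seen[remainder]         # repetition begins where this remainder first appeared
        return f"{integer_part}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"

    print(repeating_decimal(1, 81))       # 0.(012345679)
    print(repeating_decimal(4152, 9990))  # 0.4(156)
    print(repeating_decimal(1, 4))        # 0.25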

Decimal computation

Most modern computer hardware and software systems commonly use a binary representation internally.
For external use by computer specialists, this binary representation is sometimes presented in the related octal or hexadecimal systems.
For most purposes, however, binary values are converted to or from the equivalent decimal values for presentation to or input from humans; computer programs express literals in decimal by default.
Both computer hardware and software also use internal representations which are effectively decimal for storing decimal values and doing arithmetic. Often this arithmetic is done on data which are encoded using some variant of binary-coded decimal, especially in database implementations, but there are other decimal representations in use.
Decimal arithmetic is used in computers so that decimal fractional results of adding values with a fixed length of their fractional part are always computed to this same length of precision. This is especially important for financial calculations, e.g., requiring in their results integer multiples of the smallest currency unit for bookkeeping purposes. This is not possible in binary, because the negative powers of 10 have no finite binary fractional representation; and is generally impossible for multiplication. See Arbitrary-precision arithmetic for exact calculations.
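As a concrete illustration of why binary floating point is awkward here, the following Python sketch contrasts binary float arithmetic with the standard decimal module (a decimal floating-point implementation) on a simple currency-style sum; the amounts are made up for the example.

    from decimal import Decimal

    # Binary floating point cannot represent 0.10 exactly, so small errors appear.
    print(0.10 + 0.20)                      # 0.30000000000000004
    print(sum([0.10] * 3) == 0.30)          # False

    # Decimal arithmetic keeps exact multiples of the smallest currency unit.
    print(Decimal("0.10") + Decimal("0.20"))              # 0.30
    print(sum([Decimal("0.10")] * 3) == Decimal("0.30"))  # True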