Normal number (computing)
In computing, a normal number is a non-zero number in a floating-point representation that is within the balanced range supported by a given floating-point format: it is a floating-point number that can be represented without leading zeros in its significand.
The magnitude of the smallest normal number in a format is given by

$$b^{E_\text{min}}$$

where b is the base of the format and $E_\text{min}$ depends on the size and layout of the format.
Similarly, the magnitude of the largest normal number in a format is given by

$$b^{E_\text{max}} \cdot \left(b - b^{1-p}\right)$$

where p is the precision of the format in digits and $E_\text{max}$ is related to $E_\text{min}$ as

$$E_\text{max} = 1 - E_\text{min}$$
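As an illustration, substituting the binary32 parameters from the table below (b = 2, p = 24, $E_\text{min}$ = −126, $E_\text{max}$ = 127) into these formulas gives

$$2^{-126} \approx 1.18 \times 10^{-38} \qquad\text{and}\qquad 2^{127} \cdot \left(2 - 2^{-23}\right) \approx 3.40 \times 10^{38}$$

for the smallest and largest positive normal numbers, respectively.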
In the IEEE 754 binary and decimal formats, b, p, $E_\text{min}$, and $E_\text{max}$ have the following values:
| Format | b | p | $E_\text{min}$ | $E_\text{max}$ | Smallest Normal Number | Largest Normal Number |
|---|---|---|---|---|---|---|
| binary16 | 2 | 11 | −14 | 15 | $2^{-14}$ | $(2 - 2^{-10}) \times 2^{15}$ |
| binary32 | 2 | 24 | −126 | 127 | $2^{-126}$ | $(2 - 2^{-23}) \times 2^{127}$ |
| binary64 | 2 | 53 | −1022 | 1023 | $2^{-1022}$ | $(2 - 2^{-52}) \times 2^{1023}$ |
| binary128 | 2 | 113 | −16382 | 16383 | $2^{-16382}$ | $(2 - 2^{-112}) \times 2^{16383}$ |
| decimal32 | 10 | 7 | −95 | 96 | $10^{-95}$ | $(10 - 10^{-6}) \times 10^{96}$ |
| decimal64 | 10 | 16 | −383 | 384 | $10^{-383}$ | $(10 - 10^{-15}) \times 10^{384}$ |
| decimal128 | 10 | 34 | −6143 | 6144 | $10^{-6143}$ | $(10 - 10^{-33}) \times 10^{6144}$ |
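For the binary formats, these limits are exposed in C through the constants in <float.h>. The following is a minimal sketch, assuming a platform whose float and double types are IEEE 754 binary32 and binary64 (true of most current systems):

```c
#include <float.h>
#include <stdio.h>

int main(void) {
    /* Smallest and largest positive normal numbers, assuming float is
       binary32 and double is binary64. */
    printf("binary32 smallest normal: %e\n", FLT_MIN);  /* 2^-126 */
    printf("binary32 largest normal:  %e\n", FLT_MAX);  /* (2 - 2^-23) * 2^127 */
    printf("binary64 smallest normal: %e\n", DBL_MIN);  /* 2^-1022 */
    printf("binary64 largest normal:  %e\n", DBL_MAX);  /* (2 - 2^-52) * 2^1023 */
    return 0;
}
```

On such a platform the printed values match the binary32 and binary64 rows of the table above.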
For example, in the smallest decimal format in the table, the range of positive normal numbers is $10^{-95}$ through $9.999999 \times 10^{96}$.
Non-zero numbers smaller in magnitude than the smallest normal number are called subnormal numbers.
Zero is considered neither normal nor subnormal.
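This three-way distinction between normal, subnormal, and zero can be observed directly in C with the standard fpclassify macro; the values tested below are chosen only for illustration:

```c
#include <float.h>
#include <math.h>
#include <stdio.h>

/* Return a label for the floating-point class of x. */
static const char *classify(double x) {
    switch (fpclassify(x)) {
        case FP_NORMAL:    return "normal";
        case FP_SUBNORMAL: return "subnormal";
        case FP_ZERO:      return "zero";
        case FP_INFINITE:  return "infinite";
        default:           return "NaN";
    }
}

int main(void) {
    printf("DBL_MIN     -> %s\n", classify(DBL_MIN));       /* smallest normal: "normal" */
    printf("DBL_MIN / 2 -> %s\n", classify(DBL_MIN / 2.0)); /* below it: "subnormal" */
    printf("0.0         -> %s\n", classify(0.0));           /* neither normal nor subnormal */
    return 0;
}
```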