Error correction code


In computing, telecommunication, information theory, and coding theory, forward error correction or channel coding is a technique used for controlling errors in data transmission over unreliable or noisy communication channels.
The central idea is that the sender encodes the message in a redundant way, most often by using an error correction code, or error correcting code. The redundancy allows the receiver not only to detect errors that may occur anywhere in the message, but often to correct a limited number of them. Therefore, a reverse channel to request re-transmission may not be needed. The cost is a fixed, higher forward channel bandwidth.
The American mathematician Richard Hamming pioneered this field in the 1940s and invented the first error-correcting code in 1950: the Hamming code.
FEC can be applied in situations where re-transmissions are costly or impossible, such as one-way communication links or when transmitting to multiple receivers in multicast.
Long-latency connections also benefit; in the case of satellites orbiting distant planets, retransmission due to errors would create a delay of several hours. FEC is also widely used in modems and in cellular networks.
FEC processing in a receiver may be applied to a digital bit stream or in the demodulation of a digitally modulated carrier. For the latter, FEC is an integral part of the initial analog-to-digital conversion in the receiver. The Viterbi decoder implements a soft-decision algorithm to demodulate digital data from an analog signal corrupted by noise. Many FEC decoders can also generate a bit-error rate signal which can be used as feedback to fine-tune the analog receiving electronics.
FEC information is added to mass storage devices to enable recovery of corrupted data, and is used as ECC computer memory on systems that require special provisions for reliability.
The maximum proportion of errors or missing bits that can be corrected is determined by the design of the ECC, so different forward error correcting codes are suitable for different conditions. In general, a stronger code induces more redundancy that needs to be transmitted using the available bandwidth, which reduces the effective bit-rate while improving the received effective signal-to-noise ratio. The noisy-channel coding theorem of Claude Shannon can be used to compute the maximum achievable communication bandwidth for a given maximum acceptable error probability. This establishes bounds on the theoretical maximum information transfer rate of a channel with some given base noise level. However, the proof is not constructive, and hence gives no insight into how to build a capacity-achieving code. After years of research, some advanced FEC systems like polar codes come very close to the theoretical maximum given by the Shannon channel capacity under the hypothesis of an infinite-length frame.
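As a concrete instance of such a bound, consider the binary symmetric channel, a standard textbook special case not spelled out above, in which each transmitted bit is flipped independently with probability p; its Shannon capacity is C = 1 − H(p), where H is the binary entropy function. A minimal sketch evaluating this limit:

    from math import log2

    def bsc_capacity(p):
        """Shannon capacity of a binary symmetric channel, in bits per channel use."""
        if p in (0.0, 1.0):
            return 1.0                                 # a deterministic channel is noiseless
        h = -p * log2(p) - (1 - p) * log2(1 - p)       # binary entropy H(p)
        return 1 - h

    print(bsc_capacity(0.11))   # ~0.5: at an 11% bit-flip rate, no code with rate
                                # above one half can achieve reliable transfer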

Method

ECC is accomplished by adding redundancy to the transmitted information using an algorithm. A redundant bit may be a complicated function of many original information bits. The original information may or may not appear literally in the encoded output; codes that include the unmodified input in the output are systematic, while those that do not are non-systematic.
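For example, a single parity-check code, the building block of the LDPC codes discussed later, is systematic: it appends one redundant bit, the XOR of all data bits, and leaves the data bits unchanged. A minimal sketch (the function name is illustrative):

    def spc_encode(bits):
        """Append a single parity bit: the XOR of all data bits."""
        parity = 0
        for b in bits:
            parity ^= b
        return bits + [parity]

    print(spc_encode([1, 0, 1]))   # [1, 0, 1, 0] -- the input appears verbatim,
                                   # so the code is systematic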
A simplistic example of ECC is to transmit each data bit three times, which is known as a repetition code. Through a noisy channel, a receiver might see eight versions of the output; see the table below.
Triplet received    Interpreted as
000                 0 (error-free)
001                 0
010                 0
100                 0
111                 1 (error-free)
110                 1
101                 1
011                 1

This allows an error in any one of the three samples to be corrected by "majority vote" or "democratic voting". The correcting ability of this ECC is:
  • up to one bit of a triplet in error, or
  • up to two bits of a triplet omitted (cases not shown in the table).
Though simple to implement and widely used, this triple modular redundancy is a relatively inefficient ECC. Better ECC codes typically examine the last several tens or even the last several hundreds of previously received bits to determine how to decode the current small handful of bits.
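The encoder and majority-vote decoder for this repetition code fit in a few lines. A minimal sketch, with lists of 0/1 integers standing in for the bit stream:

    def encode(bits):
        """Transmit each data bit three times."""
        return [bit for bit in bits for _ in range(3)]

    def decode(received):
        """Recover each data bit by majority vote over its triplet."""
        return [1 if sum(received[i:i + 3]) >= 2 else 0
                for i in range(0, len(received), 3)]

    # One corrupted sample per triplet is corrected:
    assert decode([0, 0, 1,  1, 0, 1]) == [0, 1]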

Simplified formalism

Formally, an error-correcting code is given by its encoding function enc, which assigns to each word w over a finite alphabet a unique code word enc(w).
Most commonly, enc is a homomorphism in the sense that if a word w is the concatenation of w₁ and w₂, then enc(w) = enc(w₁)enc(w₂). This implies that it is enough to define enc for single-letter words. The range of the function enc is the set of code words. The capabilities of the code to detect and correct errors can then be understood from the distance d of the code, which is the minimum Hamming distance separating any two distinct code words. A code with distance d can detect e errors as long as e < d, and among those detected errors, the code can correct t-bit errors whenever 2t < d.
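As a worked example, the triple-repetition code above has the two code words 000 and 111, so its distance is d = 3: it can detect any e = 2 errors (since 2 < 3) and correct any t = 1 error (since 2·1 < 3). A minimal sketch computing this:

    from itertools import combinations

    def hamming_distance(a, b):
        """Number of positions in which two equal-length words differ."""
        return sum(x != y for x, y in zip(a, b))

    code_words = ["000", "111"]           # the triple-repetition code
    d = min(hamming_distance(a, b) for a, b in combinations(code_words, 2))

    print(d)             # 3
    print(d - 1)         # detects up to 2 errors (any e with e < d)
    print((d - 1) // 2)  # corrects up to 1 error  (any t with 2t < d)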

Averaging noise to reduce errors

ECC could be said to work by "averaging noise"; since each data bit affects many transmitted symbols, the corruption of some symbols by noise usually allows the original user data to be extracted from the other, uncorrupted received symbols that also depend on the same user data.
  • Because of this "risk-pooling" effect, digital communication systems that use ECC tend to work well above a certain minimum signal-to-noise ratio and not at all below it.
  • This all-or-nothing tendency – the cliff effect – becomes more pronounced as stronger codes are used that more closely approach the theoretical Shannon limit.
  • Interleaving ECC-coded data can reduce the all-or-nothing properties of transmitted ECC codes when the channel errors tend to occur in bursts, as the sketch below shows. However, this method has limits; it is best used on narrowband data.
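A rectangular block interleaver illustrates the idea: symbols are written into a grid row by row and transmitted column by column, so a burst of adjacent channel errors is spread across many code words after deinterleaving. A minimal sketch (the 3×4 geometry is illustrative):

    def interleave(symbols, rows, cols):
        """Write row by row, read out column by column."""
        assert len(symbols) == rows * cols
        return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

    def deinterleave(symbols, rows, cols):
        """Invert interleave(): swap the roles of rows and columns."""
        return interleave(symbols, cols, rows)

    data = list(range(12))
    sent = interleave(data, rows=3, cols=4)
    # A burst wiping out three adjacent transmitted symbols lands in three
    # different rows after deinterleaving, so each code word sees at most
    # one of the errors.
    assert deinterleave(sent, rows=3, cols=4) == data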
Most telecommunication systems use a fixed channel code designed to tolerate the expected worst-case bit error rate, and then fail to work at all if the bit error rate is ever worse.
However, some systems adapt to the given channel error conditions: some instances of hybrid automatic repeat-request use a fixed ECC method as long as the ECC can handle the error rate, then switch to ARQ when the error rate gets too high;
adaptive modulation and coding uses a variety of ECC rates, adding more error-correction bits per packet when there are higher error rates in the channel, or taking them out when they are not needed.

Types

The two main categories of ECC codes are block codes and convolutional codes.
  • Block codes work on fixed-size blocks of bits or symbols. Practical block codes can generally be hard-decoded in polynomial time in their block length.
  • Convolutional codes work on bit or symbol streams of arbitrary length. They are most often soft decoded with the Viterbi algorithm, though other algorithms are sometimes used. Viterbi decoding allows asymptotically optimal decoding efficiency with increasing constraint length of the convolutional code, but at the expense of exponentially increasing complexity. A convolutional code that is terminated is also a 'block code' in that it encodes a block of input data, but the block size of a convolutional code is generally arbitrary, while block codes have a fixed size dictated by their algebraic characteristics. Types of termination for convolutional codes include "tail-biting" and "bit-flushing".
Classical block codes are usually decoded using hard-decision algorithms, which means that for every input and output signal a hard decision is made whether it corresponds to a one or a zero bit. In contrast, convolutional codes are typically decoded using soft-decision algorithms like the Viterbi, MAP or BCJR algorithms, which process analog signals, and which allow for much higher error-correction performance than hard-decision decoding.
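The difference is easy to see on the triple-repetition code with a BPSK-style mapping (+1 for bit 0, −1 for bit 1; this mapping and the noisy samples below are illustrative). A minimal sketch of decoding a transmitted 1 bit:

    received = [-1.3, 0.2, 0.1]     # two samples pushed positive by noise

    # Hard decision: threshold each sample first, then take the majority.
    hard_bits = [0 if s > 0 else 1 for s in received]   # -> [1, 0, 0]
    hard_decision = 1 if sum(hard_bits) >= 2 else 0     # -> 0 (wrong)

    # Soft decision: sum the raw samples, keeping each one's reliability.
    # The confident -1.3 outweighs the two weak positive samples.
    soft_decision = 0 if sum(received) > 0 else 1       # -> 1 (correct)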
Nearly all classical block codes apply the algebraic properties of finite fields. Hence classical block codes are often referred to as algebraic codes.

Block codes

There are many types of block codes; Reed–Solomon coding is noteworthy for its widespread use in compact discs, DVDs, and hard disk drives. Other examples of classical block codes include Golay, BCH, multidimensional parity, and Hamming codes.
Hamming ECC is commonly used to correct errors in ECC memory and in early SLC NAND flash memory.
It provides single-bit error correction and two-bit error detection.
Hamming codes are only suitable for the more reliable single-level cell (SLC) NAND.
Denser multi-level cell NAND may use multi-bit correcting ECC such as BCH, Reed–Solomon, or LDPC. NOR flash typically does not use any error correction.
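The classic Hamming(7,4) code shows how this single-bit correction works: three parity bits cover overlapping subsets of four data bits, and the pattern of failing checks (the syndrome) is the binary position of the flipped bit. A minimal sketch (function names are illustrative):

    def hamming74_encode(d):
        """Encode 4 data bits into 7, with parity bits at positions 1, 2, 4."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [p1, p2, d1, p3, d2, d3, d4]

    def hamming74_correct(c):
        """Fix any single-bit error using the syndrome."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # checks positions 1, 3, 5, 7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # checks positions 2, 3, 6, 7
        s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # checks positions 4, 5, 6, 7
        syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error
        if syndrome:
            c[syndrome - 1] ^= 1
        return c

    word = hamming74_encode([1, 0, 1, 1])
    word[5] ^= 1                          # inject a single-bit error
    assert hamming74_correct(word) == hamming74_encode([1, 0, 1, 1])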

Soft codes

Low-density parity-check (LDPC)

LDPC codes are a class of highly efficient linear block codes made from many single parity-check (SPC) codes. They can provide performance very close to the channel capacity using an iterated soft-decision decoding approach, at linear time complexity in terms of their block length. Practical implementations rely heavily on decoding the constituent SPC codes in parallel.
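The iterative structure can be sketched with a hard-decision bit-flipping decoder, the simplest of Gallager's decoding algorithms; production LDPC decoders use soft-decision message passing instead, and the tiny parity-check matrix below is illustrative, far smaller and denser than a real LDPC code's:

    H = [                     # each row is one single-parity-check constraint
        [1, 1, 0, 1, 0, 0],
        [0, 1, 1, 0, 1, 0],
        [1, 0, 0, 0, 1, 1],
    ]

    def failing_checks(word):
        """Which parity checks does the current word violate?"""
        return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

    def bit_flip_decode(word, max_iters=10):
        word = list(word)
        for _ in range(max_iters):
            checks = failing_checks(word)
            if not any(checks):
                return word               # all parity checks satisfied
            # Flip the bit that participates in the most failing checks.
            votes = [sum(c for c, row in zip(checks, H) if row[i])
                     for i in range(len(word))]
            word[votes.index(max(votes))] ^= 1
        return word

    # A single flipped bit in the all-zero code word is corrected:
    assert bit_flip_decode([1, 0, 0, 0, 0, 0]) == [0, 0, 0, 0, 0, 0]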
LDPC codes were first introduced by Robert G. Gallager in his PhD thesis in 1960,
but due to the computational effort in implementing encoder and decoder and the introduction of Reed–Solomon codes,
they were mostly ignored until the 1990s.
LDPC codes are now used in many recent high-speed communication standards, such as DVB-S2, WiMAX, High-Speed Wireless LAN, 10GBase-T Ethernet and G.hn/G.9960. Other LDPC codes are standardized for wireless communication standards within 3GPP MBMS.

Turbo code

Turbo coding is an iterated soft-decoding scheme that combines two or more relatively simple convolutional codes and an interleaver to produce a block code that can perform to within a fraction of a decibel of the Shannon limit. Predating LDPC codes in terms of practical application, turbo codes now provide similar performance.
One of the earliest commercial applications of turbo coding was the CDMA2000 1x digital cellular technology developed by Qualcomm and sold by Verizon Wireless, Sprint, and other carriers. It is also used for the evolution of CDMA2000 1x specifically for Internet access, 1xEV-DO. Like 1x, EV-DO was developed by Qualcomm, and is sold by Verizon Wireless, Sprint, and other carriers.