Lossless compression
Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data with no loss of information. Lossless compression is possible because most real-world data exhibits statistical redundancy. By contrast, lossy compression permits reconstruction only of an approximation of the original data, though usually with greatly improved compression rates.
By operation of the pigeonhole principle, no lossless compression algorithm can shrink the size of all possible data: Some data will get longer by at least one symbol or bit.
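A small counting sketch of this argument (the choice of n is arbitrary): there are more distinct n-bit inputs than there are strictly shorter bit strings, so no invertible encoding can shorten every input.

```python
# Counting sketch of the pigeonhole argument: 2**n distinct n-bit inputs, but only
# 2**n - 1 bit strings strictly shorter than n bits, so at least one input cannot shrink.
n = 8
inputs = 2 ** n                                   # all possible n-bit messages
shorter_outputs = sum(2 ** k for k in range(n))   # all bit strings of length 0 to n-1
print(inputs, shorter_outputs)                    # 256 255
```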
Compression algorithms are usually effective for human- and machine-readable documents and cannot shrink the size of random data that contain no redundancy. Different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the GNU tool gzip. It is also often used as a component within lossy data compression technologies.
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data would be unfavourable. Common examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, while smaller lossy audio files are typically used on portable players and in other cases where storage space is limited or exact replication of the audio is unnecessary.
Techniques
Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" data will produce shorter output than "improbable" data. The primary encoding algorithms used to produce bit sequences are Huffman coding and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
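As an illustration, the sketch below derives Huffman code lengths from an order-0 (symbol-frequency) model; the function names and degenerate-case handling are illustrative choices, not taken from any particular library.

```python
# A sketch of Huffman code-length construction from an order-0 frequency model;
# names and structure are illustrative, not taken from any particular library.
import heapq
from collections import Counter

def huffman_code_lengths(data: bytes) -> dict:
    """Return a symbol -> code length mapping; frequent symbols get shorter codes."""
    freq = Counter(data)
    if len(freq) == 1:                    # degenerate input with one distinct symbol
        return {next(iter(freq)): 1}
    # Heap entries: (total weight, unique tiebreak, {symbol: depth so far})
    heap = [(count, i, {sym: 0}) for i, (sym, count) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        w1, _, a = heapq.heappop(heap)    # merge the two least probable subtrees,
        w2, _, b = heapq.heappop(heap)    # pushing every contained symbol one bit deeper
        merged = {sym: depth + 1 for sym, depth in {**a, **b}.items()}
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

lengths = huffman_code_lengths(b"abracadabra")
print(sorted(lengths.items(), key=lambda kv: kv[1]))  # 'a', the most frequent symbol, gets the shortest code
```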
There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. Adaptive models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
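A minimal sketch of the adaptive idea (the class and its update rule are illustrative assumptions, not those of any particular codec): encoder and decoder keep identical copies of the model and update them in lockstep, so the model never needs to be stored with the compressed data.

```python
# Sketch of an adaptive order-0 byte model kept in lockstep by encoder and decoder.
class AdaptiveByteModel:
    def __init__(self):
        self.counts = [1] * 256        # trivial starting model: all byte values equally likely
        self.total = 256

    def probability(self, symbol):
        """Estimate used to code the next symbol; an entropy coder spends about -log2(p) bits."""
        return self.counts[symbol] / self.total

    def update(self, symbol):
        """Called after each coded/decoded symbol so both sides stay identical."""
        self.counts[symbol] += 1
        self.total += 1

model = AdaptiveByteModel()
for byte in b"aaaaaaabbc":
    p = model.probability(byte)        # early symbols are coded with poor estimates
    model.update(byte)
print(round(model.probability(ord("a")), 2))   # 'a' is now far more probable than at the start
```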
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm can be used on any type of data, many are unable to achieve significant compression on data that are not of the form for which they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed-color images.
Multimedia
These techniques take advantage of the specific characteristics of images such as the common phenomenon of contiguous 2-D areas of similar tones. Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values. This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes. For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken. The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.
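A sketch of the left-neighbor differencing step for one image row (the modulo-256 wraparound is an illustrative choice that keeps the transform reversible for 8-bit samples):

```python
# Sketch of left-neighbor differencing for one 8-bit image row; mod-256 wraparound
# keeps the transform exactly reversible.
def delta_row(row):
    """Replace every pixel but the first by the difference to its left neighbor."""
    return [row[0]] + [(row[i] - row[i - 1]) % 256 for i in range(1, len(row))]

def undelta_row(deltas):
    """Invert the transform, recovering the original row exactly."""
    row = [deltas[0]]
    for d in deltas[1:]:
        row.append((row[-1] + d) % 256)
    return row

row = [100, 101, 103, 103, 104, 104, 104, 106]   # smooth tones typical of photographs
print(delta_row(row))                            # [100, 1, 2, 0, 1, 0, 0, 2] -- small values dominate
assert undelta_row(delta_row(row)) == row
```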
Historical legal issues
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the United States and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format for compressing still image files in favor of Portable Network Graphics, which combines the LZ77-based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003. Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text yet are useful for some images, and other techniques that take advantage of the specific characteristics of images.
As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data, essentially using autoregressive models to predict the "next" value and encoding the difference between the expected value and the actual data. If the difference between the predicted and the actual data tends to be small, then certain difference values become very frequent, which can be exploited by encoding them in few output bits.
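A sketch of this prediction step (the second-order predictor 2·s[n−1] − s[n−2] is a simple illustrative choice, not the predictor of any particular codec):

```python
# Sketch of second-order linear prediction of audio samples; only the small
# residuals (prediction errors) would be entropy-coded.
def residuals(samples):
    """Predict each sample as 2*s[n-1] - s[n-2] and keep the prediction error."""
    out = []
    for n, s in enumerate(samples):
        if n == 0:
            pred = 0
        elif n == 1:
            pred = samples[0]
        else:
            pred = 2 * samples[n - 1] - samples[n - 2]
        out.append(s - pred)
    return out

samples = [0, 3, 7, 12, 18, 25, 31, 36]   # a smoothly varying waveform segment
print(residuals(samples))                 # [0, 3, 1, 1, 1, 1, -1, -1] -- errors stay small
```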
It is sometimes beneficial to compress only the differences between two versions of a file. This is called delta encoding, but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
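A minimal sketch of delta encoding between two versions of a line-based file (the delta format used here, a list of changed line indices plus the new length, is a hypothetical illustration):

```python
# Minimal delta encoding sketch: store only the lines of the new version that
# differ from the old one, plus the new version's length.
def make_delta(old, new):
    """Record (index, line) pairs where the new version differs from the old."""
    changes = [(i, line) for i, line in enumerate(new)
               if i >= len(old) or old[i] != line]
    return len(new), changes

def apply_delta(old, delta):
    """Rebuild the new version from the old version and the recorded changes."""
    length, changes = delta
    new = (old + [None] * length)[:length]
    for i, line in changes:
        new[i] = line
    return new

old = ["width = 640", "height = 480", "fullscreen = false"]
new = ["width = 1920", "height = 1080", "fullscreen = false"]
delta = make_delta(old, new)
print(delta)                          # only the two changed lines are stored
assert apply_delta(old, delta) == new
```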
Methods
No lossless compression algorithm can efficiently compress all possible data. For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain. Some of the most common lossless compression algorithms are listed below.
General purpose
- ANS – Entropy encoding, used by LZFSE and Zstandard
- Arithmetic coding – Entropy encoding
- Burrows–Wheeler transform – reversible transform for making textual data more compressible, used by bzip2
- Huffman coding – Entropy encoding, pairs well with other algorithms
- Lempel–Ziv compression – Dictionary-based algorithm that forms the basis for many other algorithms
- * Deflate – Combines LZ77 compression with Huffman coding, used by ZIP, gzip, and PNG images
- * Lempel–Ziv–Markov chain algorithm – Very high compression ratio, used by 7zip and xz
- * Lempel–Ziv–Storer–Szymanski – Used by WinRAR in tandem with Huffman coding
- * Lempel–Ziv–Welch – Used by GIF images and Unix's compress utility
- Prediction by partial matching – Optimized for compressing plain text
- Run-length encoding – Simple scheme that provides good compression of data containing many runs of the same value (see the sketch after this list)
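As referenced in the last list entry, a minimal run-length encoding sketch (the (value, run length) pair format is an illustrative choice, not a standard one):

```python
# Minimal run-length encoding sketch: each run of identical bytes becomes a
# (value, run length) pair, which pays off only when long runs are common.
from itertools import groupby

def rle_encode(data):
    return [(value, len(list(group))) for value, group in groupby(data)]

def rle_decode(pairs):
    return bytes(value for value, count in pairs for _ in range(count))

data = b"aaaaaabbbcca"
encoded = rle_encode(data)
print(encoded)                      # [(97, 6), (98, 3), (99, 2), (97, 1)]
assert rle_decode(encoded) == data
```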
Audio
- Adaptive Transform Acoustic Coding
- Apple Lossless
- Audio Lossless Coding
- Direct Stream Transfer
- Dolby TrueHD
- DTS-HD Master Audio
- Free Lossless Audio Codec
- Meridian Lossless Packing
- Monkey's Audio
- MPEG-4 SLS
- OptimFROG
- Original Sound Quality
- RealPlayer (RealAudio Lossless)
- Shorten
- TTA
- WavPack
- Windows Media Audio 9 Lossless
Raster graphics
- Lossless-only encoding
- * BMP
- * PNG – Portable Network Graphics
- * GIF – Graphics Interchange Format
- Lossy and lossless encoding options
- * AVIF – AV1 Image File Format
- * FLIF – Free Lossless Image Format
- * HEIF – High Efficiency Image File Format, using HEVC
- * ILBM – lossless RLE compression of Amiga IFF images
- * JBIG2 – compression of B&W images
- * JPEG 2000 – includes a lossless compression mode based on a reversible integer wavelet transform
- * JPEG-LS
- * JPEG XL
- * JPEG XR – formerly WMPhoto and HD Photo
- * LDCT – Lossless Discrete Cosine Transform
- * PCX – PiCture eXchange
- * QOI – Quite OK Image Format
- * TGA – Truevision TGA
- * TIFF – Tag Image File Format
- * WebP
3D Graphics
- OpenCTM – Lossless compression of 3D triangle meshes
Video
Cryptography
Cryptosystems often compress data before encryption for added security. When properly implemented, compression greatly increases the unicity distance by removing patterns that might facilitate cryptanalysis. However, many ordinary lossless compression algorithms produce headers, wrappers, tables, or other predictable output that might instead make cryptanalysis easier. Thus, cryptosystems must utilize compression algorithms whose output does not contain these predictable patterns.
Genetics and genomics
Genetics compression algorithms are the latest generation of lossless algorithms that compress data using both conventional compression algorithms and specific algorithms adapted to genetic data. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression, providing 2- to 4-fold better compression much faster than leading general-purpose compression utilities. Genomic sequence compression algorithms, also known as DNA sequence compressors, exploit the fact that DNA sequences have characteristic properties, such as inverted repeats. The most successful compressors are XM and GeCo. For eukaryotes, XM is slightly better in compression ratio, though for sequences larger than 100 MB its computational requirements are impractical.