UTF-16
UTF-16 (16-bit Unicode Transformation Format) is a character encoding that supports all 1,112,064 valid code points of Unicode. The encoding is variable-length: code points are encoded with one or two 16-bit code units. UTF-16 arose from an earlier obsolete fixed-width 16-bit encoding now known as UCS-2, once it became clear that more than 2^16 (65,536) code points were needed, including most emoji and important CJK characters such as those used for personal and place names.
UTF-16 is used by the Windows API, and by many programming environments such as Java and Qt. The variable-width nature of UTF-16, combined with the fact that the most common characters fit in a single code unit, has led to many bugs in software, including in Windows itself.
UTF-16 is the only encoding allowed on the web that is incompatible with 8-bit ASCII. It has never gained popularity on the web, where it is declared by under 0.004% of public web pages. UTF-8, by comparison, gained dominance years ago and accounted for 99% of all web pages by 2025. The Web Hypertext Application Technology Working Group (WHATWG) considers UTF-8 the mandatory encoding for web content and holds that for security reasons browser applications should not use UTF-16.
[Figure: GNU Unifont 16.0.01 Plane 0 map. The white stripe near the bottom is the surrogate code point range.]
History
In the late 1980s, work began on developing a uniform encoding for a "Universal Character Set" (UCS) that would replace earlier language-specific encodings with one coordinated system. The goal was to include all required characters from most of the world's languages, as well as symbols from technical domains such as science, mathematics, and music. The original idea was to replace the typical 256-character encodings, which required 1 byte per character, with an encoding using 65,536 (2^16) values, which would require 2 bytes per character.

Two groups worked on this in parallel, ISO/IEC JTC 1/SC 2 and the Unicode Consortium, the latter representing mostly manufacturers of computing equipment. The two groups attempted to synchronize their character assignments so that the developing encodings would be mutually compatible. The early 2-byte encoding was called "UCS-2".
When it became increasingly clear that 2^16 characters would not suffice, the IEEE introduced a larger 31-bit space and an encoding (UCS-4) that would require 4 bytes per character. This was resisted by the Unicode Consortium, both because 4 bytes per character wasted a lot of memory and disk space, and because some manufacturers were already heavily invested in 2-byte-per-character technology. The UTF-16 encoding scheme was developed as a compromise and introduced with version 2.0 of the Unicode standard in July 1996. It is fully specified in RFC 2781, published in 2000 by the IETF.
UTF-16 is specified in the latest versions of both the international standard ISO/IEC 10646 and the Unicode Standard. In the words of the Unicode Consortium, "UCS-2 should now be considered obsolete. It no longer refers to an encoding form in either 10646 or the Unicode Standard." UTF-16 will never be extended to support a larger number of code points or to support the code points that were replaced by surrogates, as this would violate the Unicode Stability Policy with respect to general category or surrogate code points.
Description
Each Unicode code point is encoded either as one or two 16-bit code units. Code points less than 2^16 are encoded with a single 16-bit code unit equal to the numerical value of the code point, as in the older UCS-2. Code points greater than or equal to 2^16 are encoded using two 16-bit code units. These two 16-bit code units are chosen from the UTF-16 surrogate range 0xD800–0xDFFF, which had not previously been assigned to characters. Values in this range are not used as characters, and UTF-16 provides no legal way to code them as individual code points. A UTF-16 stream, therefore, consists of single 16-bit code units outside the surrogate range and pairs of 16-bit code units that are within the surrogate range.

U+0000 to U+D7FF and U+E000 to U+FFFF
Both UTF-16 and UCS-2 encode code points in this range as single 16-bit code units that are numerically equal to the corresponding code points. These code points in the Basic Multilingual Plane (BMP) are the only code points that can be represented in UCS-2. As of Unicode 9.0, some modern non-Latin Asian, Middle-Eastern, and African scripts fall outside this range, as do most emoji characters.

Code points from U+010000 to U+10FFFF
Code points from the other (supplementary) planes are encoded as two 16-bit code units called a surrogate pair. The first code unit is a high surrogate and the second is a low surrogate:
- 0x10000 is subtracted from the code point U, leaving a 20-bit number U' in the hex number range 0x00000–0xFFFFF.
- The high ten bits (in the range 0x000–0x3FF) are added to 0xD800 to give the first 16-bit code unit or high surrogate W1, which will be in the range 0xD800–0xDBFF.
- The low ten bits (also in the range 0x000–0x3FF) are added to 0xDC00 to give the second 16-bit code unit or low surrogate W2, which will be in the range 0xDC00–0xDFFF.
U' = yyyyyyyyyyxxxxxxxxxx // U - 0x10000
W1 = 110110yyyyyyyyyy // 0xD800 + yyyyyyyyyy
W2 = 110111xxxxxxxxxx // 0xDC00 + xxxxxxxxxx
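This arithmetic can be written directly in a few lines of Python. The sketch below is illustrative (the function name encode_utf16_code_units is not from any standard library); it assumes its input is a valid Unicode scalar value and applies exactly the steps above:

def encode_utf16_code_units(cp):
    # Reject surrogates and out-of-range values: not Unicode scalar values.
    if 0xD800 <= cp <= 0xDFFF or cp > 0x10FFFF:
        raise ValueError("not a Unicode scalar value")
    if cp < 0x10000:
        return [cp]                  # BMP: one code unit, equal to the code point
    u = cp - 0x10000                 # 20-bit value in 0x00000..0xFFFFF
    high = 0xD800 + (u >> 10)        # top 10 bits    -> 0xD800..0xDBFF
    low = 0xDC00 + (u & 0x3FF)       # bottom 10 bits -> 0xDC00..0xDFFF
    return [high, low]

assert encode_utf16_code_units(0x10437) == [0xD801, 0xDC37]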
Since the ranges for the high surrogates, low surrogates, and valid BMP characters are disjoint, it is not possible for a surrogate to match a BMP character, or for two adjacent code units to look like a legal surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units. UTF-8 shares these advantages, but many earlier multi-byte encoding schemes did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string. UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte.
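Because the three ranges are disjoint, each 16-bit code unit can be classified in isolation. A minimal Python sketch (the helper name classify is hypothetical) shows why no context is needed:

def classify(unit):
    # Each code unit identifies its own role without examining its neighbors.
    if 0xD800 <= unit <= 0xDBFF:
        return "high surrogate"        # first unit of a pair
    if 0xDC00 <= unit <= 0xDFFF:
        return "low surrogate"         # second unit of a pair
    return "single-unit code point"    # a complete BMP character by itself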
Because the most commonly used characters are all in the BMP, handling of surrogate pairs is often not thoroughly tested. This leads to persistent bugs and potential security holes, even in popular and well-reviewed application software.
U+D800 to U+DFFF (surrogates)
The official Unicode standard says that no UTF forms, including UTF-16, can encode the surrogate code points. Since these will never be assigned a character, there should be no reason to encode them. However, Windows allows unpaired surrogates in filenames and other places, which generally means they have to be supported by software in spite of their exclusion from the Unicode standard.

UCS-2, UTF-8, and UTF-32 can encode these code points in trivial and obvious ways, and a large amount of software does so, even though the standard states that such arrangements should be treated as encoding errors. It is possible to unambiguously encode an unpaired surrogate in the format of UTF-16 by using a code unit equal to the code point. The result is not valid UTF-16, but the majority of UTF-16 encoder and decoder implementations do this when translating between encodings.
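Python's standard codecs expose exactly this "code unit equal to the code point" behavior through the "surrogatepass" error handler, which a short example can demonstrate:

s = "\ud800"                                # a lone (unpaired) high surrogate
# s.encode("utf-16-le") raises UnicodeEncodeError under the default strict rules
b = s.encode("utf-16-le", "surrogatepass")  # b == b'\x00\xd8'
assert b.decode("utf-16-le", "surrogatepass") == s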
Examples
To encode U+10437 (𐐷) to UTF-16:
- Subtract 0x10000 from the code point, leaving 0x00437.
- For the high surrogate, shift right by 10, then add 0xD800, resulting in 0x0001 + 0xD800 = 0xD801.
- For the low surrogate, take the low 10 bits, then add 0xDC00, resulting in 0x0037 + 0xDC00 = 0xDC37.
To decode U+10437 (𐐷) from UTF-16:
- Take the high surrogate (0xD801) and subtract 0xD800, then multiply by 0x400, resulting in 0x0001 × 0x400 = 0x0400.
- Take the low surrogate (0xDC37) and subtract 0xDC00, resulting in 0x37.
- Add these two results together (0x0437), and finally add 0x10000 to get the final code point, 0x10437.
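The same worked example can be checked mechanically. This Python fragment is a sketch of the steps above (not a library routine) and verifies both directions:

u = 0x10437 - 0x10000                   # 0x00437
high = 0xD800 + (u >> 10)               # shift right by 10, add 0xD800 -> 0xD801
low = 0xDC00 + (u & 0x3FF)              # low 10 bits, add 0xDC00       -> 0xDC37
cp = 0x10000 + ((high - 0xD800) << 10) + (low - 0xDC00)   # decode back
assert (high, low, cp) == (0xD801, 0xDC37, 0x10437)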
Byte-order encoding schemes
UTF-16 and UCS-2 produce a sequence of 16-bit code units. Since most communication and storage protocols are defined for bytes, and each unit thus takes two 8-bit bytes, the order of the bytes may depend on the endianness of the computer architecture.

To assist in recognizing the byte order of code units, UTF-16 allows a byte order mark (BOM), a code point with the value U+FEFF, to precede the first actual coded value. If the endian architecture of the decoder matches that of the encoder, the decoder detects the 0xFEFF value, but an opposite-endian decoder interprets the BOM as the noncharacter value U+FFFE reserved for this purpose. This incorrect result provides a hint to perform byte-swapping for the remaining values.
If the BOM is missing, RFC 2781 recommends that big-endian encoding be assumed. In practice, because Windows uses little-endian order by default, many applications assume little-endian encoding. It is also reliable to detect endianness by looking for null bytes, on the assumption that characters less than U+0100 are very common. If more even-indexed bytes (starting at offset 0) are null, then the text is assumed to be big-endian.
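Both strategies combine naturally, as in this Python sketch (the function name guess_utf16_endianness is illustrative; a BOM is preferred, with the null-byte heuristic as a fallback):

import codecs

def guess_utf16_endianness(data):
    # Prefer an explicit byte order mark when one is present.
    if data.startswith(codecs.BOM_UTF16_BE):   # b'\xfe\xff'
        return "be"
    if data.startswith(codecs.BOM_UTF16_LE):   # b'\xff\xfe'
        return "le"
    # Heuristic: text dominated by characters below U+0100 has its null
    # bytes at even offsets (0, 2, 4, ...) when encoded big-endian.
    even_nulls = data[0::2].count(0)
    odd_nulls = data[1::2].count(0)
    # Ties fall back to the little-endian default common in practice.
    return "be" if even_nulls > odd_nulls else "le"

assert guess_utf16_endianness("hi".encode("utf-16-be")) == "be"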
The standard also allows the byte order to be stated explicitly by specifying UTF-16BE or UTF-16LE as the encoding type. When the byte order is specified explicitly this way, a BOM is specifically not supposed to be prepended to the text, and a U+FEFF at the beginning should be handled as a zero-width non-breaking space (ZWNBSP) character. Most applications ignore a BOM in all cases despite this rule.
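Python's built-in codecs illustrate the distinction: the byte-order-neutral name emits and consumes a BOM, while the explicit variants do not, leaving a leading U+FEFF as an ordinary ZWNBSP character:

"A".encode("utf-16-le")               # b'A\x00'  -- no BOM; order stated by the name
"A".encode("utf-16-be")               # b'\x00A'  -- no BOM; order stated by the name
"A".encode("utf-16")                  # BOM first, then native order,
                                      # e.g. b'\xff\xfeA\x00' on little-endian machines
b"\xff\xfeA\x00".decode("utf-16")     # 'A'        -- BOM consumed as a byte order mark
b"\xff\xfeA\x00".decode("utf-16-le")  # '\ufeffA'  -- leading U+FEFF kept as ZWNBSP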
For Internet protocols, IANA has approved "UTF-16", "UTF-16BE", and "UTF-16LE" as the names for these encodings. The aliases UTF_16 or UTF16 may be meaningful in some programming languages or software applications, but they are not standard names in Internet protocols.
Similar designations, UCS-2BE and UCS-2LE, are used to indicate the byte-order variants of UCS-2.