UTF-8

UTF-8 (8-bit Unicode Transformation Format) is a variable-length character encoding for Unicode. Like UTF-16 and UTF-32, UTF-8 can represent every character in the Unicode character set, but unlike them it has the special property of being backwards-compatible with ASCII. For this reason, it is steadily becoming the dominant character encoding for files, e-mail, web pages,[1][2] and other software that manipulates textual information.

UTF-8 encodes each character (code point) in 1 to 4 octets (8-bit bytes). The first 128 characters of the Unicode character set (which correspond directly to the ASCII character set) are encoded using a single octet with the same binary value as in ASCII.
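
This ASCII transparency is easy to demonstrate. The following Python snippet (an illustrative sketch added here, not part of any specification) shows that a pure-ASCII string yields the same bytes under both encodings, while a non-ASCII character needs more than one octet:

    # ASCII text encodes to identical bytes in ASCII and in UTF-8.
    assert "Hello".encode("ascii") == "Hello".encode("utf-8") == b"Hello"

    # Characters outside ASCII need two to four octets in UTF-8.
    assert "¢".encode("utf-8") == b"\xc2\xa2"   # U+00A2 takes two octets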

The Internet Engineering Task Force (IETF) requires all Internet protocols to identify the encoding used for character data, and the supported character encodings must include UTF-8.[3] The Internet Mail Consortium (IMC) recommends that all e-mail programs be able to display and create mail using UTF-8.[4]

History

By early 1992 the search was on for a good byte-stream encoding of multi-byte character sets. The draft ISO 10646 standard contained a non-required annex called UTF-1 that provided a byte-stream encoding of its 32-bit code points. This encoding was not satisfactory on performance grounds, but it did introduce the notion that bytes in the ASCII range of 0–127 represent themselves, thereby providing backward compatibility.

In July 1992, the X/Open committee XoJIG was looking for a better encoding. Dave Prosser of Unix System Laboratories submitted a proposal for one that had faster implementation characteristics and introduced the improvement that 7-bit ASCII characters would only represent themselves; all multibyte sequences would include only bytes where the high bit was set.

In August 1992, this proposal was circulated by an IBM X/Open representative to interested parties. Ken Thompson of the Plan 9 operating system group at Bell Labs then made a crucial modification to the encoding to allow it to be self-synchronizing, meaning that it was not necessary to read from the beginning of the string to find code point boundaries. Thompson's design was outlined on September 2, 1992, on a placemat in a New Jersey diner with Rob Pike. Over the following days, Pike and Thompson implemented it and updated Plan 9 to use it throughout, and then communicated their success back to X/Open.[5]

UTF-8 was first officially presented at the USENIX conference in San Diego, from January 25–29, 1993.

Description

The UTF-8 encoding is variable-width, with each character represented by 1 to 4 bytes. Each byte has 0 to 4 leading consecutive '1' bits followed by a '0' bit to indicate its type: no leading '1' bit marks a single-byte character, a single leading '1' bit marks a continuation byte, and 2 or more leading '1' bits mark the first byte of a sequence of that many bytes. The scalar value of the Unicode code point is the concatenation of the non-control bits. In the table below, literal zeros and ones represent control bits, each x represents one of the lowest 8 bits of the Unicode value, y represents the next higher 8 bits, and z represents the bits higher than that.

U+0000 to U+007F (binary 00000000 to 01111111)
  Encoded bytes: 0xxxxxxx
  Example: '$' U+0024 (00100100) encodes as 00100100 = 0x24

U+0080 to U+07FF (binary 00000000 10000000 to 00000111 11111111)
  Encoded bytes: 110yyyxx 10xxxxxx
  Example: '¢' U+00A2 (00000000 10100010) encodes as 11000010 10100010 = 0xC2 0xA2

U+0800 to U+FFFF (binary 00001000 00000000 to 11111111 11111111)
  Encoded bytes: 1110yyyy 10yyyyxx 10xxxxxx
  Example: '€' U+20AC (00100000 10101100) encodes as 11100010 10000010 10101100 = 0xE2 0x82 0xAC

U+010000 to U+10FFFF (binary 00000001 00000000 00000000 to 00010000 11111111 11111111)
  Encoded bytes: 11110zzz 10zzyyyy 10yyyyxx 10xxxxxx
  Example: '𤭢' U+024B62 (00000010 01001011 01100010) encodes as 11110000 10100100 10101101 10100010 = 0xF0 0xA4 0xAD 0xA2

So the first 128 characters (US-ASCII) need one byte. The next 1,920 characters need two bytes to encode. This includes Latin letters with diacritics and characters from Greek, Cyrillic, Coptic, Armenian, Hebrew, Arabic, Syriac and Tāna alphabets. Three bytes are needed for the rest of the Basic Multilingual Plane (which contains virtually all characters in common use). Four bytes are needed for characters in the other planes of Unicode, which include less common CJK characters and various historic scripts.
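
The byte templates in the table translate directly into code. The following Python function is an illustrative sketch of an encoder for a single code point (not a normative algorithm from any of the standards; a production encoder would also reject the surrogate range discussed below):

    def utf8_encode_codepoint(cp: int) -> bytes:
        """Encode one code point following the byte templates above (sketch)."""
        if cp < 0x80:                      # U+0000..U+007F: one byte
            return bytes([cp])
        if cp < 0x800:                     # U+0080..U+07FF: two bytes
            return bytes([0xC0 | cp >> 6,
                          0x80 | cp & 0x3F])
        if cp < 0x10000:                   # U+0800..U+FFFF: three bytes
            return bytes([0xE0 | cp >> 12,
                          0x80 | (cp >> 6) & 0x3F,
                          0x80 | cp & 0x3F])
        if cp <= 0x10FFFF:                 # U+010000..U+10FFFF: four bytes
            return bytes([0xF0 | cp >> 18,
                          0x80 | (cp >> 12) & 0x3F,
                          0x80 | (cp >> 6) & 0x3F,
                          0x80 | cp & 0x3F])
        raise ValueError("code point beyond U+10FFFF")

    assert utf8_encode_codepoint(0x20AC) == b"\xe2\x82\xac"   # '€', as in the table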

By continuing the pattern given above it is possible to deal with much larger numbers. The original specification allowed for sequences of up to six bytes, covering numbers up to 31 bits (the original limit of the Universal Character Set). In November 2003, however, RFC 3629 restricted UTF-8 to the range covered by the formal Unicode definition, U+0000 to U+10FFFF.

With these restrictions, bytes in a UTF-8 sequence have the following meanings. Some values can never appear in a legal UTF-8 sequence; others may appear only as a single-byte character, only as the first byte of a multi-byte sequence, or only as the second or a later byte of a multi-byte sequence (a short code sketch restating the table as a lookup follows it):

Binary            Hex   Decimal  Interpretation
00000000-01111111 00-7F 0-127 Single-byte encoding (compatible with US-ASCII)
10000000-10111111 80-BF 128-191 Second, third, or fourth byte of a multi-byte sequence
11000000-11000001 C0-C1 192-193 Overlong encoding: start of 2-byte sequence, but would encode a code point ≤ 127
11000010-11011111 C2-DF 194-223 Start of 2-byte sequence
11100000-11101111 E0-EF 224-239 Start of 3-byte sequence
11110000-11110100 F0-F4 240-244 Start of 4-byte sequence
11110101-11110111 F5-F7 245-247 Restricted by RFC 3629: start of 4-byte sequence for a code point above U+10FFFF
11111000-11111011 F8-FB 248-251 Restricted by RFC 3629: start of 5-byte sequence
11111100-11111101 FC-FD 252-253 Restricted by RFC 3629: start of 6-byte sequence
11111110-11111111 FE-FF 254-255 Invalid: not defined by original UTF-8 specification
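
Read as a lookup, this table also gives the expected length of a sequence from its first byte alone. A small Python classifier, sketched here for illustration:

    def utf8_sequence_length(lead: int) -> int:
        """Expected sequence length for a lead byte, or 0 if illegal (sketch)."""
        if lead <= 0x7F:             # 00-7F: single-byte (US-ASCII)
            return 1
        if 0xC2 <= lead <= 0xDF:     # C2-DF: start of 2-byte sequence
            return 2
        if 0xE0 <= lead <= 0xEF:     # E0-EF: start of 3-byte sequence
            return 3
        if 0xF0 <= lead <= 0xF4:     # F0-F4: start of 4-byte sequence
            return 4
        return 0                     # continuation byte, C0/C1, or F5-FF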

Invalid byte sequences

Not all sequences of bytes are valid UTF-8. A UTF-8 decoder should be prepared for:

- invalid bytes that can never appear (such as 0xC0, 0xC1, or 0xF5-0xFF)
- an unexpected continuation byte
- a start byte that is not followed by enough continuation bytes
- an overlong encoding, where a sequence decodes to a code point that a shorter sequence could have encoded
- a sequence that decodes to an invalid code point (see the next section)

Many early decoders would happily try to decode these. Carefully crafted invalid UTF-8 could make them either skip or create ASCII characters such as NUL, slash, or quotes. Invalid UTF-8 has been used to bypass security validations in high-profile products including Microsoft's IIS web server.[6]

RFC 3629 states "Implementations of the decoding algorithm MUST protect against decoding invalid sequences."[7] The Unicode Standard requires decoders to "...treat any ill-formed code unit sequence as an error condition. This guarantees that it will neither interpret nor emit an ill-formed code unit sequence." Many UTF-8 decoders throw an exception on encountering an error. One example was Python 3.0, which would exit immediately if the command line contained invalid UTF-8.[8] In some cases, though, being unable to work with the data at all means you cannot even try to fix it. Another option is to translate the first byte to a replacement and continue parsing with the next byte. Popular replacements are:

- the replacement character '�' (U+FFFD)
- the symbols '?' or '¿'
- interpreting the bytes according to a legacy encoding such as ISO-8859-1 or CP1252

Replacing errors is "lossy": more than one UTF-8 string converts to the same Unicode result. Therefore the original UTF-8 should be stored, and translation should only be used when displaying the text to the user.
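
Python's built-in UTF-8 codec, used here only as a convenient example implementation, exposes all three behaviors: strict decoding raises an exception, errors="replace" substitutes U+FFFD lossily, and errors="surrogateescape" (the PEP 383 mechanism cited above[8]) preserves the original bytes so the exact input can be regenerated:

    bad = b"abc \xc0\x80 def"             # C0 80 is an invalid (overlong) sequence

    # Strict decoding treats the ill-formed sequence as an error.
    try:
        bad.decode("utf-8")
    except UnicodeDecodeError as exc:
        print(exc)                        # invalid start byte at offset 4

    # Lossy decoding: each bad byte becomes U+FFFD; the originals are gone.
    print(bad.decode("utf-8", errors="replace"))      # abc ?? def, with U+FFFD

    # PEP 383: map bad bytes to lone surrogates so re-encoding is lossless.
    text = bad.decode("utf-8", errors="surrogateescape")
    assert text.encode("utf-8", errors="surrogateescape") == bad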

Invalid code points

UTF-8 may only legally be used to encode valid Unicode scalar values. According to the Unicode standard the high and low surrogate halves used by UTF-16 (U+D800 through U+DFFF) and values above U+10FFFF are not legal Unicode values, and the UTF-8 encoding of them is an invalid byte sequence and should be treated as described above.

Whether an actual application should do this with surrogate halves is debatable. Accepting them permits lossless storage of invalid UTF-16 and allows CESU-8 (described below) to be decoded. There are other code points that are far more important to detect and reject, such as the reversed BOM U+FFFE, or the C1 controls, produced by improper conversion of CP1252 text or by double-encoding of UTF-8. These are invalid in HTML.
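
As described above, a strict decoder rejects the UTF-8 encoding of surrogate halves. Python's UTF-8 codec, taken again as an example implementation, refuses them in both directions:

    # Encoding a lone surrogate half is refused...
    try:
        "\ud800".encode("utf-8")
    except UnicodeEncodeError as exc:
        print(exc)

    # ...and so is decoding the three-byte sequence that would produce one.
    try:
        b"\xed\xa0\x80".decode("utf-8")   # would decode to U+D800
    except UnicodeDecodeError as exc:
        print(exc)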

Official name and incorrect variants

The official name is "UTF-8". All letters are upper-case, and the name is hyphenated. This spelling is used in all official documents relating to the encoding.

Alternatively, the name "utf-8" may be used by all standards conforming to the Internet Assigned Numbers Authority (IANA) list[9] (which include CSS, HTML, XML, and HTTP headers),[10] as the declaration is case-insensitive.

Other variants that omit the hyphen or replace it with a space, such as "utf8" or "UTF 8", are not accepted as correct by any standard. Despite this, most user agents, such as web browsers, understand them.

UTF-8 derivations

The following encodings differ slightly from the UTF-8 specification and are therefore incompatible with it.

CESU-8

Many pieces of software added UTF-8 conversions for UCS-2 data and did not alter their conversion when UCS-2 was replaced with the surrogate-pair-supporting UTF-16. The result is that each half of a UTF-16 surrogate pair is encoded as its own 3-byte UTF-8 sequence, resulting in 6 bytes rather than 4 for characters outside the Basic Multilingual Plane. Oracle databases use this encoding, as do Java and Tcl as described below, and probably a great deal of other Windows software whose programmers were unaware of the complexities of UTF-16. Although most usage is accidental, a supposed benefit is that this preserves UTF-16 binary sorting order when CESU-8 is binary sorted.
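
The effect can be reproduced in Python by encoding the two UTF-16 surrogate halves individually with the surrogatepass error handler (an illustrative sketch; Python has no built-in CESU-8 codec):

    # U+10400 is represented in UTF-16 by the surrogate pair D801 DC00.
    cesu8 = "\ud801\udc00".encode("utf-8", errors="surrogatepass")
    print(cesu8.hex())                          # eda081edb080: six bytes

    # Correct UTF-8 uses a single four-byte sequence instead.
    print("\U00010400".encode("utf-8").hex())   # f0909080: four bytes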

Modified UTF-8

In Modified UTF-8[11] the null character (U+0000) is encoded as the two bytes 0xC0 0x80; this is not valid UTF-8[12] because it is not the shortest possible representation. Modified UTF-8 strings therefore never contain any null bytes,[13] which allows them (with a null byte added to the end) to be processed by traditional ASCIIZ string functions while still allowing every Unicode value, including U+0000, to appear in the string.

All known Modified UTF-8 implementations also treat the surrogate pairs as in CESU-8.
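
A minimal Python sketch of such an encoder (illustrative only; in Java this format is produced by the DataInput/DataOutput machinery[11]): it walks the string as UTF-16 code units, so surrogate halves naturally get the CESU-8 treatment, and it maps U+0000 to the two-byte form:

    def modified_utf8_encode(s: str) -> bytes:
        """Sketch of Modified UTF-8: CESU-8 plus a two-byte U+0000."""
        out = bytearray()
        units = s.encode("utf-16-be", errors="surrogatepass")
        for i in range(0, len(units), 2):
            cu = (units[i] << 8) | units[i + 1]   # one UTF-16 code unit
            if cu == 0x0000:
                out += b"\xc0\x80"                # NUL: overlong form, no 0x00 byte
            elif cu < 0x80:
                out.append(cu)
            elif cu < 0x800:
                out += bytes([0xC0 | cu >> 6, 0x80 | cu & 0x3F])
            else:                                 # includes surrogate halves (CESU-8)
                out += bytes([0xE0 | cu >> 12,
                              0x80 | (cu >> 6) & 0x3F,
                              0x80 | cu & 0x3F])
        return bytes(out)

    assert b"\x00" not in modified_utf8_encode("a\x00b")   # safe for ASCIIZ APIs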

In normal usage, the Java programming language supports standard UTF-8 when reading and writing strings through InputStreamReader and OutputStreamWriter. However, it uses Modified UTF-8 for object serialization,[14] for the Java Native Interface,[15] and for embedding constant strings in class files.[16] Tcl also uses the same Modified UTF-8[17] as Java for the internal representation of Unicode data, but uses strict CESU-8 for external data.

Byte order mark

Many Windows programs (including Windows Notepad) add the bytes 0xEF, 0xBB, 0xBF at the start of any document saved as UTF-8. This is the UTF-8 encoding of the Unicode byte order mark (BOM), and is commonly referred to as a UTF-8 BOM even though it is not relevant to byte order. The BOM can also appear if another encoding with a BOM is translated to UTF-8 without stripping it.

The presence of the UTF-8 BOM may cause interoperability problems with existing software that could otherwise handle UTF-8, for example:

- Software that requires specific leading bytes, such as the '#!' at the start of a Unix shell script, will fail if a BOM precedes them.
- Programs that do not expect a BOM display the three bytes as garbage, often as 'ï»¿' when the text is misread as CP1252 or ISO-8859-1.
- Concatenating files can leave a BOM in the middle of the result, where some software rejects it or displays it as an ordinary character.

If compatibility with existing programs is not important, the BOM could be used to distinguish UTF-8 from a legacy encoding, but this is still problematic: the BOM is often added or removed without the encoding actually changing, and files in various encodings are sometimes concatenated together. Checking whether the text is valid UTF-8 is more reliable than checking for a BOM.
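
Tolerating or stripping the BOM is straightforward in most environments; for example, Python ships a "utf-8-sig" codec that consumes a leading BOM if one is present (shown here as one common approach, not the only one):

    BOM = b"\xef\xbb\xbf"
    data = BOM + "text".encode("utf-8")

    # The plain utf-8 codec keeps the BOM as a visible U+FEFF character...
    assert data.decode("utf-8") == "\ufefftext"

    # ...while utf-8-sig strips it if present and is harmless if absent.
    assert data.decode("utf-8-sig") == "text"
    assert b"text".decode("utf-8-sig") == "text"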

Advantages and disadvantages

General

Advantages

Disadvantages

Compared to single-byte encodings

Advantages

Disadvantages

Compared to other multi-byte encodings

Advantages

Disadvantages

Compared to UTF-16

Advantages

Disadvantages

See also

References

  1. "Moving to Unicode 5.1". Official Google Blog. 2008-05-05. http://googleblog.blogspot.com/2008/05/moving-to-unicode-51.html. Retrieved 2008-05-08. 
  2. "Usage of character encodings for websites". W3Techs. http://w3techs.com/technologies/overview/character_encoding/all. Retrieved 2010-03-30. 
  3. Alvestrand, H. (1998). "IETF Policy on Character Sets and Languages". RFC 2277. Internet Engineering Task Force 
  4. "Using International Characters in Internet Mail". Internet Mail Consortium. August 1, 1998. http://www.imc.org/mail-i18n.html. Retrieved 2007-11-08. 
  5. Pike, Rob (2003-04-03). "UTF-8 history". http://www.cl.cam.ac.uk/~mgk25/ucs/utf-8-history.txt. 
  6. Marin, Marvin (2000-10-17). "Web Server Folder Traversal MS00-078". http://www.sans.org/resources/malwarefaq/wnt-unicode.php. 
  7. Yergeau, F. (2003). "UTF-8, a transformation format of ISO 10646". RFC 3629. Internet Engineering Task Force 
  8. "Non-decodable Bytes in System Character Interfaces". http://www.python.org/dev/peps/pep-0383/. 
  9. Internet Assigned Numbers Authority Character Sets
  10. W3C: Setting the HTTP charset parameter notes that the IANA list is used for HTTP
  11. "Java SE 6 documentation for Interface java.io.DataInput, subsection on Modified UTF-8". Sun Microsystems. 2008. http://java.sun.com/javase/6/docs/api/java/io/DataInput.html#modified-utf-8. Retrieved 2009-05-22. 
  12. "[...] the overlong UTF-8 sequence C0 80 [...]", "[...] the illegal two-octet sequence C0 80 [...]""Request for Comments 3629: "UTF-8, a transformation format of ISO 10646"". 2003. http://www.apps.ietf.org/rfc/rfc3629.html#page-5. Retrieved 2009-05-22. 
  13. "[...] Java virtual machine UTF-8 strings never have embedded nulls.""The Java Virtual Machine Specification, 2nd Edition, section 4.4.7: "The CONSTANT_Utf8_info Structure"". Sun Microsystems. 1999. http://java.sun.com/docs/books/jvms/second_edition/html/ClassFile.doc.html#7963. Retrieved 2009-05-24. 
  14. "[...] encoded in modified UTF-8.""Java Object Serialization Specification, chapter 6: Object Serialization Stream Protocol, section 2: Stream Elements". Sun Microsystems. 2005. http://java.sun.com/javase/6/docs/platform/serialization/spec/protocol.html#8299. Retrieved 2009-05-22. 
  15. "The JNI uses modified UTF-8 strings to represent various string types.""Java Native Interface Specification, chapter 3: JNI Types and Data Structures, section: Modified UTF-8 Strings". Sun Microsystems. 2003. http://java.sun.com/j2se/1.5.0/docs/guide/jni/spec/types.html#wp16542. Retrieved 2009-05-22. 
  16. "[...] differences between this format and the "standard" UTF-8 format.""The Java Virtual Machine Specification, 2nd Edition, section 4.4.7: "The CONSTANT_Utf8_info Structure"". Sun Microsystems. 1999. http://java.sun.com/docs/books/jvms/second_edition/html/ClassFile.doc.html#7963. Retrieved 2009-05-23. 
  17. "In orthodox UTF-8, a NUL byte(\x00) is represented by a NUL byte. [...] But [...] we [...] want NUL bytes inside [...] strings [...]""Tcler's Wiki: UTF-8 bit by bit (Revision 6)". 2009-04-25. http://wiki.tcl.tk/_/revision?N=1211&V=6. Retrieved 2009-05-22. 
  18. W3.org
  19. W3 FAQ: Multilingual Forms: a Perl regular expression to validate a UTF-8 string
  20. There are 256 × 256 − 128 × 128 not-pure-ASCII two-byte sequences, and of those, only 1,920 encode valid UTF-8 characters (the range U+0080 to U+07FF), so the proportion of valid not-pure-ASCII two-byte sequences is 3.9%. Similarly, there are 256 × 256 × 256 − 128 × 128 × 128 not-pure-ASCII three-byte sequences, and 61,406 valid three-byte UTF-8 sequences (U+0800 to U+FFFF minus surrogate pairs and non-characters), so the proportion is 0.41%; finally, there are 256^4 − 128^4 non-ASCII four-byte sequences, and 1,048,544 valid four-byte UTF-8 sequences (U+010000 to U+10FFFF minus non-characters), so the proportion is 0.026%. Note that this assumes that control characters pass as ASCII; without the control characters, the proportions drop somewhat.
  21. Tools.ietf.org
  22. The version from 2009-04-27 of ja:UTF-8 needed 50 kB when saved (as UTF-8), but when converted to UTF-16 (with Notepad) it took 81 kB, with a similar result for the corresponding Korean article

External links

There are several current definitions of UTF-8 in various standards documents:

- RFC 3629 / STD 63 (2003), which establishes UTF-8 as a standard Internet protocol element
- The Unicode Standard, Section 3.9 "Unicode Encoding Forms"
- ISO/IEC 10646:2003 Annex D

They supersede the definitions given in the following obsolete works:

- ISO/IEC 10646-1:1993 Amendment 2 / Annex R (1996)
- The Unicode Standard, Version 2.0, Appendix A (1996)
- RFC 2044 (1996)
- RFC 2279 (1998)
- The Unicode Standard, Version 3.0, Section 2.3 (2000), and Corrigendum #1: UTF-8 Shortest Form (2000)

They are all the same in their general mechanics, with the main differences being on issues such as allowed range of code point values and safe handling of invalid input.