
I was happy in my Python world, knowing that I was doing everything in Unicode and encoding as UTF-8 when I needed to output something to a user. Then one of my colleagues sent me the "UTF-8 Everywhere" manifesto (2012), and it confused me.

  • The author of the article claims a number of times that UCS-2, the Unicode representation that Python uses, is synonymous with UTF-16.
    • He even goes as far as directly saying Python uses UTF-16 for internal string representation.
  • The author also admits to being a Windows lover and developer, and states that the way MS has handled character encodings over the years has led to that group being the most confused, so maybe it is just his own confusion. I don't know...

Can somebody please explain what the state of UTF-16 vs. Unicode is in Python? Are they synonymous, and if not, in what way?

Endophage
  • Why are you concerned with Python's _internal_ string representation? The point of that site is to convince developers to use UTF-8 in all of the code they write - and you're not developing Python internals, are you? – Matt Ball Oct 26 '12 at 22:58
  • UCS-2 and UTF-16 *are not the same*. UCS-2 is obsolete as it doesn't encode all of the Unicode code points. – Mark Ransom Oct 26 '12 at 23:00
  • @MattBall SO is about developers sharing knowledge (and helping each other out). This is something that interests me. Do I need any more reason to ask this question? – Endophage Oct 26 '12 at 23:08
  • @MarkRansom if you'd like to post an answer including that and your point from your comment below, I'd happily give you an upvote. – Endophage Oct 26 '12 at 23:16
  • Thanks for the support, but I don't need it. The existing answer has already incorporated everything I had to say, and did a good job of it too. – Mark Ransom Oct 26 '12 at 23:18
  • @MattBall Incidentally, I've had to patch cgi.py in the Python 2.7 core I use in production to correctly handle Content-Disposition headers from Windows 8 Metro so while not directly involved with character encodings, I have actually worked on Python internals... When I get the time, I'll be submitting that patch assuming nobody beats me to it. – Endophage Oct 26 '12 at 23:26
  • Have you read [The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)](http://www.joelonsoftware.com/articles/Unicode.html)? – delnan Oct 26 '12 at 23:27
  • @delnan I had not. Thanks for the linky! – Endophage Oct 26 '12 at 23:29

1 Answer


The internal representation of a Unicode string in Python (versions 2.2 through 3.2) depends on whether Python was compiled in wide or narrow mode. Most Python builds are narrow (you can check with sys.maxunicode: it is 65535 on narrow builds and 1114111 on wide builds).
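
For example, a quick check (shown here assuming a narrow 2.x build; a wide build prints 1114111 instead):

>>> import sys
>>> sys.maxunicode
65535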

With a wide build, strings are internally sequences of 4-byte wide characters, i.e. they use the UTF-32 encoding. All code points are exactly one wide character in length.
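
For instance, on a wide build (assuming a Python 2.7 configured with --enable-unicode=ucs4, as many Linux distributions ship), a supplementary character is a single element:

>>> q = u'\U00010000'
>>> len(q)
1
>>> q[0]
u'\U00010000'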

With a narrow build, strings are internally sequences of 2-byte wide characters, using UTF-16. Characters beyond the BMP (code points U+10000 and above) are stored using the usual UTF-16 surrogate pairs:

>>> q = u'\U00010000'
>>> len(q)
2
>>> q[0]
u'\ud800'
>>> q[1]
u'\udc00'
>>> q
u'\U00010000'

Note that UTF-16 and UCS-2 are not the same. UCS-2 is a fixed-width encoding: every code point is encoded as 2 bytes, so UCS-2 cannot encode code points beyond the BMP. UTF-16 is a variable-width encoding; code points outside the BMP are encoded using a pair of 16-bit code units, called a surrogate pair.
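
You can see the variable width directly by encoding to UTF-16 (a small illustration in Python 2 syntax; it behaves the same on narrow and wide builds): a BMP code point takes 2 bytes, while a supplementary code point takes a 4-byte surrogate pair.

>>> u'\uffff'.encode('utf-16-be')
'\xff\xff'
>>> u'\U00010000'.encode('utf-16-be')
'\xd8\x00\xdc\x00'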


Note that this all changes in 3.3, with the implementation of PEP 393. Unicode strings are now represented using units just wide enough to hold the largest code point in the string -- 1 byte per character for Latin-1 (including ASCII) strings, 2 bytes for strings within the BMP, and 4 bytes otherwise. This does away with the wide/narrow divide and also reduces memory usage when many ASCII-only strings are used.
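
As a rough illustration (assuming CPython 3.3 or later; sys.getsizeof includes per-object overhead, so exact byte counts vary by version and platform, but the ordering holds):

>>> import sys
>>> len('\U00010000')
1
>>> sys.getsizeof('a' * 1000) < sys.getsizeof('\u20ac' * 1000) < sys.getsizeof('\U00010000' * 1000)
True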

nneonneo
  • In 3.3 they introduced a more flexible scheme where the size of each character is determined by the largest codepoint in the string. ASCII strings are only 8 bits per character, and there's no more wide/narrow mode. – Mark Ransom Oct 26 '12 at 23:07
  • Thanks. I amended my answer to include these details. (I was reading the relevant PEP when you posted the comment :) – nneonneo Oct 26 '12 at 23:14
  • So if I understand you right, it's really a hybrid representation internally. UCS-2 covering the BMP then UTF-16 beyond that. However, Python still calls it "Unicode". Yes? – Endophage Oct 26 '12 at 23:15
  • Pre-3.3, it's straight UTF-16/UTF-32. In 3.3, it's a hybrid of "UCS-1", UCS-2, and UCS-4. – nneonneo Oct 26 '12 at 23:20
  • Note that UTF-16 and UCS-2 are different encodings: UCS-2 is fixed-width, and UTF-16 is variable width. UCS-2 cannot encode as many characters as UTF-16. The encodings for every valid Unicode codepoint up to U+FFFF are the same in UCS-2 and UTF-16. – nneonneo Oct 26 '12 at 23:33
  • Sorry, deleted that comment when I came across some other info. So yes, UTF-16 is a superset of UCS-2. Thanks for the info! – Endophage Oct 26 '12 at 23:34
  • A narrow build uses UCS2, plus surrogate pairs. It doesn't treat surrogates as a single code point. That could catch someone off guard. 3.3 solves the problem, but 2.7-3.2 will be in use for a long time. – Eryk Sun Oct 27 '12 at 00:07