48

Here is the code:

>>> z = u'\u2022'.decode('utf-8', 'ignore')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/encodings/utf_8.py", line 16, in decode
    return codecs.utf_8_decode(input, errors, True)
UnicodeEncodeError: 'latin-1' codec can't encode character u'\u2022' in position 0: ordinal not in range(256)

Why is UnicodeEncodeError raised when I am using .decode?

Why is any error raised when I am using 'ignore'?

Flimm
Facundo Casco
  • Note this wouldn't happen in Python 3. In Python 3, running that code would give you this error instead: `AttributeError: 'str' object has no attribute 'decode'`. This is one of the advantages of Python 3: it enforces the distinction between string/unicode objects and bytes objects. You use `decode` to convert a bytes object to a string, and you use `encode` to convert a string to a bytes object, and the distinction between `decode` and `encode` is much easier to grasp. – Flimm Mar 07 '22 at 09:00

3 Answers

66

When I first started messing around with Python strings and Unicode, it took me a while to understand the jargon of decode and encode too, so here's an explanation that may help:


Think of decoding as what you do to go from a regular bytestring to unicode and encoding as what you do to get back from unicode. In other words:

You de-code a str to produce a unicode string (in Python 2)

and en-code a unicode string to produce a str (in Python 2)

So:

unicode_char = u'\xb0'

encodedchar = unicode_char.encode('utf-8')

encodedchar will contain your unicode character, represented in the selected encoding (in this case, UTF-8).
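For example, here is roughly what that looks like in a Python 2 session (an illustrative sketch; the degree sign U+00B0 becomes the two bytes C2 B0 in UTF-8):

>>> unicode_char = u'\xb0'              # DEGREE SIGN as a unicode object
>>> encodedchar = unicode_char.encode('utf-8')
>>> encodedchar                         # a plain byte string (str) in Python 2
'\xc2\xb0'
>>> encodedchar.decode('utf-8') == unicode_char
True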

The same principle applies to Python 3. You de-code a bytes object to produce a str object. And you en-code a str object to produce a bytes object.
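A rough Python 3 equivalent of the same round trip, where `str` is always Unicode text and the encoded result is a `bytes` object:

>>> unicode_char = '\xb0'               # str holds Unicode text in Python 3
>>> encodedchar = unicode_char.encode('utf-8')
>>> encodedchar                         # a bytes object
b'\xc2\xb0'
>>> encodedchar.decode('utf-8') == unicode_char
True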

Flimm
Aphex
  • Python 3 has a much clearer notion of encoded **byte-arrays** and abstract (Unicode) character **strings**. – ulidtko Feb 25 '11 at 00:04
  • It should be noted that this is surely the correct answer to what must've been F.C.'s underlying problem. But people who come here because they hit this seemingly paradoxical behavior without noticing that a small fraction of the strings they try to decode are already Unicode strings are probably better served by the other answers. – Dawn Drescher Jan 29 '15 at 10:44
  • `u'KEEP ME ㉃‰䥈啌ੁ剆䕅 KEEP ME ALSO'.encode('utf-8').decode('ascii','ignore') # worked for me` – David Kierans Oct 16 '16 at 04:07
  • @DaveKierans That will throw away all non-ascii characters in the string (those Chinese ones, for example). Make sure that's what you want! – Aphex Oct 18 '16 at 19:17
21

From http://wiki.python.org/moin/UnicodeEncodeError

Paradoxically, a UnicodeEncodeError may happen when decoding. The cause of it seems to be the coding-specific decode() functions that normally expect a parameter of type str. It appears that on seeing a unicode parameter, the decode() functions "down-convert" it into str, then decode the result assuming it to be of their own coding. It also appears that the "down-conversion" is performed using the ASCII encoder. Hence an encoding failure inside a decoder.
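In other words, under Python 2 the call in the question behaves roughly like this sketch, where the unicode object is first encoded with the interpreter's default codec (usually ASCII; the latin-1 in the question's traceback suggests a customised default) before the UTF-8 decoder ever sees it:

>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> u'\u2022'.encode(sys.getdefaultencoding()).decode('utf-8', 'ignore')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeEncodeError: 'ascii' codec can't encode character u'\u2022' in position 0: ordinal not in range(128)

This is also why the 'ignore' argument has no effect: the failure happens in the implicit encode step, before the decoder and its error handler ever run.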

Facundo Casco
  • This seems like pure madness. If you call decode() on a unicode object, I would expect it to simply return the object as it was, since clearly it is already a unicode object... – rkrzr Apr 24 '15 at 13:37
  • @rkrzr In Python 3, you can't call `decode` on a str/unicode object, only on bytes objects. – Flimm Mar 07 '22 at 09:27
6

You're trying to decode something that is already unicode. The implicit encoding back to a byte string, which has to happen first so that the decode can work, is what's failing.
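If you can't control whether a given value is a byte string or already unicode, a minimal Python 2 sketch of the usual workaround is to decode only actual byte strings and pass unicode objects through untouched (the helper name `to_unicode` is just for illustration):

def to_unicode(value, encoding='utf-8'):
    # In Python 2, str is the byte string type; unicode objects need no decoding.
    if isinstance(value, str):
        return value.decode(encoding)
    return value

With this, `to_unicode('\xe2\x80\xa2')` and `to_unicode(u'\u2022')` both return `u'\u2022'` instead of raising the error above.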

Ignacio Vazquez-Abrams