
I'm reading in a file with Python's csv module, and have Yet Another Encoding Question (sorry, there are so many on here).

In the CSV file, there are £ signs. After reading the row in and printing it, they have become \xa3.

Trying to convert them to Unicode produces a UnicodeDecodeError:

row = [unicode(x.strip()) for x in row]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xa3 in position 0: ordinal not in range(128)

I have been reading the csv documentation and the numerous other questions about this on StackOverflow. I think that £ becoming \xa3 in ASCII means that the original CSV file is in UTF-8.

(Incidentally, is there a quick way to check the encoding of a CSV file?)

If it's in UTF-8, then shouldn't the csv module be able to cope with it? It seems to be transforming all the symbols into ASCII, even though the documentation claims it accepts UTF-8.

I've tried adding a unicode_csv_reader function as described in the csv examples, but it doesn't help.

---- EDIT -----

I should clarify one thing. I have seen this question, which looks very similar. But adding the unicode_csv_reader function defined there produces a different error instead:

yield [unicode(cell, 'utf-8') for cell in row]
UnicodeDecodeError: 'utf8' codec can't decode byte 0xa3 in position 8: unexpected code byte

So maybe my file isn't UTF8 after all? How can I tell?
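For reference, both failures from the tracebacks above can be reproduced directly on the offending byte (this snippet is mine, not from the original file):

```python
pound = b"\xa3"  # the byte shown in both tracebacks

def try_decode(data, codec):
    # Return the decoded text, or a failure message if the codec rejects the bytes
    try:
        return data.decode(codec)
    except UnicodeDecodeError as e:
        return "failed: %s" % e

ascii_result = try_decode(pound, "ascii")    # fails: ASCII stops at 0x7f
utf8_result = try_decode(pound, "utf-8")     # fails: \xa3 is a continuation byte with no lead byte
latin1_result = try_decode(pound, "latin-1") # succeeds: Latin-1 maps all 256 byte values
```

So a lone `\xa3` rules out both ASCII and UTF-8, but decodes fine under any of the single-byte Latin encodings.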

2 Answers


Try using "ISO-8859-1" as your encoding. It seems like you are dealing with extended ASCII, not Unicode.

Edit:

Here's some simple code that deals with extended ASCII:

>>> s = "La Pe\xf1a"
>>> print s
La Pe±a
>>> print s.decode("latin-1")
La Peña
>>>

Even better, dealing with the exact character that is giving you problems:

>>> s = "12\xa3"
>>> print s.decode("latin-1")
12£
>>>
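Applied to the CSV reading itself, the same decode can be done per cell; a minimal sketch (the sample cells are made up, and `.decode` works the same way on Python 2 byte strings):

```python
# Hypothetical raw cells as the csv module hands them back (byte strings)
raw_row = [b"12\xa3", b"La Pe\xf1a"]

# Decode each cell as Latin-1 instead of relying on the default ASCII codec
row = [cell.decode("latin-1").strip() for cell in raw_row]
```

This is the same shape as the `unicode_csv_reader` from the csv docs, just with `'latin-1'` in place of `'utf-8'`.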
  • Do you mean use: yield [unicode(cell, 'ISO-8859-1') for cell in row] instead, in the unicode_csv_reader function? Unfortunately that doesn't help - back to the ordinal not in range(128) error again. – AP257 Aug 13 '10 at 19:18
  • It wouldn't make much sense to use a function called unicode() when dealing with ASCII. What I am saying is that you are dealing with a file that is encoded using a "ISO-8859-1" encoding. I didn't post any code, because I don't know how to do it off the top of my head, but your problem is that you need to decode it as ISO-8859-1, not Unicode. – riwalk Aug 13 '10 at 19:21
  • OK, thanks. I'll investigate. How did you know it was ISO-8859-1? In other words, is there a way for me to check encodings myself, rather than just ask dumb questions on StackOverflow :) – AP257 Aug 13 '10 at 19:24
  • Not a dumb question at all. I had to work on a project where we were working on a web scraping tool, and we needed to scrape international sites. I spent two full weeks immersing myself in the intricate details of encoding, and to this day I am one of the few at my workplace who has a firm grasp over them. – riwalk Aug 13 '10 at 19:25
  • @Stargazer: (1) UTF-8 is not Unicode. (2) ISO-8859-n maps `\xa3` to `U+00A3 POUND SIGN` for n in (1, 3, 7, 8, 9, 13, 14, 15). Please answer the OP's question: How did you "know" it was ISO-8859-1? – John Machin Aug 13 '10 at 22:05
  • @John Machin: (1) - I don't really care. (2) - The character being larger than 127 implies that it is not ascii, and the fact that it is not decoding as Unicode or UTF-8 implies that it is most likely some form of extended ASCII. From personal experience, I've seen ISO-8859-1 is one of the most popular encodings for those who speak Western-style languages (English, Spanish, French, German, etc.). How did I "know"? I didn't. I went with what was most likely, which worked just fine. – riwalk Aug 16 '10 at 15:36
  • @Stargazer712: (1) It's true, whether you care or not -- "decode as Unicode" is meaningless (2) Those Spanish, French & German speakers would be making do with the generic currency symbol ¤ instead of the Euro symbol € -- perhaps time that they updated to ISO-8859-15 (or UTF-8!). My experience is that often people use ISO-8859-1 because it *appears* to work -- they never get an exception from `any_byte_string.decode('latin1')` and they don't assert `not any(u'\u0080' <= char <= u'\u009f' for char in decode_result)` – John Machin Aug 16 '10 at 22:34
  • 2
    @AP257 This is old, but you can check the charset on linux/unix by using `file -i filename`. With eastern European languages, I've seen the `enca` command mentioned before. – ryanjdillon Dec 22 '12 at 16:41

If you are on Windows, it is highly likely that the encoding that you should use is one of the cp125X family ... e.g. if you are in Western Europe or the Americas, it will be cp1252. Windows software often uses bytes in the range \x80 to \x9F inclusive to encode fancy punctuation characters whereas that range is reserved in ISO-8859-X for the rarely used "C1 Control Characters".
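A quick illustration of the difference (the sample bytes are invented for the demo): `\x93` and `\x94` are Windows "smart quotes" in cp1252, but C1 control characters in ISO-8859-1, while `\xa3` is the pound sign in both.

```python
data = b"\x93hello\x94 \xa35"

as_cp1252 = data.decode("cp1252")   # curly quotes come out as printable characters
as_latin1 = data.decode("latin-1")  # same bytes decode to invisible C1 controls
```

Both decodes succeed without an exception, which is why ISO-8859-1 can *appear* to work on cp1252 data until the fancy punctuation shows up garbled.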

You can find out the usual encoding in your locale by running this at the command line:

python -c "import locale; print locale.getpreferredencoding()"
  • He is having difficulty reading £ signs, and you're assuming that the file was conveniently saved on whatever settings *his* computer prefers? I would be careful making the assumption that the file is something that was saved using his machine. – riwalk Aug 16 '10 at 15:40
  • @Stargazer712: No, I'm not assuming anything. I'm suggesting that it is highly likely that the file was created on a machine in the same locale and using the same operating system as the machine the OP is using. – John Machin Aug 16 '10 at 22:01
  • My experience with encodings (as I mentioned earlier) came from scraping the web. I assure you it is not a safe assumption. – riwalk Aug 17 '10 at 04:41
  • @Stargazer712: Which part of "I'm not assuming anything" don't you understand? I'm suggesting that the OP should check whether cp125X might not be more appropriate, i.e. more future-proof. – John Machin Aug 17 '10 at 05:05
  • 1
    "I'm suggesting that it is highly likely that the file was created on a machine in the same locale..." -- That's an assumption, and I'm done talking about this. – riwalk Aug 17 '10 at 14:42