
I have Python 2.7.1 on a Simplified-Chinese version of Windows XP, and I have a program like this (windows_prn_utf8.py):

#!/usr/bin/env python
# -*- coding: utf8 -*-

print unicode('\xE7\x94\xB5', 'utf8')

If I run it in a Windows CMD console, it outputs the right Chinese character '电'; however, if I try to redirect the command output to a file, I get an error.

D:\Temp>windows_prn_utf8.py > 1.txt
Traceback (most recent call last):
  File "D:\Temp\windows_prn_utf8.py", line 4, in <module>
    print unicode('\xE7\x94\xB5', 'utf8')
UnicodeEncodeError: 'ascii' codec can't encode character u'\u7535' in position 0: ordinal not in range(128)

I realize there is a missing link here: there should be a way to determine, when 1.txt is generated, whether the Unicode character in 1.txt should be encoded in UTF-8, codepage-936, or some other encoding.
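A quick way to see what Python picked (a minimal check; under Python 2, sys.stdout.encoding is the console code page when attached to the CMD window, but None when output is redirected):

```python
import sys

# When stdout is a console, Python 2 detects the console code page
# (cp936 on Simplified-Chinese Windows); when stdout is redirected,
# detection fails, sys.stdout.encoding is None, and print falls
# back to the ASCII codec.
print(getattr(sys.stdout, 'encoding', None))
```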

So how do I fix it? My preference is to have UTF-8 encoding in 1.txt. Thank you.


Jimm Chen
  • possible duplicate of [python... encoding issue when using linux >](http://stackoverflow.com/questions/17430168/python-encoding-issue-when-using-linux) – Martijn Pieters Jul 29 '13 at 08:00
  • The same applies to Windows; output redirecting means that Python cannot determine the required encoding of the output file and falls back to the default. – Martijn Pieters Jul 29 '13 at 08:01

4 Answers

2

Seems like this was solved, but I think a bit more detail will help explain the actual problem.

The 'utf8' in unicode('\xE7\x94\xB5', 'utf8') is telling the interpreter how to decode the 3 bytes you're providing in the other argument in order to represent the character internally as a unicode object:

In [6]: uobj = unicode('\xe7\x94\xb5','utf8')

In [7]: uobj
Out[7]: u'\u7535'

Another example would be creating the same character from its UTF-16 (little-endian) representation; the u'\u7535' escape shown in the Out[7] line above is the character's Unicode code point, which for characters in the Basic Multilingual Plane matches its UTF-16 code unit:

In [8]: uobj = unicode('\x35\x75','utf16')

In [9]: uobj
Out[9]: u'\u7535'

In your example, after the object has been created it becomes an argument to print, which tries to write it to standard out (console window, redirected file, etc.). The complication is that print must re-encode that object into a byte stream before writing it. In your case the encoding used by default was ASCII, which cannot represent that character.

(When a console displays the characters, they are decoded again and rendered with the corresponding font glyphs — this is why your output and the console both need to be 'speaking' the same encoding.)
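The failure in the question can be reproduced directly (a minimal sketch; the u'' and b'' prefixes are accepted by both Python 2.6+ and Python 3.3+):

```python
uobj = u'\u7535'  # the character 电 as a unicode object

# Encoding with a codec that can represent the character succeeds:
assert uobj.encode('utf8') == b'\xe7\x94\xb5'

# Encoding with the ASCII codec -- the fallback print uses when it
# cannot detect the output encoding -- raises the same error as the
# redirected script:
try:
    uobj.encode('ascii')
except UnicodeEncodeError as e:
    print('reproduced: %s' % e)
```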

From what I've seen, cmd.exe on Windows is pretty confusing when it comes to character encodings, but what I do on other OSes is explicitly encode the bytes before printing/writing them, using the unicode object's encode method. This returns an encoded byte sequence stored in a str object:

In [10]: sobj = uobj.encode('utf8')

In [11]: type(sobj)
Out[11]: str

In [12]: sobj
Out[12]: '\xe7\x94\xb5'

In [13]: print sobj
电

Now that print is given a str instead of a unicode, it doesn't need to encode anything. In my case my terminal was decoding utf8, and its font contained that particular character, so it was displayed properly on my screen (and hopefully right now in your browser).

1

Set the PYTHONIOENCODING environment variable.

SET PYTHONIOENCODING=cp936
windows_prn_utf8.py > 1.txt
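The effect of the variable can be checked from Python itself (a sketch; it spawns a child interpreter with PYTHONIOENCODING set — here to utf-8, per the asker's preference — and reads back what the child picked for its redirected stdout):

```python
import os
import subprocess
import sys

# Run a child interpreter with PYTHONIOENCODING set; its stdout is a
# pipe (i.e. redirected), yet it reports the forced encoding instead
# of falling back to the default.
env = dict(os.environ, PYTHONIOENCODING='utf-8')
out = subprocess.check_output(
    [sys.executable, '-c', 'import sys; print(sys.stdout.encoding)'],
    env=env)
print(out)
```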
falsetru
  • Thanks. Wow, I can even use ``SET PYTHONIOENCODING=utf8`` on Windows to get a UTF-8 text file. However, this will cause the CMD window to fail to display Unicode characters correctly. So it's best to keep ``PYTHONIOENCODING`` matching the CMD window encoding. – Jimm Chen Jul 29 '13 at 09:11
1

You can encode it to UTF-8 before you write it to the file.

f.write(u"电".encode("utf8"))
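A complete version of this (a minimal sketch; the file is opened in binary mode so the explicitly encoded bytes are written unchanged, with no implicit ASCII step):

```python
uobj = u'\u7535'  # 电

# Open in binary mode and write explicitly encoded bytes.
with open('1.txt', 'wb') as f:
    f.write(uobj.encode('utf8'))

# Read the raw bytes back to confirm the UTF-8 encoding on disk.
with open('1.txt', 'rb') as f:
    data = f.read()
print(repr(data))
```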
Friedmannn
1

Use codecs.open(filename, mode, encoding) instead of open(filename) when writing the file from Python.
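For example (a sketch; the file object returned by codecs.open encodes unicode objects transparently on write, so no explicit encode call is needed):

```python
# -*- coding: utf-8 -*-
import codecs

# codecs.open wraps the file with an encoder: unicode objects written
# to it are converted to UTF-8 bytes automatically.
with codecs.open('1.txt', 'w', encoding='utf-8') as f:
    f.write(u'\u7535')

# Reading back through the same wrapper decodes to unicode again.
with codecs.open('1.txt', 'r', encoding='utf-8') as f:
    text = f.read()
print(text == u'\u7535')
```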

eri