
I thought I knew everything about encodings and Python, but today I came across a weird problem: although the console is set to code page 850 - and Python reports it correctly - parameters I put on the command line seem to be encoded in code page 1252. If I try to decode them with sys.stdin.encoding, I get the wrong result. If I assume 'cp1252', ignoring what sys.stdout.encoding reports, it works.

Am I missing something, or is this a bug in Python? In Windows? Note: I am running Python 2.6.6 on Windows 7 EN, with the locale set to French (Switzerland).

In the test program below, I check that literals are correctly interpreted and can be printed - this works. But all values I pass on the command line seem to be encoded wrongly:

#!/usr/bin/python
# -*- encoding: utf-8 -*-
import sys

literal_mb = 'utf-8 literal:   üèéÃÂç€ÈÚ'
literal_u = u'unicode literal: üèéÃÂç€ÈÚ'
print "Testing literals"
print literal_mb.decode('utf-8').encode(sys.stdout.encoding,'replace')
print literal_u.encode(sys.stdout.encoding,'replace')

print "Testing arguments ( stdin/out encodings:",sys.stdin.encoding,"/",sys.stdout.encoding,")"
for i in range(1,len(sys.argv)):
    arg = sys.argv[i]
    print "arg",i,":",arg
    for ch in arg:
        print "  ",ch,"->",ord(ch),
        if ord(ch)>=128 and sys.stdin.encoding == 'cp850':
            print "<-",ch.decode('cp1252').encode(sys.stdout.encoding,'replace'),"[assuming input was actually cp1252 ]"
        else:
            print ""

In a newly created console, when running

C:\dev>test-encoding.py abcé€

I get the following output

Testing literals
utf-8 literal:   üèéÃÂç?ÈÚ
unicode literal: üèéÃÂç?ÈÚ
Testing arguments ( stdin/out encodings: cp850 / cp850 )
arg 1 : abcÚÇ
   a -> 97
   b -> 98
   c -> 99
   Ú -> 233 <- é [assuming input was actually cp1252 ]
   Ç -> 128 <- ? [assuming input was actually cp1252 ]

while I would expect the 4th character to have an ordinal value of 130 instead of 233 (compare code pages 850 and 1252).

Notes: the value of 128 for the euro sign is a mystery, since cp850 does not contain it at all. Otherwise, the '?' characters are expected: cp850 cannot represent those characters, and I used 'replace' in the conversions.

If I change the code page of the console to 1252 by issuing chcp 1252 and run the same command, I (correctly) obtain

Testing literals
utf-8 literal:   üèéÃÂç€ÈÚ
unicode literal: üèéÃÂç€ÈÚ
Testing arguments ( stdin/out encodings: cp1252 / cp1252 )
arg 1 : abcé€
   a -> 97
   b -> 98
   c -> 99
   é -> 233
   € -> 128

Any ideas on what I'm missing?

Edit 1: I've just tested by reading sys.stdin. This works as expected: in cp850, typing 'é' yields an ordinal value of 130. So the problem really affects the command line only. Is the command line treated differently from the standard input?

Edit 2: It seems I had the wrong keywords. I found another very close topic on SO: Read Unicode characters from command-line arguments in Python 2.x on Windows. Still, if the command line is not encoded like sys.stdin, and since sys.getdefaultencoding() reports 'ascii', there seems to be no way to know its actual encoding. I find the answer using the win32 extensions pretty hacky.
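
For reference, here is a condensed sketch of that Win32 approach using only ctypes (no win32 extensions), adapted from the recipe linked there; error handling is omitted:

from ctypes import windll, POINTER, byref, c_int, c_wchar_p

# Ask Windows for the original UTF-16 command line and split it ourselves,
# bypassing the byte-oriented sys.argv entirely.
GetCommandLineW = windll.kernel32.GetCommandLineW
GetCommandLineW.restype = c_wchar_p
CommandLineToArgvW = windll.shell32.CommandLineToArgvW
CommandLineToArgvW.restype = POINTER(c_wchar_p)

argc = c_int(0)
argv_w = CommandLineToArgvW(GetCommandLineW(), byref(argc))
u_args = [argv_w[i] for i in range(argc.value)]  # unicode arguments
windll.kernel32.LocalFree(argv_w)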


3 Answers


Replying to myself:

On Windows, the encoding used by the console (thus, that of sys.stdin/out) differs from the encoding of various OS-provided strings - obtained through e.g. os.getenv(), sys.argv, and certainly many more.

The encoding provided by sys.getdefaultencoding() is really just that - a default, chosen by the Python developers as the "most reasonable encoding" for the interpreter to use in edge cases. I get 'ascii' on my Python 2.6, and tried with portable Python 3.1, which yields 'utf-8'. Neither is what we are looking for - they are merely fallbacks for the encoding conversion functions.
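
To see that this default is only a fallback for implicit conversions, not a description of the console, here is a quick Python 2 session (traceback abbreviated):

>>> import sys
>>> sys.getdefaultencoding()
'ascii'
>>> unicode('\xe9')  # implicit decoding uses the default codec
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 0: ordinal not in range(128)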

As this page seems to state, the encoding used by OS-provided strings is governed by the Active Code Page (ACP). Since Python does not have a native function to retrieve it, I had to use ctypes:

from ctypes import cdll
os_encoding = 'cp' + str(cdll.kernel32.GetACP())

Edit: but as Jacek suggests, there actually is a more robust and Pythonic way to do it (the semantics would need validation, but until proven wrong, I'll use this):

import locale
os_encoding = locale.getpreferredencoding()
# This returns 'cp1252' on my system, yay!

and then

u_argv = [x.decode(os_encoding) for x in sys.argv]
u_env = os.getenv('myvar').decode(os_encoding)

On my system, os_encoding = 'cp1252', so it works. I am quite certain this would break on other platforms, so feel free to edit and make it more generic. We would certainly need some kind of translation table between the ACP reported by Windows and the Python encoding name - something better than just prepending 'cp'; a first step is sketched below.
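
As a stopgap, here is a hypothetical helper (acp_to_python_codec is my name, not a standard one) that at least validates the guessed name against Python's codec registry before trusting it:

import codecs
import locale

def acp_to_python_codec(acp):
    # Prepending 'cp' works for the common Windows code pages; if Python
    # does not recognise the result, fall back to the locale's preference.
    name = 'cp%d' % acp
    try:
        codecs.lookup(name)
        return name
    except LookupError:
        return locale.getpreferredencoding()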

This is unfortunately a hack, although I find it a bit less intrusive than the one suggested by this ActiveState Code Recipe (linked to by the SO question mentioned in Edit 2 of my question). The advantage I see here is that it can be applied to os.getenv(), and not only to sys.argv.

  • For Linux usually `locale.getpreferredencoding()` or, after using `locale.setlocale()`, `locale.getlocale()[1]` gives the right encoding for console and environment access. Though hardcoded UTF-8 is often good enough for most modern systems (so it is the best fall-back value). – Jacek Konieczny Feb 10 '12 at 13:02

I tried the solutions above, but they may still leave some encoding problems: besides the code page, the console must use a TrueType font. Fix:

  1. Run chcp 65001 in cmd to change the active code page to UTF-8.
  2. Change the cmd font to a TrueType one, such as Lucida Console, that can display the characters of code page 65001.

Here's my complete fix for the encoding error:

def fixCodePage():
    import sys
    import os
    import ctypes
    if sys.platform == 'win32':
        if sys.stdout.encoding != 'cp65001':
            os.system("echo off")
            os.system("chcp 65001")  # change the active code page to UTF-8
            sys.stdout.write("\x1b[A")  # move the cursor up a line to hide the chcp output
            sys.stdout.flush()
        LF_FACESIZE = 32
        STD_OUTPUT_HANDLE = -11

        class COORD(ctypes.Structure):
            _fields_ = [("X", ctypes.c_short), ("Y", ctypes.c_short)]

        class CONSOLE_FONT_INFOEX(ctypes.Structure):
            _fields_ = [("cbSize", ctypes.c_ulong),
                        ("nFont", ctypes.c_ulong),
                        ("dwFontSize", COORD),
                        ("FontFamily", ctypes.c_uint),
                        ("FontWeight", ctypes.c_uint),
                        ("FaceName", ctypes.c_wchar * LF_FACESIZE)]

        # Switch the console to a TrueType font that can render the code page
        font = CONSOLE_FONT_INFOEX()
        font.cbSize = ctypes.sizeof(CONSOLE_FONT_INFOEX)
        font.nFont = 12
        font.dwFontSize.X = 7
        font.dwFontSize.Y = 12
        font.FontFamily = 54
        font.FontWeight = 400
        font.FaceName = "Lucida Console"
        handle = ctypes.windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
        ctypes.windll.kernel32.SetCurrentConsoleFontEx(handle, ctypes.c_long(False), ctypes.pointer(font))

Note: You can see a font change while executing the program.
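
A minimal usage sketch (assuming the function is defined as above): call it once at startup, before printing anything.

if __name__ == '__main__':
    fixCodePage()     # switch the console to UTF-8 and a TrueType font
    print("héllo €")  # non-ASCII output should now render correctly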

– Gautam Krishna R

Well, what worked for me was the following code snippet:

# -*- coding: utf-8 -*-

import os
import sys

print(f"OS: {os.device_encoding(0)}, sys: {sys.stdout.encoding}")

Comparing both on some Windows systems with Python 3.8 showed that os.device_encoding(0) always reflected the code page set in the terminal (tested with PowerShell and with the old cmd shell on Windows 10 and Windows 7).

This remained true even after changing the terminal's code page with the shell command:

chcp 850

or e.g.:

chcp 1252

Using os.device_encoding(0) for tasks like decoding a subprocess's stdout from bytes to str then worked out even with non-ASCII characters like é, ö, ³, ↓, ... - see the sketch below.
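
A minimal sketch of that pattern (the ver command is just a placeholder; the fallback covers runs without an attached console):

import os
import subprocess

# Capture the raw bytes a console command writes, then decode them with
# the console's actual code page instead of the locale's preference.
result = subprocess.run(["cmd", "/c", "ver"], capture_output=True)
console_encoding = os.device_encoding(0) or "utf-8"  # None without a console
print(result.stdout.decode(console_encoding, errors="replace"))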

So, as others have already pointed out: on Windows, the locale setting is really just system information about the user's preferences, not necessarily what the shell is actually using at the moment.