395

I have a socket server that is supposed to receive valid UTF-8 characters from clients.

The problem is that some clients (mainly hackers) are sending all the wrong kinds of data over it.

I can easily distinguish genuine clients, but I am logging all the data sent to files so I can analyze it later.

Sometimes I get characters like œ that cause a UnicodeDecodeError.

I need to be able to make the string UTF-8, with or without those characters.


Update:

For my particular case the socket service was an MTA (Mail Transfer Agent), and thus I only expect to receive ASCII commands such as:

EHLO example.com
MAIL FROM: <john.doe@example.com>
...

I was logging all of this in JSON.

Then some folks out there without good intentions decided to send all kinds of junk.

That is why, for my specific case, it is perfectly OK to strip the non-ASCII characters.

transilvlad
  • Does the string come out of a file or a socket? Could you please post code examples of how the string is encoded and decoded before it is sent through the socket/file handler? – devsnd Sep 17 '12 at 23:05

13 Answers

433

http://docs.python.org/howto/unicode.html#the-unicode-type

str = unicode(str, errors='replace')

or

str = unicode(str, errors='ignore')

Note: errors='ignore' will strip out the characters in question, returning the string without them (errors='replace' substitutes U+FFFD instead).

For me this is the ideal case, since I'm using it as protection against non-ASCII input, which is not allowed by my application.
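
These examples are Python 2. In Python 3 every str is already Unicode, so the equivalent is to decode the raw bytes you received; a minimal sketch (the byte string is made up for illustration):

# Python 3: decode the raw bytes received from the socket instead
raw = b'EHLO example.com \x9c'                # hypothetical client input
text = raw.decode('utf-8', errors='replace')  # invalid bytes become U+FFFD
text = raw.decode('utf-8', errors='ignore')   # invalid bytes are dropped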

Alternatively: use codecs.open() from the codecs module to read in the file:

import codecs
with codecs.open(file_name, 'r', encoding='utf-8',
                 errors='ignore') as fdata:
    data = fdata.read()
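
In Python 3 the built-in open() accepts the same encoding and errors arguments directly, so the codecs module is not needed; a sketch:

with open(file_name, 'r', encoding='utf-8', errors='ignore') as fdata:
    data = fdata.read()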
transilvlad
  • Yes, though this is usually bad practice/dangerous, because you'll just lose characters. Better to determine or detect the encoding of the input string and decode it to unicode first, then encode as UTF-8, for example: `str.decode('cp1252').encode('utf-8')` – Ben Hoyt Sep 17 '12 at 23:15
  • In some cases, yes, you are right, it might cause problems. In my case I don't care about them, as they seem to be extra characters originating from the bad formatting and programming of the clients connecting to my socket server. – transilvlad Sep 18 '12 at 09:24
  • This one actually helps if the content of the string is invalid, in my case `'\xc0msterdam'`, which turns into `u'\ufffdmsterdam'` with replace – PvdL Jan 04 '16 at 21:44
  • If you ended up here because you are having problems reading a file, opening the file in binary mode might help: `open(file_name, "rb")`, and then apply Ben's approach from the comments above – kristian Nov 11 '16 at 17:18
  • The same option applies even more broadly, e.g. to `something.decode()` – Alexander Stohr Mar 17 '20 at 15:31
  • "For me this is ideal case since I'm using it as protection against non-ASCII input which is not allowed by my application." That still allows input that is valid UTF-8 but not valid ASCII. – Sören Mar 14 '22 at 15:31
  • How can I import `unicode`? – alper Mar 16 '22 at 10:23
  • `unicode` was a specific string type in Python 2. In Python 3, all regular strings are Unicode strings, so there is nothing to `import` - just use `str`. Perhaps see also http://nedbatchelder.com/text/unipain.html – tripleee Oct 25 '22 at 09:42
135

Changing the engine from C to Python did the trick for me.

Engine is C:

pd.read_csv(gdp_path, sep='\t', engine='c')

'utf-8' codec can't decode byte 0x92 in position 18: invalid start byte

Engine is Python:

pd.read_csv(gdp_path, sep='\t', engine='python')

No errors for me.
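
Note that 0x92 is a right single quotation mark in Windows-1252, so if the file is actually in that code page, naming the encoding explicitly may be the more robust fix, and it keeps the faster C engine; a sketch reusing the same gdp_path:

pd.read_csv(gdp_path, sep='\t', engine='c', encoding='cp1252')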

Doğuş
  • This might not be a good idea if you have a huge `csv` file; it could lead to an `OutOfMemory` error or an automatic restart of your notebook's kernel. You should set the `encoding` in this case. – LucasBr Apr 06 '19 at 13:51
  • Excellent answer, thank you, this worked for me. I had a "?" inside a diamond-shaped character that was causing the issue; to the naked eye it looked like '"', an inch mark. I did two things to figure it out: a) `df = pd.read_csv('test.csv', nrows=10000)` worked perfectly without the engine option, so I incremented `nrows` to find which row had the error; b) `df = pd.read_csv('test.csv', engine='python')` worked, and I printed the bad row using `df.iloc[36145]`. – Jagannath Banerjee Sep 26 '19 at 12:46
81

This type of issue crops up for me now that I've moved to Python 3. I had no idea Python 2 was simply steamrolling over any issues with file encoding.

I found this nice explanation of the differences and how to find a solution after none of the above worked for me.

http://python-notes.curiousefficiency.org/en/latest/python3/text_file_processing.html

In short, to make Python 3 behave as similarly as possible to Python 2 use:

with open(filename, encoding="latin-1") as datafile:
    # work on datafile here

However, read the article: there is no one-size-fits-all solution.

James McCormac
  • the link is broken as of 2021-10-09 – ofloveandhate Oct 09 '21 at 16:18
  • As of 2022-02-12 using Python 3.8 I have no problems. – alexsmail Feb 12 '22 at 21:04
  • Like all the other answers which blindly propose some random encoding, this will be the wrong answer for the majority of visitors. There's a reason the behavior of Python 2 was regarded as broken enough to be replaced. Python 3 transparently does the right thing most of the time, except on Windows, where the burden of the legacy code pages is still significant. The proper cure is to spend some time on understanding encodings. [The Stack Overflow `character-encoding` tag info page](/tags/character-encoding/info) has a brief overview and some forward pointers. – tripleee Oct 25 '22 at 09:50
40

First, use get_encoding_type to get the file's encoding:

from chardet import detect

# detect the file's encoding from its raw bytes
def get_encoding_type(file):
    with open(file, 'rb') as f:
        rawdata = f.read()
    return detect(rawdata)['encoding']

Second, open the file with that encoding:

open(current_file, 'r', encoding=get_encoding_type(current_file), errors='ignore')
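
detect() returns None for the encoding when chardet cannot make a guess (see the comment below), so a fallback is worth adding; a minimal sketch:

# fall back to UTF-8 if detection fails
encoding = get_encoding_type(current_file) or 'utf-8'
with open(current_file, 'r', encoding=encoding, errors='ignore') as f:
    data = f.read()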
Ivan Lee
  • What happens when it returns None? – Chop Labalagun Jan 27 '20 at 19:41
  • As the `chardet` documentation already tells you, it can't guess, or guesses wrong, some of the time, because it's just examining statistical correlations. Naïve users will run it on files which don't contain text at all (images, PDF files, executable binaries, etc.; PDFs, Word documents, database dumps and so on often embed a representation of text, but the file format itself is binary), but sometimes genuine text documents simply don't contain enough significant data points to establish an encoding. For illustration, you can guess what _?xac?rbat?_ represents, but probably not _h??y?aie_ – tripleee Oct 25 '22 at 09:31
37
>>> '\x9c'.decode('cp1252')
u'\u0153'
>>> print '\x9c'.decode('cp1252')
œ
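
The same lookup in Python 3, where the bytes literal must be explicit; a sketch:

>>> b'\x9c'.decode('cp1252')
'œ'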
Ignacio Vazquez-Abrams
  • I'm confused, how did you choose cp1252? It worked for me, but why? I don't know and now I'm lost :/ Could you elaborate? Thanks a lot! :) – Cyril N. Aug 22 '13 at 13:34
  • Could you present an option that works for all characters? Is there a way to detect the characters that need to be decoded so more generic code can be implemented? I see many people are looking at this, and I bet for some, discarding is not the desired option like it is for me. – transilvlad Sep 16 '13 at 14:19
  • As you can see this question has quite the popularity. Think you could expand your answer with a more generic solution? – transilvlad Nov 26 '13 at 15:41
  • There is no more generic solution to "Guess the encoding roulette" – Puppy Feb 02 '15 at 10:23
  • Found it using a combination of web search, luck and intuition: [cp1252](https://en.wikipedia.org/wiki/Windows-1252) was `used by default in the legacy components of Microsoft Windows in English and some other Western languages` – bolov Nov 28 '15 at 21:58
  • https://tripleee.github.io/8bit/ lets you look up what individual bytes decode to in different encodings. Find a few character codes in your data which are currently decoding incorrectly, and look them up on this page. With any luck, you are able to narrow your guess down to just a few candidate encodings; if you run out of more bytes to look up, any one of the candidates you found will work. (So, for example, ISO 8859-1 and Windows code page 1252 are identical except for a few character codes, and if your data doesn't contain any of those, the result will be identical with both encodings.) – tripleee Oct 25 '22 at 10:07
30

I had the same problem with UnicodeDecodeError and I solved it with this line. I don't know if it is the best way, but it worked for me.

str = str.decode('unicode_escape').encode('utf-8')
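
A caveat worth knowing: unicode_escape decodes each byte as Latin-1 while processing backslash escapes, so it can silently mangle input that is genuine multi-byte UTF-8; a quick Python 3 illustration:

>>> b'caf\xc3\xa9'.decode('unicode_escape')
'cafÃ©'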
maiky_forrester
19

This solution works nicely with Latin American accents, such as 'ñ'.

I have solved this problem just by adding:

df = pd.read_csv(fileName,encoding='latin1')
Talha Rasool
  • Worked for me too, but I wonder what's going to happen to the Chinese, Greek and Russian named media on my drive. To be continued... – Sridhar Sarnobat Dec 13 '21 at 05:11
  • Randomly guessing at a character set is not a good solution. Latin-1 will get rid of the warning, but produce garbage if the actual encoding in the file is something else. There are many legacy 8-bit encodings where ñ, á et al. have completely different character codes. – tripleee Oct 25 '22 at 09:25
3

Just in case someone else has the same problem: I'm using vim with YouCompleteMe, and ycmd failed to start with this error message. What I did was run export LC_CTYPE="en_US.UTF-8", and the problem was gone.

http8086
2

What can you do if you need to make a change to a file, but don’t know the file’s encoding? If you know the encoding is ASCII-compatible and only want to examine or modify the ASCII parts, you can open the file with the surrogateescape error handler:

with open(fname, 'r', encoding="ascii", errors="surrogateescape") as f:
    data = f.read()
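
To write the data back out afterwards, the same error handler turns the smuggled surrogates back into the original bytes unchanged; a sketch following the same pattern:

# surrogateescape round-trips the undecodable bytes exactly as they were
with open(fname, 'w', encoding="ascii", errors="surrogateescape") as f:
    f.write(data)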
Krisztián Balla
1

If, as you say, you simply want to permit pure 7-bit ASCII, just discard any bytes which are not. There is no straightforward way to guess what the remote end intended them to represent anyway, without an explicitly specified encoding.

# Reject any line that is not pure ASCII, logging the offending bytes escaped.
while line := socket.read_line_bytes():
    try:
        string = line.decode('us-ascii')
    except UnicodeDecodeError as exc:
        logger.warning('[%s] - rejected non-ASCII input %s'
                       % (client, line.decode('us-ascii', errors='backslashreplace')))
        socket.write(b'421 communication error - non-ASCII content rejected\r\n')
        continue
    ...
tripleee
1

I had the same error.

For me, Python complained about the byte 0x87. I looked it up on https://bytetool.web.app/en/ascii/code/0x87/, which told me that this byte belongs to the Windows-1252 codec.

I then only added this line to the beginning of my Python file:

# -*- coding: Windows-1252 -*-

And all errors were gone. Before adding this line, I had tried to import the file with Pandas like this:

Df = pd.read_csv(data, sep=",", engine='python', header=0, encoding='Windows-1252')

but this returned an error, so I changed it back to this:

Df = pd.read_csv(data, sep=",", engine='python', header=0)
Kai
0

A similar error such as

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa0 in position 22: invalid start byte

also shows up if one tries to open an Excel file using read_csv() in pandas. Using pd.read_excel() instead solves the error.

An example that demonstrates it (the file is named data_dictionary because data dictionaries are most often Excel files, while the datasets themselves are CSV files):

import pandas as pd

# some sample data
df = pd.DataFrame({'A': [1, 2, 3], 'B': ['a', 'b', 'c']})
df.to_excel('data_dictionary.xlsx', index=False)


df = pd.read_csv("data_dictionary.xlsx")         # <----- error

df = pd.read_excel("data_dictionary.xlsx")       # <----- OK
cottontail