
I'm trying to get a Python 3 program to do some manipulations with a text file filled with information. However, when trying to read the file I get the following error:

Traceback (most recent call last):  
  File "SCRIPT LOCATION", line NUMBER, in <module>  
    text = file.read()
  File "C:\Python31\lib\encodings\cp1252.py", line 23, in decode  
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 2907500: character maps to `<undefined>`  

After reading this Q&A, see How to determine the encoding of text if you need help figuring out the encoding of the file you are trying to open.

wjandrea
Eden Crow
  • For the same error, this [solution of charmap error](https://stackoverflow.com/questions/12468179/unicodedecodeerror-utf8-codec-cant-decode-byte-0x9c) helped me. – Shubham Sharma Sep 14 '17 at 11:58
  • See [Processing Text Files in Python 3](http://python-notes.curiousefficiency.org/en/latest/python3/text_file_processing.html) to understand why you get this error. – Andreas Haferburg Apr 24 '18 at 14:33

14 Answers


The file in question is not using the CP1252 encoding. It's using another encoding. Which one you have to figure out yourself. Common ones are Latin-1 and UTF-8. Since 0x90 doesn't actually mean anything in Latin-1, UTF-8 (where 0x90 is a continuation byte) is more likely.

You specify the encoding when you open the file:

file = open(filename, encoding="utf8")
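A small self-contained sketch of the situation (the file path is made up for the demo): it writes UTF-8 bytes that contain 0x90 (the byte from the traceback above), then reads them back first with the wrong encoding and then with the right one.

```python
import os
import tempfile

# A throwaway file containing UTF-8 text whose bytes include 0x90.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")
with open(path, "wb") as f:
    f.write("A\u0410B".encode("utf8"))  # b'A\xd0\x90B'

# Reading with cp1252 fails, because 0x90 is undefined there.
try:
    with open(path, encoding="cp1252") as f:
        f.read()
except UnicodeDecodeError as e:
    print("cp1252 failed:", e.reason)  # character maps to <undefined>

# Reading with the correct encoding works.
with open(path, encoding="utf8") as f:
    print(f.read())
```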
wjandrea
Lennart Regebro
  • If you're using Python 2.7 and getting the same error, try the `io` module: `io.open(filename, encoding="utf8")` – christopherlovell Jun 03 '15 at 14:02
  • +1 for specifying the encoding on read. P.S. Is it supposed to be encoding="utf8" or encoding="utf-8"? – Davos Feb 03 '16 at 23:03
  • @1vand1ng0: of course Latin-1 works; it'll work for any file regardless of what the actual encoding of the file is. That's because all 256 possible byte values in a file have a Latin-1 codepoint to map to, but that doesn't mean you get legible results! If you don't know the encoding, even opening the file in binary mode instead might be better than assuming Latin-1. – Martijn Pieters Mar 06 '17 at 14:10
  • I get the OP error even though the encoding is already specified correctly as UTF-8 (as shown above) in open(). Any ideas? – enahel Nov 15 '17 at 07:11
  • The suggested encoding string should have a dash, and therefore it should be: open(csv_file, encoding='utf-8') (as tested on Python 3). – rob_7cc Jan 04 '21 at 16:34
  • @enahel It's not possible to get the same error as OP when using UTF-8, since UTF-8 doesn't have any undefined bytes. You must be getting a different error, like say `UnicodeDecodeError: 'utf-8' codec can't decode byte 0x90 in position 0: invalid start byte`. – wjandrea Apr 26 '23 at 19:22
  • @rob_7cc That's not necessary. `'utf8'` is an alias for UTF-8. [docs](https://docs.python.org/3/library/codecs.html#standard-encodings) – wjandrea Apr 26 '23 at 19:29

If file = open(filename, encoding="utf-8") doesn't work, try
file = open(filename, errors="ignore") if you want to remove the undecodable characters. (docs)
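A minimal sketch of what errors="ignore" does to the data (the byte string here is invented for the demo): the undecodable byte is silently dropped, so the resulting text is shorter than the input.

```python
raw = b"abc\x90def"  # 0x90 is undefined in cp1252

# Plain decoding would raise UnicodeDecodeError; "ignore" drops the byte.
text = raw.decode("cp1252", errors="ignore")
print(text)  # abcdef
```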

Ben
Declan Nnadozie
  • Warning: This will result in data loss when unknown characters are encountered (which may be fine depending on your situation). – Hans Goldman Feb 28 '19 at 00:46

Alternatively, if you don't need to decode the file, such as uploading the file to a website, use:

open(filename, 'rb')

where r = reading, b = binary
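As a quick illustration (file name invented for the demo), binary mode hands back `bytes` rather than `str`, so no decoding happens and no decode error can occur:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "blob.bin")
with open(path, "wb") as f:
    f.write(b"\x90\x41\x42")  # bytes that would break a cp1252 text read

with open(path, "rb") as f:
    raw = f.read()  # no decoding is attempted in binary mode

print(type(raw).__name__)  # bytes
print(raw)                 # b'\x90AB'
```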

MendelG
Kyle Parisi
  • Perhaps emphasize that the `b` will produce `bytes` instead of `str` data. Like you note, this is suitable if you don't need to process the bytes in any way. – tripleee May 10 '22 at 07:23
  • The top two answers didn't work, but this one did. I was trying to read a dictionary of pandas dataframes and kept getting errors. – Realhermit Nov 17 '22 at 18:40
  • @Realhermit Please see https://stackoverflow.com/questions/436220. Every text file has a particular encoding, and you have to know what it is in order to use it properly. The common guesses won't always be correct. – Karl Knechtel Apr 12 '23 at 22:20

As an extension to @LennartRegebro's answer:

If you can't tell what encoding your file uses, the solution above does not work (it's not UTF-8), and you find yourself merely guessing, there are online tools that you can use to identify the encoding. They aren't perfect, but they usually work just fine. After you figure out the encoding, you should be able to use the solution above.

EDIT: (Copied from comment)

The quite popular text editor Sublime Text has a command to display the encoding, if it has been set:

  1. Go to View -> Show Console (or Ctrl+`).
  2. Type view.encoding() into the field at the bottom and hope for the best (I was unable to get anything but Undefined, but maybe you will have better luck).
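If you'd rather narrow the guess from Python itself, here is a small stdlib-only sketch (the candidate list and sample bytes are illustrative): it reports which of a few common encodings decode the raw bytes without error. Decoding successfully does not prove an encoding is correct; it only rules out the ones that fail.

```python
def possible_encodings(raw, candidates=("utf-8", "cp1252", "latin-1", "cp437")):
    """Return the candidate encodings that decode `raw` without error."""
    ok = []
    for enc in candidates:
        try:
            raw.decode(enc)
        except UnicodeDecodeError:
            continue
        ok.append(enc)
    return ok

# 0x90 (the OP's byte) is invalid as a UTF-8 start byte and undefined in cp1252.
print(possible_encodings(b"\x90abc"))  # ['latin-1', 'cp437']
```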

Stevoisiak
Matas Vaitkevicius
  • Some text editors will provide this information as well. I know that with vim you can get this via `:set fileencoding` ([from this link](http://superuser.com/questions/28779/how-do-i-find-the-encoding-of-the-current-buffer-in-vim)) – PaxRomana99 Dec 17 '16 at 15:20
  • Sublime Text, also -- open up the console and type `view.encoding()`. – JimmidyJoo Jul 12 '17 at 20:27
  • Alternatively, you can open your file with Notepad. 'Save As' and you shall see a drop-down with the encoding used. – don_Gunner94 Mar 05 '20 at 12:11
  • Please see https://stackoverflow.com/questions/436220 for more details on the general task. – Karl Knechtel Apr 12 '23 at 22:23

TLDR: Try: file = open(filename, encoding='cp437')

Why? When one uses:

file = open(filename)
text = file.read()

Python assumes the file uses the same codepage as the current environment (cp1252 in the case of the opening post) and tries to decode it into its own internal Unicode text (str). If the file contains byte values not defined in this codepage (like 0x90), we get a UnicodeDecodeError. Sometimes we don't know the encoding of the file, sometimes the file's encoding may be unhandled by Python (like e.g. cp790), and sometimes the file contains mixed encodings.

If such characters are unneeded, one may decide to replace them with the Unicode replacement character (U+FFFD), with:

file = open(filename, errors='replace')

Another workaround is to use:

file = open(filename, errors='ignore')

The offending characters are then dropped entirely, and any other decoding errors will be masked too.

A very good solution is to specify the encoding, yet not just any encoding (like cp1252), but one which has ALL characters defined (like cp437):

file = open(filename, encoding='cp437')

Codepage 437 is the original DOS encoding. All codes are defined, so there are no errors while reading the file, no errors are masked out, the characters are preserved (not quite left intact but still distinguishable).
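The "all codes are defined" claim can be checked directly: decoding every possible byte value with cp437 never raises, and re-encoding recovers the original bytes exactly.

```python
raw = bytes(range(256))      # every possible byte value
text = raw.decode("cp437")   # never raises: all 256 positions are defined

# The mapping is one-to-one, so encoding back is lossless.
assert text.encode("cp437") == raw
print(len(text))  # 256
```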

Olivia Stork
rha
  • Probably you should emphasize even more that randomly guessing at the encoding is likely to produce garbage. You have to _know_ the encoding of the data. – tripleee May 10 '22 at 07:21
  • There are many encodings that "have all characters defined" (you really mean "map every single-byte value to a character"). CP437 is very specifically associated with the Windows/DOS ecosystem. In most cases, Latin-1 (ISO-8859-1) will be a better starting guess. – Karl Knechtel Apr 12 '23 at 22:22

Stop wasting your time: just add encoding="cp437" and errors='ignore' to your code, for both read and write:

open('filename.csv', encoding="cp437", errors='ignore')
open(file_name, 'w', newline='', encoding="cp437", errors='ignore')

Godspeed

E.Zolduoarrati
  • Before you apply that, be sure that you want your `0x90` to be decoded to `'É'`. Check `b'\x90'.decode('cp437')`. – hanna Aug 06 '20 at 15:56
  • This is absolutely horrible advice. Code page 437 is a terrible guess unless your source data comes from an MS-DOS system from the 1990s, and ignoring errors is often the worst possible way to silence the warnings. It's like cutting the wires to the "engine hot" and "fuel low" lights in your car to get rid of those annoying distractions. – tripleee Oct 25 '22 at 09:18

Before you apply the suggested solution, you can check which character appeared in your file (and in the error log), in this case 0x90: https://unicodelookup.com/#0x90/1 (or directly at the Unicode Consortium site http://www.unicode.org/charts/ by searching for 0x0090),

and then consider removing it from the file.
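The same lookup can also be done offline with Python itself; this sketch just probes what the 0x90 byte means under a few encodings:

```python
# cp437 decodes 0x90 to 'É', latin-1 to the control character '\x90',
# and cp1252 raises -- which is exactly the OP's error.
for enc in ("cp437", "latin-1", "cp1252"):
    try:
        print(enc, repr(b"\x90".decode(enc)))
    except UnicodeDecodeError as e:
        print(enc, "->", e.reason)
```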

hanna
  • I have a web page at https://tripleee.github.io/8bit/#90 where you can look up the character's value in the various 8-bit encodings supported by Python. With enough data points, you can often infer a suitable encoding (though some of them are quite similar, and so establishing _exactly_ which encoding the original writer used will often involve some guesswork, too). – tripleee May 10 '22 at 07:24
def read_files(file_path):
    with open(file_path, encoding='utf8') as f:
        text = f.read()
        return text

OR (AND)

def write_files(text, file_path):
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore'))

OR

document = Document()
document.add_heading(file_path.name, 0)
file_content = file_path.read_text(encoding='UTF-8')
document.add_paragraph(file_content)

OR

def read_text_from_file(cale_fisier):
    text = cale_fisier.read_text(encoding='UTF-8')
    print("What I read: ", text)
    return text # return the text that was read

def save_text_into_file(cale_fisier, text):
    with open(cale_fisier, "w", encoding='utf-8') as f: # open file
        print("What I wrote: ", text)
        f.write(text) # write the content to the file

OR

def read_text_from_file(file_path):
    with open(file_path, encoding='utf8', errors='ignore') as f:
        text = f.read()
        return text # return written text


def write_to_file(text, file_path):
    with open(file_path, 'wb') as f:
        f.write(text.encode('utf8', 'ignore')) # write the content to the file

OR


def change_encoding(fname, from_encoding, to_encoding='utf-8') -> None:
    '''
    Read the file at path fname with its original encoding (from_encoding)
    and rewrites it with to_encoding.
    '''
    with open(fname, encoding=from_encoding) as f:
        text = f.read()

    with open(fname, 'w', encoding=to_encoding) as f:
        f.write(text)
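A self-contained way to exercise the change_encoding idea above (file path invented for the demo): write a cp1252-encoded file, convert it in place, and confirm the bytes are now valid UTF-8.

```python
import os
import tempfile

def change_encoding(fname, from_encoding, to_encoding='utf-8'):
    # Same idea as above: read with the original encoding, rewrite as UTF-8.
    with open(fname, encoding=from_encoding) as f:
        text = f.read()
    with open(fname, 'w', encoding=to_encoding) as f:
        f.write(text)

path = os.path.join(tempfile.mkdtemp(), "data.txt")
with open(path, "wb") as f:
    f.write("caf\u00e9".encode("cp1252"))  # b'caf\xe9' -- not valid UTF-8

change_encoding(path, from_encoding="cp1252")

with open(path, "rb") as f:
    print(f.read())  # b'caf\xc3\xa9' -- the same text, now UTF-8
```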
Just Me

For me, encoding with UTF-16 worked:

file = open('filename.csv', encoding="utf16")
gabi939
  • Like many of the other answers on this page, randomly guessing which encoding the OP is actually dealing with is mostly a waste of time. The proper solution is to tell them how to figure out the correct encoding, not offer more guesses (the Python documentation contains a list of all of them; there are many, many more which are not suggested in any answer here yet, but which _could_ be correct for any random visitor). UTF-16 is pesky in that the results will often look vaguely like valid Chinese or Korean text if you don't speak the language. – tripleee Oct 25 '22 at 09:13

For those working in Anaconda on Windows: I had the same problem, and Notepad++ helped me solve it.

Open the file in Notepad++. In the bottom right it will tell you the current file encoding. In the top menu, next to "View", locate "Encoding". Under "Encoding", go to "Character sets" and patiently look for the encoding that you need. In my case, the encoding "Windows-1252" was found under "Western European".

Antoni
  • Only the viewing encoding is changed in this way. In order to effectively change the file's encoding, change preferences in Notepad++ and create a new document, as shown here: https://superuser.com/questions/1184299/is-there-a-way-to-force-notepad-encoding-to-windows-1252. – hanna Aug 06 '20 at 10:36

In newer versions of Python (starting with 3.7), you can add the interpreter option -Xutf8, which should fix your problem. If you use PyCharm, just go to Run > Edit Configurations (in the Configuration tab, change the value of the Interpreter options field to -Xutf8).

Or, equivalently, you can just set the environmental variable PYTHONUTF8 to 1.
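The environment-variable route can be checked from the command line; sys.flags.utf8_mode reports whether UTF-8 mode is active:

```shell
# Force UTF-8 mode via the environment variable (equivalent to -Xutf8).
PYTHONUTF8=1 python3 -c "import sys; print(sys.flags.utf8_mode)"
# prints 1
```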


If you are on Windows, the file may start with a UTF-8 BOM, indicating it almost certainly is a UTF-8 file. As per https://bugs.python.org/issue44510, I used encoding="utf-8-sig", and the CSV file was read successfully.
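A small sketch of the difference (file path invented for the demo): reading a BOM-prefixed file with plain utf-8 leaves a stray '\ufeff' at the start of the text, while utf-8-sig strips it.

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "bom.csv")
with open(path, "wb") as f:
    f.write(b"\xef\xbb\xbfname,age\n")  # UTF-8 BOM + a CSV header line

with open(path, encoding="utf-8") as f:
    plain = f.read()       # BOM survives as '\ufeff'
with open(path, encoding="utf-8-sig") as f:
    sig = f.read()         # BOM is stripped

print(repr(plain))  # '\ufeffname,age\n'
print(repr(sig))    # 'name,age\n'
```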

Sayantam

For me, changing the MySQL character encoding to match my code helped to sort out the solution. photo=open('pic3.png',encoding=latin1)

SuperStormer
Piyush raj
  • Like many other random guesses, "latin-1" will remove the error, but will not guarantee that the file is decoded correctly. You have to know which encoding the file _actually_ uses. Also notice that `latin1` without quotes is a syntax error (unless you have a variable with that name, and it contains a string which represents a valid Python character encoding name). – tripleee Oct 25 '22 at 09:07
  • In this particular example, the real problem is that a PNG file does not contain text at all. You should instead read the raw bytes (`open('pic3.png', 'rb')` where the `b` signifies binary mode). – tripleee Oct 25 '22 at 09:09

This is an example of how I open and close a file with UTF-8, extracted from recent code:

def traducere_v1_txt(translator, file):
    data = []
    with open(f"{base_path}/{file}", "r", encoding='utf8', errors='ignore') as open_file:
        data = open_file.readlines()

    file_name = file.replace(".html", "")
    with open(f"Translated_Folder/{file_name}_{input_lang}.html", "w", encoding='utf8') as htmlfile:
        htmlfile.write(lxml1)