
I've read something about a Python 2 limitation with respect to pandas' to_csv( ... etc ...). Have I hit it? I'm on Python 2.7.3.

This outputs garbage characters for ≥ and − when they appear in strings. Aside from that, the export is perfect.

df.to_csv("file.csv", encoding="utf-8") 

Is there any workaround?
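For reference, a cut-down sketch that reproduces the symptom (test.csv is just an example name):

import pandas

# u"\u2265" is the ">=" sign, one of the offending characters
df = pandas.DataFrame({"demography": [u"Adults \u226549 yrs"]})
df.to_csv("test.csv", encoding="utf-8")
# opening the file in Excel shows garbage in place of the sign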

df.head() gives this:

demography  Adults ≥49 yrs  Adults 18−49 yrs at high risk||  \
state                                                           
Alabama                 32.7                             38.6   
Alaska                  31.2                             33.2   
Arizona                 22.9                             38.8   
Arkansas                31.2                             34.0   
California              29.8                             38.8  

The csv output is this:

state,  Adults ≥49 yrs,   Adults 18−49 yrs at high risk||
0,  Alabama,    32.7,   38.6
1,  Alaska, 31.2,   33.2
2,  Arizona,    22.9,   38.8
3,  Arkansas,31.2,  34
4,  California,29.8, 38.8

The whole code is this:

import pandas
import xlrd
import csv
import json

df = pandas.DataFrame()
dy = pandas.DataFrame()
# first merge all this xls together


workbook = xlrd.open_workbook('csv_merger/vaccoverage.xls')
worksheets = workbook.sheet_names()


for i in range(3, len(worksheets)):
    # the for loop advances i by itself, so no manual i = i + 1 is needed
    dy = pandas.io.excel.read_excel(workbook, i, engine='xlrd', index_col=None)
    df = df.append(dy)

df.index.name = "index"

df.columns = ['demography', 'area','state', 'month', 'rate', 'moe']

#Then just grab month = 'May'

may_mask = df['month'] == "May"
may_df = (df[may_mask])

#then delete some columns we dont need

may_df = may_df.drop('area', 1)
may_df = may_df.drop('month', 1)
may_df = may_df.drop('moe', 1)


print may_df.dtypes #uh oh, it sees 'rate' as type 'object', not 'float'.  Better change that.

# convert_objects' first positional argument is convert_dates, so passing
# 'rate' there is a bug; convert just the one column instead
may_df['rate'] = may_df['rate'].convert_objects(convert_numeric=True)

print may_df.dtypes #that's better

res = may_df.pivot_table('rate', 'state', 'demography')
print res.head()


#and this is going to spit out an array of Objects, each Object a state containing its demographics
res.reset_index().to_json("thejson.json", orient='records')
#and a .csv for good measure
res.reset_index().to_csv("thecsv.csv", orient='records', encoding="utf-8")
Maggie
  • Give us an example of your data, because I can't reproduce "trash" characters. – Vor Sep 11 '14 at 13:03
  • Doesn't even have to be your data. A simple, complete example that reproduces the problem is what we want: `df = pd.DataFrame({"A": ['a', '≥']}); df.to_csv('test.csv')`, works fine for me. Post your python version as well. – TomAugspurger Sep 11 '14 at 13:04
  • Huh, I tried @TomAugspurger's simple test but I get "SyntaxError: Non-ASCII character '\xe2' in file test.py on line 5, but no encoding declared; see http://www.python.org/peps/pep-0263.html for details". Needless to say, I don't understand the page they point me to. I mean, I understand I need to edit my python install … but I'm on deadline elsewhere now, you know? (See the sketch after these comments.) – Maggie Sep 11 '14 at 21:03
  • Either your python or your terminal encoding is set to expect only ASCII characters. You can read [here](http://stackoverflow.com/questions/3828723/why-we-need-sys-setdefaultencodingutf-8-in-a-py-script) for a way to set your encoding that may work as a *temporary* solution. – TomAugspurger Sep 12 '14 at 13:49
  • Yes I think that will have to do. I am scared to update to Python 3 in the middle of a project anyway. – Maggie Sep 14 '14 at 16:11
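For anyone who hits the SyntaxError from the comments above: on Python 2, a script that contains non-ASCII characters needs a PEP 263 encoding declaration on its first or second line. A minimal sketch:

# -*- coding: utf-8 -*-
# The line above tells the Python 2 parser how to decode non-ASCII
# bytes in this source file (PEP 263); without it, any byte >= 0x80,
# such as the first byte of the UTF-8 encoding of the ">=" sign,
# raises "SyntaxError: Non-ASCII character".
import pandas as pd

df = pd.DataFrame({"A": [u"a", u"≥"]})  # the u"" prefix keeps it unicode
df.to_csv("test.csv", encoding="utf-8")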

2 Answers


Your "bad" output is UTF-8 displayed as CP1252.

On Windows, many editors assume the default ANSI encoding (CP1252 on US Windows) instead of UTF-8 if there is no byte order mark (BOM) character at the start of the file. While a BOM is meaningless to the UTF-8 encoding, its UTF-8-encoded presence serves as a signature for some programs. For example, Microsoft Office's Excel requires it even on non-Windows OSes. Try:

df.to_csv('file.csv',encoding='utf-8-sig')

That encoder will add the BOM.
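For a picture of what is happening, here is a small sketch (the file name is just an example) showing the UTF-8 bytes of the ≥ sign misread as CP1252, and the three-byte signature that `utf-8-sig` prepends:

import pandas as pd

# The UTF-8 encoding of u"\u2265" (the ">=" sign) is the three bytes
# E2 89 A5; a program that assumes CP1252 decodes them as three
# separate junk characters.
print(u"\u2265".encode("utf-8").decode("cp1252"))  # needs a UTF-8 terminal

df = pd.DataFrame({"A": [u"\u2265"]})
df.to_csv("file.csv", encoding="utf-8-sig")

# The file now begins with the UTF-8-encoded BOM, EF BB BF, which
# Excel takes as a signature to decode the rest as UTF-8.
with open("file.csv", "rb") as f:
    print(repr(f.read(3)))  # '\xef\xbb\xbf' (b'\xef\xbb\xbf' on Python 3)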

Mark Tolonen
  • This solution, `encoding='utf-8-sig'`, worked for me. `encoding='utf-16'` should also work. – Wael Almadhoun Nov 10 '18 at 09:20
  • This issue was driving me crazy! Thank you very much for this awesome answer! Interestingly, `df.to_excel('file.csv')` generates an Excel file that has no issue with Excel. It seems this issue only pertains to CSV files... – gaborous Jan 02 '19 at 18:51
  • @gaborous CSV files are text files and need the encoded BOM hint for Excel to open them correctly. Did you mean `df.to_excel('file.xls')`? I get an error using `df.to_excel('file.csv')`. XLS and XLSX files are in an Excel format already, so Excel should definitely have no problem opening them. – Mark Tolonen Jan 03 '19 at 02:53
  • @MarkTolonen Yes for the file extension, that was a typo on my part. Indeed for the BOM, I learnt the hard way, but this is not obvious (why then offer the option to save as 'utf-8' into CSV without a BOM?). – gaborous Jan 11 '19 at 12:46
  • Would you happen to know if there's an encoding that will help with plus signs? The ones provided in and under your post did not work for me. – frlzjosh Aug 04 '20 at 01:13
  • @gaborous Most reasonable programs assume a text file with no BOM is UTF-8 – xjcl Nov 11 '20 at 08:36
  • @xjcl For some narrow definition of reasonable. Windows dealt with encodings and then went Unicode before UTF-8 was invented, and backward compatibility was important. – Mark Tolonen Nov 11 '20 at 08:45
  • Another thing to note: very long text in a single CSV field (around 32,760 characters?) sometimes doesn't fit into a cell and will spill into another row in an Excel sheet. If you use pandas' `df.to_excel`, the sheet will look OK but the text will be truncated. Also, if your text blocks contain line breaks, `quoting=csv.QUOTE_ALL` seems to be needed for correct parsing. And if you are in another locale, take care to use the correct separator; for German, a semicolon works best out of the box (see the sketch below). – E. Körner Jun 23 '21 at 07:06
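A sketch of the quoting and separator options mentioned in the last comment (the data is made up):

import csv
import pandas as pd

df = pd.DataFrame({"text": [u"line one\nline two", u"a;b"],
                   "rate": [32.7, 38.6]})

# QUOTE_ALL wraps every field in quotes so embedded line breaks and
# separators survive the round trip; sep=";" suits locales (such as
# German) where Excel expects semicolon-separated values.
df.to_csv("out.csv", encoding="utf-8-sig", sep=";",
          quoting=csv.QUOTE_ALL, index=False)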

`encoding='utf-8-sig'` does not work for me. Excel now reads the special characters fine, but the tab separators are gone! However, `encoding='utf-16'` does work correctly: special characters OK and tab separators work. This is the solution for me.
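A sketch of the UTF-16 route, assuming a tab-separated export (the separator still has to be passed explicitly; the file name is an example):

import pandas as pd

df = pd.DataFrame({"A": [u"\u2265"], "B": [1]})
df.to_csv("file.csv", sep="\t", encoding="utf-16", index=False)

# Read it back with the same encoding and separator, otherwise
# string columns can come back garbled or as NaN.
df2 = pd.read_csv("file.csv", sep="\t", encoding="utf-16")

This works with Excel because Python's utf-16 codec writes a BOM automatically, and Excel treats BOM-marked UTF-16 text as tab-delimited.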

germ
  • The ultimate lesson is that you need to understand the basics of encodings to be able to specify the correct one. Perhaps see the [Stack Overflow `character-encoding` tag info page](http://stackoverflow.com/tags/character-encoding/info) which contains a brief intro with pointers to more information. – tripleee Mar 27 '22 at 06:27
  • `utf-16` didn't work in my case. Hit _AttributeError: Can only use .str accessor with string values!_ once opened at `df[name].str.contains(regx, regex=True, na=False)`. Back to 'utf-8-sig' here on Windows. – gseattle May 07 '22 at 00:33