
I have a data frame with alphanumeric keys which I want to save as a CSV and read back later. For various reasons I need to explicitly read this key column in string format: I have keys which are strictly numeric, or even worse, things like 1234E5, which Pandas interprets as a float. This obviously makes the key completely useless.

The problem is that when I specify a string dtype for the data frame or any column of it, I just get garbage back. Here is some example code:

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(2, 2),
                  index=['1A', '1B'],
                  columns=['A', 'B'])
df.to_csv(savefile)

The data frame looks like:

           A         B
1A  0.209059  0.275554
1B  0.742666  0.721165

Then I read it like so:

df_read = pd.read_csv(savefile, dtype=str, index_col=0)

and the result is:

   A  B
B  (  <

Is this a problem with my computer, or something I'm doing wrong here, or just a bug?
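For reference, the misparse described above can be reproduced without a file, using io.StringIO as a stand-in (hypothetical one-column data):

```python
import io

import pandas as pd

# A key like "1234E5" looks like scientific notation to the type sniffer,
# so the default reader turns it into a float.
csv = io.StringIO("key,A\n1234E5,0.5\n")
df = pd.read_csv(csv)

print(df["key"].iloc[0])    # 123400000.0 -- the key is no longer usable
print(df["key"].dtype)      # float64
```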

sophros
daver
  • It would be good if you could say the 'various reasons' why you want to save it as a string. The context might be helpful for finding a more elegant solution. – Sami Start Feb 21 '19 at 09:02
  • Does this answer your question? [Import pandas dataframe column as string not int](https://stackoverflow.com/questions/13293810/import-pandas-dataframe-column-as-string-not-int) – sophros Aug 18 '20 at 11:56
  • Does this answer your question? [Get pandas.read\_csv to read empty values as empty string instead of nan](https://stackoverflow.com/questions/10867028/get-pandas-read-csv-to-read-empty-values-as-empty-string-instead-of-nan) – dank8 Mar 03 '23 at 01:58

5 Answers


Update: this has been fixed: from 0.11.1, passing str/np.str is equivalent to using object.

Use the object dtype:

In [11]: pd.read_csv('a', dtype=object, index_col=0)
Out[11]:
                      A                     B
1A  0.35633069074776547     0.745585398803751
1B  0.20037376323337375  0.013921830784260236

or better yet, just don't specify a dtype:

In [12]: pd.read_csv('a', index_col=0)
Out[12]:
           A         B
1A  0.356331  0.745585
1B  0.200374  0.013922

but bypassing the type sniffer and truly returning only strings requires a hacky use of converters:

In [13]: pd.read_csv('a', converters={i: str for i in range(100)})
Out[13]:
                      A                     B
1A  0.35633069074776547     0.745585398803751
1B  0.20037376323337375  0.013921830784260236

where 100 is some number equal to or greater than your total number of columns.

It's best to avoid the str dtype, see for example here.
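If the hard-coded 100 feels fragile, one alternative (an assumption on my part, not something this answer proposes) is to read just the header first and build the converters dict from the actual column names:

```python
import io

import pandas as pd

csv_text = "key,A,B\n1234E5,0.1,0.2\n1B,0.3,0.4\n"

# nrows=0 reads only the header row, giving us the exact column names.
cols = pd.read_csv(io.StringIO(csv_text), nrows=0).columns

# Now every real column gets the str converter -- no magic number needed.
df = pd.read_csv(io.StringIO(csv_text), converters={c: str for c in cols})

print(df["key"].tolist())   # ['1234E5', '1B'] -- keys survive as strings
```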

Chris Conlan
Andy Hayden
  • I think that ``read_csv`` should a) raise on an invalid passed dtype, and b) just translate ``str`` dtype to ``object``; open an issue? – Jeff Jun 07 '13 at 16:26
  • Thank you, I'll try that. I already mentioned I can't just read it in without specifying a type; Pandas keeps taking numeric keys which I need to be strings and parsing them as floats. Like I said in the example, a key like 1234E5 is taken as 1234.0x10^5, which doesn't help me in the slightest when I go to look it up. – daver Jun 07 '13 at 19:00
  • @daver this is fixed in 0.11.1 when it comes out (soon). Thanks! – Andy Hayden Jun 07 '13 at 21:54
  • Simply including `dtype=object` (and not index_col) has fixed all my issues of leading zeroes disappearing. – elPastor Nov 08 '16 at 00:13
  • Setting `dtype` is not enough. For example, `df.applymap(lambda x: x.strip())` throws an error `AttributeError: ("'float' object has no attribute 'strip'", 'occurred at index A')` because pandas coerces `object` to `float` somewhere on the way. – Anton Tarasenko Jun 01 '17 at 16:33
  • @AntonTarasenko This is not a coercion per se. Empty fields are represented by np.nan, which is a float. You can change this behavior via the `na_filter=False` parameter of `read_csv`. – Sebastiaan Aug 28 '18 at 14:45
  • Thanks, it worked for me: after using the `dtype=object` parameter it read the data as strings, so the numbers and everything uploaded exactly as they were. – Osama Al-Banna Jul 28 '21 at 07:56
  • If you have strings such as N/A you will want to add na_filter=False to prevent pandas from filtering on them – Justin Furuness Jun 21 '22 at 22:57
  • `pd.read_csv(sourceObj, dtype='string')`: nothing else is required; empty strings are kept as strings. Version: Pandas v1.5 – dank8 Mar 03 '23 at 01:54

Nowadays (pandas==1.0.5) it just works.

pd.read_csv(f, dtype=str) will read everything as strings, except for NaN values.

Here is the list of values that will be parsed as NaN: empty string, '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan', '1.#IND', '1.#QNAN', '', 'N/A', 'NA', 'NULL', 'NaN', 'n/a', 'nan', 'null'.

If you don't want these strings to be parsed as NaN, use na_filter=False.
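A minimal sketch of both behaviours on a recent pandas (in-memory CSV via io.StringIO rather than a file):

```python
import io

import pandas as pd

csv = "key,A\n1234E5,NA\n"

# dtype=str keeps the key verbatim, but "NA" is still converted to NaN.
df = pd.read_csv(io.StringIO(csv), dtype=str)
print(df["key"].iloc[0])        # 1234E5
print(df["A"].isna().iloc[0])   # True

# na_filter=False disables NaN detection, so "NA" survives as a string.
df2 = pd.read_csv(io.StringIO(csv), dtype=str, na_filter=False)
print(repr(df2["A"].iloc[0]))   # 'NA'
```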

julesl

Like Anton T said in his comment, pandas will randomly turn object types into float types using its type sniffer, even if you pass dtype=object, dtype=str, or dtype=np.str.

Since you can pass a dictionary of functions where the key is a column index and the value is a converter function, you can do something like this (e.g. for 100 columns).

pd.read_csv('some_file.csv', converters={i: str for i in range(0, 100)})

You can even pass range(0, N) for N much larger than the number of columns if you don't know how many columns you will read.
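A quick demonstration with made-up data, showing that index-keyed converters preserve both 1234E5-style keys and leading zeros, and that unused indices in the dict are harmless:

```python
import io

import pandas as pd

csv = "A,B\n1234E5,007\n"

# Extra indices in the dict are simply ignored, so overshooting is harmless.
df = pd.read_csv(io.StringIO(csv), converters={i: str for i in range(100)})

print(df.iloc[0].tolist())  # ['1234E5', '007']
```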

Chris Conlan

Use a converter that applies to any column if you don't know the columns beforehand:

import pandas as pd

class StringConverter(dict):
    # pandas probes the converters mapping with `in`, `[]` (and sometimes
    # `.get`), so answer str for every possible key.
    def __contains__(self, item):
        return True

    def __getitem__(self, item):
        return str

    def get(self, key, default=None):
        return str

pd.read_csv(file_or_buffer, converters=StringConverter())
danielrs

Many of the above answers are fine, but none are very elegant or universal. If you want to read all of the columns as strings, you can use the following construct without caring about the number of columns.

from collections import defaultdict
import pandas as pd

pd.read_csv(file_or_buffer, converters=defaultdict(lambda: str))

The defaultdict will return str for every key looked up in converters. Note that default_factory is called with no arguments (so the lambda must take none), and that a defaultdict only creates entries on direct indexing, not on `in` membership checks, so depending on how your pandas version probes converters this may silently have no effect.

sophros
  • Seems to provide no impact. – Sn3akyP3t3 Jan 15 '21 at 20:03
  • @Sn3akyP3t3: how do you know it wasn't for the version of `pandas` affected by the issue in the question? How do you know it is not working - what is the contradicting result you are getting? – sophros Jan 15 '21 at 22:08
  • I can confirm that this example only works in some cases. I applied this earlier in the week and it definitely worked. However; i then found another case, applied this and it had no effect. (Only a 3 column df) I went with the "StringConverter" class option also mentioned in this thread and it worked perfectly. Why? I dunno, but thats what happened. – Codek Nov 15 '21 at 16:45
  • @Codek: were the versions of Python / pandas any different between the runs or only different data? – sophros Nov 15 '21 at 17:28
  • Well actually that's an excellent point. The new project where the same workaround didn't work could be a subtly different version; I'll check it tomorrow! I'd certainly love to understand the why of this weirdness!! – Codek Nov 16 '21 at 18:46