
I used to read my data with numpy.loadtxt(). However, I recently found out on SO that pandas.read_csv() is much faster.

To read these data I use:

pd.read_csv(filename, sep=' ',header=None)

The problem I am running into is that the separator can vary: a single space, several spaces, or even a tab.

Here is how my data can look:

56.00     101.85 52.40 101.85 56.000000 101.850000 1
56.00 100.74 50.60 100.74 56.000000 100.740000 2
56.00 100.74 52.10 100.74 56.000000 100.740000 3
56.00 102.96 52.40 102.96 56.000000 102.960000 4
56.00 100.74 55.40 100.74 56.000000 100.740000 5

That leads to results like:

     0       1     2       3     4       5   6       7   8
0   56     NaN   NaN  101.85  52.4  101.85  56  101.85   1
1   56  100.74  50.6  100.74  56.0  100.74   2     NaN NaN
2   56  100.74  52.1  100.74  56.0  100.74   3     NaN NaN
3   56  102.96  52.4  102.96  56.0  102.96   4     NaN NaN
4   56  100.74  55.4  100.74  56.0  100.74   5     NaN NaN

I should mention that my data files are >100 MB, so I cannot preprocess or clean them first. Any ideas how to fix this?

Tengis

1 Answer


Your original line:

pd.read_csv(filename, sep=' ',header=None)

specifies the separator as a single space. Because your CSVs can contain spaces or tabs, you can pass a regular expression to the sep param like so:

pd.read_csv(filename, sep=r'\s+', header=None)

This defines the separator as one or more whitespace characters. There is a handy cheatsheet that lists regular expressions.
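
As a minimal, self-contained sketch of the same idea (the io.StringIO buffer below just stands in for your file; with your real data you would pass the filename instead):

import io
import pandas as pd

# Sample rows from the question, deliberately mixing runs of spaces and tabs.
raw = (
    "56.00     101.85 52.40 101.85 56.000000 101.850000 1\n"
    "56.00 100.74 50.60 100.74 56.000000 100.740000 2\n"
    "56.00\t100.74\t52.10\t100.74\t56.000000\t100.740000\t3\n"
)

# r'\s+' treats any run of whitespace (spaces or tabs) as one separator,
# so every row parses into the same seven columns.
df = pd.read_csv(io.StringIO(raw), sep=r'\s+', header=None)
print(df.shape)  # (3, 7)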

EdChum