75

Basically, I have a bunch of data where the first column is a string (label) and the remaining columns are numeric values. I run the following:

data = numpy.genfromtxt('data.txt', delimiter = ',')

This reads most of the data well, but the label column just gets 'nan'. How can I deal with this?

Jim
  • 4,509
  • 16
  • 50
  • 80

6 Answers

77

By default, np.genfromtxt uses dtype=float; that's why your string columns are converted to NaNs: after all, they're Not A Number...

You can ask np.genfromtxt to try to guess the actual type of your columns by using dtype=None:

>>> from StringIO import StringIO
>>> test = "a,1,2\nb,3,4"
>>> a = np.genfromtxt(StringIO(test), delimiter=",", dtype=None)
>>> a
array([('a', 1, 2), ('b', 3, 4)],
      dtype=[('f0', '|S1'), ('f1', '<i8'), ('f2', '<i8')])

You can access the columns by using their name, like a['f0']...
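On Python 3 the same idea works; StringIO now lives in the io module, and passing an explicit encoding keeps the labels as str (on some NumPy versions they would otherwise come back as bytes). A minimal sketch:

from io import StringIO   # Python 3: StringIO lives in io
import numpy as np

test = "a,1,2\nb,3,4"
a = np.genfromtxt(StringIO(test), delimiter=",", dtype=None, encoding="utf-8")
print(a['f0'])   # ['a' 'b']
print(a['f1'])   # [1 3]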

Using dtype=None is a good trick if you don't know what your columns should be. If you already know what type they should have, you can give an explicit dtype. For example, in our test, we know that the first column is a string, the second an int, and we want the third to be a float. We would then use

>>> np.genfromtxt(StringIO(test), delimiter=",", dtype=("|S10", int, float))
array([('a', 1, 2.0), ('b', 3, 4.0)], 
      dtype=[('f0', '|S10'), ('f1', '<i8'), ('f2', '<f8')])

Using an explicit dtype is much more efficient than using dtype=None and is the recommended way.

In both cases (dtype=None or explicit, non-homogeneous dtype), you end up with a structured array.
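If the automatic names f0, f1, ... are inconvenient, you can also name the fields yourself in the dtype. A quick sketch (Python 3):

from io import StringIO
import numpy as np

test = "a,1,2\nb,3,4"
a = np.genfromtxt(StringIO(test), delimiter=",",
                  dtype=[('label', 'U10'), ('x', int), ('y', float)],
                  encoding="utf-8")
print(a['label'])   # ['a' 'b']
print(a['y'])       # [2. 4.]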

[Note: With dtype=None, the input is parsed a second time and the type of each column is upgraded to the largest type needed: we first try a bool, then an int, then a float, then a complex, and fall back to a string if all else fails. The implementation is rather clunky, actually. There have been some attempts to make the type guessing more efficient (using regexps), but nothing has stuck so far]

Pierre GM
  • 19,809
  • 3
  • 56
  • 67
  • This is so much easier than the experiments with `dtype` I was trying it isn't even funny. :^) I was off building a `dict` rather than using `None`.. *sigh* – DSM Sep 07 '12 at 14:57
  • Lots of people were arguing for `dtype=None` as the default. It would have broken backwards compatibility, though, so we stayed with `dtype=float`. Yes, it's rather powerful, but it's a performance hit... – Pierre GM Sep 07 '12 at 15:06
  • 1
    @PierreGM: maybe `Sniffer(**hints).sniff_dtype(sample)` could be an efficient solution: no need to read all input twice or hardcode dtype. – jfs Sep 07 '12 at 15:20
  • Cool idea, I need to go and check that. Anyway, there's still work to do on `np.genfromtxt`. Quotes are not properly dealt with, for example... – Pierre GM Sep 07 '12 at 15:23
  • In Python 3 it's from io import StringIO, not from StringIO import StringIO. – Toma Mar 06 '21 at 14:43
42

If your data file is structured like this

col1, col2, col3
   1,    2,    3
  10,   20,   30
 100,  200,  300

then numpy.genfromtxt can interpret the first line as column headers using the names=True option. With this you can access the data very conveniently by providing the column header:

data = np.genfromtxt('data.txt', delimiter=',', names=True)
print data['col1']    # array([   1.,   10.,  100.])
print data['col2']    # array([   2.,   20.,  200.])
print data['col3']    # array([   3.,   30.,  300.])

Since in your case the data is structured like this

row1,   1,  10, 100
row2,   2,  20, 200
row3,   3,  30, 300

you can achieve something similar using the following code snippet:

labels = np.genfromtxt('data.txt', delimiter=',', usecols=0, dtype=str)
raw_data = np.genfromtxt('data.txt', delimiter=',')[:,1:]
data = {label: row for label, row in zip(labels, raw_data)}

The first line reads the first column (the labels) into an array of strings. The second line reads all the data from the file but discards the first column. The third line uses a dictionary comprehension to build a dictionary that can be used much like the structured array numpy.genfromtxt creates with the names=True option:

print data['row1']    # array([   1.,   10.,  100.])
print data['row2']    # array([   2.,   20.,  200.])
print data['row3']    # array([   3.,   30.,  300.])
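If you would rather read the file only once, a sketch along the same lines is to pull everything in as strings and split afterwards:

import numpy as np

# Read the whole file as strings, then separate labels from numeric values.
raw = np.genfromtxt('data.txt', delimiter=',', dtype=str)
labels = raw[:, 0]
values = raw[:, 1:].astype(float)
data = {label: row for label, row in zip(labels, values)}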
felerian
  • 653
  • 5
  • 8
11

data=np.genfromtxt(csv_file, delimiter=',', dtype='unicode')

It works fine for me.
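Note that with dtype='unicode' every column comes back as text, so any numeric columns need an explicit conversion afterwards, for example:

import numpy as np

data = np.genfromtxt('data.txt', delimiter=',', dtype='unicode')
labels = data[:, 0]                  # first column: the string labels
values = data[:, 1:].astype(float)   # remaining columns, converted back to numbers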

Sridhar
  • 1,832
  • 3
  • 23
  • 44
郑自强
  • 121
  • 2
  • 5
2

For a dataset of this format:

CONFIG000   1080.65 1080.87 1068.76 1083.52 1084.96 1080.31 1081.75 1079.98
CONFIG001   414.6   421.76  418.93  415.53  415.23  416.12  420.54  415.42
CONFIG010   1091.43 1079.2  1086.61 1086.58 1091.14 1080.58 1076.64 1083.67
CONFIG011   391.31  392.96  391.24  392.21  391.94  392.18  391.96  391.66
CONFIG100   1067.08 1062.1  1061.02 1068.24 1066.74 1052.38 1062.31 1064.28
CONFIG101   371.63  378.36  370.36  371.74  370.67  376.24  378.15  371.56
CONFIG110   1060.88 1072.13 1076.01 1069.52 1069.04 1068.72 1064.79 1066.66
CONFIG111   350.08  350.69  352.1   350.19  352.28  353.46  351.83  350.94

This code works for my application:

import numpy

def ShowData(data, names):
    # Print every value of each row under its label.
    for i in range(data.shape[0]):
        print(names[i] + ": ")
        for j in range(data.shape[1]):
            print(data[i][j])
        print("")

def Main():
    print("The sample data is: ")
    fname = 'ANOVA.csv'
    # First pass: read everything as strings to get the shape and the label column.
    csv = numpy.genfromtxt(fname, dtype=str, delimiter=",")
    num_rows = csv.shape[0]
    num_cols = csv.shape[1]
    names = csv[:, 0]
    # Second pass: read only the numeric columns (everything but column 0).
    data = numpy.genfromtxt(fname, usecols=range(1, num_cols), delimiter=",")
    print(names)
    print(str(num_rows) + "x" + str(num_cols))
    print(data)
    ShowData(data, names)

Main()

Python-2 output:

The sample data is:
['CONFIG000' 'CONFIG001' 'CONFIG010' 'CONFIG011' 'CONFIG100' 'CONFIG101'
 'CONFIG110' 'CONFIG111']
8x9
[[ 1080.65  1080.87  1068.76  1083.52  1084.96  1080.31  1081.75  1079.98]
 [  414.6    421.76   418.93   415.53   415.23   416.12   420.54   415.42]
 [ 1091.43  1079.2   1086.61  1086.58  1091.14  1080.58  1076.64  1083.67]
 [  391.31   392.96   391.24   392.21   391.94   392.18   391.96   391.66]
 [ 1067.08  1062.1   1061.02  1068.24  1066.74  1052.38  1062.31  1064.28]
 [  371.63   378.36   370.36   371.74   370.67   376.24   378.15   371.56]
 [ 1060.88  1072.13  1076.01  1069.52  1069.04  1068.72  1064.79  1066.66]
 [  350.08   350.69   352.1    350.19   352.28   353.46   351.83   350.94]]
CONFIG000:
1080.65
1080.87
1068.76
1083.52
1084.96
1080.31
1081.75
1079.98

CONFIG001:
414.6
421.76
418.93
415.53
415.23
416.12
420.54
415.42

CONFIG010:
1091.43
1079.2
1086.61
1086.58
1091.14
1080.58
1076.64
1083.67

CONFIG011:
391.31
392.96
391.24
392.21
391.94
392.18
391.96
391.66

CONFIG100:
1067.08
1062.1
1061.02
1068.24
1066.74
1052.38
1062.31
1064.28

CONFIG101:
371.63
378.36
370.36
371.74
370.67
376.24
378.15
371.56

CONFIG110:
1060.88
1072.13
1076.01
1069.52
1069.04
1068.72
1064.79
1066.66

CONFIG111:
350.08
350.69
352.1
350.19
352.28
353.46
351.83
350.94
mancoast
  • 31
  • 2
0

You can use numpy.recfromcsv(filename): the type of each column will be determined automatically (as if you used np.genfromtxt() with dtype=None), and delimiter="," is the default. It's basically a shortcut for the np.genfromtxt(filename, delimiter=",", dtype=None) call that Pierre GM pointed out in his answer, except that it returns a record array.
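For instance, assuming your NumPy version still provides recfromcsv, and a made-up CSV that has a header row (recfromcsv reads the field names from the first line by default):

from io import StringIO
import numpy as np

sample = "label,x,y\na,1,2.5\nb,3,4.5"            # made-up CSV with a header row
rec = np.recfromcsv(StringIO(sample), encoding="utf-8")
print(rec.label)   # ['a' 'b']   (fields are also available as rec['label'])
print(rec.y)       # [2.5 4.5]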

Franck Dernoncourt
  • 77,520
  • 72
  • 342
  • 501
0

Here is a working example from start to finish.

Suppose I want to import the numbers from a file, skipping its first line:

I like trains       # the first line is just a string and should be skipped
1 \t 2 \t 3         # \t marks that the delimiter (separator) is a tab, not a comma
4 \t 5 \t 6

Then running the following code:

import numpy as np              # provides genfromtxt
import matplotlib.pyplot as plt # only needed if you also want to plot the data
from pathlib import Path        # handy when juggling many files in the same folder

path = r'some_path'             # folder holding your files, e.g. r'C:\my comp\folder\folder2'; the r prefix keeps the backslashes literal
fileNames = [r'\I like trains.txt',
             r'\die potato.txt']

data = np.genfromtxt(path + fileNames[0], delimiter='\t', skip_header=1)

Produces this result:

data = array([[1., 2., 3.],
              [4., 5., 6.]])

where each number has its own cell and can be accessed individually.
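For example, with the data array above:

print(data[0, 2])   # 3.0  (row 0, column 2)
print(data[1])      # [4. 5. 6.]  (the whole second row)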

Mime
  • 1,142
  • 1
  • 9
  • 20
Toma
  • 163
  • 1
  • 10
  • A for loop can be used to go through all the file names, and a list comprehension can build one big list of lists, but that gets confusing when dealing with lots of files, so giving each file its own list can make things easier to understand. – Toma Mar 06 '21 at 15:15