
I have a CSV file with many columns containing values like

"4.2515014131285567e-001"

Pandas reads them in as object dtype, so calculations don't work as expected.

For example, multiplying by 2 gives me:

"4.2515014131285567e-0014.2515014131285567e-001"

How can I treat these values as numbers and do some math with them?

I tried setting `dtype=str`, `dtype=float`, and similar options, but nothing worked.

Nobody
  • Looks somewhat similar to this: https://stackoverflow.com/q/17737300/4636715 – vahdet Jun 23 '19 at 12:28
  • If you only have columns with values like your example. Try `df = df.astype(float)` And then try your calculations – Erfan Jun 23 '19 at 12:35
  • thanks, I'm facing a problem: the table contains some strings that I would like to delete. The block starts with a row containing "info", and I would like to delete that row and every row below it. How can I do that? From the top I can use "skiprows"; how can I do it from the bottom? And how can I skip a specific column? – Nobody Jun 23 '19 at 12:58
  • Okay, I found a solution for deleting rows. Now df.astype(float) works :-) – Nobody Jun 23 '19 at 13:14
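
The comments above sketch the whole fix. Below is a minimal illustration, assuming a hypothetical file `data.csv` with a trailing non-data block and one unwanted column; the file name, the footer length of 3, and the column name `info_col` are made-up placeholders.

import pandas as pd

# 'data.csv', skipfooter=3 and 'info_col' are placeholder values for this sketch.
df = pd.read_csv('data.csv',
                 skipfooter=3,                             # drop trailing non-data rows (needs the python engine)
                 engine='python',
                 usecols=lambda name: name != 'info_col')  # skip one specific column by name

# Convert the remaining string columns to float, as suggested in the comments.
df = df.astype(float)
print(df.iloc[:, 0] * 2)   # arithmetic now doubles the values instead of repeating strings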

2 Answers


With some pre-processing, you can detect non-float records before the import and convert the remaining data to float as it is read in.

Initial dataset in test.txt:

Math
4.2515014131285567e-001
asdas
123123
asdasd124
123
125423414asd

This helper tests whether a value can be parsed as a float; the loop then builds the list of line numbers to skip.

import pandas as pd

def isFloat(val):
    try:
        float(val)
        return True
    except ValueError:
        return False

with open('test.txt', 'r') as f:
    skiplines = []
    for i, v in enumerate(f.readlines()):
        if not isFloat(v.split(',')[0]):
            skiplines.append(i)

# We want to keep the column header, even though it is not a float.
del skiplines[0]

converter = {'Math': lambda x: float(x)}
df = pd.read_csv('test.txt', converters=converter, skiprows=skiplines)

The lambda could simply be replaced by a data type. I like demonstrating converters because you can easily round or apply other logic there if you need to.
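
For example, both variants might look like this (using the Math column from the sample file; the rounding precision is an arbitrary choice):

# Variant 1: just declare the target type.
converter = {'Math': float}

# Variant 2: apply extra logic, e.g. round to 4 decimal places.
converter = {'Math': lambda x: round(float(x), 4)}

df = pd.read_csv('test.txt', converters=converter, skiprows=skiplines)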

The final DataFrame looks as expected (note the trailing zeros, since I have not set a display format):

print(df)
           Math
0       0.42515
1  123123.00000
2     123.00000
krewsayder

Try the following test:

Create the following DataFrame, using read_csv, but from a text buffer:

from io import StringIO
import pandas as pd

txt = '''c1,c2,c3
Xxxxx,4.2515014131285567e-001,4.2515014131285555e-001
Yyyyy,4.2515014131284444e-001,4.2515014131283333e-001
Zzzzz,4.2515014131282222e-001,4.2515014131281111e-001'''

df = pd.read_csv(StringIO(txt))

Then check the column types with df.info(). For both the c2 and c3 columns you should see the float64 type.

If you execute df.c2 * 2, you should get the doubled values. Don't worry about the smaller number of decimal digits shown; that is just a matter of Pandas display options.
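
If you want to see more digits, one option among others is to raise the display precision; a small sketch, where the value 16 is an arbitrary choice:

pd.set_option('display.precision', 16)   # show more decimal digits when printing
print(df.c2 * 2)                         # the doubled values, now with more digits visible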

You can display an individual number with almost full precision, using df.loc[0, 'c2'] (I got 0.4251501413128557).

You should get the same results even if the numbers are surrounded by e.g. double quotes.

So far so good, but now try a second test:

In row 3, column c2, remove the e in front of -001, so that the value becomes 4.2515014131282222-001, and run read_csv again.

The changed value is no longer a properly formatted float, so read_csv falls back to the object type (actually strings) for the c2 column; you can confirm this with df.info().
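
A sketch of this second test, reusing StringIO from above and removing the e from the last c2 value:

txt_bad = '''c1,c2,c3
Xxxxx,4.2515014131285567e-001,4.2515014131285555e-001
Yyyyy,4.2515014131284444e-001,4.2515014131283333e-001
Zzzzz,4.2515014131282222-001,4.2515014131281111e-001'''

df_bad = pd.read_csv(StringIO(txt_bad))
print(df_bad.dtypes)   # c2 is now object, c3 is still float64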

My assumption is that somewhere in your file the format of a number is "corrupted" in this way, and that alone prevents read_csv from reading the column as float.

To find the place where this error originates, run:

df.c2 = pd.to_numeric(df.c2, errors='coerce')

(replacing c2 with the proper column name) and then look in this column for NaN values.

Then look at the corresponding row in the input file and correct the error.
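
For example, a minimal sketch of locating the offending rows (again with c2 standing in for your column name):

df.c2 = pd.to_numeric(df.c2, errors='coerce')

# Rows where the conversion failed are NaN; their index points back
# to the corrupted lines in the input file.
print(df[df.c2.isna()])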

Alternative: df.dropna(inplace=True) removes each row containing NaN in any column. You may also add the subset=['column_name'] parameter to drop rows with NaN in just that one column.
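
For example (with c2 again as the placeholder column name):

# Drop only the rows whose c2 value could not be converted.
df.dropna(subset=['c2'], inplace=True)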

Valdi_Bo