The pandas read_csv() function interprets the string 'NA' as NaN (not a number) rather than as a valid string.
In the simple case below, note that the output in row 1, column 2 (zero-based) is nan instead of 'NA'.
sample.tsv (tab-delimited)
PDB CHAIN SP_PRIMARY RES_BEG RES_END PDB_BEG PDB_END SP_BEG SP_END
5d8b N P60490 1 146 1 146 1 146
5d8b NA P80377 1 126 1 126 1 126
5d8b O P60491 1 118 1 118 1 118
read_sample.py
import pandas as pd
df = pd.read_csv(
    'sample.tsv',
    sep='\t',
    encoding='utf-8',
)
for df_tuples in df.itertuples(index=True):
    print(df_tuples)
output
(0, u'5d8b', u'N', u'P60490', 1, 146, 1, 146, 1, 146)
(1, u'5d8b', nan, u'P80377', 1, 126, 1, 126, 1, 126)
(2, u'5d8b', u'O', u'P60491', 1, 118, 1, 118, 1, 118)
Additional Information
Rewriting the file with quoted values in the 'CHAIN' column and passing the quotechar parameter quotechar='\'' has the same result. Passing a dictionary of column types via the dtype parameter dtype=dict(valid_cols) does not change the result either; both attempts are sketched below.
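For concreteness, a minimal sketch of those two attempts (sample_quoted.tsv is a hypothetical copy of the file with quoted CHAIN values, and the dtype mapping shown is illustrative):

import pandas as pd

# Attempt 1: quote the CHAIN values in the file and pass the quote
# character -- 'NA' is still converted to NaN once the quotes are stripped.
df1 = pd.read_csv('sample_quoted.tsv', sep='\t', quotechar='\'')

# Attempt 2: force the CHAIN column to a string dtype -- the NA-value
# conversion happens during parsing, before the dtype is applied,
# so the result is unchanged.
df2 = pd.read_csv('sample.tsv', sep='\t', dtype={'CHAIN': str})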
An old answer to Prevent pandas from automatically inferring type in read_csv suggests first parsing the file into a numpy record array, but now that column dtypes can be specified in read_csv, this shouldn't be necessary.
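For reference, that record-array approach would look roughly like the sketch below; np.genfromtxt performs no 'NA' detection by default, so the literal string survives (though under Python 3 the inferred strings come back as bytes):

import numpy as np
import pandas as pd

# genfromtxt infers a dtype per column (dtype=None) and, unlike read_csv,
# does not treat the string 'NA' as a missing-value marker.
records = np.genfromtxt('sample.tsv', delimiter='\t', names=True, dtype=None)
df = pd.DataFrame(records)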
Note that itertuples() is used to preserve dtypes, as described in the iterrows documentation: "To preserve dtypes while iterating over the rows, it is better to use itertuples() which returns tuples of the values and which is generally faster as iterrows."
The example was tested on Python 2 and 3 with pandas versions 0.16.2, 0.17.0, and 0.17.1.
Is there a way to capture the valid string 'NA' instead of having it converted to NaN?
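For what it's worth, read_csv also documents keep_default_na and na_values parameters, which sound relevant here; would something along these lines be the intended approach? A sketch, untested against the versions above:

import pandas as pd

# keep_default_na=False drops the built-in list of NA markers (which
# includes the string 'NA'); na_values then opts back in only the markers
# that should actually mean "missing" -- here, empty fields.
df = pd.read_csv(
    'sample.tsv',
    sep='\t',
    encoding='utf-8',
    keep_default_na=False,
    na_values=[''],
)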