
I have the following df in pandas.

0       A     B     C
1       2   NaN     8

How can I check if df.iloc[1]['B'] is NaN?

I tried using df.isnan() and I get a table like this:

0       A      B      C
1   False   True  False

but I am not sure how to index the resulting table, or whether this is an efficient way of doing the job at all.

Newskooler

3 Answers


Use pd.isnull; for selection use loc or iloc:

print (df)
   0  A   B  C
0  1  2 NaN  8

print (df.loc[0, 'B'])
nan

a = pd.isnull(df.loc[0, 'B'])
print (a)
True

print (df['B'].iloc[0])
nan

a = pd.isnull(df['B'].iloc[0])
print (a)
True
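As a self-contained check (the single-row frame below mirrors the one in the question), note that a plain equality test cannot detect NaN, while pd.isnull can:

```python
import numpy as np
import pandas as pd

# Rebuild the frame from the question: column B holds NaN in the only row
df = pd.DataFrame({'A': [2], 'B': [np.nan], 'C': [8]})

# NaN compares unequal to everything, including itself,
# so == is useless for this check
print(df.loc[0, 'B'] == np.nan)    # False

# pd.isnull (alias pd.isna) detects the missing scalar
print(pd.isnull(df.loc[0, 'B']))   # True
```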
jezrael

jezrael's response is spot on. If you are only concerned with whether any NaN value exists, I explored whether there is a faster option, since in my experience summing flat arrays is (strangely) faster than counting. This code seems faster:

df.isnull().values.any()

For example:

In [2]: df = pd.DataFrame(np.random.randn(1000,1000))

In [3]: df[df > 0.9] = np.nan

In [4]: %timeit df.isnull().any().any()
100 loops, best of 3: 14.7 ms per loop

In [5]: %timeit df.isnull().values.sum()
100 loops, best of 3: 2.15 ms per loop

In [6]: %timeit df.isnull().sum().sum()
100 loops, best of 3: 18 ms per loop

In [7]: %timeit df.isnull().values.any()
1000 loops, best of 3: 948 µs per loop
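All four expressions answer the same yes/no question; the speedup comes from reducing the raw ndarray once instead of column by column. A quick sanity sketch (seeded data, same NaN-injection step as the timings above):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)  # seeded so the frame is reproducible
df = pd.DataFrame(rng.standard_normal((1000, 1000)))
df[df > 0.9] = np.nan  # same NaN-injection step as in the timings

mask = df.isnull()
# The fast flat-array path must agree with the per-column reductions
assert bool(mask.values.any()) == bool(mask.any().any())
assert (mask.values.sum() > 0) == bool(mask.values.any())
```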
ankur09011

If you are looking for the indexes of NaN values in a specific column, you can use

list(df['B'].index[df['B'].apply(np.isnan)])

In case you want to get the indexes of all possible NaN values in the dataframe, you may do the following

row_col_indexes = list(map(list, np.where(np.isnan(np.array(df)))))
indexes = []
for i in zip(row_col_indexes[0], row_col_indexes[1]):
    indexes.append(list(i))

And if you are looking for a one-liner you can use:

list(zip(*[x for x in list(map(list, np.where(np.isnan(np.array(df)))))]))
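On a small hypothetical frame (the column names A/B/C are just for illustration), both recipes can be exercised like this; the int() casts only normalise NumPy scalars for printing:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, 4.0],
                   'B': [np.nan, 5.0],
                   'C': [3.0, np.nan]})

# Row labels where column B is NaN
nan_rows_b = [int(i) for i in df['B'].index[df['B'].apply(np.isnan)]]
print(nan_rows_b)  # [0]

# (row, col) positions of every NaN in the frame
positions = [(int(r), int(c))
             for r, c in zip(*np.where(np.isnan(df.to_numpy())))]
print(positions)  # [(0, 1), (1, 2)]
```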
Loochie