
How can I create a pandas DataFrame column with dtype bool (or int, for that matter) that supports NaN/missing values?

When I try it like this:

import numpy as np
import pandas as pd

d = {'one': np.ma.MaskedArray([True, False, True, True], mask=[0, 0, 1, 0]),
     'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df.dtypes)
print(df)

column one is implicitly converted to object. The same happens for ints:

d = {'one': np.ma.MaskedArray([1, 3, 2, 1], mask=[0, 0, 1, 0]),
     'two': pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(d)
print(df.dtypes)
print(df)

Here, one is implicitly converted to float64, and I'd prefer to stay in the integer domain rather than deal with floating-point arithmetic and its idiosyncrasies (tolerances when comparing, rounding errors, etc.).
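
For example, two of the floating-point idiosyncrasies alluded to above (a small self-contained illustration):

print(0.1 + 0.2 == 0.3)               # False: float comparisons need a tolerance
print(float(2**53 + 1) == 2**53 + 1)  # False: large ints are not exactly representable as float64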

  • Pandas doc explains why it's not possible to do what you're asking: http://pandas.pydata.org/pandas-docs/stable/gotchas.html#nan-integer-na-values-and-na-type-promotions – Paul J Dec 29 '15 at 23:01
  • Could you use an int flag (-999) or some other approach rather than nan? (what are you trying to achieve?) – Paul J Dec 29 '15 at 23:03
  • Ah the venerable -999. Frequent in scientific datasets, scourge of naive grad students. – Paul Dec 30 '15 at 03:03
  • Related: [NumPy or Pandas: Keeping array type as integer while having a NaN value](https://stackoverflow.com/questions/11548005/numpy-or-pandas-keeping-array-type-as-integer-while-having-a-nan-value) – jpp Jan 27 '19 at 04:01
  • The short answer is that pandas and Python don't natively support this. So the longer answer is whether you really really need to preserve NAs in that column? Can't you do all the imputing, then fill NAs? or convert to an integer/Categorical with three levels? If you absolutely need to record which specific rows were NA, you can create a second (boolean) column `one_na` to record that. – smci Sep 21 '19 at 23:05
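
A minimal sketch of the flag-column workaround from the last comment (the data mirrors the question's example, and `one_na` is just an illustrative name):

import numpy as np
import pandas as pd

one = pd.Series([True, False, np.nan, True], index=list('abcd'))

df = pd.DataFrame({'one_na': one.isna()})   # remember which rows were missing
df['one'] = one.fillna(False).astype(bool)  # the column itself can then stay bool
print(df.dtypes)                            # one_na: bool, one: bool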

1 Answer


pandas >= 1.0

As of pandas 1.0.0 (January 2020), there is experimental support for nullable booleans directly:

In [183]: df.one.astype('boolean')
Out[183]:
a     True
b    False
c     <NA>
d     True
Name: one, dtype: boolean
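
As a small follow-up on what the nullable dtype buys you (a sketch; the Kleene-logic behaviour is documented for pandas' boolean dtype):

s = df.one.astype('boolean')
print(s & False)  # all False, including row c: <NA> & False is False under Kleene logic
print(s.sum())    # 2 -- missing values are skipped by default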

In this version, pandas will also use pd.NA instead of np.nan in the integer case:

In [166]: df.astype('Int64')
Out[166]:
    one  two
a     1    1
b     3    2
c  <NA>    3
d     1    4
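
For reference, columns of these nullable dtypes can also be constructed directly rather than via astype (a minimal sketch for pandas >= 1.0, using pd.NA for the masked entry from the question):

import pandas as pd

df = pd.DataFrame({'one': pd.array([True, False, pd.NA, True], dtype='boolean'),
                   'two': pd.array([1, 2, 3, 4], dtype='Int64')},
                  index=list('abcd'))
print(df.dtypes)  # one: boolean, two: Int64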

pandas >= 0.24

In the integer case, as of pandas 0.24 (January 2019), you can use nullable integers to achieve what you want:

In [165]: df
Out[165]:
   one  two
a  1.0  1.0
b  3.0  2.0
c  NaN  3.0
d  1.0  4.0

In [166]: df.astype('Int64')
Out[166]:
   one  two
a    1    1
b    3    2
c  NaN    3
d    1    4

This works by converting the backing array to an arrays.IntegerArray. There is no equivalent extension type for booleans in these versions, but some work in that direction is discussed in this GitHub issue and this PyData talk. You could write your own extension type to cover this case as well, but if you can live with your booleans being represented by the integers 0 and 1, one approach is the following:

In [183]: df.one
Out[183]:
a     True
b    False
c      NaN
d     True
Name: one, dtype: object

In [184]: (df.one * 1).astype('Int64')
Out[184]:
a      1
b      0
c    NaN
d      1
Name: one, dtype: Int64
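
And as a quick check of the claim about the backing array (a minimal sketch; the exact class path may differ between pandas versions):

import numpy as np
import pandas as pd

s = pd.Series([1.0, 3.0, np.nan, 1.0]).astype('Int64')
print(type(s.array))  # pandas' IntegerArray extension array
print(s.dtype)        # Int64
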
fuglede