164

I have a pd.DataFrame that was created by parsing some Excel spreadsheets, one column of which has empty cells. For example, below is the frequency output for that column: 32,320 records have missing values for Tenant.

>>> df['Tenant'].value_counts()
                              32320
    Thunderhead                8170
    Big Data Others            5700
    Cloud Cruiser              5700
    Partnerpedia               5700
    Comcast                    5700
    SDP                        5700
    Agora                      5700
    dtype: int64

I am trying to drop rows where Tenant is missing; however, .isnull() does not recognize the missing values.

>>> df['Tenant'].isnull().sum()
    0

The column has dtype "object". What is happening in this case? How can I drop records where Tenant is missing?

Amrita Sawant
  • Related : [delete rows from Pandas dataframe if all its columns have empty string](https://stackoverflow.com/questions/61964116/delete-rows-from-pandas-dataframe-if-all-its-columns-have-empty-string/). – Skippy le Grand Gourou Feb 04 '21 at 12:45

8 Answers

295

Pandas will recognise a value as null if it is an np.nan object, which will print as NaN in the DataFrame. Your missing values are probably empty strings, which Pandas doesn't recognise as null. To fix this, you can convert the empty strings (or whatever is in your empty cells) to np.nan objects using replace(), and then call dropna() on your DataFrame to delete rows with null tenants.

To demonstrate, we create a DataFrame with some random values and some empty strings in a Tenant column:

>>> import pandas as pd
>>> import numpy as np
>>> 
>>> df = pd.DataFrame(np.random.randn(10, 2), columns=list('AB'))
>>> df['Tenant'] = np.random.choice(['Babar', 'Rataxes', ''], 10)
>>> print(df)

          A         B   Tenant
0 -0.588412 -1.179306    Babar
1 -0.008562  0.725239         
2  0.282146  0.421721  Rataxes
3  0.627611 -0.661126    Babar
4  0.805304 -0.834214         
5 -0.514568  1.890647    Babar
6 -1.188436  0.294792  Rataxes
7  1.471766 -0.267807    Babar
8 -1.730745  1.358165  Rataxes
9  0.066946  0.375640         

Now we replace any empty strings in the Tenant column with np.nan objects, like so:

>>> df['Tenant'] = df['Tenant'].replace('', np.nan)
>>> print(df)

          A         B   Tenant
0 -0.588412 -1.179306    Babar
1 -0.008562  0.725239      NaN
2  0.282146  0.421721  Rataxes
3  0.627611 -0.661126    Babar
4  0.805304 -0.834214      NaN
5 -0.514568  1.890647    Babar
6 -1.188436  0.294792  Rataxes
7  1.471766 -0.267807    Babar
8 -1.730745  1.358165  Rataxes
9  0.066946  0.375640      NaN

Now we can drop the null values:

>>> df.dropna(subset=['Tenant'], inplace=True)
>>> print(df)

          A         B   Tenant
0 -0.588412 -1.179306    Babar
2  0.282146  0.421721  Rataxes
3  0.627611 -0.661126    Babar
5 -0.514568  1.890647    Babar
6 -1.188436  0.294792  Rataxes
7  1.471766 -0.267807    Babar
8 -1.730745  1.358165  Rataxes
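
A note on current pandas: inplace=True on a selected column is discouraged in newer versions. A minimal equivalent without inplace, using the same df, would be:

>>> df = df.replace({'Tenant': {'': np.nan}}).dropna(subset=['Tenant'])

The nested dict restricts the replacement to the Tenant column, so values in A and B are left untouched.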
McMath
  • @mcmath, a little curious. Why do you import numpy and use `np.nan` when you can do `pd.np.nan`? – propjk007 Dec 17 '15 at 00:08
  • @propjk007, as with many things in life, there are many ways to do many things – andrew Jul 28 '17 at 08:24
  • From my [tests](https://stackoverflow.com/a/56708633/4909087), it seems doing `df[df['Tenant'].astype(bool)]` (assuming no whitespace chars -- empty string only) is **faster** than `df.replace('', np.nan).dropna(subset=['Tenant'])` – cs95 Jun 23 '19 at 03:41
  • @propjk007 The pandas.np module is deprecated and will be removed from pandas in a future version. Import numpy directly instead of pd.np.nan – Levi Baguley Nov 04 '21 at 18:58
  • It doesn't remove fields containing only whitespaces. – hafiz031 Apr 20 '22 at 12:13
128

Pythonic + Pandorable: df[df['col'].astype(bool)]

Empty strings are falsy, which means you can filter on bool values like this:

df = pd.DataFrame({
    'A': range(5),
    'B': ['foo', '', 'bar', '', 'xyz']
})
df
   A    B
0  0  foo
1  1     
2  2  bar
3  3     
4  4  xyz
df['B'].astype(bool)                                                                                                                      
0     True
1    False
2     True
3    False
4     True
Name: B, dtype: bool

df[df['B'].astype(bool)]                                                                                                                  
   A    B
0  0  foo
2  2  bar
4  4  xyz

If your goal is to remove not only empty strings but also strings containing only whitespace, use str.strip beforehand:

df[df['B'].str.strip().astype(bool)]
   A    B
0  0  foo
2  2  bar
4  4  xyz
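
One pitfall, also raised in the comments below: astype(bool) evaluates NaN as True, so genuine nulls survive this filter. If your column mixes empty strings and NaN, a hedged variant is to fill the nulls first (a sketch on the same column 'B'):

# NaN is truthy under astype(bool); map nulls to '' first so they get dropped too
df[df['B'].fillna('').str.strip().astype(bool)]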

Faster than you Think

.astype is a vectorised operation and is faster than every option presented thus far, at least in my tests. YMMV.

Here is a timing comparison; I've thrown in some other methods I could think of.

[Figure: perfplot timing comparison of the four methods; astype is fastest across input sizes]

Benchmarking code, for reference:

import numpy as np  # np.nan is used in the kernels below
import pandas as pd
import perfplot

df1 = pd.DataFrame({
    'A': range(5),
    'B': ['foo', '', 'bar', '', 'xyz']
})

perfplot.show(
    setup=lambda n: pd.concat([df1] * n, ignore_index=True),
    kernels=[
        lambda df: df[df['B'].astype(bool)],
        lambda df: df[df['B'] != ''],
        lambda df: df[df['B'].replace('', np.nan).notna()],  # optimized 1-col
        lambda df: df.replace({'B': {'': np.nan}}).dropna(subset=['B']),  
    ],
    labels=['astype', "!= ''", "replace + notna", "replace + dropna", ],
    n_range=[2**k for k in range(1, 15)],
    xlabel='N',
    logx=True,
    logy=True,
    equality_check=pd.DataFrame.equals)
cs95
  • Note that even the string 'False' evaluates to True when converting using `astype(bool)`. Which is good for this use case. – craq Feb 04 '21 at 22:13
  • Pitfall: `pd.DataFrame({'a': [np.NaN]}).a.astype(bool)` returns True. – chsymann Feb 23 '21 at 18:48
  • As does `bool(math.nan)`, so this is correct Pythonic behavior. If math.nan requires explicit testing for its presence or absence, then so should np.NaN. – lemi57ssss Sep 05 '21 at 13:33
  • @chsymann the question was about empty strings. If you need to drop NaNs, `dropna` is by far the idiomatic method for doing so – cs95 Oct 01 '21 at 11:04
  • I got `AttributeError: 'str' object has no attribute 'astype'` – Artyrm Sergeev Mar 10 '23 at 19:42
44

value_counts omits NaN by default, so you're most likely dealing with "".

So you can just filter them out, like so:

mask = df["Tenant"] != ""
dfNew = df[mask]
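
To confirm which case you are dealing with, value_counts accepts dropna=False, which makes genuine NaN values appear in the counts; a quick diagnostic:

# A NaN entry here means real nulls; an entry with a blank label means ''
df["Tenant"].value_counts(dropna=False)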
Bob Haffner
19

There's a situation where the cell contains whitespace that you can't see. Use

df['col'] = df['col'].replace(r'^\s*$', np.nan, regex=True)

to replace empty and whitespace-only values with NaN, then

df = df.dropna(subset=['col'])
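
Putting the two steps together as a runnable sketch (column name 'col' assumed, as above):

import numpy as np
import pandas as pd

df = pd.DataFrame({'col': ['a', ' ', '   ', 'b', '']})

# Turn empty and whitespace-only cells into NaN, then drop those rows
df['col'] = df['col'].replace(r'^\s*$', np.nan, regex=True)
df = df.dropna(subset=['col'])
print(df)  # only the 'a' and 'b' rows remain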
Learn
11

You can use this variation:

import pandas as pd
vals = {
    'name' : ['n1', 'n2', 'n3', 'n4', 'n5', 'n6', 'n7'],
    'gender' : ['m', 'f', 'f', 'f',  'f', 'c', 'c'],
    'age' : [39, 12, 27, 13, 36, 29, 10],
    'education' : ['ma', None, 'school', None, 'ba', None, None]
}
df_vals = pd.DataFrame(vals)  # convert the dict to a DataFrame
print(df_vals)

This will output (** marks the rows we want to keep):

   age education gender name
0   39        ma      m   n1 **
1   12      None      f   n2    
2   27    school      f   n3 **
3   13      None      f   n4
4   36        ba      f   n5 **
5   29      None      c   n6
6   10      None      c   n7

So to drop everything that does not have an 'education' value, use the code below:

df_vals = df_vals[~df_vals['education'].isnull()] 

('~' indicating NOT)

Result:

   age education gender name
0   39        ma      m   n1
2   27    school      f   n3
4   36        ba      f   n5
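
The same filter can be written without the tilde using notna(), or as a dropna call; both are equivalent here:

# notna() is the elementwise inverse of isnull()
df_vals = df_vals[df_vals['education'].notna()]

# or, equivalently
df_vals = df_vals.dropna(subset=['education'])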
Amir F
4

If you don't care about which columns the missing values are in, then, assuming the DataFrame is named New and one wants to assign the result back to the same variable, simply run

New = New.dropna()

If you specifically want to remove the rows with empty values in the column Tenant, this will do the job:

New = New[New.Tenant != '']

This may also be used for removing rows with a specific value; just change the string to the value one wants removed.

Note: If instead of an empty string one has NaN, then

New = New.dropna(subset=['Tenant'])
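
If the Tenant column might contain both empty strings and NaN, the two checks can be combined; a minimal sketch on the same New DataFrame:

# Keep rows where Tenant is neither an empty string nor null
New = New[New['Tenant'].ne('') & New['Tenant'].notna()]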
Gonçalo Peres
0

Alternatively, you can use query.

  • If your missing values are empty strings:

    df.query('Tenant != ""')
    
  • If the missing values are NaN:

    df.query('Tenant == Tenant')
    

    (This works since np.nan != np.nan)
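
Both conditions can also be combined in a single query, using the same NaN != NaN trick:

# Drops rows where Tenant is '' or NaN in one pass
df.query('Tenant != "" and Tenant == Tenant')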

rachwa
0

For anyone reading data from a CSV/TSV file that contains empty cells: pandas automatically converts them to NaN values (see the documentation). Assuming these cells end up in column "c2", a way to filter them out is:

df[~df["c2"].isna()]

Note that the tilde operator (~) performs elementwise negation of the boolean mask, keeping the rows where c2 is not null.
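
A minimal sketch of that behaviour, reading from an in-memory CSV where one c2 cell is empty:

import io
import pandas as pd

csv = io.StringIO("c1,c2\n1,foo\n2,\n3,bar\n")
df = pd.read_csv(csv)        # the empty cell in the middle row becomes NaN
print(df[~df["c2"].isna()])  # keeps only the 'foo' and 'bar' rows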

Yiwei Jiang