220

I have a pandas dataframe with the following column names:

Result1, Test1, Result2, Test2, Result3, Test3, etc...

I want to drop all the columns whose name contains the word "Test". The number of such columns is not static; it depends on a previous function.

How can I do that?

Alexis Eggermont

12 Answers

314

Here is one way to do this:

df = df[df.columns.drop(list(df.filter(regex='Test')))]
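
To unpack the one-liner, here is an equivalent step-by-step version (a sketch, using the question's column names for illustration):

# Column names matching the regex 'Test' (re.search semantics)
cols_to_drop = list(df.filter(regex='Test'))  # e.g. ['Test1', 'Test2', 'Test3']

# Remove them from the column Index, then select what remains
df = df[df.columns.drop(cols_to_drop)]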
Bindiya12
  • 79
    Or directly in place: `df.drop(list(df.filter(regex = 'Test')), axis = 1, inplace = True)` – Axel Nov 15 '17 at 13:46
  • 12
    This is a much more elegant solution than the accepted answer. I would break it down a bit more to show why, mainly extracting `list(df.filter(regex='Test'))` to better show what the line is doing. I would also opt for `df.filter(regex='Test').columns` over list conversion – Charles Mar 13 '18 at 23:12
  • 7
    I really wonder what the comments saying this answer is "elegant" mean. I myself find it quite obfuscated, when Python code should first be readable. It is also twice as slow as the first answer. And it uses the `regex` keyword when the `like` keyword seems more adequate. – Jacquot Mar 08 '19 at 09:15
  • 4
    This is not actually as good an answer as people claim. The problem with `filter` is that it _returns a copy of ALL the data as columns_ that you want to drop. It is wasteful if you're only passing this result to `drop` (which again returns a copy)... a better solution would be `str.startswith` (I've added an [answer](https://stackoverflow.com/a/54410702/4909087) with that here). – cs95 May 31 '19 at 03:58
  • My most concise version is `df.drop(columns=df.filter(like='SomeString').columns)`, which returns a copy of the DataFrame without the columns that contain `"SomeString"`. – Migwell Mar 14 '21 at 02:26
  • Thank you very much, even after 7 years it's still useful!!! – Sarindra Thérèse May 07 '21 at 13:18
122

Cheaper, Faster, and Idiomatic: str.contains

In recent versions of pandas, you can use string methods on the index and columns. Here, str.startswith seems like a good fit.

To remove all columns starting with a given substring:

df.columns.str.startswith('Test')
# array([ True, False, False, False])

df.loc[:,~df.columns.str.startswith('Test')]

  toto test2 riri
0    x     x    x
1    x     x    x

For case-insensitive matching, you can use regex-based matching with str.contains with a start-of-line (SOL) anchor:

df.columns.str.contains('^test', case=False)
# array([ True, False,  True, False])

df.loc[:,~df.columns.str.contains('^test', case=False)] 

  toto riri
0    x    x
1    x    x

If mixed types are a possibility, specify na=False as well.
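
For example, if the column index might contain non-string labels (a sketch with an assumed mixed-label frame):

import pandas as pd

df = pd.DataFrame({'Test1': [1], 0: [2], 'riri': [3]})

# Without na=False, the non-string label yields NaN in the mask and ~ fails;
# na=False treats it as "no match", so that column is kept.
mask = df.columns.str.contains('^test', case=False, na=False)
df.loc[:, ~mask]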

cs95
  • 2
    Hi cs95, can you explain the syntax / thought behind the syntax a bit more? Why do we need to use the colon and comma? That is, why `df.loc[:,df....]` vs `df.loc[df....]`? – Hedge92 Sep 01 '21 at 12:05
  • 2
    Where the accepted answer does not work properly for columns ending in `_drop` in my test data, this solution does work. This should be the accepted answer. – Hedge92 Sep 01 '21 at 15:37
  • If you want to combine this with the drop method, you can do: `df.drop(columns = df.columns[df.columns.str.startswith('Test')], inplace = True)` – Jake Fisher Mar 31 '23 at 02:49
119
import pandas as pd
import numpy as np

array = np.random.random((2, 4))
df = pd.DataFrame(array, columns=('Test1', 'toto', 'test2', 'riri'))
print(df)

      Test1      toto     test2      riri
0  0.923249  0.572528  0.845464  0.144891
1  0.020438  0.332540  0.144455  0.741412

cols = [c for c in df.columns if c.lower()[:4] != 'test']
df = df[cols]
print(df)
       toto      riri
0  0.572528  0.144891
1  0.332540  0.741412
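
The slice c.lower()[:4] works here, but str.startswith says the same thing more clearly and is not tied to the four-character length of 'test' (an equivalent sketch):

cols = [c for c in df.columns if not c.lower().startswith('test')]
df = df[cols]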
Nic
41

This can be done neatly in one line with:

df = df.drop(df.filter(regex='Test').columns, axis=1)
Warren O'Neill
  • 3
    Similarly (and faster): `df.drop(df.filter(regex='Test').columns, axis=1, inplace=True)` – Max Ghenis Apr 06 '20 at 05:00
  • 4
    for multiple conditions, this can be done `df.drop(df.filter(regex='Test|Rest|Best').columns, axis=1, inplace=True)` – Srivatsan Feb 03 '22 at 12:32
  • 1
    Awesome adaptation of the above solution to filter for multiple conditions! Thank you for posting this :) – veg2020 Feb 08 '22 at 20:42
  • @MaxGhenis I don't think anything done with inplace = True can be considered fast these days, given that developers are considering removing this parameter altogether. – DarknessPlusPlus Sep 28 '22 at 13:02
22

You can select the columns you DO want (rather than dropping the ones you don't) using 'filter':

import pandas as pd
import numpy as np

data2 = [{'test2': 1, 'result1': 2}, {'test': 5, 'result34': 10, 'c': 20}]

df = pd.DataFrame(data2)

df

      c  result1  result34  test  test2
0   NaN      2.0       NaN   NaN    1.0
1  20.0      NaN      10.0   5.0    NaN

Now filter

df.filter(like='result',axis=1)

to get:

   result1  result34
0      2.0       NaN
1      NaN      10.0
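
Conversely, to drop rather than keep the matching columns, you can pass the filtered column index to drop (a sketch on the same frame):

df.drop(columns=df.filter(like='test', axis=1).columns)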
SAH
13

Using a regex to match all columns not containing the unwanted word:

df = df.filter(regex='^((?!badword).)*$')
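
For example (a quick sketch, with 'Test' as the unwanted word and illustrative column names):

import pandas as pd

df = pd.DataFrame(columns=['Result1', 'Test1', 'Result2', 'Test2'])
print(df.filter(regex='^((?!Test).)*$').columns.tolist())
# ['Result1', 'Result2']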
Roy Assis
11

Use the DataFrame.select method (note: select was deprecated in pandas 0.21 and removed in 1.0; a modern equivalent is sketched below):

In [38]: df = DataFrame({'Test1': randn(10), 'Test2': randn(10), 'awesome': randn(10)})

In [39]: df.select(lambda x: not re.search(r'Test\d+', x), axis=1)
Out[39]:
   awesome
0    1.215
1    1.247
2    0.142
3    0.169
4    0.137
5   -0.971
6    0.736
7    0.214
8    0.111
9   -0.214
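
Since select was removed, a rough modern equivalent of the same idea (a sketch using loc with a boolean mask) is:

import re

import numpy as np
import pandas as pd

df = pd.DataFrame({'Test1': np.random.randn(10),
                   'Test2': np.random.randn(10),
                   'awesome': np.random.randn(10)})

# Keep only the columns whose names do not match 'Test\d+'
df.loc[:, [not re.search(r'Test\d+', c) for c in df.columns]]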
Phillip Cloud
9

This method does everything in place. Many of the other answers create copies and are not as efficient:

df.drop(df.columns[df.columns.str.contains('Test')], axis=1, inplace=True)

winderland
8

The question states 'I want to drop all the columns whose name contains the word "Test".'

test_columns = [col for col in df if 'Test' in col]
df.drop(columns=test_columns, inplace=True)
Marvasti
4

You can use df.filter to get the columns that match your string, then pass them to df.drop:

resdf = df.drop(df.filter(like='Test', axis=1).columns.to_list(), axis=1)
ZacNt
1

A solution for dropping a list of column names, where the names may contain regex. I prefer this approach because I frequently edit the drop list. It builds a negative filter regex from the drop list.

import re

drop_column_names = ['A', 'B.+', 'C.*']
# Matches every column name that does NOT fully match an entry in the drop list
drop_columns_regex = '^(?!(?:' + '|'.join(drop_column_names) + ')$)'
# Columns NOT matching the regex are the ones being dropped
print('Dropping columns:', ', '.join([c for c in df.columns if not re.search(drop_columns_regex, c)]))
df = df.filter(regex=drop_columns_regex, axis=1)
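
For instance, with an illustrative frame (column names assumed here):

import pandas as pd

df = pd.DataFrame(columns=['A', 'AB', 'B1', 'Bx', 'C', 'D'])
drop_column_names = ['A', 'B.+', 'C.*']
drop_columns_regex = '^(?!(?:' + '|'.join(drop_column_names) + ')$)'

# 'A', 'B1', 'Bx' and 'C' fully match an entry in the drop list,
# so only 'AB' and 'D' survive the filter.
print(df.filter(regex=drop_columns_regex, axis=1).columns.tolist())
# ['AB', 'D']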
BSalita
0

Building on my preferred answer by @cs95, combining loc with a lambda function enables a nice clean pipe chain like this:

output_df = (
    input_df
    .stuff
    .more_stuff
    .yet_more_stuff
    .loc[:, lambda x: ~x.columns.str.startswith('Test')]
)

This way you can refer to columns of the dataframe produced by pd.DataFrame.yet_more_stuff, rather than the original dataframe input_df itself, as the columns may have changed (depending, of course, on all the stuff).
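
A concrete toy version of the same pattern (the rename/assign steps below are just stand-ins for the placeholder methods above):

import pandas as pd

input_df = pd.DataFrame({'Result1': [1, 2], 'Test1': [3, 4]})

output_df = (
    input_df
    .rename(columns=str.strip)           # stand-in for .stuff
    .assign(Test2=lambda x: x['Test1'])  # stand-in for .more_stuff
    .loc[:, lambda x: ~x.columns.str.startswith('Test')]
)
# output_df keeps only 'Result1'; the lambda sees Test2, a column
# created mid-chain that input_df itself never had.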

tef2128