
I am trying to determine whether there is an entry in a Pandas column that has a particular value. I tried to do this with `if x in df['id']`. I thought this was working, except that when I fed it a value I knew was not in the column, `43 in df['id']`, it still returned `True`. When I subset to a data frame containing only the entries matching the missing id, `df[df['id'] == 43]`, there are, obviously, no entries in it. How do I determine if a column in a Pandas data frame contains a particular value, and why doesn't my current method work? (FYI, I have the same problem when I use the implementation in this answer to a similar question.)

Michael

10 Answers


`in` applied to a Series checks whether the value is in the index:

In [11]: s = pd.Series(list('abc'))

In [12]: s
Out[12]: 
0    a
1    b
2    c
dtype: object

In [13]: 1 in s
Out[13]: True

In [14]: 'a' in s
Out[14]: False

One option is to see if it's in unique values:

In [21]: s.unique()
Out[21]: array(['a', 'b', 'c'], dtype=object)

In [22]: 'a' in s.unique()
Out[22]: True

or a Python set:

In [23]: set(s)
Out[23]: {'a', 'b', 'c'}

In [24]: 'a' in set(s)
Out[24]: True

As pointed out by @DSM, it may be more efficient (especially if you're just doing this for one value) to just use in directly on the values:

In [31]: s.values
Out[31]: array(['a', 'b', 'c'], dtype=object)

In [32]: 'a' in s.values
Out[32]: True
Andy Hayden
  • I don't want to know whether it is unique necessarily; mainly I want to know if it's there. – Michael Jan 23 '14 at 21:47
  • I think `'a' in s.values` should be faster for long Series. – DSM Jan 23 '14 at 21:48
  • @AndyHayden Do you know why, for `'a' in s`, pandas chooses to check the index rather than the values of the series? In dictionaries they check keys, but a pandas series should behave more like a list or array, no? – Lei Nov 22 '17 at 17:55
  • @LeiHuang a Series is a key-value data structure (I guess so is a DataFrame, which acts the same here); also, `in` can be _very_ efficient on an Index: it's `O(1)`, whereas on an array it'll be `O(n)`. But you're right, this is something one does find oneself doing occasionally... my claim is there's generally a better way/alternative... – Andy Hayden Nov 22 '17 at 20:08
  • Starting from pandas 0.24.0, using `s.values` and `df.values` is highly discouraged. See [this](https://pandas-dev.github.io/pandas-blog/pandas-extension-arrays.html). Also, `s.values` is actually much slower in some cases. – Qusai Alothman Feb 01 '19 at 07:09
  • @QusaiAlothman neither `.to_numpy` nor `.array` are available on a Series, so I'm not entirely sure what alternative they're advocating (I don't read "highly discouraged"). In fact they're saying that `.values` may not return a numpy array, e.g. in the case of a categorical... but that's fine, as `in` will still work as expected (indeed more efficiently than its numpy array counterpart). – Andy Hayden Feb 01 '19 at 07:44
  • Adding to the comment of @QusaiAlothman: use `"a" in s.series`! – MuellerSeb Jan 17 '20 at 19:45
  • This works for me. You can perform this on a dataframe column by executing: `"a" in df["column_name"].values` – Robvh Jan 29 '20 at 11:10
  • Since the question is whether a column (not the index) contains a value, and using the `.values` property is the most efficient way to do that, using that property should appear first in this answer. – Mr. Lance E Sloan Aug 20 '20 at 19:33
  • Note that this doesn't work for the case `np.nan in s.values`: `s` may contain `np.nan` and it will still return `False`. – Stalpotaten Feb 02 '21 at 20:47
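Picking up the `np.nan` caveat from the last comment, a minimal sketch (the Series values are made up for illustration): membership via `in s.values` relies on elementwise `==`, and `NaN != NaN`, so use `isna()` to test for missing values instead:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0])

# `in` on the underlying array uses elementwise ==, and NaN != NaN,
# so this is False even though the Series contains NaN:
print(np.nan in s.values)   # False

# Detect missing values with isna() instead:
print(s.isna().any())       # True
```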

You can also use `pandas.Series.isin`, although it's a little bit longer than `'a' in s.values`:

In [2]: s = pd.Series(list('abc'))

In [3]: s
Out[3]: 
0    a
1    b
2    c
dtype: object

In [3]: s.isin(['a'])
Out[3]: 
0    True
1    False
2    False
dtype: bool

In [4]: s[s.isin(['a'])].empty
Out[4]: False

In [5]: s[s.isin(['z'])].empty
Out[5]: True

But this approach can be more flexible if you need to match multiple values at once for a DataFrame (see `DataFrame.isin`):

>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 4, 7]})
>>> df.isin({'A': [1, 3], 'B': [4, 7, 12]})
       A      B
0   True  False  # Note that B didn't match 1 here.
1  False   True
2   True   True
Kirk Kittell
Oleg Kuralenko
  • You could also use the [DataFrame.any()](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.any.html) function: `s.isin(['a']).any()` – thando Jul 06 '20 at 08:35

You can try this to check for a particular value `x` in a particular column named `id`:

if x in df['id'].values:
    print('found')
marc_s
Akram
found = df[df['Column'].str.contains('Text_to_search')]
print(found.count())

`found.count()` reports the number of non-NA matches per column (`len(found)` is a simpler row count).

If it is 0, the string was not found in the column.

Shahir Ansari
  • Worked for me, but I used `len(found)` to get the count. – kztd Apr 06 '19 at 21:35
  • Yes, `len(found)` is a somewhat better option. – Shahir Ansari Apr 08 '19 at 06:08
  • This approach worked for me, but I had to include the parameters `na=False` and `regex=False` for my use case, as explained here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.str.contains.html – Mabyn Oct 07 '19 at 22:24
  • But `str.contains` does a substring search. Ex: if a value called "head_hunter" is present, passing "head" to `str.contains` matches and gives `True`, which is wrong. – karthikeyan Jun 08 '20 at 14:09
  • @karthikeyan It's not wrong; it depends on the context of your search. What if you are searching for addresses or products? You'll need all products that fit the description. – Shahir Ansari Jun 27 '20 at 08:48
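Following the substring caveat in the comments above, a small sketch (the `Column` values are made up for illustration) contrasting substring search with exact matching:

```python
import pandas as pd

df = pd.DataFrame({'Column': ['head_hunter', 'forehead', None]})

# Substring search: matches any cell *containing* 'head'.
# na=False treats missing values as non-matches; regex=False takes
# the pattern literally rather than as a regular expression.
contains = df['Column'].str.contains('head', na=False, regex=False)
print(contains.any())   # True ('head_hunter' and 'forehead' both match)

# Exact match, if substring hits are unwanted:
exact = df['Column'].eq('head')
print(exact.any())      # False (no cell is exactly 'head')
```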

I did a few simple tests:

In [10]: x = pd.Series(range(1000000))

In [13]: timeit 999999 in x.values
567 µs ± 25.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [24]: timeit 9 in x.values
666 µs ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [16]: timeit (x == 999999).any()
6.86 ms ± 107 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [21]: timeit x.eq(999999).any()
7.03 ms ± 33.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [22]: timeit x.eq(9).any()
7.04 ms ± 60 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [15]: timeit x.isin([999999]).any()
9.54 ms ± 291 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [17]: timeit 999999 in set(x)
79.8 ms ± 1.98 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Interestingly, it doesn't matter whether you look up 9 or 999999; it seems to take about the same amount of time using the `in` syntax (it must be using some vectorized computation):

In [24]: timeit 9 in x.values
666 µs ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [25]: timeit 9999 in x.values
647 µs ± 5.21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [26]: timeit 999999 in x.values
642 µs ± 2.11 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [27]: timeit 99199 in x.values
644 µs ± 5.31 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

In [28]: timeit 1 in x.values
667 µs ± 20.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Seems like using x.values is the fastest, but maybe there is a more elegant way in pandas?

Allen Wang
  • It would be great if you changed the order of results from smallest to largest. Nice work! – smm Jan 10 '20 at 00:06
  • Regarding there not being any big difference between checking if 9 or 999999 is `in x.values`, [see this answer](https://stackoverflow.com/a/53424069/17219897). Apparently the `in` syntax is pretty much equivalent to `(elem == x.values).any()`. – Paradox Oct 16 '22 at 21:46

Or use `Series.tolist` or `Series.any`:

>>> s = pd.Series(list('abc'))
>>> s
0    a
1    b
2    c
dtype: object
>>> 'a' in s.tolist()
True
>>> (s=='a').any()
True

`Series.tolist` makes a list out of the Series; in the other case I am just getting a boolean Series from a regular Series, then checking whether there are any `True`s in it.

U13-Forward

Simple condition:

if any(str(elem) in ['a','b'] for elem in df['column'].tolist()):
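A quick demonstration of the condition above, with a hypothetical `column`, plus a vectorized equivalent that avoids the Python-level loop:

```python
import pandas as pd

df = pd.DataFrame({'column': ['a', 'c', 'd']})

# The generator-expression condition from above
found = any(str(elem) in ['a', 'b'] for elem in df['column'].tolist())
print(found)       # True ('a' is present)

# A vectorized equivalent, usually faster on large frames
found_vec = df['column'].astype(str).isin(['a', 'b']).any()
print(found_vec)   # True
```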
U13-Forward
Eli B

Use

df[df['id']==x].index.tolist()

If `x` is present in `id`, it will return the list of indices where it is present; otherwise it returns an empty list.
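A short sketch of the idea, with a made-up `id` column; an empty list is falsy, so it doubles as a presence check:

```python
import pandas as pd

df = pd.DataFrame({'id': [10, 43, 10]})

hits = df[df['id'] == 43].index.tolist()
print(hits)          # [1] -> present, at row index 1

misses = df[df['id'] == 99].index.tolist()
print(misses)        # []  -> not present
print(bool(misses))  # False
```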

Vikas Periyadath

Use `query()` to find the rows where the condition holds and get the number of rows with `shape[0]`. If there is at least one matching entry, this statement is `True`:

df.query('id == 123').shape[0] > 0
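To check for a value held in a Python variable rather than a literal, `query` supports `@`-interpolation (the frame below is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({'id': [120, 121, 123]})

x = 123
# '@x' interpolates the local variable x into the query expression
exists = df.query('id == @x').shape[0] > 0
print(exists)   # True
```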
rachwa

I had a CSV file to read:

df = pd.read_csv('50_states.csv')

And after trying:

if value in df.column:
    print(True)

which never printed `True`, even though the value was in the column;

I tried:

for values in df.column:
    if value == values:
        print(True)
        #Or do something
    else:
        print(False)

Which worked. I hope this can help!
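The loop works, but it scans the column in Python; a boolean mask (or `.values` membership) is the idiomatic equivalent. A sketch with a made-up `state` column:

```python
import pandas as pd

df = pd.DataFrame({'state': ['Ohio', 'Texas']})
value = 'Texas'

# Vectorized membership test, replacing the explicit loop
print(value in df['state'].values)   # True

# Boolean mask: the rows where the column equals the value
matches = df[df['state'] == value]
print(len(matches))                  # 1
```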

clazzy
  • The reason this doesn't work: `df.columns` is a list of your column names. What you can do is `df['column'] == "value"`, which produces a boolean Series. You can then put that inside a `df[ ]` call to return the rows for which the boolean was `True`: `df[df["column"] == "value"]`. – Jenobi Nov 17 '22 at 14:22
  • @Jenobi Would you be able to elaborate or post this solution with your addition? I'm struggling to implement it. – Will Jordan Jan 05 '23 at 21:03
  • @Will Jordan `data = [{'name': "jenobi", "foo": "bar"}, {'name': "jenobi", "foo": "baz"}]; df = pd.DataFrame(data); _filter = df['foo'] == 'bar'; df[_filter]` returns all the rows which match your filter. You can also merge the last two lines into `df[df['foo'] == 'bar']`, as this includes the boolean filter from before. – Jenobi Jan 19 '23 at 15:10