74

Given two dataframes df_1 and df_2, how do I join them such that the datetime column of df_1 falls between start and end in df_2?

print(df_1)

  timestamp              A          B
0 2016-05-14 10:54:33    0.020228   0.026572
1 2016-05-14 10:54:34    0.057780   0.175499
2 2016-05-14 10:54:35    0.098808   0.620986
3 2016-05-14 10:54:36    0.158789   1.014819
4 2016-05-14 10:54:39    0.038129   2.384590


print(df_2)

  start                end                  event    
0 2016-05-14 10:54:31  2016-05-14 10:54:33  E1
1 2016-05-14 10:54:34  2016-05-14 10:54:37  E2
2 2016-05-14 10:54:38  2016-05-14 10:54:42  E3

Get the corresponding event where df_1.timestamp is between df_2.start and df_2.end:

  timestamp              A          B          event
0 2016-05-14 10:54:33    0.020228   0.026572   E1
1 2016-05-14 10:54:34    0.057780   0.175499   E2
2 2016-05-14 10:54:35    0.098808   0.620986   E2
3 2016-05-14 10:54:36    0.158789   1.014819   E2
4 2016-05-14 10:54:39    0.038129   2.384590   E3
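
For reference, a minimal construction of these frames (a sketch, assuming the printed timestamps parse as datetime64[ns]):

import pandas as pd

df_1 = pd.DataFrame({
    'timestamp': pd.to_datetime([
        '2016-05-14 10:54:33', '2016-05-14 10:54:34', '2016-05-14 10:54:35',
        '2016-05-14 10:54:36', '2016-05-14 10:54:39']),
    'A': [0.020228, 0.057780, 0.098808, 0.158789, 0.038129],
    'B': [0.026572, 0.175499, 0.620986, 1.014819, 2.384590],
})

df_2 = pd.DataFrame({
    'start': pd.to_datetime([
        '2016-05-14 10:54:31', '2016-05-14 10:54:34', '2016-05-14 10:54:38']),
    'end': pd.to_datetime([
        '2016-05-14 10:54:33', '2016-05-14 10:54:37', '2016-05-14 10:54:42']),
    'event': ['E1', 'E2', 'E3'],
})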
cs95
DougKruger
  • It is not so easy; I believe [this](https://stackoverflow.com/q/30627968/2901002) helps answer it. – jezrael Oct 02 '17 at 12:52
  • Take a look at [this solution](https://stackoverflow.com/questions/44367672/best-way-to-join-merge-by-range-in-pandas). – Scott Boston Oct 02 '17 at 13:02
  • What a bad duplicate label! The best solution is clearly the numpy approach posted by @piRSquared here: https://stackoverflow.com/questions/44367672/best-way-to-join-merge-by-range-in-pandas/44601120#44601120 It took me hours to find it. – devinbost May 24 '18 at 09:30

9 Answers

72

One simple solution is to create an IntervalIndex from start and end with closed='both', then use get_loc to fetch the event for each timestamp (this assumes all the datetime columns are already of Timestamp dtype):

# index df_2 by the closed [start, end] intervals
df_2.index = pd.IntervalIndex.from_arrays(df_2['start'], df_2['end'], closed='both')
# for each timestamp, locate its containing interval and take that row's event
df_1['event'] = df_1['timestamp'].apply(lambda x: df_2.iloc[df_2.index.get_loc(x)]['event'])

Output:

            timestamp         A         B event
0 2016-05-14 10:54:33  0.020228  0.026572    E1
1 2016-05-14 10:54:34  0.057780  0.175499    E2
2 2016-05-14 10:54:35  0.098808  0.620986    E2
3 2016-05-14 10:54:36  0.158789  1.014819    E2
4 2016-05-14 10:54:39  0.038129  2.384590    E3
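
A hedged guard for the caveat raised in the comments below: IntervalIndex.get_loc raises a KeyError for a timestamp that no interval covers, so a sketch that maps such rows to None instead (lookup_event is a hypothetical helper name):

def lookup_event(ts):
    # get_loc finds the position of the interval containing ts,
    # or raises KeyError when ts lies outside every interval
    try:
        return df_2.iloc[df_2.index.get_loc(ts)]['event']
    except KeyError:
        return None

df_1['event'] = df_1['timestamp'].apply(lookup_event)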
Bharath M Shetty
  • I know it's been a while since you answered the question, but maybe you can elaborate on / explain the second line of the code? I'm having a similar problem and do not know how to adjust it to my code. Thank you – TaL Dec 24 '20 at 12:09
  • @TaL, it's just mapping the data. `df_2.index.get_loc(x)` will return the `index` of time `x` based on the upper and lower bounds of the interval index; that `index` is used to get the event from the table. – Bharath M Shetty Dec 24 '20 at 13:40
  • @Bharath, I know we are going back to an old post. Question: what if we have multiple values for event? Can I use nunique() to count the number of events? I am unable to adjust the code based on your input. Any recommendations? – Joe Ferndz Aug 20 '21 at 10:26
  • @JoeFerndz it's been a while; you can post a new question on SO explaining your requirements. This is an old answer and there might be better approaches out there. – Bharath M Shetty Aug 21 '21 at 04:45
  • As far as I can tell, this fails if some timestamps fall outside of the intervals. While the supplied code works on the example data, I don't think it fully fulfils the question of how to _join_ on a time range, as that question implies the answer will work more similarly to how SQL joins using the `between` keyword. – Olsgaard Apr 21 '22 at 07:34
  • @Olsgaard it's been a while since I answered this, mostly during the early stages of my career, and there has to be a better solution out there than this. Will certainly update this solution when I get time. – Bharath M Shetty Apr 21 '22 at 07:49
25

First use IntervalIndex to create a reference index based on the interval of interest, then use get_indexer to slice the dataframe which contains the discrete events of interest.

idx = pd.IntervalIndex.from_arrays(df_2['start'], df_2['end'], closed='both')
# get_indexer maps each timestamp to the integer position of its containing interval
event = df_2['event'].iloc[idx.get_indexer(df_1.timestamp)]

event
0    E1
1    E2
1    E2
1    E2
2    E3
Name: event, dtype: object

df_1['event'] = event.to_numpy()
df_1
            timestamp         A         B event
0 2016-05-14 10:54:33  0.020228  0.026572    E1
1 2016-05-14 10:54:34  0.057780  0.175499    E2
2 2016-05-14 10:54:35  0.098808  0.620986    E2
3 2016-05-14 10:54:36  0.158789  1.014819    E2
4 2016-05-14 10:54:39  0.038129  2.384590    E3

Reference: A question on IntervalIndex.get_indexer.
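
One hedged caveat: get_indexer returns -1 for a timestamp that matches no interval, which would silently pick the last row above. A minimal guard, assuming unmatched rows should become missing values:

import numpy as np

pos = idx.get_indexer(df_1.timestamp)
# mask the -1 positions rather than letting them index the last row
df_1['event'] = np.where(pos >= 0, df_2['event'].to_numpy()[pos], None)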

Shawn
cs95
  • this works great if the intervals do not overlap; otherwise you might have to revert to [Bharath](https://stackoverflow.com/users/4800652/bharath)'s solution – sammywemmy Oct 11 '21 at 06:57
18

You can use the pandasql module; note that SQL's BETWEEN is inclusive on both ends, matching the closed='both' behaviour in the IntervalIndex answers:

import pandasql as ps

sqlcode = '''
select df_1.timestamp
      ,df_1.A
      ,df_1.B
      ,df_2.event
from df_1
inner join df_2
    on df_1.timestamp between df_2.start and df_2.end
'''

newdf = ps.sqldf(sqlcode,locals())
chris dorn
14

Option 1

idx = pd.IntervalIndex.from_arrays(df_2['start'], df_2['end'], closed='both')
df_2.index = idx
# .loc on an IntervalIndex looks up the interval containing each timestamp
df_1['event'] = df_2.loc[df_1.timestamp, 'event'].values

Option 2

# treat each interval's end as the asof key and match forward
df_2['timestamp'] = df_2['end']
pd.merge_asof(df_1, df_2[['timestamp', 'event']], on='timestamp',
              direction='forward', allow_exact_matches=True)
Out[405]: 
            timestamp         A         B event
0 2016-05-14 10:54:33  0.020228  0.026572    E1
1 2016-05-14 10:54:34  0.057780  0.175499    E2
2 2016-05-14 10:54:35  0.098808  0.620986    E2
3 2016-05-14 10:54:36  0.158789  1.014819    E2
4 2016-05-14 10:54:39  0.038129  2.384590    E3
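
Note that merge_asof only enforces timestamp <= end; every timestamp in the example also satisfies timestamp >= start, which is why Option 2 works here. A hedged post-check for data where that is not guaranteed, carrying start through and nulling mismatches:

merged = pd.merge_asof(df_1, df_2[['timestamp', 'start', 'event']],
                       on='timestamp', direction='forward',
                       allow_exact_matches=True)
# a row whose timestamp precedes the matched interval's start is no real match
merged.loc[merged['timestamp'] < merged['start'], 'event'] = None
merged = merged.drop(columns='start')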
BENY
6

This method assumes the datetime values are pandas Timestamp objects.

df2  start                end                  event    
   0 2016-05-14 10:54:31  2016-05-14 10:54:33  E1
   1 2016-05-14 10:54:34  2016-05-14 10:54:37  E2
   2 2016-05-14 10:54:38  2016-05-14 10:54:42  E3

import numpy as np

event_num = len(df2.event)

def get_event(t):
    # the boolean mask marks the single interval containing t; its dot
    # product with 0..event_num-1 converts the mask into that interval's index
    event_idx = ((t >= df2.start) & (t <= df2.end)).dot(np.arange(event_num))
    return df2.event[event_idx]

df1["event"] = df1.timestamp.transform(get_event)

Explanation of get_event

For each timestamp in df1, say t0 = 2016-05-14 10:54:33,

(t0 >= df2.start) & (t0 <= df2.end) will contain exactly one True (see Example 1). Then, take the dot product with np.arange(event_num) to get the index of the event that t0 belongs to.

Examples:

Example 1

    t0 >= df2.start    t0 <= df2.end     After &     np.arange(3)    
0     True                True         ->  T              0        event_idx
1    False                True         ->  F              1     ->     0
2    False                True         ->  F              2

Take t2 = 2016-05-14 10:54:35 as another example:

    t2 >= df2.start    t2 <= df2.end     After &     np.arange(3)    
0     True                False        ->  F              0        event_idx
1     True                True         ->  T              1     ->     1
2    False                True         ->  F              2

Finally, we use transform to map each timestamp to its event.
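
One hedged caveat worth knowing: a timestamp covered by no interval produces an all-False mask, so the dot product evaluates to 0 and df2.event[0] is returned by mistake rather than a missing value. A quick demonstration with a hypothetical uncovered time:

import pandas as pd

t_out = pd.Timestamp('2016-05-14 10:54:50')  # hypothetical, outside every interval
print(get_event(t_out))  # prints 'E1', not NaN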

Tai
3

You can make pandas index alignment work for you by setting df_1's index to the timestamp field and df_2's index to the corresponding IntervalIndex:

import pandas as pd

df_1 = pd.DataFrame(
    columns=["timestamp", "A", "B"],
    data=[
        (pd.Timestamp("2016-05-14 10:54:33"), 0.020228, 0.026572),
        (pd.Timestamp("2016-05-14 10:54:34"), 0.057780, 0.175499),
        (pd.Timestamp("2016-05-14 10:54:35"), 0.098808, 0.620986),
        (pd.Timestamp("2016-05-14 10:54:36"), 0.158789, 1.014819),
        (pd.Timestamp("2016-05-14 10:54:39"), 0.038129, 2.384590),
    ],
)
df_2 = pd.DataFrame(
    columns=["start", "end", "event"],
    data=[
        (
            pd.Timestamp("2016-05-14 10:54:31"),
            pd.Timestamp("2016-05-14 10:54:33"),
            "E1",
        ),
        (
            pd.Timestamp("2016-05-14 10:54:34"),
            pd.Timestamp("2016-05-14 10:54:37"),
            "E2",
        ),
        (
            pd.Timestamp("2016-05-14 10:54:38"),
            pd.Timestamp("2016-05-14 10:54:42"),
            "E3",
        ),
    ],
)
df_2.index = pd.IntervalIndex.from_arrays(df_2["start"], df_2["end"], closed="both")
df_1 = df_1.set_index("timestamp")

Just set df_1["event"] to df_2["event"]

df_1["event"] = df_2["event"]

and voilà:

df_1["event"]

timestamp
2016-05-14 10:54:33    E1
2016-05-14 10:54:34    E2
2016-05-14 10:54:35    E2
2016-05-14 10:54:36    E2
2016-05-14 10:54:39    E3
Name: event, dtype: object
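
If you want timestamp back as an ordinary column afterwards, a minimal follow-up:

df_1 = df_1.reset_index()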
iruvar
  • I think this is a better answer than the currently accepted one. The code is shorter, and it works even if some of the `timestamps` are not inside the time intervals. This method also works using the assign method, e.g. `df_1.assign(events = df_2['event'])` – Olsgaard Apr 21 '22 at 07:45
  • I tried your solution, but I only get NA values instead of the events. – cwohlfart Jul 07 '23 at 14:35
1

One option is with the conditional_join from pyjanitor:

# pip install pyjanitor
import pandas as pd
import janitor

(df_1
 .conditional_join(
     df_2,
     # each tuple is (col_from_left_df, col_from_right_df, comparator)
     ('timestamp', 'start', '>='),
     ('timestamp', 'end', '<='),
     how='inner')
 .drop(columns=['start', 'end'])
)

            timestamp         A         B event
0 2016-05-14 10:54:33  0.020228  0.026572    E1
1 2016-05-14 10:54:34  0.057780  0.175499    E2
2 2016-05-14 10:54:35  0.098808  0.620986    E2
3 2016-05-14 10:54:36  0.158789  1.014819    E2
4 2016-05-14 10:54:39  0.038129  2.384590    E3

You can choose the join type (left, right, or inner) with the how parameter.
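
For instance, a hedged sketch of a left join that keeps every df_1 row and leaves missing values where no interval matches:

out = df_1.conditional_join(
    df_2,
    ('timestamp', 'start', '>='),
    ('timestamp', 'end', '<='),
    how='left')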

sammywemmy
0

firelynx's solution elsewhere on Stack Overflow suggests that polymorphism does not work, and I have to agree (after extensive testing). However, combining that polymorphism idea with the numpy broadcasting solution of piRSquared, it can work!

The only problem is that, under the hood, the numpy broadcasting effectively performs a cross join and then filters for matching elements, giving an O(n1*n2) memory and performance hit. Probably someone can make this more efficient in a generic sense.

The reason I post here is that the question containing firelynx's solution was closed as a duplicate of this question, which I tend to disagree with: this question and the answers herein do not give a solution when you have multiple points belonging to multiple intervals, only for one point belonging to multiple intervals. The solution I propose below does take care of these n-to-m relations.

Basically, create the following two classes, PointInTime and Timespan, for the polymorphism.

from datetime import datetime

class PointInTime(object):
    doPrint = True
    def __init__(self, year, month, day):
        self.dt = datetime(year, month, day)

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            r = (self.dt == other.dt)
            if self.doPrint:
                print(f'{self.__class__}: comparing {self} to {other} (equals) gives {r}')
            return (r)
        elif isinstance(other, Timespan):
            r = (other.start_date < self.dt < other.end_date)
            if self.doPrint:
                print(f'{self.__class__}: comparing {self} to {other} (Timespan in PointInTime) gives {r}')
            return (r)
        else:
            if self.doPrint:
                print(f'Not implemented... (PointInTime)')
            return NotImplemented

    def __repr__(self):
        return "{}-{}-{}".format(self.dt.year, self.dt.month, self.dt.day)

class Timespan(object):
    doPrint = True
    def __init__(self, start_date, end_date):
        self.start_date = start_date
        self.end_date   = end_date

    def __eq__(self, other):
        if isinstance(other, self.__class__):
            r = ((self.start_date == other.start_date) and (self.end_date == other.end_date))
            if self.doPrint:
                print(f'{self.__class__}: comparing {self} to {other} (equals) gives {r}')
            return (r)
        elif isinstance (other, PointInTime):
            r = self.start_date < other.dt < self.end_date
            if self.doPrint:
                print(f'{self.__class__}: comparing {self} to {other} (PointInTime in Timespan) gives {r}')
            return (r)
        else:
            if self.doPrint:
                print(f'Not implemented... (Timespan)')
            return NotImplemented

    def __repr__(self):
        return "{}-{}-{} -> {}-{}-{}".format(self.start_date.year, self.start_date.month, self.start_date.day, self.end_date.year, self.end_date.month, self.end_date.day)

BTW, if you wish to use operators other than == (such as !=, <, >, <=, >=), you can define the respective methods for them (__ne__, __lt__, __gt__, __le__, __ge__).

The way you can use this in combination with the broadcasting is as follows.

import pandas as pd
import numpy as np

df1 = pd.DataFrame({"pit":[(x) for x in [PointInTime(2015,1,1), PointInTime(2015,2,2), PointInTime(2015,3,3), PointInTime(2015,4,4)]], 'vals1':[1,2,3,4]})
df2 = pd.DataFrame({"ts":[(x) for x in [Timespan(datetime(2015,2,1), datetime(2015,2,5)), Timespan(datetime(2015,2,1), datetime(2015,4,1)), Timespan(datetime(2015,2,1), datetime(2015,2,5))]], 'vals2' : ['a', 'b', 'c']})
a = df1['pit'].values
b = df2['ts'].values
i, j = np.where((a[:,None] == b))

res = pd.DataFrame(
    np.column_stack([df1.values[i], df2.values[j]]),
    columns=df1.columns.append(df2.columns)
)
print(df1)
print(df2)
print(res)

This gives the output as expected.

<class '__main__.PointInTime'>: comparing 2015-1-1 to 2015-2-1 -> 2015-2-5 (Timespan in PointInTime) gives False
<class '__main__.PointInTime'>: comparing 2015-1-1 to 2015-2-1 -> 2015-4-1 (Timespan in PointInTime) gives False
<class '__main__.PointInTime'>: comparing 2015-1-1 to 2015-2-1 -> 2015-2-5 (Timespan in PointInTime) gives False
<class '__main__.PointInTime'>: comparing 2015-2-2 to 2015-2-1 -> 2015-2-5 (Timespan in PointInTime) gives True
<class '__main__.PointInTime'>: comparing 2015-2-2 to 2015-2-1 -> 2015-4-1 (Timespan in PointInTime) gives True
<class '__main__.PointInTime'>: comparing 2015-2-2 to 2015-2-1 -> 2015-2-5 (Timespan in PointInTime) gives True
<class '__main__.PointInTime'>: comparing 2015-3-3 to 2015-2-1 -> 2015-2-5 (Timespan in PointInTime) gives False
<class '__main__.PointInTime'>: comparing 2015-3-3 to 2015-2-1 -> 2015-4-1 (Timespan in PointInTime) gives True
<class '__main__.PointInTime'>: comparing 2015-3-3 to 2015-2-1 -> 2015-2-5 (Timespan in PointInTime) gives False
<class '__main__.PointInTime'>: comparing 2015-4-4 to 2015-2-1 -> 2015-2-5 (Timespan in PointInTime) gives False
<class '__main__.PointInTime'>: comparing 2015-4-4 to 2015-2-1 -> 2015-4-1 (Timespan in PointInTime) gives False
<class '__main__.PointInTime'>: comparing 2015-4-4 to 2015-2-1 -> 2015-2-5 (Timespan in PointInTime) gives False
        pit  vals1
0  2015-1-1      1
1  2015-2-2      2
2  2015-3-3      3
3  2015-4-4      4
                     ts vals2
0  2015-2-1 -> 2015-2-5     a
1  2015-2-1 -> 2015-4-1     b
2  2015-2-1 -> 2015-2-5     c
        pit vals1                    ts vals2
0  2015-2-2     2  2015-2-1 -> 2015-2-5     a
1  2015-2-2     2  2015-2-1 -> 2015-4-1     b
2  2015-2-2     2  2015-2-1 -> 2015-2-5     c
3  2015-3-3     3  2015-2-1 -> 2015-4-1     b

The overhead of these classes probably incurs an additional performance loss compared to basic Python types, but I have not looked into that.

The above is how we create the "inner" join. It should be straightforward to create the "(outer) left", "(outer) right" and "(full) outer" joins.
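
As a hedged sketch of the "(outer) left" variant, reusing the i, j match indices from above (and assuming df1 keeps its default RangeIndex, so positions equal labels):

# rows of df1 that matched nothing are appended with NaN df2 columns
inner = pd.DataFrame(
    np.column_stack([df1.values[i], df2.values[j]]),
    columns=df1.columns.append(df2.columns))
unmatched = df1.loc[~df1.index.isin(i)]
res_left = pd.concat([inner, unmatched], ignore_index=True)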

PrinsEdje80
0

If the timespans in df_2 are not overlapping, you can use numpy broadcasting to compare each timestamp with every timespan and determine which timespan it falls within. Then use argmax to figure out which event to assign (with non-overlapping timespans there can be at most one).

The where condition is used to set NaN for any timestamp that falls outside all timespans (since argmax won't deal with this properly).

import numpy as np

m = ((df_1['timestamp'].to_numpy() >= df_2['start'].to_numpy()[:, None])
      & (df_1['timestamp'].to_numpy() <= df_2['end'].to_numpy()[:, None]))

df_1['Event'] = df_2['event'].take(np.argmax(m, axis=0)).where(m.sum(axis=0) > 0)

print(df_1)
            timestamp         A         B Event
0 2016-05-14 10:54:33  0.020228  0.026572    E1
1 2016-05-14 10:54:34  0.057780  0.175499    E2
2 2016-05-14 10:54:35  0.098808  0.620986    E2
3 2016-05-14 10:54:36  0.158789  1.014819    E2
4 2016-05-14 10:54:39  0.038129  2.384590    E3
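
A quick hedged check of the NaN path, using a hypothetical timestamp outside every timespan:

extra = pd.DataFrame({'timestamp': [pd.Timestamp('2016-05-14 10:54:50')]})
m2 = ((extra['timestamp'].to_numpy() >= df_2['start'].to_numpy()[:, None])
      & (extra['timestamp'].to_numpy() <= df_2['end'].to_numpy()[:, None]))
# all-False column: argmax picks row 0, but where() masks it to NaN
extra['Event'] = df_2['event'].take(np.argmax(m2, axis=0)).where(m2.sum(axis=0) > 0)
print(extra)  # Event is NaN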
ALollz