
I have one record per ID, with a start date and an end date:

id   age  state  start_date  end_date
123  18   CA     2/17/2019   5/4/2019
223  24   AZ     1/17/2019   3/4/2019

I want to create a record for each day between the start and end dates, so I can join daily activity data to it. The target output would look something like this:

id  age state   start_date
123 18  CA      2/17/2019
123 18  CA      2/18/2019
123 18  CA      2/19/2019
123 18  CA      2/20/2019
123 18  CA      2/21/2019
            …
123 18  CA      5/2/2019
123 18  CA      5/3/2019
123 18  CA      5/4/2019

And of course, do this for all IDs and their respective start and end dates in the dataset.
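For reference, the sample frame can be reconstructed like this (a sketch, assuming pandas; values copied from the table above):

import pandas as pd

df = pd.DataFrame({
    "id": [123, 223],
    "age": [18, 24],
    "state": ["CA", "AZ"],
    "start_date": ["2/17/2019", "1/17/2019"],
    "end_date": ["5/4/2019", "3/4/2019"],
})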


2 Answers


Edit: I had to revisit this problem in a project, and it looks like using DataFrame.apply with pd.date_range and DataFrame.explode is almost 3x faster:

df["date"] = df.apply(
    lambda row: pd.date_range(row["start_date"], row["end_date"]),
    axis=1
)
df = (
    df.explode("date", ignore_index=True)
    .drop(columns=["start_date", "end_date"])
)

Output

      id  age state       date
0    123   18    CA 2019-02-17
1    123   18    CA 2019-02-18
2    123   18    CA 2019-02-19
3    123   18    CA 2019-02-20
4    123   18    CA 2019-02-21
..   ...  ...   ...        ...
119  223   24    AZ 2019-02-28
120  223   24    AZ 2019-03-01
121  223   24    AZ 2019-03-02
122  223   24    AZ 2019-03-03
123  223   24    AZ 2019-03-04

[124 rows x 4 columns]
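With one row per day, the join the question asks for becomes a plain merge. A sketch, where daily_activity is a hypothetical frame keyed by id and date:

# daily_activity is assumed to have 'id' and 'date' columns
result = df.merge(daily_activity, on=["id", "date"], how="left")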

Original answer:

melt, GroupBy, resample & ffill

First we melt (unpivot) your two date columns into one. Then we resample on a daily basis:

# Unpivot start_date/end_date into a single 'date' column
melt = df.melt(id_vars=['id', 'age', 'state'], value_name='date').drop('variable', axis=1)
melt['date'] = pd.to_datetime(melt['date'])

# Per id: index by date, upsample to one row per day, then forward-fill the gaps
melt = melt.groupby('id').apply(lambda x: x.set_index('date').resample('d').first())\
           .ffill()\
           .reset_index(level=1)\
           .reset_index(drop=True)

Output

          date     id   age state
0   2019-02-17  123.0  18.0    CA
1   2019-02-18  123.0  18.0    CA
2   2019-02-19  123.0  18.0    CA
3   2019-02-20  123.0  18.0    CA
4   2019-02-21  123.0  18.0    CA
..         ...    ...   ...   ...
119 2019-02-28  223.0  24.0    AZ
120 2019-03-01  223.0  24.0    AZ
121 2019-03-02  223.0  24.0    AZ
122 2019-03-03  223.0  24.0    AZ
123 2019-03-04  223.0  24.0    AZ

[124 rows x 4 columns]
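Note that id and age come back as floats here, a side effect of the NaNs the daily upsampling introduces before ffill runs. If the original integer dtypes matter, they can be cast back, e.g.:

melt = melt.astype({"id": "int64", "age": "int64"})  # restore integer dtypes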
  • If you don't want to do the changes in-place, you can use `.assign()` like so: `df.assign(date=df.apply(...)).explode("date", ...).drop(...)` – wjandrea Jan 17 '23 at 17:41
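Spelled out, the .assign() variant from the comment would look roughly like this (a sketch; same logic as above, but df itself is left unmodified):

out = (
    df.assign(
        date=df.apply(
            lambda row: pd.date_range(row["start_date"], row["end_date"]),
            axis=1,
        )
    )
    .explode("date", ignore_index=True)
    .drop(columns=["start_date", "end_date"])
)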

Use a list comprehension with pd.date_range on the values of the start_date and end_date columns to create a list of dates for each record. Next, construct a new dataframe from the result of the list comprehension and join it back to the other three columns of df. Finally, set_index, stack, and reset_index:

# One DatetimeIndex per record, spanning start_date..end_date inclusive
a = [pd.date_range(*r, freq='D') for r in df[['start_date', 'end_date']].values]

(df[['id', 'age', 'state']]
    .join(pd.DataFrame(a))                 # one column per day offset
    .set_index(['id', 'age', 'state'])
    .stack()                               # collapse the day columns into rows
    .droplevel(-1)                         # drop the day-offset level
    .reset_index()
)

Output
      id  age state          0
0    123   18    CA 2019-02-17
1    123   18    CA 2019-02-18
2    123   18    CA 2019-02-19
3    123   18    CA 2019-02-20
4    123   18    CA 2019-02-21
..   ...  ...   ...        ...
119  223   24    AZ 2019-02-28
120  223   24    AZ 2019-03-01
121  223   24    AZ 2019-03-02
122  223   24    AZ 2019-03-03
123  223   24    AZ 2019-03-04

[124 rows x 4 columns]
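As the output shows, the stacked values land in a column named 0. If a proper name is wanted, a rename at the end does it; assuming the chain's result is assigned to a variable, say out (hypothetical):

out = out.rename(columns={0: "date"})  # give the stacked column a real name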