I have a dataframe that looks like this:
df = pd.DataFrame({'VisitorID': [1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000],
                   'EpochTime': [1554888560, 1554888560, 1554888560, 1554888560,
                                 1554888560, 1521333510, 1521333510, 1521333510],
                   'HitTime': [1400, 5340, 7034, 11034, 13059, 990, 4149, 6450],
                   'HitNumber': [23, 54, 55, 65, 110, 14, 29, 54],
                   'PagePath': ['orders/details', 'orders/payment', 'orders/afterpayment',
                                'orders/myorders', 'customercare', 'orders/details',
                                'orders/payment', 'orders/myorders']})
print(df)
VisitorID EpochTime HitTime HitNumber PagePath
0 1000 1554888560 1400 23 orders/details
1 1000 1554888560 5340 54 orders/payment
2 1000 1554888560 7034 55 orders/afterpayment
3 1000 1554888560 11034 65 orders/myorders
4 1000 1554888560 13059 110 customercare
5 1000 1521333510 990 14 orders/details
6 1000 1521333510 4149 29 orders/payment
7 1000 1521333510 6450 54 orders/myorders
In reality my dataframe has roughly 10 million rows and twice as many columns. The data is website-tracking data describing the behavior of customers.
What I want to do
To analyze how long customers are on the website before they reach the first tracked page, I want to add one row above each group that copies the values of the group's top row for the columns:
- VisitorID
- EpochTime
But gives new values to columns:
- HitTime = 0
- HitNumber = 0
- PagePath = 'Home'
Info: the combination of VisitorID + EpochTime makes a group unique.
I achieved this with the following code, but it takes about 5 minutes to run; I think there should be a faster way:
lst = []
for x, y in df.groupby(['VisitorID', 'EpochTime']):
    lst.append(y.iloc[:1])

df_first = pd.concat(lst, ignore_index=True)
df_first['HitTime'] = 0.0
df_first['HitNumber'] = 0.0
df_first['PagePath'] = 'Home'
print(df_first)
VisitorID EpochTime HitTime HitNumber PagePath
0 1000 1521333510 0.0 0.0 Home
1 1000 1554888560 0.0 0.0 Home
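For comparison, the same df_first can be built without a Python-level loop. This is a sketch using groupby(...).head(1), which takes the first row of every group in a single vectorized call and assumes the rows are already ordered within each group, as in the sample data:

```python
import pandas as pd

df = pd.DataFrame({'VisitorID': [1000] * 8,
                   'EpochTime': [1554888560] * 5 + [1521333510] * 3,
                   'HitTime': [1400, 5340, 7034, 11034, 13059, 990, 4149, 6450],
                   'HitNumber': [23, 54, 55, 65, 110, 14, 29, 54],
                   'PagePath': ['orders/details', 'orders/payment', 'orders/afterpayment',
                                'orders/myorders', 'customercare', 'orders/details',
                                'orders/payment', 'orders/myorders']})

# head(1) returns the first row of each (VisitorID, EpochTime) group in one
# call, replacing the loop that appended y.iloc[:1] group by group.
df_first = df.groupby(['VisitorID', 'EpochTime']).head(1).copy()
df_first['HitTime'] = 0.0
df_first['HitNumber'] = 0.0
df_first['PagePath'] = 'Home'
```

Note that head(1) keeps the original row order rather than sorting by group key, which should not matter here because df_final is re-sorted afterwards anyway.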
df_final = (pd.concat([df, df_first], ignore_index=True)
              .sort_values(['VisitorID', 'EpochTime', 'HitNumber'])
              .reset_index(drop=True))
print(df_final)
VisitorID EpochTime HitTime HitNumber PagePath
0 1000 1521333510 0.0 0.0 Home
1 1000 1521333510 990.0 14.0 orders/details
2 1000 1521333510 4149.0 29.0 orders/payment
3 1000 1521333510 6450.0 54.0 orders/myorders
4 1000 1554888560 0.0 0.0 Home
5 1000 1554888560 1400.0 23.0 orders/details
6 1000 1554888560 5340.0 54.0 orders/payment
7 1000 1554888560 7034.0 55.0 orders/afterpayment
8 1000 1554888560 11034.0 65.0 orders/myorders
9 1000 1554888560 13059.0 110.0 customercare
The output of df_final is my expected output. So the question is: can I do this in a more efficient way?
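One possible faster route, sketched below but not benchmarked here: build the prepended rows with drop_duplicates on the group keys instead of iterating over the groups, then concatenate and sort exactly as above. drop_duplicates keeps the first occurrence of each (VisitorID, EpochTime) pair, so it assumes rows are already ordered within each group:

```python
import pandas as pd

df = pd.DataFrame({'VisitorID': [1000] * 8,
                   'EpochTime': [1554888560] * 5 + [1521333510] * 3,
                   'HitTime': [1400, 5340, 7034, 11034, 13059, 990, 4149, 6450],
                   'HitNumber': [23, 54, 55, 65, 110, 14, 29, 54],
                   'PagePath': ['orders/details', 'orders/payment', 'orders/afterpayment',
                                'orders/myorders', 'customercare', 'orders/details',
                                'orders/payment', 'orders/myorders']})

# First row of each (VisitorID, EpochTime) group, without a Python loop.
home_rows = df.drop_duplicates(['VisitorID', 'EpochTime']).copy()
home_rows['HitTime'] = 0
home_rows['HitNumber'] = 0
home_rows['PagePath'] = 'Home'

# Same concat + sort as the original approach; the 'Home' rows land first
# within each group because their HitNumber is 0.
df_final = (pd.concat([df, home_rows], ignore_index=True)
              .sort_values(['VisitorID', 'EpochTime', 'HitNumber'])
              .reset_index(drop=True))
```

Both steps run in vectorized pandas code, which avoids the per-group Python overhead of the groupby loop on a 10-million-row frame.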