
I have a dataframe that looks like this:

df = pd.DataFrame({'VisitorID': [1000, 1000, 1000, 1000, 1000, 1000, 1000, 1000],
                   'EpochTime': [1554888560, 1554888560, 1554888560, 1554888560, 1554888560, 1521333510, 1521333510, 1521333510],
                   'HitTime': [1400, 5340, 7034, 11034, 13059, 990, 4149, 6450],
                   'HitNumber':[23, 54, 55, 65, 110, 14, 29, 54],
                   'PagePath':['orders/details', 'orders/payment', 'orders/afterpayment', 'orders/myorders', 'customercare', 'orders/details', 'orders/payment', 'orders/myorders']})

print(df)
   VisitorID   EpochTime  HitTime  HitNumber             PagePath
0       1000  1554888560     1400         23       orders/details
1       1000  1554888560     5340         54       orders/payment
2       1000  1554888560     7034         55  orders/afterpayment
3       1000  1554888560    11034         65      orders/myorders
4       1000  1554888560    13059        110         customercare
5       1000  1521333510      990         14       orders/details
6       1000  1521333510     4149         29       orders/payment
7       1000  1521333510     6450         54      orders/myorders

In reality my dataframe has roughly 10 million rows and twice as many columns. The data is website data that captures the behavior of customers.

What I want to do
To analyze how long customers are on the website before they reach the first tracked page, I want to add one row above each group, copying the values of the group's top row for these columns:

  • VisitorID
  • EpochTime

but giving new values to these columns:

  • HitTime = 0
  • HitNumber = 0
  • PagePath = Home

Info: The combination of VisitorID + EpochTime makes a group unique.

I achieved this with the following code, but it takes about 5 minutes to run; I think there should be a faster way:

lst = []
for x, y in df.groupby(['VisitorID', 'EpochTime']):
    lst.append(y.iloc[:1])

df_first = pd.concat(lst, ignore_index=True)

df_first['HitTime'] = 0.0
df_first['HitNumber'] = 0.0
df_first['PagePath'] = 'Home'

print(df_first)
   VisitorID   EpochTime  HitTime  HitNumber PagePath
0       1000  1521333510      0.0        0.0     Home
1       1000  1554888560      0.0        0.0     Home

df_final = (pd.concat([df, df_first], ignore_index=True)
              .sort_values(['VisitorID', 'EpochTime', 'HitNumber'])
              .reset_index(drop=True))

print(df_final)
   VisitorID   EpochTime  HitTime  HitNumber             PagePath
0       1000  1521333510      0.0        0.0                 Home
1       1000  1521333510    990.0       14.0       orders/details
2       1000  1521333510   4149.0       29.0       orders/payment
3       1000  1521333510   6450.0       54.0      orders/myorders
4       1000  1554888560      0.0        0.0                 Home
5       1000  1554888560   1400.0       23.0       orders/details
6       1000  1554888560   5340.0       54.0       orders/payment
7       1000  1554888560   7034.0       55.0  orders/afterpayment
8       1000  1554888560  11034.0       65.0      orders/myorders
9       1000  1554888560  13059.0      110.0         customercare

The df_final above is my expected output.

So the question is, can I do this in a more efficient way?


1 Answer


You can use DataFrame.drop_duplicates to improve performance a bit:

d = {'HitTime':0,'HitNumber':0,'PagePath':'Home'}
df_first = df.drop_duplicates(['VisitorID', 'EpochTime']).assign(**d)

df_final = (pd.concat([df, df_first], ignore_index=True)
             .sort_values(['VisitorID', 'EpochTime', 'HitNumber'])
             .reset_index(drop=True))

print(df_final)

   VisitorID   EpochTime  HitTime  HitNumber             PagePath
0       1000  1521333510        0          0                 Home
1       1000  1521333510      990         14       orders/details
2       1000  1521333510     4149         29       orders/payment
3       1000  1521333510     6450         54      orders/myorders
4       1000  1554888560        0          0                 Home
5       1000  1554888560     1400         23       orders/details
6       1000  1554888560     5340         54       orders/payment
7       1000  1554888560     7034         55  orders/afterpayment
8       1000  1554888560    11034         65      orders/myorders
9       1000  1554888560    13059        110         customercare
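
If you want to verify that this selects the same rows as the original loop, here is a small sanity check (reusing df from the question; groupby sorts by the keys, hence the extra sort_values before comparing):

import pandas as pd

# drop_duplicates with the default keep='first' keeps the first row of each
# (VisitorID, EpochTime) combination - the same rows that y.iloc[:1] picked
# in the loop, but in a single vectorized pass.
via_loop = pd.concat([y.iloc[:1] for _, y in df.groupby(['VisitorID', 'EpochTime'])],
                     ignore_index=True)
via_drop = (df.drop_duplicates(['VisitorID', 'EpochTime'])
              .sort_values(['VisitorID', 'EpochTime'])
              .reset_index(drop=True))

print(via_loop.equals(via_drop))  # True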

Another idea is to change the index values in df_first by subtracting 0.5, then sort by index at the end; this also preserves the original row order of df instead of sorting by the key columns:

d = {'HitTime':0,'HitNumber':0,'PagePath':'Home'}
df_first = df.drop_duplicates(['VisitorID', 'EpochTime']).assign(**d)
df_first.index -= .5

df_final = pd.concat([df, df_first]).sort_index().reset_index(drop=True)
print(df_final)
   VisitorID   EpochTime  HitTime  HitNumber             PagePath
0       1000  1554888560        0          0                 Home
1       1000  1554888560     1400         23       orders/details
2       1000  1554888560     5340         54       orders/payment
3       1000  1554888560     7034         55  orders/afterpayment
4       1000  1554888560    11034         65      orders/myorders
5       1000  1554888560    13059        110         customercare
6       1000  1521333510        0          0                 Home
7       1000  1521333510      990         14       orders/details
8       1000  1521333510     4149         29       orders/payment
9       1000  1521333510     6450         54      orders/myorders
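
For intuition on why this works (values from the example above): drop_duplicates keeps the index labels of each group's first row in df, so after subtracting 0.5 the new rows sort in just before their groups:

# In df, the two groups start at index 0 (EpochTime 1554888560) and index 5
# (EpochTime 1521333510), so df_first gets labels -0.5 and 4.5, and
# sort_index slots each 'Home' row directly before its group.
print(df.drop_duplicates(['VisitorID', 'EpochTime']).index.tolist())  # [0, 5]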
  • Could you explain what the `**d` in `assign()` does? – Erfan Apr 10 '19 at 11:23
  • @Erfan - check [this](https://stackoverflow.com/questions/21809112/what-does-tuple-and-dict-means-in-python): `**d` means "treat the key-value pairs in the dictionary as additional named arguments to this function call." – jezrael Apr 10 '19 at 11:27 (a short sketch follows after these comments)
  • Thank you, this ran approx. 100× faster (5 sec). Btw, what made my code so slow? – Erfan Apr 10 '19 at 11:32
  • @Erfan - Loop `for x, y in df.groupby(['VisitorID', 'EpochTime']): lst.append(y.iloc[:1])` and then `df_first = pd.concat(lst, ignore_index=True)` – jezrael Apr 10 '19 at 11:33
  • I see. And for the `**d`, could we also have used `map` for this? – Erfan Apr 10 '19 at 11:44
  • @Erfan - Not sure if understand, can you explain more? – jezrael Apr 10 '19 at 11:44
  • I thought about using `pandas.Series.map` to map the new values to the columns, but I guess that would not work since it goes over the dataframe – Erfan Apr 10 '19 at 11:46
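
To make the `**d` unpacking from the comments concrete, here is a minimal sketch (reusing df_first and d from the answer). Unpacking the dict passes each key/value pair as a keyword argument:

# assign(**d) expands the dict into keyword arguments, so it is
# equivalent to spelling the new columns out by hand:
d = {'HitTime': 0, 'HitNumber': 0, 'PagePath': 'Home'}

out_unpacked = df_first.assign(**d)
out_explicit = df_first.assign(HitTime=0, HitNumber=0, PagePath='Home')

print(out_unpacked.equals(out_explicit))  # True

As for the last comment: pandas.Series.map transforms the values of a single existing Series element-wise, so it is indeed not the right tool for setting several columns to constant values.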