
I have a DataFrame that looks like this:

(image of a DataFrame with a MultiIndex ('userID', 'session_index') and columns 'timestamp' and 'pageid')

I need to convert it to the structure that looks like this:

{1234: [[(1504010302, 45678), (1504016546, 78908)], [(1506691286, 23208)]],
 4567: [[(1529577322, 789323)], [(1532173522, 1094738), (1532190922, 565980)]]}

So basically, I need the first-level index ('userID') as the dictionary key, mapping to the list of all sessions of that user, where each session is a distinct list of page views as (timestamp, pageid) tuples, grouped by the second-level index ('session_index'). I was trying to adapt this solution: Convert dataframe to dictionary of list of tuples. But I couldn't figure out how to modify it to get the structure I need.
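The session rule I'm assuming can be sketched in plain Python (a hypothetical `split_sessions` helper, just for illustration): consecutive page views belong to the same session when they are at most 24 hours apart.

```python
SESSION_GAP = 24 * 3600  # maximum gap within a session, in seconds

def split_sessions(events):
    """events: list of (timestamp, pageid) tuples, sorted by timestamp."""
    sessions = []
    for ts, page in events:
        # Compare against the timestamp of the last event in the last session
        if sessions and ts - sessions[-1][-1][0] <= SESSION_GAP:
            sessions[-1].append((ts, page))
        else:
            sessions.append([(ts, page)])
    return sessions

split_sessions([(1504010302, 45678), (1504016546, 78908), (1506691286, 23208)])
# → [[(1504010302, 45678), (1504016546, 78908)], [(1506691286, 23208)]]
```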

import pandas as pd
from datetime import datetime

# I'm creating a sample of page-view events from different sessions
iterator = iter([
    {'user': 1234, 'timestamp': 1504010302, 'pageid': 45678},
    {'user': 1234, 'timestamp': 1504016546, 'pageid': 78908},
    {'user': 1234, 'timestamp': 1506691286, 'pageid': 23208},
    {'user': 4567, 'timestamp': 1529577322, 'pageid': 789323},
    {'user': 4567, 'timestamp': 1532173522, 'pageid': 1094738},
    {'user': 4567, 'timestamp': 1532190922, 'pageid': 565980},
])

# Then I'm creating an empty DataFrame
df = pd.DataFrame(columns=['userID', 'session_index', 'timestamp', 'pageid'])

# Then I'm filling the empty DataFrame, assigning each event to a session
for entry in iterator:
    if not (df.userID == entry['user']).any():
        # First event for this user: start session 1
        df = df.append([{'userID': entry['user'], 'session_index': 1,
                         'timestamp': entry['timestamp'], 'pageid': entry['pageid']}],
                       ignore_index=True)
    else:
        # Find existing rows of this user within 24 hours of the new event
        def hours_apart(x):
            delta = abs(datetime.fromtimestamp(x)
                        - datetime.fromtimestamp(entry['timestamp']))
            return delta.days * 24 + delta.seconds // 3600

        session_numbers = df[(df.userID == entry['user'])
                             & (df.timestamp.apply(hours_apart) <= 24)]
        if len(session_numbers.session_index.values) == 0:
            # No session close enough in time: open a new one
            df = df.append([{'userID': entry['user'],
                             'session_index': df.session_index[df.userID == entry['user']].max() + 1,
                             'timestamp': entry['timestamp'], 'pageid': entry['pageid']}],
                           ignore_index=True)
        else:
            # Otherwise attach the event to the matching session
            df = df.append([{'userID': entry['user'],
                             'session_index': session_numbers.session_index.values[0],
                             'timestamp': entry['timestamp'], 'pageid': entry['pageid']}],
                           ignore_index=True)

# Then I'm setting the Multi Index
df = df.set_index(['userID', 'session_index'])
print(df.index)

# Then I'm trying to get the final dictionary
new_dict = df.apply(tuple, axis=1)\
    .groupby(level=0)\
    .agg(lambda x: list(x.values))\
    .to_dict()
1 Answer


Your code is complicated to understand. I've rewritten it in a more Pythonic way. Try it (it works with pandas 0.23.0):

rows = [
    {'user': 1234, 'timestamp': 1504010302, 'pageid': 45678},
    {'user': 1234, 'timestamp': 1504016546, 'pageid': 78908},
    {'user': 1234, 'timestamp': 1506691286, 'pageid': 23208},
    {'user': 4567, 'timestamp': 1529577322, 'pageid': 789323},
    {'user': 4567, 'timestamp': 1532173522, 'pageid': 1094738},
    {'user': 4567, 'timestamp': 1532190922, 'pageid': 565980},
]

d = pd.DataFrame(rows)
d["time_diff"] = d.groupby("user")["timestamp"]\
    .rolling(2).apply(lambda x: x[1] - x[0] > 24 * 3600)\
    .fillna(0)\
    .values

d["session_index"] = d.groupby("user")["time_diff"].cumsum()\
    .astype(int) + 1

d.drop("time_diff", axis=1, inplace=True)
d = d.set_index(['user', 'session_index'])

d.apply(lambda x: list(x)[::-1], axis=1)\
    .groupby(level=0)\
    .agg(lambda x: list(x.values))\
    .to_dict()

Result:

{1234: [[1504010302, 45678], [1504016546, 78908], [1506691286, 23208]],
 4567: [[1529577322, 789323], [1532173522, 1094738], [1532190922, 565980]]}
  • I'm sorry to mention this, but when I run exactly your code I still get this structure: `{'pageid': {1234: [45678, 78908, 23208], 4567: [789323, 1094738, 565980]}, 'timestamp': {1234: [1504010302, 1504016546, 1506691286], 4567: [1529577322, 1532173522, 1532190922]}}` Anyway, thank you very much for trying to help! – Elena Jun 18 '18 at 14:36
  • @Elena Can you, please, show the result of the following script: `import sys; print("python version:", sys.version); print("pandas version:", pd.__version__)` – koPytok Jun 18 '18 at 14:44
  • Here it is: `python version: 3.5.2 (default, Nov 23 2017, 16:37:01) [GCC 5.4.0 20160609] pandas version: 0.22.0` – Elena Jun 18 '18 at 14:45
  • And even when your code works, I still need a slightly different result, with an additional list structure inside the main list (based on the session index): `{1234: [[(1504010302, 45678), (1504016546, 78908)], [(1506691286, 23208)]], 4567: [[(1529577322, 789323)], [(1532173522, 1094738), (1532190922, 565980)]]}` – Elena Jun 18 '18 at 14:48
  • @Elena I see the same behavior as yours in `pandas 0.22.0` The problem is that it doesn't return series of tuples after `d.apply(tuple, axis=1)`, but returns DataFrame instead. In `pandas 0.23.0` it works. You can upgrade your pandas library with `sudo -H pip3 install --upgrade pandas` if you want – koPytok Jun 18 '18 at 14:58
  • @Elena to achieve the desired order, I've replaced `d.apply(tuple, axis=1)` with `d.apply(lambda x: list(x)[::-1], axis=1)` – koPytok Jun 18 '18 at 15:04
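For reference, here is a sketch that produces the nested per-session structure requested in the question. It assumes a recent pandas and uses `groupby().diff()` for the session split instead of `rolling`, then groups twice: first by both levels to collect tuples per session, then by user to collect sessions.

```python
import pandas as pd

rows = [
    {'user': 1234, 'timestamp': 1504010302, 'pageid': 45678},
    {'user': 1234, 'timestamp': 1504016546, 'pageid': 78908},
    {'user': 1234, 'timestamp': 1506691286, 'pageid': 23208},
    {'user': 4567, 'timestamp': 1529577322, 'pageid': 789323},
    {'user': 4567, 'timestamp': 1532173522, 'pageid': 1094738},
    {'user': 4567, 'timestamp': 1532190922, 'pageid': 565980},
]
d = pd.DataFrame(rows)

# A new session starts when the gap to the user's previous event exceeds 24 hours
gap = d.groupby('user')['timestamp'].diff().gt(24 * 3600)
d['session_index'] = gap.astype(int).groupby(d['user']).cumsum() + 1

# First collect (timestamp, pageid) tuples per session, then sessions per user
per_session = (d.groupby(['user', 'session_index'])
                .apply(lambda g: list(zip(g['timestamp'], g['pageid']))))
result = per_session.groupby(level=0).agg(list).to_dict()
```

`result` then has the shape `{user: [[(timestamp, pageid), ...], ...]}`, one inner list per session.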