I have a huge log dataset of around 10 million rows and I am running into serious performance and speed problems. I tried pandas, numpy (also using the numba library) and dask. However, I wasn't able to achieve sufficient success.
Raw Data (minimal and simplified)
import pandas as pd

df = pd.read_csv('data.csv', sep=';', names=['ID', 'UserID'], error_bad_lines=False,
                 encoding='latin-1', dtype='category')
For problem reproduction:
df = pd.DataFrame({'ID': [999974708546523127, 999974708546523127, 999974708546520000],
                   'UserID': ['AU896', 'ZZ999', 'ZZ999']}, dtype='category')
df
                 ID UserID
999974708546523127  AU896
999974708546523127  ZZ999
999974708546520000  ZZ999
Expected Output
User   999974708546520000  999974708546523127
AU896                    1                    0
ZZ999                    1                    1
I am able to achieve this with the following scripts. However, on big datasets they are terribly slow. Eventually I need to compute a correlation matrix between all users, based on the expected output; this is the reason for the structure of the output (see the sketch after the pandas attempt below):
Pandas
results_id = pd.crosstab(df.UserID, df.ID, dropna=False)
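For context, the later correlation step looks roughly like this; it is only a sketch and assumes results_id is the crosstab from the line above:
# results_id has one row per user and one column per ID; transposing it lets
# DataFrame.corr() compute the pairwise user-by-user correlation matrix.
user_corr = results_id.T.corr()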
Numpy and Numba
import numpy as np
import numba
# Use the integer category codes so the arrays are purely numeric;
# numba's nopython mode cannot operate on object/string arrays.
records = np.column_stack([df['ID'].cat.codes.to_numpy(),
                           df['UserID'].cat.codes.to_numpy()])
unique_id = np.unique(records[:, 0])
unique_userid = np.unique(records[:, 1])

@numba.jit(nopython=True)
def ID_PreProcess(records, unique_id, unique_userid):
    results_id = np.zeros((len(unique_userid), len(unique_id)))
    for userid in range(len(unique_userid)):
        for id in range(len(unique_id)):
            # count the rows where both the user and the ID match
            match = (records[:, 1] == unique_userid[userid]) & (records[:, 0] == unique_id[id])
            results_id[userid, id] = match.sum()
    return results_id

results_id = ID_PreProcess(records, unique_id, unique_userid)
Dask
import pandas as pd
import dask.dataframe as dd
dask_logs = dd.from_pandas(df, npartitions=2)
results_id = dd.concat([dask_logs.UserID, dd.get_dummies(dask_logs.ID)], axis=1).groupby('UserID').sum().compute()
I hope this shows that I have tried several different approaches. However, none of them is efficient enough for this number of rows.
I found this post, which seems to be very close to my problem, but I wasn't able to adapt its solutions to my case.
Thank you very much for your help!