
I have a dict of time series as pandas.DataFrame objects each with an arbitrary number of columns.

I want to convert each DataFrame into a list of dicts (e.g. [{"col1": "row1", "col2": "row2", ..}, {"col1": "row2", ..}, ..]), then sort them by the timestamp value of each dict (a timestamp is mandatory in each DataFrame).

This is a performance improvement question. The code below works but I'm trying to find the fastest possible way to do it.

I know this problem could be parallelized, but I'm not sure that's the optimal route.
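For concreteness, pandas' built-in `DataFrame.to_dict('records')` produces exactly this target shape for a single frame; a tiny illustrative sketch (the two-row frame here is made up):

```python
import pandas as pd

# a hypothetical two-row frame standing in for one entry of the dict
df = pd.DataFrame({'x': [1.0, 2.0]},
                  index=pd.to_datetime(['2020-01-01', '2020-01-02']))
df.index.name = 'timestamp'

# reset_index() moves the timestamp index into a column, and
# to_dict('records') emits one dict per row
records = df.reset_index().to_dict('records')
# records == [{'timestamp': Timestamp('2020-01-01 00:00:00'), 'x': 1.0},
#             {'timestamp': Timestamp('2020-01-02 00:00:00'), 'x': 2.0}]
```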

import pandas as pd
import numpy as np


def gen_random_df(rows):
    # size=rows draws `rows` samples per column; the original
    # np.random.normal(rows) returns a single scalar with mean=rows,
    # which pandas then broadcasts down every row
    df = pd.DataFrame({'x': np.random.normal(size=rows),
                       'y': np.random.normal(size=rows),
                       'z': np.random.normal(size=rows)},
                      index=pd.date_range('1900-01-01', '2049-12-31')[:rows])
    df.index.name = 'timestamp'
    return df


def to_list1(df, symbol):
    df = df.reset_index()
    return [dict(zip(df.columns, v), symbol=symbol) for v in df.values]


def method1(dict_of_dfs):
    data = []
    for symbol, df in dict_of_dfs.items():
        data.extend(to_list1(df, symbol))
    return sorted(data, key=lambda x: x['timestamp'])

Second method:


def method2(dict_of_dfs):
    dict_of_dfs = {symbol: df.assign(symbol=symbol) for symbol, df in dict_of_dfs.items()}
    # sort_index puts the concatenated rows in global timestamp order,
    # matching the sorted() step in method1
    combined = pd.concat(dict_of_dfs.values(), axis=0).sort_index().reset_index()
    return list(combined.to_dict('index').values())
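As a point of comparison, the same concat-based idea can be expressed more directly with `to_dict('records')`; a sketch (`method2_records` is a hypothetical name, not from the original):

```python
import pandas as pd


def method2_records(dict_of_dfs):
    # tag each frame with its symbol, concatenate, sort on the shared
    # timestamp index, then let pandas emit one dict per row
    frames = (df.assign(symbol=symbol) for symbol, df in dict_of_dfs.items())
    return pd.concat(frames, axis=0).sort_index().reset_index().to_dict('records')
```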

Here's the performance of the two approaches. method1 is the faster of the two, but can it be improved?

symbols = 10
rows = 10_000
dict_of_dfs = {str(symbol): gen_random_df(rows) for symbol in range(symbols)}

%timeit result = method1(dict_of_dfs)
1.46 s ± 64.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit result = method2(dict_of_dfs)
1.87 s ± 102 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

Here's the expected result:

result[:3]
[{'timestamp': Timestamp('1900-01-01 00:00:00'),
  'x': 9998.31375178033,
  'y': 10000.298442533112,
  'z': 9999.538765089255,
  'symbol': '0'},
 {'timestamp': Timestamp('1900-01-02 00:00:00'),
  'x': 9998.31375178033,
  'y': 10000.298442533112,
  'z': 9999.538765089255,
  'symbol': '0'},
 {'timestamp': Timestamp('1900-01-03 00:00:00'),
  'x': 9998.31375178033,
  'y': 10000.298442533112,
  'z': 9999.538765089255,
  'symbol': '0'}]
sophros
ssm

1 Answer


Based on this answer, I assume the fastest approach for to_list1 would be to avoid dict() in favor of a dict comprehension, iterating with itertools.chain over the extended values list, and preparing the list of column names (cols) in advance.

from itertools import chain


def to_list1(df, symbol):
    df = df.reset_index()
    cols = list(df.columns)
    cols.append('symbol')

    return [{kk: vv for kk, vv in zip(cols, chain(v, [symbol]))} for v in df.values]

In my case (Python 3.7.2, 64-bit, Ubuntu 16.04), timeit returns:

to_list1: 2.211 s
to_list2: 6.629 s
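Since each frame's DatetimeIndex is already sorted, another avenue worth benchmarking is to replace the global sorted() in method1 with a k-way merge via heapq.merge, which is O(n log k) over k symbols rather than O(n log n). A sketch, reusing the question's to_list1 (the method3 name is mine, and I haven't timed it against the numbers above):

```python
import heapq
from operator import itemgetter

import pandas as pd


def to_list1(df, symbol):
    # unchanged from the question
    df = df.reset_index()
    return [dict(zip(df.columns, v), symbol=symbol) for v in df.values]


def method3(dict_of_dfs):
    # each per-symbol list is already in timestamp order, so merging
    # the sorted runs avoids re-sorting the combined list from scratch
    per_symbol = (to_list1(df, symbol) for symbol, df in dict_of_dfs.items())
    return list(heapq.merge(*per_symbol, key=itemgetter('timestamp')))
```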
sophros