
I have some problems with pandas' HDFStore being far too slow and unfortunately I'm unable to put together a satisfying solution from other questions here.

Situation

I have a big DataFrame, containing mostly floats and some integer columns, which goes through multiple processing steps (renaming, removing bad entries, aggregating by 30 min). Each row has a timestamp associated with it. I would like to save some intermediate steps to an HDF file, so that the user can do a single step iteratively without starting from scratch each time.

Additionally, the user should be able to plot certain columns from these saves in order to select bad data. Therefore I would like to retrieve only the column names without reading the data in the HDFStore. Concretely, the user should get a list of all columns of all DataFrames stored in the HDF; they then select which columns they would like to see, and I use matplotlib to present them the corresponding data.

Data

shape == (5730000, 339) does not seem large at all, which is why I'm confused... (it might get far more rows over time, but the columns should stay fixed). In the first step I iteratively append rows and columns (that runs okay), but once that's done I always process the entire DataFrame at once, only grouping or removing data.

My approach

  1. I do all manipulations in memory since pandas seems to be rather fast and I/O is slower (the HDF file is on a different physical server, I think)
  2. I use datetime index and automatically selected float or integer columns
  3. I save the steps with hdf.put('/name', df, format='fixed') since hdf.put('/name', df, format='table', data_columns=True) seemed to be far too slow.
  4. I use e.g. df.groupby(df.index).first() and df.groupby(pd.Grouper(freq='30Min')).agg(agg_dict) to process the data, where agg_dict is a dictionary with one function per column. This is incredibly slow as well (a minimal sketch follows this list).
  5. For plotting, I have to read in the entire DataFrame and then get the columns: hdfstore.get('/name').columns
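
For reference, a minimal sketch of the processing step in point 4, with an illustrative agg_dict (the real one has one entry per column):

import numpy as np
import pandas as pd

# Small stand-in for the real data: a DatetimeIndex with float columns.
idx = pd.date_range('2018-01-01', periods=1000, freq='1Min')
df = pd.DataFrame({'a': np.random.randn(1000),
                   'b': np.random.randn(1000)}, index=idx)

agg_dict = {'a': 'mean', 'b': 'max'}  # illustrative; one function per column

# Collapse duplicate timestamps, then aggregate into 30-minute bins.
df = df.groupby(df.index).first()
df = df.groupby(pd.Grouper(freq='30Min')).agg(agg_dict)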

Question

  • How can I retrieve all columns without reading any data from the HDFStore?
  • What would be the most efficient way of storing my data? Is HDF the right option? Table or fixed?
  • Does it matter in terms of efficiency if the index is a datetime index? Is there a more efficient format in general (e.g. all columns the same, fixed dtype)?
  • Is there a faster way to aggregate than groupby (df.groupby(pd.Grouper(freq='30Min')).agg(agg_dict))?

Similar questions

How to access single columns using .select: I see that I can use this to retrieve only certain columns, but only after I know the column names, I think.
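
Once the column names are known, a table-format store can indeed return just those columns without loading the rest. A sketch (the file name, key and column names are placeholders):

import pandas as pd

with pd.HDFStore('store.h5') as hdf:
    # columns=... only works for format='table';
    # fixed stores must be read in their entirety.
    subset = hdf.select('/name', columns=['col_a', 'col_b'])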

Thank you for any advice!

– Snow bunting
  • Regarding faster way to aggregate: since you have a `DatetimeIndex` have you tried `df.resample('30T').agg(agg_dict)` instead? – jeschwar Mar 21 '18 at 15:43
  • Thanks for the input! It seems to be roughly the same speed, but `.groupby` seems to be more flexible (e.g. I can also use `.apply` instead of `.agg` for more evolved aggregation). – Snow bunting Mar 23 '18 at 12:19
  • 1
    No problem. In the latest versions of `pandas` the `.resample()` method when you have a `DatetimeIndex` works more like a `groupby`; you can even do `df.resample().apply()`. See the [docs](https://pandas.pydata.org/pandas-docs/stable/api.html#resampling) for more info. – jeschwar Mar 23 '18 at 14:14
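
A minimal sketch of the equivalence discussed in these comments (the sample data is made up; agg_dict mirrors the question):

import numpy as np
import pandas as pd

idx = pd.date_range('2018-01-01', periods=120, freq='1Min')
df = pd.DataFrame({'a': np.random.randn(120)}, index=idx)
agg_dict = {'a': 'mean'}

# Both produce the same 30-minute aggregation on a DatetimeIndex;
# resample() additionally supports .apply() for more involved logic.
out_groupby = df.groupby(pd.Grouper(freq='30Min')).agg(agg_dict)
out_resample = df.resample('30Min').agg(agg_dict)
assert out_groupby.equals(out_resample)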

3 Answers


You may simply load 0 rows of the DataFrame by specifying the same start and stop values, leaving all internal index/column processing to pandas itself:

idx = pd.MultiIndex.from_product([('A', 'B'), range(2)], names=('Alpha', 'Int'))
df = pd.DataFrame(np.random.randn(len(idx), 3), index=idx, columns=('I', 'II', 'III'))
df

>>>                 I           II          III
>>> Alpha   Int             
>>>     A     0     -0.472412    0.436486    0.354592
>>>           1     -0.095776   -0.598585   -0.847514
>>>     B     0      0.107897    1.236039   -0.196927
>>>           1     -0.154014    0.821511    0.092220

The following works for both fixed and table formats:

with pd.HDFStore('test.h5') as store:
    store.put('df', df, format='f')
    meta = store.select('df', start=1, stop=1)
    meta
    meta.index
    meta.columns

>>>               I     II    III
>>> Alpha   Int             
>>>
>>> MultiIndex(levels=[[], []],
>>>            codes=[[], []],
>>>            names=['Alpha', 'Int'])
>>>
>>> Index(['I', 'II', 'III'], dtype='object')

As for the other questions:

  1. As long as your data is mostly homogeneous (almost all float columns, as you mentioned) and you are able to store it in a single file without needing to distribute data across machines, HDF is the first thing to try.
  2. If you need to append/delete/query data, you must use the table format. If you only need to write once and read many times, the fixed format will improve performance (see the sketch after this list).
  3. As for the datetime index, I think we may use the same idea as in point 1: if you are able to convert all data into a single type, it should increase your performance.
  4. Nothing beyond what was proposed in the comments to your question comes to mind.
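
A short sketch of the trade-off in point 2 (file and key names are illustrative):

import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(5, 3), columns=list('abc'),
                  index=pd.date_range('2018-01-01', periods=5, freq='30Min'))

with pd.HDFStore('demo.h5') as store:
    # fixed: fastest write/read, but write-once and read-as-a-whole.
    store.put('snapshot', df, format='fixed')

    # table: slower, but supports appending and on-disk queries.
    store.put('log', df, format='table')
    store.append('log', df)  # append more rows in a later session
    recent = store.select('log', where='index >= "2018-01-01 01:00"')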
– Xronx

For an HDFStore hdf and a key (from hdf.keys()) you can get the column names with:

# Table stored with hdf.put(..., format='table')
columns = hdf.get_node('{}/table'.format(key)).description._v_names

# Table stored with hdf.put(..., format='fixed')
columns = list(hdf.get_node('{}/axis0'.format(key)).read().astype(str))

Note that hdf.get(key).columns works as well, but it reads all the data into memory, while the approach above only reads the column names.


Full working example:

#!/usr/bin/env python
import pandas as pd

data = pd.DataFrame({'a': [1,1,1,2,3,4,5], 'b': [2,3,4,1,3,2,1]})

with pd.HDFStore(path='store.h5', mode='a') as hdf:
    hdf.put('/DATA/fixed_store', data, format='fixed')
    hdf.put('/DATA/table_store', data, format='table', data_columns=True)
    for key in hdf.keys():
        try:
            # column names of table store
            print(hdf.get_node('{}/table'.format(key)).description._v_names)
        except AttributeError:
            try:
                # column names of fixed store
                print(list(hdf.get_node('{}/axis0'.format(key)).read().astype(str)))
            except AttributeError:
                # e.g. a dataset created by h5py instead of pandas.
                print('unknown node in HDF.')
– Snow bunting
  1. Columns without reading any data:
store.get_storer('df').ncols # substitute 'df' with your key
# you can also access nrows and other useful fields
  2. From the docs (fixed format, table format), with important points in bold:

[fixed] These types of stores are **not appendable** once written (though you can simply remove them and rewrite). Nor are they **queryable**; they must be retrieved in their entirety. They also do not support dataframes with non-unique column names. The fixed format stores offer **very fast writing and slightly faster reading** than table stores.

[table] Conceptually a table is shaped very much like a DataFrame, with rows and columns. A table **may be appended to** in the same or other sessions. In addition, **delete and query type operations** are supported.

  3. You may try to use epoch ms (or epoch ns), i.e. milliseconds or nanoseconds since the epoch, in place of datetimes. This way, you are just dealing with integer indices (a sketch follows this list).

  4. You may have a look at this answer if what you need is group-bys on large data.
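
A minimal sketch of the index conversion in point 3 (whether it actually helps is workload-dependent):

import pandas as pd

idx = pd.date_range('2018-01-01', periods=3, freq='30Min')
df = pd.DataFrame({'a': [1.0, 2.0, 3.0]}, index=idx)

# DatetimeIndex -> int64 nanoseconds since the epoch, and back again.
df.index = df.index.asi8             # plain integer index
df.index = pd.to_datetime(df.index)  # restore datetimes when needed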


A piece of advice: if you have 4 questions to ask, it may be better to ask 4 separate questions on SO. That way you'll get more (and higher quality) answers, since each one is easier to tackle, and each will deal with a specific topic, making it easier for people looking for specific answers to find them.

– Michele Piccolini