I currently have a dataset with a million rows, each with around 10,000 columns (variable length).
Now I want to write this data to an HDF5 file so I can use it later on. I got this to work, but it's incredibly slow: even 1000 values take up to a few minutes just to get stored in the HDF5 file.
I've been looking everywhere, including SO and the h5py docs, but I really can't find anything that describes my use case, yet I know it can be done.
Below is some demo source code that shows what I'm doing right now:
import h5py
import numpy as np
# I am using just random values here
# I know I can use h5py broadcasts and I have seen it being used before.
# But the issue I have is that I need to save around a million rows with each 10000 values
# so I can't keep the entire array in memory.
random_ints = np.random.random(size=(5000, 10000))
# See http://stackoverflow.com/a/36902906/3991199 for "libver='latest'"
with h5py.File('my.data.hdf5', "w", libver='latest') as f:
    X = f.create_dataset("X", (5000, 10000))
    for i1 in range(0, 5000):
        for i2 in range(0, 10000):
            X[i1, i2] = random_ints[i1, i2]
        if i1 != 0 and i1 % 1000 == 0:
            print "Done %d values..." % i1
This data comes from a database; it is not a pre-generated NumPy array as in the demo source code.
If you run this code you can see that it takes a long time before it prints out "Done 1000 values".
I'm on a laptop with 8 GB of RAM, Ubuntu 16.04 LTS, an Intel Core M (which performs similarly to a Core i5) and an SSD, so that should be enough to perform a bit faster than this.
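To make the memory constraint concrete, this is roughly the batched version I have in mind; fetch_rows() is just a hypothetical stand-in for my real database query, and 1000 rows per batch is an arbitrary choice:

import h5py
import numpy as np

BATCH = 1000  # hypothetical batch size, small enough to hold in memory

def fetch_rows(start, count):
    # Hypothetical stand-in for my real database query:
    # returns `count` rows of 10000 floats each, starting at row `start`.
    return np.random.random(size=(count, 10000))

with h5py.File('my.data.hdf5', "w", libver='latest') as f:
    X = f.create_dataset("X", (5000, 10000))
    for start in range(0, 5000, BATCH):
        block = fetch_rows(start, BATCH)   # only BATCH rows in memory at once
        X[start:start + BATCH, :] = block  # one slice assignment per batch
        print "Done %d rows..." % (start + BATCH)

That keeps at most BATCH rows in memory at a time, which is the constraint I'm working under.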
I've read about broadcasting here: http://docs.h5py.org/en/latest/high/dataset.html
When I use it like this:
for i1 in range(0, 5000):
    X[i1, :] = random_ints[i1]
It already runs an order of magnitude faster (done in a few seconds). But I don't know how to get that to work with a variable-length dataset (the columns are variable-length). It would be nice to get some insight into how this should be done, as I don't think I have a good grasp of the HDF5 concepts yet :) Thanks a lot!
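For reference, this is my current (unverified) understanding of how a variable-length dataset would be written row by row, based on h5py's special types; the random row lengths here are just placeholders for my real database rows:

import h5py
import numpy as np

# Each element of the dataset is itself a 1-D float64 array,
# so every row can have a different number of columns.
dt = h5py.special_dtype(vlen=np.dtype('float64'))

with h5py.File('my.vlen.data.hdf5', "w", libver='latest') as f:
    X = f.create_dataset("X", (5000,), dtype=dt)
    for i1 in range(0, 5000):
        # Placeholder row with a random length; in reality this would
        # come from the database one row at a time.
        row = np.random.random(size=np.random.randint(1, 10001))
        X[i1] = row  # one write per row instead of one write per value
        if i1 != 0 and i1 % 1000 == 0:
            print "Done %d rows..." % i1

I'm not sure whether this is the intended way to combine the row-wise writes with variable-length columns.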