
I have a very large dataset stored in an h5py file, and loading it in full causes memory problems during subsequent processing. I need to randomly select a subset and work with only that; this is for "boosting" in the machine-learning sense.

import h5py
import numpy as np

dataset = h5py.File(h5_file, 'r')

# loading the full arrays into memory is the step that causes the memory problem
train_set_x_all = dataset['train_set_x'][:]
train_set_y_all = dataset['train_set_y'][:]

dataset.close()

p = np.random.permutation(len(train_set_x_all))[:2000]   # randomly select 2000 indices
train_set_x = train_set_x_all[p]
train_set_y = train_set_y_all[p]

With this approach I still have to load the full set into memory before slicing it with the index array p. It works for me, since the subsequent training only touches the smaller subset, but I wonder whether there is a better way that avoids keeping the full dataset in memory at all.
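A minimal sketch of the direct-indexing variant discussed in the comments below, assuming the arrays are indexed along their first axis: keep the file open and fancy-index the dataset with a sorted index array, so only the selected rows are read from disk (h5py requires the indices to be sorted and duplicate-free; `n_subset` is just an illustrative name):

import h5py
import numpy as np

n_subset = 2000   # illustrative subset size, matching the snippet above

with h5py.File(h5_file, 'r') as f:
    n_total = f['train_set_x'].shape[0]
    # h5py fancy indexing needs a sorted, duplicate-free index list
    p = np.sort(np.random.permutation(n_total)[:n_subset])
    train_set_x = f['train_set_x'][p]   # only these rows are read from disk
    train_set_y = f['train_set_y'][p]

As the comments point out, this keeps memory low but can be slow, because the scattered reads force many chunks of the file to be fetched.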

  • `arr = dataset['name'][:2000]` loads a slice efficiently. `arr = dataset['name'][p]` also works but is slower. And `p` has to be sorted. http://docs.h5py.org/en/latest/high/dataset.html#fancy-indexing – hpaulj Nov 18 '18 at 22:12
  • Depending on the selection range, it may be faster to load a slice (range), and pick randomly from that. Also selection from an array in memory won't be constrained by the sorted requirement. You may just have to try various alternatives and see which best suits your needs. – hpaulj Nov 18 '18 at 22:26
  • @hpaulj: The 1st option is not a random selection. The 2nd errors out since p isn't boolean (I have already tried that). The linked page talks about using a boolean array as a fancy index, but that doesn't quite do what I want: it acts like a global mask and spits out a 1-dim array. – kawingkelvin Nov 19 '18 at 00:17
  • I tried `p = np.sort(p)` and then `train_set_x = dataset['train_set_x'][p, ...]`, and this works. But it is very slow, and I would rather accept a bit more memory load than this dramatic slowdown. Is there something I am doing wrong? Random or fancy indexing doesn't appear efficient at all. – kawingkelvin Nov 19 '18 at 00:33
  • The documentation warns us that this sort of indexing is slow. With an array in memory, access to any point in the databuffer takes about the same time. But the `h5` array is on a file, which has serial access (or at least buffered). So selecting an item near the start of the dataset, another in the middle, and another near the end can require big jumps in the file access. Requiring sorted indices at least eliminates back-n-forth seeks. – hpaulj Nov 19 '18 at 00:40
  • What is the size of the dataset and what is the chunk shape? Chunks are always read or written as a whole. You need to set a proper chunk-cache size to index the file efficiently, e.g. you can run this example with and without a properly sized chunk cache and see what happens: https://stackoverflow.com/a/48405220/4045774 – max9111 Nov 19 '18 at 13:53
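Two sketches of the alternatives suggested in the last two comments, with illustrative values only (the 1 GiB cache size and the 10x-subset block are assumptions, not recommendations; `rdcc_nbytes` is an `h5py.File` option in recent h5py versions):

import h5py
import numpy as np

n_subset = 2000

# Alternative 1: open the file with a larger chunk cache so scattered fancy
# indexing does not re-read the same chunks over and over.
with h5py.File(h5_file, 'r', rdcc_nbytes=1024**3) as f:
    dset = f['train_set_x']
    p = np.sort(np.random.permutation(dset.shape[0])[:n_subset])
    train_set_x = dset[p]

# Alternative 2: read one contiguous block (fast, sequential I/O) and sample
# randomly inside it; only the block is ever held in memory.
with h5py.File(h5_file, 'r') as f:
    dset_x, dset_y = f['train_set_x'], f['train_set_y']
    block_size = 10 * n_subset                      # arbitrary block size
    start = np.random.randint(0, dset_x.shape[0] - block_size)
    block_x = dset_x[start:start + block_size]      # contiguous read
    block_y = dset_y[start:start + block_size]
    idx = np.random.permutation(block_size)[:n_subset]
    train_set_x, train_set_y = block_x[idx], block_y[idx]

Note that the second variant only samples uniformly within the chosen block, not across the whole dataset, which may or may not matter for the boosting use case.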

0 Answers