
I noticed that writing .h5 files takes much longer with the h5py library than with the PyTables library. What is the reason? This is also true when the shape of the array is known beforehand. Furthermore, I use the same chunk size and no compression filter.

The following script:

import h5py
import tables
import numpy as np
from time import time

dim1, dim2 = 64, 1527416

# append columns
print("PYTABLES: append columns")
print("=" * 32)
f = tables.open_file("/tmp/test.h5", "w")
a = f.create_earray(f.root, "time_data", tables.Float32Atom(), shape=(0, dim1))
t1 = time()
zeros = np.zeros((1, dim1), dtype="float32")
for i in range(dim2):
    a.append(zeros)
tcre = round(time() - t1, 3)
thcre = round(dim1 * dim2 * 4 / (tcre * 1024 * 1024), 1)
print("Time to append %d columns: %s sec (%s MB/s)" % (i+1, tcre, thcre))
print("=" * 32)
chunkshape = a.chunkshape
f.close()

print("H5PY: append columns")
print("=" * 32)
f = h5py.File(name="/tmp/test.h5",mode='w')
a = f.create_dataset(name='time_data',shape=(0, dim1),
                     maxshape=(None,dim1),dtype='f',chunks=chunkshape)
t1 = time()
zeros = np.zeros((1, dim1), dtype="float32")
samplesWritten = 0
for i in range(dim2):
    a.resize((samplesWritten+1, dim1))
    a[samplesWritten:(samplesWritten+1),:] = zeros
    samplesWritten += 1
tcre = round(time() - t1, 3)
thcre = round(dim1 * dim2 * 4 / (tcre * 1024 * 1024), 1)
print("Time to append %d columns: %s sec (%s MB/s)" % (i+1, tcre, thcre))
print("=" * 32)
f.close()

prints the following on my computer:

PYTABLES: append columns
================================
Time to append 1527416 columns: 22.679 sec (16.4 MB/s)
================================
H5PY: append columns
================================
Time to append 1527416 columns: 158.894 sec (2.3 MB/s)
================================

If I flush after every iteration of the loop, like:

for i in range(dim2):
    a.append(zeros)
    f.flush()

I get:

PYTABLES: append columns
================================
Time to append 1527416 columns: 67.481 sec (5.5 MB/s)
================================
H5PY: append columns
================================
Time to append 1527416 columns: 193.644 sec (1.9 MB/s)
================================
adku1173
  • This is likely due to a very high library overhead. If you write e.g. chunks of (256, 64) at once you should be >150 MB/s – max9111 Sep 16 '19 at 12:45
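A rough sketch of the suggestion above, writing blocks of 256 rows per h5py call instead of one row at a time (the block count and file name are illustrative assumptions, not from the question):

import h5py
import numpy as np

dim1, block = 64, 256              # rows written per call, as suggested in the comment
n_blocks = 5966                    # ~1.5 million rows total, roughly the question's dim2

zeros = np.zeros((block, dim1), dtype="float32")
f = h5py.File("/tmp/test_block.h5", "w")
a = f.create_dataset("time_data", shape=(0, dim1), maxshape=(None, dim1),
                     dtype="f", chunks=(block, dim1))
written = 0
for i in range(n_blocks):
    a.resize((written + block, dim1))      # grow by one block
    a[written:written + block, :] = zeros  # one write call per 256 rows
    written += block
f.close()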

1 Answer


This is an interesting comparison of PyTables and h5py write performance. Typically I use them to read HDF5 files (and usually with a few reads of large datasets), so I hadn't noticed this difference. My thoughts align with @max9111's: performance should improve as the number of write operations decreases and the size of each written block increases. To that end, I reworked your code to write N rows of data using fewer loop iterations. (The code is at the end.)
Results were surprising (to me). Key findings:
1. Total time to write all of the data was a linear function of the # of loops (for both PyTables and h5py).
2. The performance difference between PyTables and h5py only improved slightly as dataset I/O size increased.
3. PyTables was 5.4x faster writing 1 row at a time (1,527,416 writes), and 3.5x faster writing 88 rows at a time (17,357 writes).

Here is a plot comparing performance:
[Chart: performance comparison of PyTables vs h5py for the benchmark data above]

Also, I noticed your code comments say "append columns", but you are extending the first dimension (rows of an HDF5 table/dataset). I rewrote your code to test performance extending the second dimension (adding columns to the HDF5 file) and saw very similar performance.
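For reference, a minimal sketch of the column-append variant in h5py, where the dataset grows along the second axis (the sizes, chunk shape, and file name here are illustrative assumptions, not the exact benchmark settings):

import h5py
import numpy as np

cdim, block_size, col_loops = 64, 4, 1000   # illustrative sizes only
vals = np.ones((cdim, block_size), dtype="float32")

f = h5py.File("colapp_test_h5.h5", "w")
a = f.create_dataset("time_data", shape=(cdim, 0),
                     maxshape=(cdim, block_size * col_loops),
                     dtype="f", chunks=(cdim, block_size))
written = 0
for i in range(col_loops):
    a.resize((cdim, written + block_size))       # extend the second dimension
    a[:, written:written + block_size] = vals    # write a block of new columns
    written += block_size
f.close()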

Initially I thought the I/O bottleneck was due to resizing the datasets. So, I rewrote the example to initially size the array to hold all of the rows. This did NOT improve performance (and significantly degraded h5py performance). That was very surprising. I'm not sure what to make of it.
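For comparison, a sketch of roughly what the pre-sized h5py variant looks like: the dataset is created at its final size up front, so the loop only writes (the chunk shape and file name are illustrative assumptions):

import h5py
import numpy as np

cdim, block_size, row_loops = 64, 4, 381854
vals = np.ones((block_size, cdim), dtype="float32")

f = h5py.File("presized_test_h5.h5", "w")
# allocate the final shape up front; no resize() calls inside the loop
a = f.create_dataset("time_data", shape=(block_size * row_loops, cdim),
                     dtype="f", chunks=(128, cdim))   # illustrative chunk shape
for i in range(row_loops):
    start = i * block_size
    a[start:start + block_size] = vals
f.close()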

Here is my example. It uses 3 variables that size the array (as data is added):

  • cdim: # of columns (fixed)
  • row_loops: # of write loops
  • block_size: size of data block written on each loop
  • row_loops*block_size = total number of rows written

I also made a small change to add ones instead of zeros (to verify the data was written), and moved the array creation to the top (out of the timing loops).

My code is below:

import h5py
import tables
import numpy as np
from time import time

cdim, block_size, row_loops = 64, 4, 381854 
vals = np.ones((block_size, cdim), dtype="float32")

# append rows
print("PYTABLES: append rows: %d blocks with: %d rows" % (row_loops, block_size))
print("=" * 32)
f = tables.open_file("rowapp_test_tb.h5", "w")
a = f.create_earray(f.root, "time_data", atom=tables.Float32Atom(), shape=(0, cdim))
t1 = time()
for i in range(row_loops):
    a.append(vals)
tcre = round(time() - t1, 3)
thcre = round(cdim * block_size * row_loops * 4 / (tcre * 1024 * 1024), 1)
print("Time to append %d rows: %s sec (%s MB/s)" % (block_size * row_loops, tcre, thcre))
print("=" * 32)
chunkshape = a.chunkshape
f.close()

print("H5PY: append rows %d blocks with: %d rows" % (row_loops, block_size))
print("=" * 32)
f = h5py.File(name="rowapp_test_h5.h5",mode='w')
a = f.create_dataset(name='time_data',shape=(0, cdim),
                     maxshape=(block_size*row_loops,cdim),
                     dtype='f',chunks=chunkshape)
t1 = time()
samplesWritten = 0
for i in range(row_loops):
    a.resize(((i+1)*block_size, cdim))
    a[samplesWritten:samplesWritten+block_size] = vals
    samplesWritten += block_size
tcre = round(time() - t1, 3)
thcre = round(cdim * block_size * row_loops * 4 / (tcre * 1024 * 1024), 1)
print("Time to append %d rows: %s sec (%s MB/s)" % (block_size * row_loops, tcre, thcre))
print("=" * 32)
f.close()
kcw78
  • Very nice answer! The chunk size is also too small to get good results, e.g. https://stackoverflow.com/a/44961222/4045774. In this case it's not really relevant, because the questioner didn't ask about how to read the data, but in general the chunk-cache size also has to be set to get good performance: https://stackoverflow.com/a/48405220/4045774. In the meantime the functionality of h5py_cache was added to h5py. – max9111 Sep 16 '19 at 20:18
  • Handy references on chunk sizing. Thanks. You mention 'h5py_cache was added to h5py'. Is it implemented with the `rdcc_*` parameters? – kcw78 Sep 16 '19 at 21:32
  • Yes, these are the parameters. – max9111 Sep 17 '19 at 11:13
  • The size of your biggest blocks (88 * 64) is still pretty small. I continued it to blocks of (4096 * 64) numbers - 1 MiB of data - at which point the difference between h5py & pytables shrinks to about 2%. – Thomas K May 28 '21 at 17:03
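Following up on the chunk-cache discussion in the comments above, a minimal sketch of passing the rdcc_* parameters directly to h5py.File (the cache values below are illustrative assumptions, not tuned settings):

import h5py

# Open the file with an enlarged raw-data chunk cache.
# rdcc_nbytes: cache size in bytes, rdcc_nslots: hash-table slots,
# rdcc_w0: chunk eviction weight. The values below are only examples.
f = h5py.File("rowapp_test_h5.h5", "r",
              rdcc_nbytes=64 * 1024 * 1024,   # 64 MiB chunk cache
              rdcc_nslots=1_000_003,          # HDF5 suggests a prime number of slots
              rdcc_w0=0.75)
a = f["time_data"]
# reads of this dataset now go through the larger per-dataset chunk cache
f.close()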