I calculate very big arrays (about 100 million integers, and this 9000 times), and the whole thing doesn't fit in memory, so I write them to an .hdf5 file in chunks of a size that fits in my memory. I also use "lzf" compression because otherwise the .hdf5 file gets too big for my SSD.
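Roughly, the write side looks like the sketch below (simplified: `calc_column` is just a stand-in for my real per-column calculation, and the sizes are reduced here so it actually runs; in the real run it is about 100 million x 9000):

```python
import numpy as np
import h5py

N_ROWS = 1_000_000   # real value is about 100_000_000
N_COLS = 100         # real value is 9000

def calc_column(i):
    # stand-in for my real per-column calculation
    return np.random.randint(0, 10, size=N_ROWS, dtype=np.int32)

with h5py.File("data.hdf5", "w") as f:
    dset = f.create_dataset(
        "data",
        shape=(N_ROWS, N_COLS),
        dtype="int32",
        chunks=True,        # compression requires chunked storage; h5py picks a chunk shape
        compression="lzf",  # lzf, otherwise the file gets too big for my SSD
    )
    for col in range(N_COLS):
        dset[:, col] = calc_column(col)  # write each finished column straight to disk
```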
After that I read chunks from the hdf5 file again, this time row-wise, and do some other calculations (which are only possible if all columns of the array are available for each row). The dimensions are about 100 million x 9000 at that point.
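The read side is roughly this (again simplified; `ROW_BLOCK` and `process_row` are just placeholders for my real block size and per-row calculation):

```python
import h5py

ROW_BLOCK = 10_000   # rows per block; chosen so one block fits in memory

def process_row(row):
    # stand-in for the per-row calculation that needs all columns of the row
    return row.sum()

with h5py.File("data.hdf5", "r") as f:
    dset = f["data"]
    n_rows = dset.shape[0]
    for start in range(0, n_rows, ROW_BLOCK):
        block = dset[start:start + ROW_BLOCK, :]  # read a block of complete rows
        for row in block:
            process_row(row)
```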
So to sum up:
calculate one column (100 million entries) -> write to hdf5
read from hdf5 -> calculations on one row
The speed is kind of okay, but I don't know if there are better possibilities to speed this up. One additional piece of information I can give about the arrays: they are sparse, so about 90% of all entries are zeros.
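To illustrate what I mean by sparse, here is a toy block with about 90% zeros and what it would look like stored as a SciPy CSR matrix (I am not actually doing this yet, it is only meant to show the structure of the data):

```python
import numpy as np
from scipy import sparse

rng = np.random.default_rng(0)
block = rng.integers(0, 10, size=(1000, 1000), dtype=np.int32)
block[rng.random(block.shape) < 0.9] = 0  # roughly 90% zeros, like my real data

csr = sparse.csr_matrix(block)
print(block.nbytes)                                              # dense size in bytes
print(csr.data.nbytes + csr.indices.nbytes + csr.indptr.nbytes)  # only the ~10% non-zeros plus index arrays
```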
Thank you for your help.