
I have an algorithm where I aggregate several large NumPy arrays (or other arrays, e.g. TensorFlow or PyTorch tensors) from several processes into one process. The problem is that these arrays are quite large and often use a lot of RAM.

I would like to cache these arrays to disk without losing performance, e.g. by having another process/thread prefetch the next file into memory ahead of time.
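
Here is a minimal sketch of the kind of prefetching I have in mind, assuming each array has been spilled to its own `.npy` file (the function names and the two-slot buffer are just placeholders):

```python
import threading
import queue
import numpy as np

def prefetch_worker(paths, out_queue):
    # Load files eagerly in this thread so the read cost is paid
    # here, not in the consumer.
    for path in paths:
        out_queue.put(np.load(path))
    out_queue.put(None)  # sentinel: no more arrays

def iterate_cached_arrays(paths, buffer_size=2):
    # Bounded queue keeps at most `buffer_size` arrays in RAM at once.
    q = queue.Queue(maxsize=buffer_size)
    t = threading.Thread(target=prefetch_worker, args=(paths, q), daemon=True)
    t.start()
    while (arr := q.get()) is not None:
        yield arr

# Usage: spill each incoming array with np.save, then stream them back.
# for arr in iterate_cached_arrays(["part0.npy", "part1.npy"]):
#     aggregate(arr)  # aggregate() is a placeholder for my own logic
```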

I've tried `Cache` from the DiskCache package, but it doesn't help when the objects are very large.
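
Roughly what my DiskCache attempt looks like: every set/get (de)serializes the whole array, so for arrays this size the pickling and copying dominate (the cache path and array size below are just examples):

```python
from diskcache import Cache
import numpy as np

cache = Cache("/tmp/array_cache")  # example directory

arr = np.random.rand(10_000, 10_000)  # ~800 MB of float64
cache["part0"] = arr       # pickles and writes the entire array
restored = cache["part0"]  # deserializes the entire array back into RAM
```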
