
I need to repeatedly calculate very large Python arrays based on a small input and a very large constant bulk of data stored on the drive. I can parallelize it successfully by splitting that input bulk and joining the responses. Here comes the problem: sending the identical bulk of data to the pool is too slow, and it also doubles the required memory. Ideally, I would read the data from the file inside each worker and keep it there for repeated re-use.
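To make the intent concrete, here is a minimal sketch of what I have in mind, using a multiprocessing.Pool initializer so that each worker loads the bulk once and reuses it for every task (the pickle file name and the _init_worker/_compute functions are placeholders, not my actual code):

    import multiprocessing as mp
    import pickle

    _bulk = None  # each worker process gets its own copy, filled once by the initializer

    def _init_worker(path):
        """Load the large constant data once per worker process."""
        global _bulk
        with open(path, "rb") as f:
            _bulk = pickle.load(f)  # placeholder for however the bulk is actually stored

    def _compute(small_input):
        """Placeholder for the real calculation that uses the bulk."""
        return small_input, len(_bulk)

    if __name__ == "__main__":
        with mp.Pool(processes=4,
                     initializer=_init_worker,
                     initargs=("bulk_data.pkl",)) as pool:
            # Only the small inputs and the results cross process boundaries;
            # the bulk itself is never re-sent.
            results = pool.map(_compute, range(10))
        print(results)

Note that this still keeps one copy of the bulk per worker process, so it only solves the re-sending part of my problem, not necessarily the memory part.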

How do I do it? The only thing I can think of is creating multiple server processes that would listen to requests from the pool. Somehow that looks like an unnatural solution to quite a common problem. Am I missing a better solution?

Best regards, Vladimir

  • Related: https://stackoverflow.com/q/10721915/1025391 and https://stackoverflow.com/q/5549190/1025391 – moooeeeep Jan 15 '20 at 10:41
  • This looks like a solution for me, but it would require rewriting the class-based implementation of the functionality into a functional one, right? Right now I load pickle-saved classes with dozens of arrays. Doable, but not really neat. Any other alternatives, maybe? – Vladimir Jan 15 '20 at 10:59

0 Answers