I need to repeatedly calculate very large Python arrays based on a small input and a very large constant bulk of data stored on the drive. I can successfully parallelize it by splitting the input across workers and joining their responses. Here comes the problem: sending the identical data bulk to the pool with every task is too slow. Moreover, it duplicates the data in memory. Ideally, each worker would read the data from the file itself, once, and keep it in memory for repeated reuse.
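For concreteness, here is a minimal sketch of what I do now (the file name, array shapes, and the compute function are placeholders, not my real code):

```python
import multiprocessing as mp
import numpy as np

def load_bulk(path):
    # Large, constant reference data read from disk.
    return np.load(path)

def compute(args):
    small_input, bulk = args
    # Placeholder for the real calculation against the constant bulk.
    return bulk[small_input].sum()

if __name__ == "__main__":
    bulk = load_bulk("bulk.npy")  # read once in the parent process
    inputs = [slice(i, i + 1000) for i in range(0, 100_000, 1000)]
    with mp.Pool() as pool:
        # Problem: `bulk` is pickled and shipped to a worker with
        # every single task, which is slow and duplicates memory.
        results = pool.map(compute, [(x, bulk) for x in inputs])
```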
How do I do this? The only thing I can think of is creating multiple servers that listen for requests from the pool, but that seems like an unnatural solution to a fairly common problem. Am I missing a better one?
Best regards, Vladimir