Does it make sense to use numpy's memmap across multiple cores (MPI)?

I have a file on disk. Can I create a separate memmap object on each core, and use it to read different slices from the file? What about writing to it?
Q : "Does it make sense to use numpy's
memmap
across multiple cores (MPI)?"
Yes ( ... even without MPI, using just Python native { thread- | process-}-based forms of concurrent-processing )
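A minimal sketch of the non-MPI case, assuming a hypothetical file "data.bin" holding float64 values: each worker process opens its own read-only memmap over the same file and reduces its assigned slice. The OS page cache is shared between the processes, so pages are only read from disk once.

```python
import numpy as np
from multiprocessing import Pool

FILENAME = "data.bin"     # hypothetical file of float64 values
N_ITEMS = 1_000_000       # assumed total number of elements
N_WORKERS = 4

def partial_sum(worker_id):
    # Each process creates its *own* memmap object over the same file.
    mm = np.memmap(FILENAME, dtype=np.float64, mode="r", shape=(N_ITEMS,))
    lo = worker_id * N_ITEMS // N_WORKERS
    hi = (worker_id + 1) * N_ITEMS // N_WORKERS
    return float(mm[lo:hi].sum())

if __name__ == "__main__":
    # Create the file once so the example is self-contained.
    init = np.memmap(FILENAME, dtype=np.float64, mode="w+", shape=(N_ITEMS,))
    init[:] = 1.0
    init.flush()

    with Pool(N_WORKERS) as pool:
        print(sum(pool.map(partial_sum, range(N_WORKERS))))   # -> 1000000.0
```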
Q : "Can I create a separate
memmap
-object on each core, and use it to read different slices from the file?"
Yes.
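The MPI version looks almost the same, a sketch assuming mpi4py is installed and the same hypothetical "data.bin" file as above: every rank opens its own read-only memmap and touches only its slice of the file.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_ITEMS = 1_000_000
mm = np.memmap("data.bin", dtype=np.float64, mode="r", shape=(N_ITEMS,))

lo = rank * N_ITEMS // size
hi = (rank + 1) * N_ITEMS // size
local = np.array(mm[lo:hi])          # copy the rank's slice into private memory

total = comm.allreduce(local.sum(), op=MPI.SUM)
if rank == 0:
    print(total)

# Run with e.g.:  mpiexec -n 4 python read_slices.py
```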
Q : "What about writing to it?"
The same ( sure, if having been opened in write-able mode ... )
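A sketch of concurrent writing under the assumption that the ranks write to disjoint slices of an already existing "data.bin": mode="r+" opens the file for in-place updates without truncating it. If ranks were to write overlapping regions, memmap provides no coordination by itself, so some external synchronization would be needed.

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_ITEMS = 1_000_000
mm = np.memmap("data.bin", dtype=np.float64, mode="r+", shape=(N_ITEMS,))

lo = rank * N_ITEMS // size
hi = (rank + 1) * N_ITEMS // size
mm[lo:hi] = rank                     # each rank writes only its own region

mm.flush()                           # push dirty pages back to the file
comm.Barrier()                       # wait until every rank has finished writing
```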