I recently learnt about CUDA programming in Python and I was wondering if there is a way to load files into memory faster using the GPU. In particular, I'm trying to find a way to load ML datasets faster. Until now, I've been using the `multiprocessing` library.
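Not part of the original question, but a minimal sketch of the `multiprocessing`-based loading mentioned above. The file names, chunk sizes, and the `load_dataset` helper are made up for illustration; the idea is just to fan file reads out across worker processes so the I/O overlaps:

```python
import multiprocessing as mp
import os
import tempfile

def read_file(path):
    # Worker: read one file's bytes from disk.
    with open(path, "rb") as f:
        return f.read()

def load_dataset(paths, workers=4):
    # Hypothetical helper: map file reads across a process pool so
    # several reads can be in flight at once (I/O-bound work).
    with mp.Pool(workers) as pool:
        return pool.map(read_file, paths)

if __name__ == "__main__":
    # Demo with throwaway files.
    tmpdir = tempfile.mkdtemp()
    paths = []
    for i in range(4):
        p = os.path.join(tmpdir, f"part{i}.bin")
        with open(p, "wb") as f:
            f.write(bytes([i]) * 10)
        paths.append(p)
    chunks = load_dataset(paths)
    print(len(chunks))  # 4
```

This parallelizes the host-side reads only; nothing here touches the GPU.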
- GPUs don't support any sort of file I/O, so no. – talonmies Jan 20 '18 at 10:31
- @talonmies Is this possible: a memory-mapped file, a pinned GPU buffer (using the pointer of the memory map), and reading directly from CUDA work-items? Would it decrease the number of I/O operations so it is faster? https://stackoverflow.com/questions/29518875/cuda-zero-copy-memory-memory-mapped-file – huseyin tugrul buyukisik Jan 20 '18 at 12:04
- Using `mmap` isn't the same thing as performing file-system I/O on the GPU, especially in Python, where everything runs in a single host thread. – talonmies Jan 20 '18 at 18:12
- Did you ever figure out how to do memmap in Python with CUDA? – SantoshGupta7 Aug 12 '19 at 19:33
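Not part of the thread, but a small sketch of the memmap idea discussed in the comments, using `numpy.memmap` so slices of a file are paged in lazily rather than read up front. The file path and array shape are invented for the demo; copying a batch to device memory would then go through a GPU library (e.g. `cupy.asarray(batch)`, not shown), and as talonmies notes, the file read itself still happens on the CPU:

```python
import numpy as np
import os
import tempfile

# Create a throwaway binary file to map.
path = os.path.join(tempfile.mkdtemp(), "data.bin")
np.arange(1000, dtype=np.float32).tofile(path)

# mode="r" maps the file read-only; no data is loaded
# until a slice is actually touched.
arr = np.memmap(path, dtype=np.float32, mode="r", shape=(1000,))

# Copy one batch out of the mapping into ordinary RAM;
# this is the piece you would hand to the GPU framework.
batch = np.asarray(arr[100:110])
print(batch[0])  # 100.0
```

The mapping avoids reading the whole dataset eagerly, but it does not move any I/O onto the GPU.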