
I have a GeForce GTX 690, which has two GPUs and 4 GB of RAM in total. Unfortunately, the RAM is divided into two blocks of 2 GB each, and I can access only one block. Is there any way I can access (in a parallel fashion) the whole 4 GB of the card? If so, could I use the two GPUs at the same time?

Gabs
  • GTX 690 is a dual-chip board, hence you're essentially dealing with two separate GPUs. There is no way you can access 4 GB of the on-board memory in one chunk. You can use the two GPUs at the same time utilizing all the regular multi-GPU programming techniques. – void_ptr Jul 29 '15 at 18:34 (a multi-GPU sketch follows these comments)
  • 1
    On linux, I believe you should also be able to do peer-to-peer access between the 2 GPUs (i.e. to map the memory of one GPU into the memory space of the other) on a single GTX690. However this will tend to be relatively slow global memory access compared to memory that is "local" to that GPU. – Robert Crovella Jul 29 '15 at 18:43
  • Where can I find more about it? And yes, I am using Linux... – Gabs Jul 29 '15 at 18:44
  • Has anyone here ever heard of GPUDirect? – Gabs Jul 29 '15 at 18:45
  • 4
    There are many questions on this SO cuda tag pertaining to multi-gpu usage, peer-to-peer access, and GPUDirect. All of these topics are also documented in the NVIDIA [documentation](http://docs.nvidia.com/cuda/index.html#axzz3hEPmGF2a) and there are also [CUDA sample codes](http://docs.nvidia.com/cuda/cuda-samples/index.html#abstract) demonstrating peer-to-peer access as well as multi-GPU programming techniques. – Robert Crovella Jul 29 '15 at 18:56
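The multi-GPU technique void_ptr mentions amounts to treating the GTX 690 as two CUDA devices and issuing work to each one. The sketch below is a minimal illustration, not code from the thread: the device indices 0 and 1 and the trivial `scale` kernel are assumptions made for the example, and error checking is omitted for brevity.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Trivial kernel used only to put some work on each GPU.
__global__ void scale(float *data, int n, float factor)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int nPerGpu = 1 << 20;
    float *d_buf[2];
    cudaStream_t stream[2];

    // Each half of the problem gets its own device, allocation and stream.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaMalloc(&d_buf[dev], nPerGpu * sizeof(float));
        cudaStreamCreate(&stream[dev]);
    }

    // Launches are asynchronous, so the two GPUs run concurrently.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        scale<<<(nPerGpu + 255) / 256, 256, 0, stream[dev]>>>(d_buf[dev], nPerGpu, 2.0f);
    }

    // Wait for both devices to finish, then clean up.
    for (int dev = 0; dev < 2; ++dev) {
        cudaSetDevice(dev);
        cudaStreamSynchronize(stream[dev]);
        cudaStreamDestroy(stream[dev]);
        cudaFree(d_buf[dev]);
    }
    printf("done\n");
    return 0;
}
```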
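Robert Crovella's peer-to-peer suggestion can be probed with the runtime's peer-access API. This is a hedged sketch assuming the two halves of the GTX 690 enumerate as devices 0 and 1: `cudaDeviceCanAccessPeer` reports whether the mapping is supported, and once it is enabled one GPU can dereference the other's memory (relatively slowly) or copy between the two memories with `cudaMemcpyPeer`.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev < 2) { printf("Need two CUDA devices\n"); return 1; }

    // Can each GPU map the other's memory?
    int canAccess01 = 0, canAccess10 = 0;
    cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
    cudaDeviceCanAccessPeer(&canAccess10, 1, 0);
    if (!canAccess01 || !canAccess10) {
        printf("Peer access not supported between devices 0 and 1\n");
        return 1;
    }

    // Enable the mapping in both directions.
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaSetDevice(1);
    cudaDeviceEnablePeerAccess(0, 0);

    // Allocate a buffer on each device.
    const size_t bytes = 256 * 1024 * 1024;
    float *d0, *d1;
    cudaSetDevice(0); cudaMalloc(&d0, bytes);
    cudaSetDevice(1); cudaMalloc(&d1, bytes);

    // Copy directly from device 0's memory to device 1's memory.
    // A kernel running on device 0 could also dereference d1 directly,
    // but such remote accesses are much slower than local global memory.
    cudaMemcpyPeer(d1, 1, d0, 0, bytes);

    cudaSetDevice(0); cudaFree(d0);
    cudaSetDevice(1); cudaFree(d1);
    printf("peer copy done\n");
    return 0;
}
```

Note that neither approach yields a single contiguous 4 GB allocation: as the first comment says, each GPU still owns its own 2 GB, so data should be partitioned so that most accesses stay local to the GPU doing the work.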

0 Answers