
Scenario:

I have two machines, a client and a server, connected with InfiniBand. The server machine has an NVIDIA Fermi GPU, but the client machine has no GPU. I have an application running on the GPU machine that uses the GPU for some calculations. The result data on the GPU is never used by the server machine, but is instead sent directly to the client machine without any processing. Right now I'm doing a cudaMemcpy to get the data from the GPU to the server's system memory, then sending it off to the client over a socket. I'm using SDP to enable RDMA for this communication.
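
For reference, a minimal sketch of the current path described above (the names `send_result`, `d_result`, and `sock` are illustrative; `d_result` is a device buffer already filled by a kernel, `sock` is the already-connected SDP socket, and a pinned staging buffer is assumed here):

```c
#include <cuda_runtime.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Current flow: stage the GPU result in host memory with cudaMemcpy,
 * then push it to the client over the existing socket. */
static int send_result(int sock, const void *d_result, size_t nbytes)
{
    void *h_staging = NULL;
    if (cudaMallocHost(&h_staging, nbytes) != cudaSuccess)   /* page-locked host buffer */
        return -1;

    int rc = -1;
    if (cudaMemcpy(h_staging, d_result, nbytes,
                   cudaMemcpyDeviceToHost) == cudaSuccess)   /* the copy to eliminate */
        rc = (send(sock, h_staging, nbytes, 0) == (ssize_t)nbytes) ? 0 : -1;

    cudaFreeHost(h_staging);
    return rc;
}
```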

Question:

Is it possible for me to take advantage of NVIDIA's GPUDirect technology to get rid of the cudaMemcpy call in this situation? I believe I have the GPUDirect drivers correctly installed, but I don't know how to initiate the data transfer without first copying it to the host.

My guess is that it isn't possible to use SDP in conjunction with GPUDirect, but is there some other way to initiate an RDMA data transfer from the server machine's GPU to the client machine?

Bonus: If someone has a simple way to test whether I have the GPUDirect dependencies correctly installed, that would be helpful as well!

DaoWen
  • In the CUDA SDK code samples you can find sample code that demonstrates what you want: http://developer.nvidia.com/cuda/cuda-cc-sdk-code-samples. You would need to use `cudaMemcpyAsync` to copy asynchronously with respect to the host. – Sayan Aug 15 '12 at 19:17
  • I have the CUDA SDK, but I don't see any examples using GPUDirect technology. Do you know of a specific sample program I should look at? – DaoWen Aug 16 '12 at 03:32
  • I currently don't have it downloaded, but I think the "Simple Peer-to-Peer Transfers with Multi-GPU" example in the link I gave is what you want. – Sayan Aug 16 '12 at 16:47
  • I'll go take a look at that and post back if I'm wrong, but I'm not looking for GPU-to-GPU (P2P) transfers. I'm pretty sure I can do that with the normal `cudaMemcpy` call. What I'm looking for is a way to transfer directly from the GPU to memory on another host using RDMA and Infiniband. – DaoWen Aug 17 '12 at 03:05
  • Okay, in that case you would definitely need to use pinned memory (allocated via `cudaMallocHost`), or use the `cudaHostRegister` function. I guess you just have to pin the memory, and GPUDirect will enable the RDMA transfer if the setup is okay (if your throughput after doing this is better than it is now, you can be certain of the improvement). And as far as I know, GPUDirect only accelerates `cudaMemcpy`; the call itself cannot be removed. If you have many memcpy calls (H2D, D2H), you could just use `cudaMemcpyDefault` (see the pinned-memory sketch below these comments). – Sayan Aug 17 '12 at 14:46
  • Thanks! I'll look into using `cudaHostRegister` to set up the client as a remote host and then do a `cudaMemcpy` call to transfer directly from the GPU to the client. – DaoWen Aug 17 '12 at 14:54
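
The pinned-memory options mentioned in the comments above look roughly like this; a minimal sketch, with `nbytes` and the buffer names purely illustrative:

```c
#include <cuda_runtime.h>
#include <stdlib.h>

void pinned_memory_examples(size_t nbytes)
{
    /* Option A: allocate page-locked (pinned) host memory directly. */
    void *h_buf = NULL;
    cudaMallocHost(&h_buf, nbytes);

    /* Option B: pin an existing host allocation, e.g. a buffer that another
     * library (such as the IB stack) already owns. Page-aligning the buffer
     * keeps the registration safe on older CUDA versions. */
    void *existing = NULL;
    posix_memalign(&existing, 4096, nbytes);
    cudaHostRegister(existing, nbytes, cudaHostRegisterDefault);

    /* ... use the buffers with cudaMemcpy / cudaMemcpyAsync ... */

    cudaHostUnregister(existing);
    free(existing);
    cudaFreeHost(h_buf);
}
```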

1 Answer


Yes, it is possible with supporting networking hardware. See the GPUDirect RDMA documentation.
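
For a rough idea of what the docs describe: on hardware that supports GPUDirect RDMA (a supported Mellanox HCA plus the nv_peer_mem / nvidia-peermem kernel module), `ibv_reg_mr` can register a `cudaMalloc`'d device pointer directly, and the HCA then serves RDMA requests straight out of GPU memory with no host staging copy. A minimal sketch, assuming a protection domain `pd` from the usual verbs setup (the function name is illustrative):

```c
#include <cuda_runtime.h>
#include <infiniband/verbs.h>

/* Register a GPU buffer so the HCA can serve remote reads directly from it. */
struct ibv_mr *register_gpu_buffer(struct ibv_pd *pd, size_t nbytes, void **d_buf_out)
{
    void *d_buf = NULL;
    if (cudaMalloc(&d_buf, nbytes) != cudaSuccess)
        return NULL;

    /* With GPUDirect RDMA in place the device pointer is accepted here;
     * without it this call fails and a pinned host bounce buffer is needed. */
    struct ibv_mr *mr = ibv_reg_mr(pd, d_buf, nbytes,
                                   IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ);
    if (mr == NULL) {
        cudaFree(d_buf);
        return NULL;
    }

    *d_buf_out = d_buf;
    return mr;  /* mr->lkey / mr->rkey go into the work requests as usual */
}
```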

harrism
  • I've seen that feature, but it looks like it targets GPU P2P transfers. Will it also allow me to copy data directly to a remote node without involving the CPU on the source node? – DaoWen Aug 31 '12 at 11:46
  • Yes, that is what RDMA means -- "Remote Direct Memory Access". – harrism Sep 03 '12 at 00:50
  • To quote from the page you linked to: "Eliminate CPU bandwidth and latency bottlenecks using direct memory access (DMA) **between GPUs and other PCIe devices** ..." This leaves me unclear as to whether the CUDA driver has RDMA support for the situation I described above, or if it's only for P2P transfers. It seems like it would be easily supported, but that page doesn't seem very explicit on the matter. This still seems like a good answer, though, so I'll accept it. – DaoWen Sep 03 '12 at 01:10
  • The key word here is "Remote", i.e. not peers on the same PCI-e bus. This will require support from specific InfiniBand card makers that NVIDIA partners with. – harrism Sep 03 '12 at 01:17
  • @harrism But can we access memory peer-to-peer over InfiniBand RDMA, i.e. can a kernel (`kernel<<<>>>`) running on GPU1 dereference a pointer into GPU2's RAM? **GPU1 core <-InfiniBand-> GPU2 RAM** – Alex Nov 19 '13 at 17:05
  • @Alex, no, GPU1 of PC1 can't access the RAM of remote PC2 (GPU2-RAM) with normal memory read operations. RDMA means that PC1 can post a request over InfiniBand to copy some memory from PC2 (or from GPU2-RAM) into local memory (PC1 RAM or GPU1 RAM) without the remote PC2 taking an interrupt or doing a memcpy. The request is posted explicitly on a QP: http://www.mellanox.com/related-docs/prod_software/RDMA_Aware_Programming_user_manual.pdf page 106, "5.2.7 rdma_post_read ... The contents of the remote memory region will be read into the local data buffer". You may access the local copy of the data only after this request completes (a sketch of this call follows below these comments). – osgx May 26 '17 at 01:35
  • I ended up here in 2021; I don't think this is the answer any longer – Grant Curell Nov 12 '21 at 14:35
  • @Grant edited to provide the current docs. – harrism Dec 14 '21 at 02:49
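
To illustrate the posted-read flow described in the last comments, here is a minimal sketch using librdmacm; `pull_remote` and its parameters are illustrative, and the remote address and rkey are assumed to have been exchanged out of band:

```c
#include <rdma/rdma_verbs.h>

/* Read `len` bytes from the remote peer's registered buffer into a locally
 * registered buffer, then wait for the completion before touching the data. */
int pull_remote(struct rdma_cm_id *id, void *local_buf, size_t len,
                struct ibv_mr *local_mr, uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_wc wc;

    if (rdma_post_read(id, NULL, local_buf, len, local_mr,
                       IBV_SEND_SIGNALED, remote_addr, rkey))
        return -1;

    /* The local copy is only valid after the RDMA read completes. */
    if (rdma_get_send_comp(id, &wc) <= 0 || wc.status != IBV_WC_SUCCESS)
        return -1;

    return 0;
}
```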