
I am trying to explore the possibility of achieving a global I/O space across devices (GPUs, NICs, storage, etc.). This may boil down to the question asked in this thread: Direct communication between two PCI devices.

I have been reading up on NVIDIA GPUDirect, where a memory region is pinned and its physical address is obtained via the nvidia_p2p_* calls. What I can't quite understand is how the GPU's physical address can then be used to program a third-party device's DMA controller for data transfers. I am confused by the fact that GPU memory is not visible the way the CPU memory space is (this may be down to my poor knowledge of programming DMA controllers). Any pointers on this would be really helpful.
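For concreteness, this is roughly the kernel-side sequence I have pieced together from the GPUDirect RDMA documentation. It is only a sketch of my current understanding: pin_gpu_buffer is my own placeholder name, error handling is abbreviated, and I am passing 0 for the token arguments (I believe older drivers instead need tokens obtained in user space via cuPointerGetAttribute):

```c
/* Sketch: pinning a CUDA allocation from a third-party kernel driver
 * so its pages can be handed to another device's DMA engine. Based on
 * nv-p2p.h as shipped with the NVIDIA driver sources; pin_gpu_buffer
 * is a placeholder name and error paths are abbreviated. */
#include <linux/kernel.h>
#include "nv-p2p.h"

#define GPU_PAGE_SIZE (64UL * 1024)   /* GPU pages are 64 KiB */

static struct nvidia_p2p_page_table *page_table;

static void free_callback(void *data)
{
        /* The NVIDIA driver revoked the mapping (e.g. the app called
         * cudaFree): stop any in-flight DMA, then drop the table. */
        nvidia_p2p_free_page_table(page_table);
}

int pin_gpu_buffer(uint64_t va, uint64_t len)
{
        int ret;
        uint32_t i;

        /* The address and length must be aligned to the GPU page size. */
        if ((va & (GPU_PAGE_SIZE - 1)) || (len & (GPU_PAGE_SIZE - 1)))
                return -EINVAL;

        ret = nvidia_p2p_get_pages(0, 0, va, len, &page_table,
                                   free_callback, NULL);
        if (ret)
                return ret;

        /* Each entry is an address routable on the PCIe fabric; these
         * are the values one would write into the third-party device's
         * DMA descriptors as the source/destination of a transfer. */
        for (i = 0; i < page_table->entries; i++)
                pr_info("GPU page %u -> 0x%llx\n", i,
                        (unsigned long long)
                        page_table->pages[i]->physical_address);

        return 0;
}
```

My (unverified) reading is that the addresses returned here fall inside a PCIe aperture of the GPU rather than in system RAM, which would be what makes them usable by another bus master, but I would appreciate confirmation.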

Also, many PCI devices expose memory regions as PCI BARs (e.g., GPUs expose a 256 MB memory region). Is there any way to know which device physical addresses this BAR region maps to? And is there any overlap between the BAR memory regions and the memory the NVIDIA driver allocates for the CUDA runtime?
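(As a related experiment, I know how to read the BAR base addresses themselves; the snippet below, using the standard Linux PCI helpers, is what I would use to see where a device's BARs land in the host physical address space. What I am missing is how CUDA allocations relate to these ranges.)

```c
/* Sketch: dump where a PCI device's BARs sit in the host address
 * space, using the standard Linux kernel PCI helpers. The same data
 * is visible from user space in /sys/bus/pci/devices/<bdf>/resource
 * or via `lspci -v`. */
#include <linux/kernel.h>
#include <linux/pci.h>

static void dump_bars(struct pci_dev *pdev)
{
        int bar;

        for (bar = 0; bar < 6; bar++) {   /* six standard BARs */
                resource_size_t start = pci_resource_start(pdev, bar);
                resource_size_t len   = pci_resource_len(pdev, bar);

                if (!len)
                        continue;

                pr_info("BAR%d: %pa + 0x%llx (%s)\n", bar, &start,
                        (unsigned long long)len,
                        pci_resource_flags(pdev, bar) & IORESOURCE_MEM ?
                        "mem" : "io");
        }
}
```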

Thanks in advance.

  • There are a few questions similar to this that you should check out! I answered one that I think might answer your question in a roundabout way: [ bit.ly/1lrRm7B ] and [ bit.ly/1grcxkP ]. – datboi Apr 03 '14 at 19:11
  • Thanks for the response @datboi. Your answers were very helpful. If I understand correctly, nvidia_p2p_get_pages maps GPU memory into PCI space (if that is right, is it done by leveraging the PCI BARs?) and returns the resulting physical address to the caller. That physical address could in turn be used to program any DMA controller for memory transfers. Is my understanding correct? – Sankar Apr 07 '14 at 04:56

0 Answers