
TL;DR version: "What's the best way to round-robin kernel calls to multiple GPUs with Python/PyCUDA such that CPU and GPU work can happen in parallel?" with a side of "I can't have been the first person to ask this; anything I should read up on?"

Full version:

I would like to know the best way to design context, etc. handling in an application that uses CUDA on a system with multiple GPUs. I've been trying to find literature that talks about guidelines for when context reuse vs. recreation is appropriate, but so far haven't found anything that outlines best practices, rules of thumb, etc.

The general overview of what we're needing to do is:

  • Requests come in to a central process.
  • That process forks to handle a single request.
  • Data is loaded from the DB (relatively expensive).

Then the following is repeated an arbitrary number of times based on the request (dozens):

  • A few quick kernel calls to compute data that is needed for later kernels.
  • One slow kernel call (10 sec).

Finally:

  • Results from the kernel calls are collected and processed on the CPU, then stored.

At the moment, each kernel call creates and then destroys a context, which seems wasteful. Setup is taking about 0.1 sec per context and kernel load, and while that's not huge, it is precluding us from moving other quicker tasks to the GPU.

I am trying to figure out the best way to manage contexts, etc. so that we can use the machine efficiently. I think that in the single-GPU case it's relatively simple:

  • Create a context before starting any of the GPU work.
  • Launch the kernels for the first set of data.
  • Record an event after the final kernel call in the series.
  • Prepare the second set of data on the CPU while the first is computing on the GPU.
  • Launch the second set, repeat.
  • Ensure that each event gets synchronized before collecting the results and storing them.

That seems like it should do the trick, assuming proper use of overlapped memory copies.
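For concreteness, here's a rough, untested sketch of what I mean for the single-GPU case, using PyCUDA's driver API. `prepare_batch_on_cpu`, `num_batches`, and the toy kernel are placeholders for our actual prep code, iteration count, and the real 10-second kernel:

```python
import numpy as np
import pycuda.driver as cuda
from pycuda.compiler import SourceModule

cuda.init()
ctx = cuda.Device(0).make_context()          # one persistent context, created up front
try:
    mod = SourceModule("""
    __global__ void slow_kernel(float *data, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= 2.0f;          /* stand-in for the real 10 s kernel */
    }
    """)
    slow_kernel = mod.get_function("slow_kernel")

    stream = cuda.Stream()
    n = 1 << 20
    results = []

    # Page-locked host buffers so the async copies can actually overlap.
    host_in = cuda.pagelocked_empty(n, np.float32)
    host_out = cuda.pagelocked_empty(n, np.float32)
    dev_buf = cuda.mem_alloc(host_in.nbytes)

    batch = prepare_batch_on_cpu(0)          # placeholder for our CPU-side prep
    for i in range(num_batches):             # num_batches: the dozens of iterations
        host_in[:] = batch
        cuda.memcpy_htod_async(dev_buf, host_in, stream)
        slow_kernel(dev_buf, np.int32(n),
                    block=(256, 1, 1), grid=((n + 255) // 256, 1), stream=stream)
        cuda.memcpy_dtoh_async(host_out, dev_buf, stream)
        done = cuda.Event()
        done.record(stream)

        # Prepare the next batch on the CPU while the GPU is still busy.
        if i + 1 < num_batches:
            batch = prepare_batch_on_cpu(i + 1)

        done.synchronize()                   # block only when the results are needed
        results.append(host_out.copy())
finally:
    ctx.pop()
```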

However, I'm unsure what I should do when wanting to round-robin each of the dozens of items to process over multiple GPUs.

The host program is Python 2.7, using PyCUDA to access the GPU. Currently it's not multi-threaded, and while I'd rather keep it that way ("now you have two problems" etc.), if the answer means threads, it means threads. Similarly, it would be nice to just be able to call event.synchronize() in the main thread when it's time to block on data, but for our needs efficient use of the hardware is more important. Since we'll potentially be servicing multiple requests at a time, letting other processes use the GPU when this process isn't using it is important.

I don't think that we have any explicit reason to use exclusive compute mode (i.e., we're not filling up the card's memory with one work item), so I don't think that solutions involving long-standing contexts are off the table.

Note that answers in the form of links to other content that covers my questions are completely acceptable (encouraged, even), provided they go into enough detail about the why, not just the API. Thanks for reading!

  • Uhrm, "What's the best way to round-robin kernel calls to multiple GPUs with Python/PyCUDA such that CPU and GPU work can happen in parallel?" with a side of "I can't have been the first person to ask this; anything I should read up on?" – Eli Stevens Mar 08 '12 at 03:08
  • [this question](http://stackoverflow.com/q/5904872/681865) is along similar lines, but it predates the CUDA 4.0 release, which changed multi-gpu quite a lot. – talonmies Mar 08 '12 at 03:33
  • I was hoping to avoid context-per-kernel-call, but maybe that's unrealistic. – Eli Stevens Mar 08 '12 at 17:07
  • You don't need a context per kernel call, you need a context per GPU. Ideally persistent. That used to mean one thread per device; as of CUDA 4.0, it doesn't. You can use one thread. I have used pycuda a lot, but usually with mpi4py because I work mostly with clusters. I have not tried CUDA 4.0 style multi-GPU with PyCUDA. – talonmies Mar 08 '12 at 17:38

1 Answer


Caveat: I'm not a PyCUDA user (yet).

With CUDA 4.0+ you don't even need an explicit context per GPU. You can just call cudaSetDevice (or the PyCUDA equivalent) before doing per-device stuff (cudaMalloc, cudaMemcpy, launch kernels, etc.).
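Since PyCUDA wraps the driver API rather than the runtime API (see the comments below), there is no literal cudaSetDevice; my understanding is that the closest single-threaded equivalent is one persistent context per device that you push and pop around each work item. A rough, untested sketch, where `work_items` and `launch_work_item` are placeholders for your actual data and copies/kernel launches:

```python
import pycuda.driver as cuda

cuda.init()
contexts = []
for ordinal in range(cuda.Device.count()):
    ctx = cuda.Device(ordinal).make_context()   # becomes current on creation
    contexts.append(ctx)
    cuda.Context.pop()                          # leave the context stack clean

pending = []
for i, item in enumerate(work_items):           # work_items: the dozens of items
    ctx = contexts[i % len(contexts)]
    ctx.push()                                  # make this GPU's context current
    launch_work_item(item)                      # placeholder: async copies + kernels
    evt = cuda.Event()
    evt.record()                                # recorded in the current context
    pending.append((ctx, evt))
    cuda.Context.pop()

# Later, when the results are actually needed on the host:
for ctx, evt in pending:
    ctx.push()
    evt.synchronize()
    cuda.Context.pop()

for ctx in contexts:
    ctx.detach()                                # release the contexts at shutdown
```

Whether the push/pop overhead matters at your scale is something you'd have to measure; it should be far cheaper than creating and destroying a context per kernel call.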

If you need to synchronize between GPUs, you may need to create streams and/or events and use cudaEventSynchronize (or the PyCUDA equivalent). You can even have one stream wait on an event recorded in another stream to express sophisticated dependencies.
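For example, something along these lines (untested; I believe PyCUDA exposes cuStreamWaitEvent as Stream.wait_for_event):

```python
import pycuda.autoinit          # creates and pushes a context on the default device
import pycuda.driver as cuda

stream_a = cuda.Stream()
stream_b = cuda.Stream()

# ... enqueue copies / kernels on stream_a here ...
evt = cuda.Event()
evt.record(stream_a)            # marks the point stream_b has to wait for

stream_b.wait_for_event(evt)    # stream_b stalls on the GPU until evt has fired
# ... enqueue the dependent work on stream_b here ...

evt.synchronize()               # or block the host on the same event when needed
```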

So I suspect the answer today is quite a lot simpler than talonmies' excellent pre-CUDA-4.0 answer.

You might also find this answer useful.

(Re)Edit by OP: Per my understanding, PyCUDA supports versions of CUDA prior to 4.0, and so still uses the old API/semantics (the driver API?), so talonmies' answer is still relevant.

  • PyCUDA is driver API based, so things are a little bit different (but they mirror the driver API very closely). I haven't had the opportunity to see how CUDA 4 style multigpu works inside PyCUDA. Andreas posts here sometimes, maybe he will chip in with an answer. – talonmies Mar 09 '12 at 14:45
  • Hmmm, when you say "not yet available", I'm not sure what you mean. All of the API calls I mentioned are available before CUDA 4.0, they just have different capabilities/semantics after CUDA 4.0. – harrism Mar 13 '12 at 07:06
  • Right, those capabilities and semantics aren't exposed in PyCUDA, AFAICT. Per my understanding, this is to allow use with older versions of CUDA. – Eli Stevens Mar 13 '12 at 08:36