
So I am currently looking to buy a server for complex numerical computations using CUDA code. In short I am trying to decide if I want to spend the money on having multiple GPUs.

I know that as of CUDA 4.0, multi-GPU computation from a single CUDA program has been made available, as discussed here.

However, let's ignore that benefit. Say I am working on a server with two GPUs. I (Person A) run a standard CUDA program without setting the device, so presumably I would tie up one of the GPUs for a while. Now suppose someone else (Person B) on the server also wants to run their own CUDA program, likewise without setting the device. If Person B simply runs their program, would it run on the idle GPU, or would its execution on the GPU be blocked until my code completes?
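For context, here is a minimal sketch (assuming the standard CUDA runtime API) of what "setting the device" means. Without the `cudaSetDevice()` call, every process defaults to logical device 0, which is why two users who never set the device would land on the same GPU:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);   // number of GPUs visible to this process
    printf("Visible GPUs: %d\n", count);

    // Without cudaSetDevice(), all work goes to logical device 0.
    // Explicitly selecting a device steers this process elsewhere:
    int myDevice = 1;             // hypothetical choice: the second GPU
    if (myDevice < count)
        cudaSetDevice(myDevice);

    // ... kernel launches from here on run on the selected GPU ...
    return 0;
}
```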

    You can use environment variables to steer codes to different GPUs, even if they are both coded to use logical device 0. – Robert Crovella May 15 '17 at 04:48
  • In addition to environment variables, the CUDA linux and Windows TCC drivers have explicit control settings to allow control of this – talonmies May 15 '17 at 05:47
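The environment-variable approach mentioned in the comments can be sketched as follows (a configuration example assuming two GPUs and a hypothetical binary `./my_app` that is hard-coded to use logical device 0):

```shell
# Each user sets CUDA_VISIBLE_DEVICES before launching. Inside each
# process, the chosen physical GPU enumerates as logical device 0,
# so both unmodified programs run concurrently on different GPUs.
CUDA_VISIBLE_DEVICES=0 ./my_app   # Person A: pinned to the first GPU
CUDA_VISIBLE_DEVICES=1 ./my_app   # Person B: pinned to the second GPU
```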

0 Answers