
On Nvidia GPUs, when I call clEnqueueNDRangeKernel, the program waits for the kernel to finish before continuing. More precisely, I'm calling its equivalent C++ binding, CommandQueue::enqueueNDRangeKernel, but that shouldn't make a difference. This only happens on Nvidia hardware (3 Tesla M2090s), which I access remotely; on our office workstations with AMD GPUs, the call is non-blocking and returns immediately. I don't have local Nvidia hardware to test on - we used to, and I remember seeing similar behavior then too, but it's a bit hazy.

This makes spreading the work across multiple GPUs harder. I've tried starting a new thread for each call to enqueueNDRangeKernel using std::async/std::future from the new C++11 spec, but that doesn't seem to work either - monitoring the GPU usage in nvidia-smi, I can see that the memory usage on GPU 0 goes up, then it does some work, then the memory on GPU 0 goes down and the memory on GPU 1 goes up, that one does some work, and so on. My gcc version is 4.7.0.

Here's how I'm starting the kernels, where increment is the desired global work size divided by the number of devices, rounded up to the nearest multiple of the desired local work size:

std::vector<cl::CommandQueue> queues;
/* Population of queues happens somewhere */
cl::NDRange offset, increment, local;
std::vector<std::future<cl_int>> enqueueReturns;
cl_int execError;
int numDevices = queues.size();

/* Calculation of increment happens here (local comes from the function parameters) */

//Distribute the job among each of the devices in the context
for(int i = 0; i < numDevices; i++)
{   
    //Update the offset for the current device
    offset = cl::NDRange(i*increment[0], i*increment[1], i*increment[2]);

    //Start a new thread for each call to enqueueNDRangeKernel
    enqueueReturns.push_back(std::async(
                   std::launch::async,
                   &cl::CommandQueue::enqueueNDRangeKernel,
                   &queues[i],
                   kernels[kernel],
                   offset,
                   increment,
                   local,
                   (const std::vector<cl::Event>*)NULL,
                   (cl::Event*)NULL));
    //Without those last two casts, the program won't even compile
}   
//Wait for all threads to join before returning
for(int i = 0; i < numDevices; i++)
{   
    execError = enqueueReturns[i].get();

    if(execError != CL_SUCCESS)
        std::cerr << "Informative error omitted due to length" << std::endl;
}   
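
For reference, the increment calculation that's elided above is just the rounding described earlier. A minimal sketch, assuming a hypothetical globalSize NDRange that holds the desired total global work size:

//Sketch of the elided increment calculation; globalSize is a hypothetical
//cl::NDRange holding the desired total global work size
size_t inc[3];
for(int d = 0; d < 3; d++)
{
    //Split the global size across the devices, rounding up
    size_t perDevice = (globalSize[d] + numDevices - 1) / numDevices;
    //Round up to the nearest multiple of the local work size
    inc[d] = ((perDevice + local[d] - 1) / local[d]) * local[d];
}
increment = cl::NDRange(inc[0], inc[1], inc[2]);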

The threads definitely start at the call to std::async, since I can create a little dummy function, set a breakpoint on it in GDB, and step into it the moment std::async is called. However, if I make a wrapper function for enqueueNDRangeKernel, run it there, and put a print statement after the call, I can see that some time passes between prints.
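
For illustration, that wrapper is essentially just a timer around the call; a minimal sketch (requires <chrono> and <iostream>; the clock choice is incidental):

//Sketch of the wrapper used to check how long the enqueue call itself takes;
//if the call were non-blocking, the elapsed time should be negligible
cl_int timedEnqueue(cl::CommandQueue &queue, const cl::Kernel &k,
                    const cl::NDRange &offset, const cl::NDRange &global,
                    const cl::NDRange &local)
{
    auto start = std::chrono::steady_clock::now();
    cl_int err = queue.enqueueNDRangeKernel(k, offset, global, local, NULL, NULL);
    auto stop = std::chrono::steady_clock::now();
    std::cerr << "enqueueNDRangeKernel returned after "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms" << std::endl;
    return err;
}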

P.S. The Nvidia dev zone is down due to hackers and such, so I haven't been able to post the question there.

EDIT: Forgot to mention - the buffer that I'm passing to the kernel as an argument (the one I mention above that seems to get passed between the GPUs) is declared with CL_MEM_COPY_HOST_PTR. I had been using just CL_MEM_READ_WRITE before, with the same effect.
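
In other words, the buffer creation looks roughly like this (hostData and numElements are placeholders for the real host array and its size):

//Roughly how the buffer is created; hostData and numElements are placeholders
cl::Buffer buffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                  numElements * sizeof(float), hostData);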

Chaosed0

2 Answers


I emailed the Nvidia guys and actually got a pretty fair response. There's a sample in the Nvidia SDK that shows that, for each device, you need to create separate:

  • queues - So you can represent each device and enqueue orders to it
  • buffers - One buffer for each array you need to pass to the device; otherwise the devices will pass a single buffer around, waiting for it to become available and effectively serializing everything.
  • kernels - I think this one's optional, but it makes specifying arguments a lot easier.

Furthermore, you have to call enqueueNDRangeKernel for each queue from a separate thread. That's not in the SDK sample, but the Nvidia guy confirmed that the calls are blocking.
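
Roughly, the per-device setup ends up looking like this (a sketch, not the SDK sample itself; program, devices, hostData, and bufferSize are placeholders):

//Sketch of the per-device setup: one queue, one buffer, and one kernel object
//per device; program, devices, hostData, and bufferSize are placeholders
std::vector<cl::CommandQueue> queues;
std::vector<cl::Buffer> buffers;
std::vector<cl::Kernel> kernels;

for(int i = 0; i < numDevices; i++)
{
    //A queue per device, so work can be enqueued to each device independently
    queues.push_back(cl::CommandQueue(context, devices[i]));

    //A buffer per device; sharing one buffer makes the devices wait on each other
    buffers.push_back(cl::Buffer(context, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                 bufferSize, hostData));

    //A kernel object per device, so each one can have its own arguments set
    kernels.push_back(cl::Kernel(program, "myKernel"));
    kernels[i].setArg(0, buffers[i]);
}

Each enqueueNDRangeKernel call then goes to its own queue from its own thread, as in the std::async loop in the question.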

After doing all this, I achieved concurrency on multiple GPUs. However, there's still a bit of a problem. On to the next question...

Chaosed0

Yes, you're right. AFAIK, the Nvidia implementation has a synchronous clEnqueueNDRangeKernel. I have noticed this when using my library (Brahma) as well. I don't know of a workaround or a way of preventing it, short of using a different implementation (and hence a different device).

Ani
  • So, does that mean that clEnqueueNDRangeKernel can only ever be called one at a time on any given context? That seems to be the case, as implied by my experiments with `std::async`. EDIT: Or, rather, not called one at a time, but executed one at a time. – Chaosed0 Jul 19 '12 at 14:19
  • From my experimentation, yes. I use the AMD (or Intel) implementation whenever I want to do anything more complicated. However, you could create two contexts, each with one device, correct? – Ani Jul 19 '12 at 14:20
  • Well dangit. The problem with my setup is that I need this code to run on a multi-GPU setup, and both of the GPU clusters I've gotten access to have Nvidia cards in them. If there's no workaround, then I'm kind of dead in the water. Would it be worthwhile to email Nvidia's support while their forums are down? Argh. Edit: Yeah, I guess I could try separate contexts. – Chaosed0 Jul 19 '12 at 14:22
  • I would try that - but the last time I did (maybe 8 months ago), they came back with - hey try CUDA, it works great. *pfft* – Ani Jul 19 '12 at 14:23
  • Hah, alright. I guess I'll leave the question up a bit longer to see if anyone else has a workaround before I give you the checkmark. – Chaosed0 Jul 19 '12 at 14:24