First of all, let me state that I am fully aware that my question has already been asked: Block reduction in CUDA. However, as I hope to make clear, my question is a follow-up to that one, and I have particular needs that make the solution found by that OP unsuitable.
So, let me explain. In my current code, I run a CUDA kernel at every iteration of a while-loop to do some computations over the values of an array. As an example, think of it like the following:
__global__ void calcKernel(int* idata, int* odata, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        odata[i] = (idata[i] + 2) * 5;
    }
}

// Host side:
int max_iterations = 1000;
int iteration = 0;
while (iteration < max_iterations)
{
    calcKernel<<<gridSize, blockSize>>>(d_idata, d_odata, n);
    iteration++;
}
However, next I have to accomplish a seemingly hard task for the GPU. At each iteration of the while-loop that calls the kernel, I have to sum all the values generated within odata and save the result in an int array called result, at the position within that array that corresponds to the current iteration. This has to be accomplished inside the kernel, or at least still on the GPU, because due to performance constraints I can only retrieve the result array at the very end, after all iterations are completed.
A wrong naïve attempt would look something like the following:
__global__ void calcKernel(int* idata, int* odata, int* result, int n, int iteration)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        odata[i] = (idata[i] + 2) * 5;
    }

    // Naive (and wrong) attempt to sum odata into result[iteration]:
    result[iteration] = 0;
    for (int j = 0; j < n; j++)
    {
        result[iteration] += odata[j];
    }
}

// Host side:
int max_iterations = 1000;
int iteration = 0;
while (iteration < max_iterations)
{
    calcKernel<<<gridSize, blockSize>>>(d_idata, d_odata, d_result, n, iteration);
    iteration++;
}
Of course, the code above does not work, because the GPU distributes the code across many threads. In order to learn how to do this properly, I have been reading other questions here on the site about array reduction using CUDA. In particular, I found a mention of a very good NVIDIA PDF on the subject, which is also discussed in the former SO question I mentioned at the beginning: http://developer.download.nvidia.com/compute/cuda/1.1-Beta/x86_website/projects/reduction/doc/reduction.pdf
However, while I fully understand the steps of the code described in those slides, as well as the general optimizations, I don't get how that approach can sum-reduce an array to one number if the code actually outputs a full array (and one of unclear dimensions). Could someone please shed some light on this and show me an example of how that would work (i.e. how to get the one number out of the output array)?
Now, going back to that question I mentioned at the beginning (Block reduction in CUDA). Note that its accepted answer merely suggests reading the PDF I linked above, which does not talk about what to do with the output array generated by the code. In the comments, the OP there mentions that he/she was able to finish the job by summing the output array on the CPU, which is something I cannot do, since that would mean downloading the output array at every iteration of my while-loop. Lastly, the third answer in that link suggests using a library to accomplish this, but I am interested in learning the native way of doing so.
Alternatively, I would also be very interested in any other suggestions about how to implement what I described above.