
I have found many threads on this question, but none explicitly answered it. I am trying to use a multidimensional array inside a GPU kernel using thrust. Flattening would be difficult, as the dimensions are non-homogeneous and I go up to 4D. I know I cannot have device_vectors of device_vectors, for whatever underlying reason (an explanation would be welcome), so I tried going the route of raw pointers instead.

My reasoning is: a raw pointer points to memory on the GPU; why else would I be able to access it from within the kernel? So I should technically be able to have a device_vector that holds raw pointers, all of which should be accessible from within the GPU. Based on this I constructed the following code:

thrust::device_vector<Vector3r*> d_fluidmodelParticlePositions(nModels);
thrust::device_vector<unsigned int***> d_allFluidNeighborParticles(nModels);
thrust::device_vector<unsigned int**> d_nFluidNeighborsCrossFluids(nModels);

for(unsigned int fluidModelIndex = 0; fluidModelIndex < nModels; fluidModelIndex++)
{
    FluidModel *model = sim->getFluidModelFromPointSet(fluidModelIndex);
    const unsigned int numParticles = model->numActiveParticles();

    thrust::device_vector<Vector3r> d_neighborPositions(model->getPositions().begin(), model->getPositions().end());
    d_fluidmodelParticlePositions[fluidModelIndex] = CudaHelper::GetPointer(d_neighborPositions);

    thrust::device_vector<unsigned int**> d_fluidNeighborIndexes(nModels);
    thrust::device_vector<unsigned int*> d_nNeighborsFluid(nModels);

    for(unsigned int pid = 0; pid < nModels; pid++)
    {
        FluidModel *fm_neighbor = sim->getFluidModelFromPointSet(pid);

        thrust::device_vector<unsigned int> d_nNeighbors(numParticles);
        thrust::device_vector<unsigned int*> d_neighborIndexesArray(numParticles);

        for(unsigned int i = 0; i < numParticles; i++)
        {
            const unsigned int nNeighbors = sim->numberOfNeighbors(fluidModelIndex, pid, i);        
            d_nNeighbors[i] = nNeighbors;

            thrust::device_vector<unsigned int> d_neighborIndexes(nNeighbors);

            for(unsigned int j = 0; j < nNeighbors; j++)
            {
                d_neighborIndexes[j] = sim->getNeighbor(fluidModelIndex, pid, i, j);
            }

            d_neighborIndexesArray[i] = CudaHelper::GetPointer(d_neighborIndexes);
        }

        d_fluidNeighborIndexes[pid] = CudaHelper::GetPointer(d_neighborIndexesArray);
        d_nNeighborsFluid[pid] = CudaHelper::GetPointer(d_nNeighbors);
    }

    d_allFluidNeighborParticles[fluidModelIndex] = CudaHelper::GetPointer(d_fluidNeighborIndexes);
    d_nFluidNeighborsCrossFluids[fluidModelIndex] = CudaHelper::GetPointer(d_nNeighborsFluid);
}

The compiler does not complain, and accessing, for example, d_nFluidNeighborsCrossFluids from within the kernel works, but it returns wrong values. I access it like this (again, from within a kernel):

d_nFluidNeighborsCrossFluids[iterator1][iterator2][iterator3];
// Note: out-of-bounds indexing is guaranteed not to happen; the indexing is definitely right

The question is: why does it return wrong values? In my opinion, the logic should work, since my indexing is correct and the pointers should be valid addresses from within the kernel.

Thank you already for your time and have a great day.

EDIT: Here is a minimal reproducible example. For some reason the values appear correct despite having the same structure as my code, but cuda-memcheck reveals some errors. Uncommenting the two commented-out lines leads to the main problem I am trying to solve. What is cuda-memcheck telling me here?

/* Part of this example has been taken from code by Robert Crovella
   in an answer below */
#include <thrust/device_vector.h>
#include <stdio.h>

template<typename T>
static T* GetPointer(thrust::device_vector<T> &vector)
{
  return thrust::raw_pointer_cast(vector.data());
}

__global__ 
void k(unsigned int ***nFluidNeighborsCrossFluids, unsigned int ****allFluidNeighborParticles){

  const unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;

  if(i > 49)
    return;

  printf("i: %d nNeighbors: %d\n", i, nFluidNeighborsCrossFluids[0][0][i]);

  //for(int j = 0; j < nFluidNeighborsCrossFluids[0][0][i]; j++)
  //  printf("i: %d j: %d neighbors: %d\n", i, j, allFluidNeighborParticles[0][0][i][j]);
}


int main(){

  const unsigned int nModels = 2;
  const int numParticles = 50;

  thrust::device_vector<unsigned int**> d_nFluidNeighborsCrossFluids(nModels);
  thrust::device_vector<unsigned int***> d_allFluidNeighborParticles(nModels);

  for(unsigned int fluidModelIndex = 0; fluidModelIndex < nModels; fluidModelIndex++)
  {
    thrust::device_vector<unsigned int*> d_nNeighborsFluid(nModels);
    thrust::device_vector<unsigned int**> d_fluidNeighborIndexes(nModels);

    for(unsigned int pid = 0; pid < nModels; pid++)
    {

      thrust::device_vector<unsigned int> d_nNeighbors(numParticles);
      thrust::device_vector<unsigned int*> d_neighborIndexesArray(numParticles);

      for(unsigned int i = 0; i < numParticles; i++)
      {
        const unsigned int nNeighbors = i;        
        d_nNeighbors[i] = nNeighbors;

        thrust::device_vector<unsigned int> d_neighborIndexes(nNeighbors);

        for(unsigned int j = 0; j < nNeighbors; j++)
        {
          d_neighborIndexes[j] = i + j;
        }
        d_neighborIndexesArray[i] = GetPointer(d_neighborIndexes);
      }
      d_nNeighborsFluid[pid] = GetPointer(d_nNeighbors);
      d_fluidNeighborIndexes[pid] = GetPointer(d_neighborIndexesArray);
    }
    d_nFluidNeighborsCrossFluids[fluidModelIndex] = GetPointer(d_nNeighborsFluid);
    d_allFluidNeighborParticles[fluidModelIndex] = GetPointer(d_fluidNeighborIndexes);

  }

  k<<<256, 256>>>(GetPointer(d_nFluidNeighborsCrossFluids), GetPointer(d_allFluidNeighborParticles));

  // store the result once: cudaGetLastError() also resets the error state,
  // so calling it twice would report success in the printf below
  cudaError_t err = cudaGetLastError();
  if (err != cudaSuccess)
    printf("Sync kernel error: %s\n", cudaGetErrorString(err));

  cudaDeviceSynchronize();
}
RBaumgar
  • A device vector can hold raw pointers to device data, whether that data is in another device vector container or not. However, since you have defined `d_nFluidNeighborsCrossFluids` as a device vector, it is **not usable** in device code, which you've already stated in your question. If you want to use it in device code, pass a raw pointer that points to the data in `d_nFluidNeighborsCrossFluids` to your device code, and use that. If you want to know why your specific code is not working, you are supposed to provide a [mcve], see item 1 [here](https://stackoverflow.com/help/on-topic). – Robert Crovella Sep 18 '19 at 12:09
  • Dear Robert, thank you for the fast response. A quick reproducible example will be difficult at this stage, because the whole structure is embedded in a large project. I do indeed pass a raw pointer to the kernel, and then inside the kernel I try to access it again via a printf, but again, this gives me wrong values. The kernel is in the edited question. – RBaumgar Sep 18 '19 at 12:24
  • The answer I've given demonstrates that the general concept is workable. I wouldn't try to explain what is going on in your case without a complete example to work with. In the process of attempting to create that minimal but complete example, you may very well discover the problem yourself. – Robert Crovella Sep 18 '19 at 12:31
  • You're letting a bunch of device vectors go out-of-scope, before you attempt to use them. When you refer to data by pointer, you had better make sure that the thing the pointers point to is still valid. When a device vector goes out of scope, the underlying data is deallocated. This gives rise to the appearance that the code is working correctly, but the `cuda-memcheck` errors. This is fundamentally a lack of understanding of C++ programming, not really a CUDA specific issue. The same problem would be present if you did this with `std::vector` in host code. – Robert Crovella Sep 18 '19 at 13:57
  • Ok, here I am not sure what you mean. The printed values in this setting are correct, so how could it be an out of scope issue? Running the program without cuda-memcheck seems to terminate as expected. – RBaumgar Sep 18 '19 at 14:10
  • I've added some comments in my answer to try to explain this. You should not assume that just because a code gives the right answer that it is guaranteed to be correctly designed. Usage of UB (undefined behavior) is illegal in C++, although it may appear to work and it may appear to give you the correct answer. – Robert Crovella Sep 18 '19 at 14:51

2 Answers


You should really provide a minimal, complete, verifiable/reproducible example; yours is neither minimal, nor complete, nor verifiable.

I will, however, answer your side-question:

I know I cannot have device_vectors of device_vectors, for whichever underlying reason (explanation would be welcome)

While a device_vector refers to a bunch of data on the GPU, it's a host-side data structure - otherwise you would not be able to use it in host-side code. On the host side, what it holds is something like: the capacity, the size in elements, the device-side pointer to the actual data, and maybe more information. This is similar to how an std::vector variable may refer to data on the heap: if you create the variable locally, the fields I mentioned above exist on the stack.
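Roughly speaking, and purely for illustration (this is not thrust's actual layout), such a host-side handle looks like:

#include <cstddef>

// Conceptual only - NOT thrust's real implementation. The handle itself
// lives in host memory; only `data` points at GPU memory.
template <typename T>
struct device_vector_handle {
    T*          data;     // device-side pointer to the elements
    std::size_t size;     // element count, lives in host memory
    std::size_t capacity; // allocated capacity, lives in host memory
};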

Now, those fields of the device vector that are located in host memory are not generally accessible from the device-side. In device-side code you would typically use the raw pointer to the device-side data the device_vector manages.

Also, note that if you have a thrust::device_vector<T> v, each use of operator[] means a bunch of separate CUDA calls to copy data to or from the device (unless there's some caching going on under the hood). So you really want to avoid using square brackets with this structure.
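If you do need the contents on the host, a minimal sketch (the helper name is hypothetical) is to stage everything in one bulk transfer instead of reading element by element:

#include <thrust/device_vector.h>
#include <thrust/host_vector.h>
#include <cstdio>

// Hypothetical helper: one bulk device-to-host transfer, then free host
// access, instead of one implicit transfer per operator[] call.
void inspect_on_host(const thrust::device_vector<unsigned int>& v)
{
  thrust::host_vector<unsigned int> h = v; // single bulk copy
  for (std::size_t i = 0; i < h.size(); i++)
    printf("%u\n", h[i]);                  // plain host memory, no CUDA calls
}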

Finally, remember that pointer-chasing can be a performance killer, especially on a GPU. You might want to consider massaging your data structure somewhat in order to make it amenable to flattening.
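For example, one ragged level can be flattened CSR-style: all values stored contiguously plus a prefix sum of the per-row lengths. This is a sketch with illustrative names, not the question's code:

#include <thrust/device_vector.h>

// All neighbor lists stored back to back, plus a prefix sum of the list
// lengths: row i occupies values[offsets[i]] .. values[offsets[i+1]-1].
struct RaggedView {
  const unsigned int* values;
  const unsigned int* offsets;
};

__device__ unsigned int neighbor(RaggedView v, unsigned int row, unsigned int j)
{
  return v.values[v.offsets[row] + j]; // two loads, no pointer chasing
}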

einpoklum

A device_vector is a class definition. That class has various methods and operators associated with it. The thing that allows you to do this:

d_nFluidNeighborsCrossFluids[...]...;

is the square-bracket operator. That operator is a host operator (only); it is not usable in device code. Issues like this give rise to the general statement that "thrust::device_vector is not usable in device code." The device_vector object itself is generally not usable. However, the data it contains is usable in device code, if you access it via a raw pointer.

Here is an example of a thrust device vector that contains an array of pointers to the data contained in other device vectors. That data is usable in device code, as long as you don't attempt to make use of the thrust::device_vector object itself:

$ cat t1509.cu
#include <thrust/device_vector.h>
#include <stdio.h>

template <typename T>
__global__ void k(T **data){

  printf("the first element of vector 1 is: %d\n", (int)(data[0][0]));
  printf("the first element of vector 2 is: %d\n", (int)(data[1][0]));
  printf("the first element of vector 3 is: %d\n", (int)(data[2][0]));
}


int main(){

  thrust::device_vector<int> vector_1(1,1);
  thrust::device_vector<int> vector_2(1,2);
  thrust::device_vector<int> vector_3(1,3);

  thrust::device_vector<int *> pointer_vector(3);
  pointer_vector[0] = thrust::raw_pointer_cast(vector_1.data());
  pointer_vector[1] = thrust::raw_pointer_cast(vector_2.data());
  pointer_vector[2] = thrust::raw_pointer_cast(vector_3.data());

  k<<<1,1>>>(thrust::raw_pointer_cast(pointer_vector.data()));
  cudaDeviceSynchronize();
}

$ nvcc -o t1509 t1509.cu
$ cuda-memcheck ./t1509
========= CUDA-MEMCHECK
the first element of vector 1 is: 1
the first element of vector 2 is: 2
the first element of vector 3 is: 3
========= ERROR SUMMARY: 0 errors
$

EDIT: In the mcve you have now posted, you point out that an ordinary run of the code appears to give correct results, but when you use cuda-memcheck, errors are reported. You have a general design problem that will cause this.

In C++, when an object is defined within a curly-braces region:

{
  {
    Object A;
    // object A is in-scope here
  }
  // object A is out-of-scope here
}
// object A is out of scope here
k<<<...>>>(anything that points to something in object A); // is illegal

and you exit that region, the object defined within the region is now out of scope. For objects with constructors/destructors, this usually means the destructor of the object will be called when it goes out-of-scope. For a thrust::device_vector (or std::vector) this will deallocate any underlying storage associated with that vector. That does not necessarily "erase" any data, but attempts to use that data are illegal and would be considered UB (undefined behavior) in C++.

When you establish pointers to such data inside an in-scope region, and then go out-of-scope, those pointers no longer point to anything that would be legal to access, so attempts to dereference the pointer would be illegal/UB. Your code is doing this. Yes, it does appear to give the correct answer, because nothing is actually erased on deallocation, but the code design is illegal, and cuda-memcheck will highlight that.

I suppose one fix would be to pull all this stuff out of the inner curly-braces, and put it at main scope, just like the d_nFluidNeighborsCrossFluids device_vector is. But you might also want to rethink your general data organization strategy and flatten your data.
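A minimal sketch of that main-scope fix, under the assumption that a std::deque is acceptable (a deque never relocates its existing elements, so raw pointers taken earlier stay valid as more vectors are appended; all names here are illustrative):

#include <thrust/device_vector.h>
#include <deque>

int main(){

  // keep-alive container at main scope: the inner device_vectors created in
  // the loop live here instead of going out of scope at each iteration
  std::deque<thrust::device_vector<unsigned int>> keepAlive;

  thrust::device_vector<unsigned int*> d_ptrs;
  for (unsigned int i = 0; i < 50; i++){
    keepAlive.emplace_back(i + 1);          // outlives the loop body
    d_ptrs.push_back(thrust::raw_pointer_cast(keepAlive.back().data()));
  }

  // launch a kernel with thrust::raw_pointer_cast(d_ptrs.data()) here;
  // keepAlive and d_ptrs are destroyed only when main returns, after the
  // kernel has finished
  cudaDeviceSynchronize();
}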

Robert Crovella
  • Dear Robert, first of all thank you for all your efforts and your time, I really appreciate it. I put together a minimal example as you recommended, and it was really helpful. To my surprise, despite having the same structure as my original code, it runs through with the right values. – RBaumgar Sep 18 '19 at 13:38
  • However, cuda-memcheck gives me some errors that you can find in the minimal reproducible example, which I will edit into my question now. Without cuda-memcheck, though, the example just runs through, which is a mystery to me. Please note I took your code as a base for the example; I hope I did not become guilty of stealing your IP here, I will reference you. Uncommenting the two commented lines will show the main problem that led me to this question, namely an illegal memory access I have been chasing for two days. – RBaumgar Sep 18 '19 at 13:41