
When I compile this code, the compiler says I am calling a host function from the device. I don't quite understand why.

   __global__ void kernel(thrust::device_vector<float*> d_V) {

       float *var = d_V[0];
   }

   int main() {

      thrust::host_vector<float*> V;
      thrust::host_vector<float*> d_V;

      float f[10];
      for (int i = 0; i < 10; i++) {
          f[i] = i;
      }
      V.push_back(f);
      d_V = V;
      kernel<<<1, 1>>>(d_V);

      return 0;     
   }
talonmies
user2529048

1 Answer


Thrust functions and methods are designed to be used on the host (CPU) side. They cannot be called on the device (GPU) side, i.e. inside CUDA kernels.
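For example (a minimal host-side sketch, not part of the original answer), thrust::sort is invoked from ordinary host code even though it operates on device data; Thrust launches the necessary kernels internally:

#include <thrust/device_vector.h>
#include <thrust/sort.h>

int main()
{
    thrust::device_vector<float> d_v(3);
    d_v[0] = 3.0f; d_v[1] = 1.0f; d_v[2] = 2.0f;

    // legal: a Thrust algorithm called from host code
    thrust::sort(d_v.begin(), d_v.end());
    return 0;
}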

What your code actually demonstrates is passing some data to a kernel. That data should be referenced by a raw pointer in the kernel's argument list, rather than by a Thrust container.

#include <thrust/device_vector.h>
#include <thrust/iterator/counting_iterator.h>

__global__ void kernel(float* p)
{
    float *var = p;   // fine: p is a raw pointer to device memory
}

int main()
{
    // fill a device vector with 0, 1, ..., 9
    thrust::device_vector<float> d_v(
        thrust::make_counting_iterator((float)0),
        thrust::make_counting_iterator((float)0) + 10);
    // pass the underlying raw device pointer to the kernel
    kernel<<<1,1>>>(thrust::raw_pointer_cast(&d_v[0]));
    cudaDeviceSynchronize();
    return 0;
}
kangshiyin
    In addition, there are a variety of other issues with the code. The original code declares `d_V` as a `host_vector` which is probably just a simple error. However, more significantly, the original code deals with vectors of pointers. This will certainly be problematic to deal with in device code. – Robert Crovella Jul 23 '13 at 15:38
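To sketch the comment's point (a hypothetical variation on the question's code): the pointer f stored in the vector is a host address, so even if the vector of pointers were successfully copied to the device, dereferencing f there would be invalid. The array's contents themselves must be copied into device memory:

#include <thrust/device_vector.h>

__global__ void kernel(float* p)
{
    float var = p[0];  // valid: p points to device memory
}

int main()
{
    float f[10];
    for (int i = 0; i < 10; i++) f[i] = (float)i;

    // copy the host array's contents to the device,
    // instead of storing the host pointer f in a container
    thrust::device_vector<float> d_f(f, f + 10);

    kernel<<<1,1>>>(thrust::raw_pointer_cast(&d_f[0]));
    cudaDeviceSynchronize();
    return 0;
}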