
I've created an abstract class "Shape" and a child class "Sphere" (which implements all of its functions). Then I created a "Shape* my_sphere" that points to a "Sphere" (as follows):

Shape* my_sphere;
cudaMallocManaged(&my_sphere, sizeof(Sphere));   // managed buffer, accessible from host and device
Shape* my_sphere_host = new Sphere;              // constructed on the host
cudaMemcpy(my_sphere, my_sphere_host, sizeof(Sphere), cudaMemcpyHostToDevice);

However, when I use "my_sphere" in a __global__ or __device__ function, CUDA returns error code 700:

__global__ void testFunction(Shape* shape) {
    shape->getPosition();
}
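For reference, error 700 is cudaErrorIllegalAddress (an illegal memory access). A minimal sketch, not from the question, of how that error can be surfaced on the host, reusing the testFunction and my_sphere names above (the <<<1, 1>>> launch configuration is an assumption):

#include <cstdio>

testFunction<<<1, 1>>>(my_sphere);
cudaError_t err = cudaDeviceSynchronize();   // asynchronous kernel errors surface here
if (err != cudaSuccess)
    printf("kernel failed: %s\n", cudaGetErrorString(err));   // "an illegal memory access was encountered"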

Could anybody help me? I'd appreciate it very much.

  • You have no other choice but to create it on the device, if you intend to use virtual methods. Otherwise you must drop inheritance and dispatch based on a type field in your own implementation. – Ext3h Aug 04 '20 at 08:58 (a sketch of device-side construction follows after these comments)
  • Then I have to give up inheritance... :( Anyway, thanks for the reply. – dsukrect Aug 04 '20 at 09:05
  • @Ext3h: that isn't necessarily true. Pass by value and pass by reference (given the class is allocated in managed memory) will both work *if the class doesn't violate any restrictions imposed by the CUDA object model*. However, there isn't enough detail in the question to say what the actual problem is. – talonmies Aug 04 '20 at 09:54
  • @talonmies Restrictions such as attempting to call virtual methods on the device, for an object created in host code? It works only for non-virtual methods, as RTTI and vtable are not portable. – Ext3h Aug 04 '20 at 09:57
  • 1
    Right, but who says there are virtual methods (I am guessing there are, but the question doesn't say) – talonmies Aug 04 '20 at 10:00
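Following Ext3h's comment above, here is a minimal sketch (not from the question) of constructing the object in device code so that its vtable pointer refers to device-side code. It assumes Shape and Sphere are as described in the question; the constructOnDevice kernel name is hypothetical:

#include <new>   // placement new

// Construct the Sphere on the device, inside the managed buffer.
__global__ void constructOnDevice(void* buf) {
    new (buf) Sphere();   // vtable pointer now refers to device code
}

// Host side, reusing the question's managed allocation:
Shape* my_sphere;
cudaMallocManaged(&my_sphere, sizeof(Sphere));
constructOnDevice<<<1, 1>>>(my_sphere);
cudaDeviceSynchronize();

testFunction<<<1, 1>>>(my_sphere);   // virtual calls now resolve on the device
cudaDeviceSynchronize();

An object constructed this way can only have its virtual methods called from device code; copying a host-constructed object to the device fails because the host-side vtable pointer is meaningless on the device.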

0 Answers