There are two corresponding arrays, A and B, whose storage is allocated while the kernel is running: A[i] holds a position and B[i] holds the matching value. Every thread does the following:
- If the current thread's position is already in A, it updates the corresponding entry in B;
- otherwise it grows A and B and inserts the current thread's position and value.
- The initial size of A and B is zero.
Is this kind of implementation supported by CUDA? (A rough sketch of what I mean is below.)
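To make the question concrete, here is a minimal sketch of the per-thread update-or-insert logic I have in mind. In this sketch A and B are pre-allocated with a fixed CAPACITY before launch and empty slots are marked with a sentinel, because the grow-from-zero-inside-the-kernel part is exactly what I am unsure about; the names (updateOrInsert, CAPACITY, EMPTY) are just illustrative, not real code from my project.

```
#include <cuda_runtime.h>

#define CAPACITY 1024          // assumed upper bound, only for this sketch
#define EMPTY    (-1)          // sentinel meaning "slot not used yet"

__global__ void updateOrInsert(int *A, float *B, const int *pos,
                               const float *val, int n)
{
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    if (tid >= n) return;

    int   p = pos[tid];        // this thread's position (the key)
    float v = val[tid];        // this thread's value

    // Linear probe over A; claim an empty slot with atomicCAS so that
    // two threads cannot insert the same position into different slots.
    for (int i = 0; i < CAPACITY; ++i) {
        int slot = (p + i) % CAPACITY;
        int prev = atomicCAS(&A[slot], EMPTY, p);
        if (prev == EMPTY || prev == p) {
            // Either we just inserted p here, or p was already present:
            // update B (last writer wins if several threads share p).
            B[slot] = v;
            return;
        }
        // Slot taken by a different position: keep probing.
    }
}

int main()
{
    const int n = 256;
    int   h_pos[n];
    float h_val[n];
    for (int i = 0; i < n; ++i) { h_pos[i] = i % 64; h_val[i] = (float)i; }

    int *d_A, *d_pos;
    float *d_B, *d_val;
    cudaMalloc(&d_A, CAPACITY * sizeof(int));
    cudaMalloc(&d_B, CAPACITY * sizeof(float));
    cudaMalloc(&d_pos, n * sizeof(int));
    cudaMalloc(&d_val, n * sizeof(float));
    cudaMemset(d_A, 0xFF, CAPACITY * sizeof(int));   // fills A with EMPTY (-1)
    cudaMemcpy(d_pos, h_pos, n * sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(d_val, h_val, n * sizeof(float), cudaMemcpyHostToDevice);

    updateOrInsert<<<(n + 127) / 128, 128>>>(d_A, d_B, d_pos, d_val, n);
    cudaDeviceSynchronize();

    cudaFree(d_A); cudaFree(d_B); cudaFree(d_pos); cudaFree(d_val);
    return 0;
}
```

The atomicCAS probe is only there so that two threads do not insert the same position twice; the part I cannot figure out is how to replace the fixed CAPACITY with arrays that really start at size zero and grow during the kernel, as described above.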