
I am trying to run a package in R to animate GPS location data, but running the code takes several hours, and I need to do it several times. I have an AMD GPU in my laptop, but I am not sure how to use it to speed up the processing time.

First, let me say I'm not a computer scientist. I'm running a script in RStudio on the most recent versions of RStudio and R (3.6.0). I've looked into TensorFlow, though that seems to work only with NVIDIA GPUs. The gpuR package claims to support AMD GPUs, but I'm not sure how I would get it to work with another package. I feel like there must be an easy way to tell my PC to just use the GPU for the computing! I'd love some help if anyone has been able to do this.


1 Answer


AMD and NVIDIA GPUs are programmed through different platforms: NVIDIA devices use CUDA, and AMD devices use ROCm. TensorFlow is built on CUDA, so it can only run on NVIDIA hardware. gpuR uses yet another platform, OpenCL, which works with many GPU devices, including both AMD and NVIDIA. Unfortunately, GPU-ifying code is not a straightforward task. To use gpuR with an existing package, that package would need to call out to gpuR itself, or dispatch to it through some cleverer mechanism.
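For illustration, here is a minimal sketch of what direct gpuR use looks like, assuming the package installs cleanly and an OpenCL driver for your AMD GPU is present; the matrix size and the `type = "float"` choice here are arbitrary:

    # Minimal gpuR sketch: offload a matrix multiplication to the GPU.
    # Assumes gpuR is installed and OpenCL can see your AMD device.
    library(gpuR)

    detectGPUs()  # should report at least 1 if OpenCL detects the GPU

    n <- 2000
    A <- matrix(rnorm(n * n), nrow = n)
    B <- matrix(rnorm(n * n), nrow = n)

    gpuA <- vclMatrix(A, type = "float")  # copy the data into GPU memory
    gpuB <- vclMatrix(B, type = "float")

    gpuC <- gpuA %*% gpuB                 # the multiply runs on the device
    C <- as.matrix(gpuC)                  # copy the result back to the host

Note that this only accelerates operations gpuR itself implements (matrix and vector algebra). It does nothing for an animation package that never calls gpuR, which is why there is no switch that moves an arbitrary R package onto the GPU.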
