I am trying to use an R package to animate GPS location data, but the code takes several hours to run, and I need to run it several times. I have an AMD GPU in my laptop, but I am not sure how to use it to speed up the processing time.
First, let me say I'm not a computer scientist. I'm running a script in RStudio, on the current versions of both RStudio and R (3.6.0). I've looked into TensorFlow, but that seems to work only with Nvidia GPUs. The gpuR package claims to support AMD GPUs (it's built on OpenCL), but I'm not sure how I would get it to work with another package. I feel like there must be an easy way to tell my PC to just use the GPU to do the computing! Would love some help if anyone has been able to do this.
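For context, this is roughly what I've seen in the gpuR documentation (a minimal sketch adapted from its examples; I may be misreading it):

```r
# Check whether gpuR can see a GPU, then do one matrix
# multiply on it -- adapted from the gpuR examples.
library(gpuR)

detectGPUs()  # should return >= 1 if the AMD GPU's OpenCL driver is installed

A <- matrix(rnorm(512 * 512), nrow = 512)
gpuA <- gpuMatrix(A, type = "float")  # copy the matrix into GPU memory
gpuB <- gpuA %*% gpuA                 # this multiplication runs on the GPU
B <- as.matrix(gpuB)                  # copy the result back into plain R
```

But as far as I can tell, that only helps if I write the matrix math myself using gpuR's classes; the animation package doesn't know anything about gpuMatrix objects, so I don't see how to make it compute on the GPU without rewriting it.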