I'm working on a CUDA parallel algorithm and I'm wondering how to compute its theoretical speedup. I know that Amdahl's law isn't directly applicable to GPUs. Does anyone know how to compute the theoretical speedup for a GPU?
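For reference, the classical Amdahl's law formula I'm trying to adapt is S(N) = 1 / ((1 - P) + P/N), where P is the parallelizable fraction of the runtime and N is the number of processors. A minimal sketch of how I'd compute it on the host side, assuming that standard form:

    // Minimal sketch, assuming the standard Amdahl's-law formula:
    // p is the parallelizable fraction of the runtime (0..1),
    // n is the number of processors.
    double amdahl_speedup(double p, double n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

What I don't know is what quantities to plug in (or what model to use instead) for a GPU.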
Thanks