I am developing a monitoring agent for GPUs that provides real-time telemetry using the CUDA and NVML libraries.
I want to understand a little more about GPU core operation vs how Intel/AMD CPU cores work.
One formula that can be used for CPUs to calculate workload average peak CPU utilization in MHz (sometimes called "cpumhz") is as follows:
((CPUSPEED * CORES) /100) * CPULOAD = Workload average peak CPU utilization
More details are here https://vikernel.wordpress.com/tag/vmware-formulas/
So would it be correct to apply the same formula to GPUs, substituting CUDA cores/shaders for "CORES"? Or, given that a GPU has a single core clock shared by its thousands of cores/shaders, could I instead just multiply the current clock speed by the actual GPU utilization?
For example:
((GRAPHICS_MHZ * CUDA_CORES) /100) * GPU_LOAD = GPU MHZ utilization