
I have an open issue because I thought that my CUDA code wasn't running on my GPU (here). I thought that because I get a C in the Type field of my process when I use nvidia-smi, but I see that my GPU-Util grows when I run my code, so now I don't know whether it is running on the CPU or the GPU. Can someone explain the meaning of the C and G types, please? I found this: "Displayed as "C" for Compute Process, "G" for Graphics Process, and "C+G" for the process having both Compute and Graphics contexts." but I don't understand whether it means that C is for CPU and G for GPU, because I don't know what a "compute process" or a "graphics process" is, or what the differences between them are.

ipgvl

2 Answers


They are both for GPU.

  • C = compute = CUDA or OpenCL
  • G = graphics = DirectX or OpenGL
Ander Biguri


According to the Ubuntu man page for nvidia-smi, available here: https://manpages.ubuntu.com/manpages/precise/man1/alt-nvidia-current-smi.1.html

  • C = Compute: processes that use the compute mode of NVIDIA GPUs via CUDA libraries, e.g. deep learning training and inference with TensorFlow, PyTorch, etc.
  • G = Graphics: processes that use the graphics mode of NVIDIA GPUs, used by professional 3D applications, gnome-shell (Ubuntu's GUI environment), games, etc. for rendering graphics or video.
  • C+G = Compute + Graphics: processes that hold both of the contexts described above.
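If you want to list the compute ("C") processes programmatically, nvidia-smi can emit a machine-readable table. Here is a minimal sketch in Python, assuming nvidia-smi is on the PATH and supports the `--query-compute-apps` option; when it isn't available (e.g. no NVIDIA GPU), a hard-coded sample line is parsed instead, purely for illustration:

```python
import subprocess

def compute_processes():
    """Return (pid, process_name, used_memory) tuples for the GPU's
    compute ("C") processes, as reported by nvidia-smi."""
    try:
        out = subprocess.run(
            ["nvidia-smi",
             "--query-compute-apps=pid,process_name,used_memory",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        # Fallback sample output for machines without an NVIDIA GPU.
        out = "12345, python, 1024 MiB\n"
    rows = []
    for line in out.strip().splitlines():
        pid, name, mem = [field.strip() for field in line.split(",")]
        rows.append((int(pid), name, mem))
    return rows

print(compute_processes())
```

Processes that only appear in the main nvidia-smi table with type "G" (a graphics context) will not show up in this query, which is one way to tell the two apart.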

A developer document for nvidia-smi, the NVIDIA System Management Interface program, is available at http://developer.download.nvidia.com/compute/DCGM/docs/nvidia-smi-367.38.pdf

If you want a deeper dive into the architectural components of the NVIDIA Turing GPU, have a look at the whitepaper at https://www.nvidia.com/content/dam/en-zz/Solutions/design-visualization/technologies/turing-architecture/NVIDIA-Turing-Architecture-Whitepaper.pdf

As a general rule, anyone working on a software stack as expansive as ML should have a good understanding of the hardware they run on.

CATALUNA84
  • This is an old question, but a minor clarification on your post: OP is not working on ML, and "C" is not just for deep learning. CUDA was widely used before ML came along, and is still used very widely in non-ML applications. I'm commenting particularly because of your last remark: yes, people should have an understanding of hardware, but not everything GPU is ML. – Ander Biguri Jan 11 '23 at 10:49