
I am trying to understand how the thread organization of my GPU works.

According to the technical specifications, which are tabulated below, my GPU can have 8 active blocks per SM and 768 threads per SM. Based on that, I was thinking that in order to take full advantage of these limits, each block should have 96 (= 768/8) threads. The closest square block to that size is, I think, a 9x9 block, i.e. 81 threads. Using the fact that 8 blocks can run simultaneously on one SM, we get 648 threads. What about the remaining 120 (= 768 - 648) threads?

I know that something is wrong in this reasoning. A simple example describing the connection between the maximum number of threads per SM, the maximum number of threads per block and the warp size, based on my GPU's specifications, would be very helpful; for concreteness, I have sketched the launch configuration I have in mind right after the device output below.

Device 0: "GeForce 9600 GT"
      CUDA Driver Version / Runtime Version          5.5 / 5.0
      CUDA Capability Major/Minor version number:    1.1
      Total amount of global memory:                 512 MBytes (536870912 bytes)
      ( 8) Multiprocessors x (  8) CUDA Cores/MP:    64 CUDA Cores
      GPU Clock rate:                                1680 MHz (1.68 GHz)
      Memory Clock rate:                             700 Mhz
      Memory Bus Width:                              256-bit
      Max Texture Dimension Size (x,y,z)             1D=(8192), 2D=(65536,32768), 3D=(2048,2048,2048)
      Max Layered Texture Size (dim) x layers        1D=(8192) x 512, 2D=(8192,8192) x 512
      Total amount of constant memory:               65536 bytes
      Total amount of shared memory per block:       16384 bytes
      Total number of registers available per block: 8192
      Warp size:                                     32
      Maximum number of threads per multiprocessor:  768
      Maximum number of threads per block:           512
      Maximum sizes of each dimension of a block:    512 x 512 x 64
      Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
      Maximum memory pitch:                          2147483647 bytes
      Texture alignment:                             256 bytes
      Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
      Run time limit on kernels:                     Yes
      Integrated GPU sharing Host Memory:            No
      Support host page-locked memory mapping:       Yes
      Alignment requirement for Surfaces:            Yes
      Device has ECC support:                        Disabled
      Concurrent kernel execution:                   No
      Device supports Unified Addressing (UVA):      No
      Device PCI Bus ID / PCI location ID:           1 / 0   
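
Here is a minimal sketch of the launch configuration I am reasoning about (the kernel is an empty placeholder; only the block and grid dimensions matter):

    // Empty placeholder kernel; only the launch configuration matters here.
    __global__ void dummyKernel() { }

    int main()
    {
        dim3 block(9, 9);  // 81 threads per block (the 9x9 block from above)
        dim3 grid(8);      // 8 blocks, which I hoped would fill one SM
        dummyKernel<<<grid, block>>>();  // 8 * 81 = 648 threads, not 768
        cudaDeviceSynchronize();
        return 0;
    }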
  • Have you read this post? http://stackoverflow.com/questions/17816136/cuda-what-is-the-threads-per-multiprocessor-and-threads-per-block-distinction?rq=1 – PhillipD Oct 02 '13 at 12:25
  • From what I have read in this post, I understand that each SM can process a number of blocks smaller than or equal to the max number of blocks that can run on one SM. Does this mean that each of the 8 blocks can have 512 threads? Does the warp scheduler organize the execution of these threads (512x8) even though this number of threads is more than the maximum number that can run simultaneously on the GPU? – Darkmoor Oct 02 '13 at 15:16

1 Answer


You can find the technical specifications of your device in the CUDA programming guide, rather than relying on the output of a CUDA sample program:

http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#compute-capabilities

From the hardware point of view, we generally try to maximize the warp occupancy per multiprocessor (SM) to get maximum performance. The maximum occupancy is limited by three types of hardware resources: warps per SM, registers per SM, and shared memory per SM.
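
To make the arithmetic concrete for a compute capability 1.1 device (warp size 32, at most 768 threads = 24 warps per SM, at most 8 resident blocks per SM), here is a small host-side sketch of the kind of calculation the occupancy calculator performs. The limits are hard-coded from the device query in your question; this is an illustration, not a CUDA API:

    #include <cstdio>

    int main()
    {
        // Limits for compute capability 1.1, taken from the device query above.
        const int warpSize       = 32;
        const int maxWarpsPerSM  = 768 / 32;  // 24 warps = 768 threads
        const int maxBlocksPerSM = 8;

        const int blockSizes[] = { 81, 96, 128, 256, 512 };

        for (int i = 0; i < 5; ++i) {
            int threads = blockSizes[i];
            // A partial last warp still occupies a full warp slot.
            int warpsPerBlock = (threads + warpSize - 1) / warpSize;
            // Residency is capped by both the block limit and the warp limit.
            int blocks = maxBlocksPerSM;
            if (blocks * warpsPerBlock > maxWarpsPerSM)
                blocks = maxWarpsPerSM / warpsPerBlock;
            printf("%3d threads/block -> %d blocks/SM, %2d/%d warps, %3d active threads\n",
                   threads, blocks, blocks * warpsPerBlock, maxWarpsPerSM,
                   blocks * threads);
        }
        return 0;
    }

Note what happens with a 9x9 block: 81 threads occupy ceil(81/32) = 3 warp slots per block, so 8 such blocks fill all 24 warp slots, yet only 648 of the 768 thread positions do useful work. The "missing" 120 threads are the padding of each block's partial last warp. A block size that is a multiple of the warp size (e.g. 96 = 3x32) avoids that waste.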

You can also try the following tool in your CUDA installation directory to understand how to do the calculation. It will give you a clearer understanding of the connections between threads/SM, threads/block, warps/SM, etc.

$CUDA_HOME/tools/CUDA_Occupancy_Calculator.xls
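
If I remember the spreadsheet correctly, you select your compute capability (1.1 here) and enter the threads per block together with your kernel's registers per thread and shared memory per block (both reported when compiling with nvcc --ptxas-options=-v), and it charts the resulting active warps per SM for each block size.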