
We have a workstation with two Nvidia Quadro FX 5800 cards installed. Running the deviceQuery CUDA sample reveals that the maximum threads per multiprocessor (SM) is 1024, while the maximum threads per block is 512.

Given that only one block can be executed on each SM at a time, why is the maximum threads per multiprocessor double the maximum threads per block? How do we utilise the other 512 threads per SM?

Device 1: "Quadro FX 5800"
  CUDA Driver Version / Runtime Version          5.0 / 5.0
  CUDA Capability Major/Minor version number:    1.3
  Total amount of global memory:                 4096 MBytes (4294770688 bytes)
  (30) Multiprocessors x (  8) CUDA Cores/MP:    240 CUDA Cores
  GPU Clock rate:                                1296 MHz (1.30 GHz)
  Memory Clock rate:                             800 MHz
  Memory Bus Width:                              512-bit
  Max Texture Dimension Size (x,y,z)             1D=(8192), 2D=(65536,32768), 3D=(2048,2048,2048)
  Max Layered Texture Size (dim) x layers        1D=(8192) x 512, 2D=(8192,8192) x 512
  Total amount of constant memory:               65536 bytes
  Total amount of shared memory per block:       16384 bytes
  Total number of registers available per block: 16384
  Warp size:                                     32
  Maximum number of threads per multiprocessor:  1024
  Maximum number of threads per block:           512
  Maximum sizes of each dimension of a block:    512 x 512 x 64
  Maximum sizes of each dimension of a grid:     65535 x 65535 x 1
  Maximum memory pitch:                          2147483647 bytes
  Texture alignment:                             256 bytes
  Concurrent copy and kernel execution:          Yes with 1 copy engine(s)
  Run time limit on kernels:                     No
  Integrated GPU sharing Host Memory:            No
  Support host page-locked memory mapping:       Yes
  Alignment requirement for Surfaces:            Yes
  Device has ECC support:                        Disabled
  Device supports Unified Addressing (UVA):      No
  Device PCI Bus ID / PCI location ID:           4 / 0
  Compute Mode:
     < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

Cheers, James.

James Paul Turner
    The statement "Given that only one block can be executed on each SM at a time" is incorrect. Take that away and it makes perfect sense. This has been asked a million times before. Once I find one I'll vote to close this as a duplicate. – talonmies Jul 23 '13 at 16:47

1 Answer


Given that only one block can be executed on each SM at a time,

This statement is fundamentally incorrect. Barring resource conflicts, and assuming enough threadblocks in a kernel (i.e. the grid), an SM will generally have multiple threadblocks assigned to it.
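
As a concrete sketch (busyKernel and its launch shape are invented for illustration, not taken from the question): on this 30-SM card, launching a grid of 60 blocks of 512 threads each gives the hardware enough threadblocks to place two on every SM at once, resource limits permitting.

    // Hypothetical kernel: each thread doubles one element.
    __global__ void busyKernel(float *data)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        data[idx] *= 2.0f;
    }

    int main()
    {
        const int blocks = 60;           // 2x the 30 SMs on the Quadro FX 5800
        const int threadsPerBlock = 512; // the per-block limit on compute capability 1.3
        float *d_data;
        cudaMalloc(&d_data, blocks * threadsPerBlock * sizeof(float));
        busyKernel<<<blocks, threadsPerBlock>>>(d_data);
        cudaDeviceSynchronize();
        cudaFree(d_data);
        return 0;
    }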

The basic unit of execution is the warp. A warp consists of 32 threads, executed together in lockstep by an SM, on an instruction-cycle by instruction-cycle basis.
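
For example, within a 1-D block the hardware groups consecutive threads into warps, so a thread can derive which warp it belongs to from its own index (warpInfo is a made-up name for this sketch; warpSize is the built-in device constant, 32 here):

    __global__ void warpInfo(int *out)
    {
        int warpId = threadIdx.x / warpSize; // which warp within the block
        int lane   = threadIdx.x % warpSize; // position within that warp
        // A 512-thread block therefore contains 512 / 32 = 16 warps.
        // warpId * warpSize + lane simply reconstructs the linear thread index.
        out[threadIdx.x] = warpId * warpSize + lane;
    }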

Therefore, even within a single threadblock, an SM will generally have more than a single warp "in flight". Keeping multiple warps in flight is essential for good performance, because it allows the machine to hide latency.

There is no conceptual difference between choosing warps to execute from the same threadblock or from different threadblocks. SMs can have multiple threadblocks resident on them (i.e. with resources such as registers and shared memory assigned to each resident threadblock), and on any given instruction cycle the warp scheduler will choose the next warp to execute from amongst all the warps in all the resident threadblocks.

Therefore, the SM has a greater number of threads that can be "resident" because it can support more than a single block, even if that block is maximally configured with threads (512, in this case). We utilize more than the threadblock limit by having multiple threadblocks resident.
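
For this device, that works out to 1024 maximum resident threads per SM / 512 threads per block = 2 maximally-sized threadblocks resident per SM (equivalently, 4 resident blocks of 256 threads, and so on, subject to register and shared-memory limits).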

You may also want to research the idea of occupancy in GPU programs.
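
If you are on a toolkit newer than the 5.0 shown above (the occupancy API appeared in CUDA 6.5), the runtime can report this residency figure directly; a minimal sketch, assuming a trivial kernel named myKernel:

    #include <cstdio>

    __global__ void myKernel(float *data)
    {
        data[blockIdx.x * blockDim.x + threadIdx.x] += 1.0f;
    }

    int main()
    {
        int numBlocks = 0;
        // Ask how many 512-thread blocks of myKernel can be resident per SM,
        // given its register/shared-memory footprint (0 bytes dynamic shared mem).
        cudaOccupancyMaxActiveBlocksPerMultiprocessor(&numBlocks, myKernel, 512, 0);
        printf("Resident blocks per SM at 512 threads/block: %d\n", numBlocks);
        return 0;
    }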

Robert Crovella