Suppose a CUDA GPU can have 48 simultaneously active warps on one multiprocessor, i.e. 48 blocks of one warp, or 24 blocks of 2 warps, and so on. Since all the active warps from multiple blocks are scheduled for execution, it seems the block size is not important for the occupancy of the GPU (of course it should be a multiple of 32): whether 32, 64, or 128 makes no difference, right? So the block size is determined only by the computation task and the resource limits (shared memory or registers)?
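For concreteness, the launch configurations being compared look like this (a minimal sketch; the kernel and buffer are placeholders):

```cpp
// Placeholder kernel; the question is whether the split below matters for occupancy.
__global__ void work(float *data) { /* ... */ }

void launchVariants(float *d_data)
{
    work<<<48, 32>>>(d_data);   // 48 blocks of 1 warp each
    work<<<24, 64>>>(d_data);   // 24 blocks of 2 warps each
    work<<<12, 128>>>(d_data);  // 12 blocks of 4 warps each
}
```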
- Related to http://stackoverflow.com/questions/4391162/cuda-determining-threads-per-block-blocks-per-grid – Heatsink Mar 21 '11 at 17:46
- @Heatsink This question is different because it is really asking why and how the block dimension affects performance, considering that there is a maximum number of active warps per MP (and, I will add, considering that multiple blocks can reside on one MP). – jmilloy Mar 22 '11 at 02:52
- Currently my understanding is that, regardless of the computation task and requirements, you choose your block size to maximize occupancy, which is limited by MIN{resource limits (shared memory/registers); max active warps (e.g. 48); max active blocks (e.g. 8) × block size}, and to partition blocks evenly across the multiprocessors. – platinor Mar 25 '11 at 11:38
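That MIN{...} reasoning can be written out explicitly. Below is a minimal sketch, assuming the limits quoted in this thread (48 active warps and 8 active blocks per SM) and ignoring register/shared-memory pressure; the helper name is just illustrative:

```cpp
#include <cstdio>

// Illustrative only: estimates active warps per SM from the two limits
// discussed here (48 warps/SM, 8 blocks/SM), ignoring registers/shared memory.
static int activeWarpsPerSM(int threadsPerBlock)
{
    const int maxWarpsPerSM  = 48;
    const int maxBlocksPerSM = 8;
    int warpsPerBlock  = threadsPerBlock / 32;
    int blocksByWarps  = maxWarpsPerSM / warpsPerBlock;      // limited by the warp count
    int residentBlocks = blocksByWarps < maxBlocksPerSM
                       ? blocksByWarps : maxBlocksPerSM;     // limited by the block count
    return residentBlocks * warpsPerBlock;
}

int main()
{
    const int sizes[] = {32, 64, 128, 192, 256};
    for (int s : sizes)
        std::printf("block size %3d -> %2d active warps/SM\n", s, activeWarpsPerSM(s));
    return 0;
}
```

With these limits, 32-thread blocks cap out at 8 active warps per SM, while 192- or 256-thread blocks reach the full 48.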
2 Answers
There are multiple factors worth considering that you omit.
- There is a limit on the number of active blocks per SM. The current limit is 8 (all devices), so if you want to achieve full occupancy your blocks shouldn't be smaller than 3 warps (devices 1.0, 1.1), 4 warps (1.2, 1.3), or 6 warps (2.x); the sketch after this list works this out for a range of block sizes.
- Depending on the device, there are 8K, 16K or 32K registers available per multiprocessor. The bigger your blocks, the coarser the "granularity" of the registers each block needs. With big blocks, if full occupancy cannot be achieved, you lose a lot; with smaller blocks the loss may be smaller. That's why, personally, I prefer for example 2x256 rather than 1x512.
- If you do need synchronisation between warps in a block, bigger blocks allow you to have wider synchronisation.
- A single block is guaranteed to be scheduled on a single multiprocessor. If all its warps have some common data (e.g. control variables), you can reduce the number of global memory fetches. On the other hand, when you create lots of small blocks, each of them might need to load the same data separately. On Fermi, which has some caches, this is not as important as on the GT200 series. Keep in mind, however, that since there are so many multiprocessors, 1MB of L2 cache is still very, very small!
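In CUDA releases newer than this thread (6.5 and later), the runtime can do this calculation for you, including the register and shared-memory limits mentioned above. A minimal sketch using cudaOccupancyMaxActiveBlocksPerMultiprocessor with a placeholder kernel:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void dummyKernel(float *data) { /* placeholder body */ }

int main()
{
    for (int blockSize = 32; blockSize <= 256; blockSize *= 2) {
        int blocksPerSM = 0;
        // Reports how many blocks of this size can be resident per SM for this
        // kernel, accounting for the block limit, registers and shared memory.
        cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, dummyKernel,
                                                      blockSize, 0 /* dynamic smem */);
        std::printf("block size %3d -> %d resident blocks, %d warps per SM\n",
                    blockSize, blocksPerSM, blocksPerSM * blockSize / 32);
    }
    return 0;
}
```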

CygnusX1
- Yes, I see now: I omitted the limit on the number of active blocks (8). Regardless of the register/shared-memory resources, if I assign 32 threads per block (1 warp), then there are only 8 active warps. – platinor Mar 25 '11 at 11:25
No. The block size does matter.
If you have a block size of 32 threads you have very low occupancy. If you have a block size of 256 you have high occupancy; that means all 256 threads are concurrently active. More than 256 threads per block would rarely make a difference.
As the architecture involved is complex, testing it with your software is always the best approach.
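In that spirit, here is a minimal timing sketch (the kernel, buffer size, and workload are placeholders) that compares block sizes empirically with CUDA events:

```cpp
#include <cstdio>
#include <cuda_runtime.h>

__global__ void myKernel(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;            // trivial placeholder workload
}

int main()
{
    const int n = 1 << 22;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    for (int blockSize = 32; blockSize <= 512; blockSize *= 2) {
        int gridSize = (n + blockSize - 1) / blockSize;
        cudaEventRecord(start);
        myKernel<<<gridSize, blockSize>>>(d_data, n);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        std::printf("block size %3d: %.3f ms\n", blockSize, ms);
    }

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    cudaFree(d_data);
    return 0;
}
```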

fabrizioM
- I don't think it's as simple as this. Yes, block dimension does matter, and yes, testing is the best approach. But I don't think fewer threads per block necessarily means lower occupancy. More than one block can reside on an MP, raising occupancy. – jmilloy Mar 22 '11 at 02:47
- Yes, it's not that simple; there are other factors, like resources, that can affect the overall occupancy. But the response answers the question, IMHO. – fabrizioM Mar 22 '11 at 04:10
- I understood the question as: "assuming we launch 48 warps per SM, does it matter how we split the task into blocks?" – CygnusX1 Mar 22 '11 at 08:30