Thread block
In GPU programming, a thread block is a subset of threads within a grid, the top-level organizational structure for the threads executing a kernel function. As the primary unit of workload distribution, thread blocks serve several purposes:
- First, they break down the overall workload of a kernel function (managed by the grid) into smaller, more manageable portions that can be processed independently. This division allows for better resource utilization and scheduling flexibility across multiple streaming multiprocessors (SMs) in the GPU.
- Second, thread blocks provide a scope for threads to collaborate through shared memory and synchronization primitives, enabling efficient parallel algorithms and data sharing patterns (a sketch follows this list).
- Finally, thread blocks help with scalability by allowing the same program to run efficiently across different GPU architectures, as the hardware can automatically distribute blocks based on available resources.
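To illustrate the second point, here is a minimal sketch of a block-level reduction in CUDA: threads in a block stage values in shared memory, synchronize with `__syncthreads()`, and cooperatively sum them. The kernel name, array names, and the fixed block size of 256 are assumptions made for this example.

```cuda
#include <cuda_runtime.h>

// Assumed example: each block of 256 threads sums 256 elements of `in`
// and writes one partial sum to `out[blockIdx.x]`.
__global__ void blockSum(const float *in, float *out, int n) {
    __shared__ float tile[256];          // shared memory, visible only within this block

    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();                     // wait until every thread has stored its value

    // Tree reduction within the block; each step halves the number of active threads.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride) {
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        }
        __syncthreads();                 // all partial sums for this step must land first
    }

    if (threadIdx.x == 0) {
        out[blockIdx.x] = tile[0];       // one result per block
    }
}
```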
The programmer specifies the number of thread blocks in a grid and how they are arranged across one, two, or three dimensions. Each block within the grid is assigned a unique block index that determines its position within the grid. Similarly, the programmer also specifies the number of threads per thread block and how they are arranged across one, two, or three dimensions. Each thread within a block is assigned a unique thread index that determines its position within the block.
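A sketch of how these launch parameters and indices typically appear in CUDA code: the grid and block shapes are given as `dim3` values, and each thread combines its block index and thread index into a global position. The kernel name, matrix dimensions, and the 16 x 16 tile size are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

// Assumed example: one thread per matrix element, with blocks arranged in two dimensions.
__global__ void scaleMatrix(float *m, int rows, int cols, float s) {
    // Combine the block index and thread index into a global 2-D position.
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rows && col < cols) {
        m[row * cols + col] *= s;
    }
}

void launchScale(float *d_m, int rows, int cols, float s) {
    dim3 threadsPerBlock(16, 16);        // 256 threads per block, arranged 16 x 16
    dim3 numBlocks((cols + threadsPerBlock.x - 1) / threadsPerBlock.x,
                   (rows + threadsPerBlock.y - 1) / threadsPerBlock.y);
    scaleMatrix<<<numBlocks, threadsPerBlock>>>(d_m, rows, cols, s);
}
```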
The GPU assigns each thread block within the grid to a streaming multiprocessor (SM) for execution. The SM groups the threads within a block into fixed-size subsets called warps, consisting of either 32 or 64 threads each depending on the particular GPU architecture. The SM's warp scheduler manages the execution of warps on the SM's cores.
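The warp width and the number of SMs are hardware properties that can be queried at runtime with the CUDA runtime call `cudaGetDeviceProperties`; the printed labels below are only illustrative.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);   // properties of device 0

    // warpSize is 32 on current NVIDIA GPUs; the SM count varies by model.
    printf("warp size: %d threads\n", prop.warpSize);
    printf("streaming multiprocessors: %d\n", prop.multiProcessorCount);
    printf("max threads per block: %d\n", prop.maxThreadsPerBlock);
    return 0;
}
```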
Threads within a block can share data through shared memory and synchronize using built-in mechanisms, but they cannot directly communicate with threads in other blocks.
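One common way to work around this limitation is to have blocks publish results to global memory, for example with atomic operations, and read them back only after the kernel has finished, since a kernel launch boundary acts as a grid-wide synchronization point. A minimal sketch under that assumption (the kernel and variable names are illustrative):

```cuda
#include <cuda_runtime.h>

// Assumed example: blocks cannot wait on one another, but they can publish
// results to global memory with atomics; later kernel launches (or the host)
// can safely read the total once this kernel has completed.
__global__ void countPositives(const float *in, int n, unsigned int *count) {
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    if (gid < n && in[gid] > 0.0f) {
        atomicAdd(count, 1u);            // global memory, visible across all blocks
    }
}
```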