Mojo function

create_tma_tile

create_tma_tile[*tile_sizes: Int, *, swizzle_mode: TensorMapSwizzle = TensorMapSwizzle.SWIZZLE_NONE](ctx: DeviceContext, tensor: LayoutTensor[dtype, layout, origin, address_space=address_space, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment]) -> TMATensorTile[dtype, Layout.row_major(_to_int_tuple[tile_sizes]())]

Creates a TMATensorTile with the specified tile dimensions and swizzle mode.

This function creates a hardware-accelerated Tensor Memory Accelerator (TMA) descriptor for efficient asynchronous data transfers between global memory and shared memory. It configures the tile dimensions and memory access patterns based on the provided parameters.

Constraints:

  • The last dimension's size in bytes must not exceed the swizzle mode's byte limit (32B for SWIZZLE_32B, 64B for SWIZZLE_64B, 128B for SWIZZLE_128B).
  • This overload supports only 2D tensors.
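
The byte limit above fixes the maximum last-dimension size per element type. A quick check of that arithmetic (plain Python, for illustration only; the limits are those stated in the constraints):

```python
# Byte limits for each swizzle mode, as stated in the constraints above.
SWIZZLE_LIMITS = {"SWIZZLE_32B": 32, "SWIZZLE_64B": 64, "SWIZZLE_128B": 128}

def max_tile_width(swizzle_mode: str, element_size_bytes: int) -> int:
    """Largest last-dimension size (in elements) allowed for a swizzle mode."""
    return SWIZZLE_LIMITS[swizzle_mode] // element_size_bytes

# A float32 element is 4 bytes, so SWIZZLE_128B allows a width of up to 32 elements.
print(max_tile_width("SWIZZLE_128B", 4))  # 32
# bfloat16 elements are 2 bytes, so SWIZZLE_64B also caps the width at 32 elements.
print(max_tile_width("SWIZZLE_64B", 2))   # 32
```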

Parameters:

  • *tile_sizes (Int): The dimensions of the tile to be transferred. For 2D tensors, this should be [height, width]. The dimensions determine the shape of data transferred in each TMA operation.
  • swizzle_mode (TensorMapSwizzle): The swizzling mode to use for memory access optimization. Swizzling can improve memory access patterns for specific hardware configurations.

Args:

  • ctx (DeviceContext): The CUDA device context used to create the TMA descriptor.
  • tensor (LayoutTensor): The source tensor from which data will be transferred. This defines the global memory layout and data type.

Returns:

TMATensorTile: A TMATensorTile configured with the specified tile dimensions and swizzle mode, ready for use in asynchronous data transfer operations.
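A minimal usage sketch in Mojo, consistent with the signature above. The import paths, tile shape, and element type here are assumptions for illustration, not a definitive recipe:

```mojo
from gpu.host import DeviceContext
from layout import Layout, LayoutTensor
from layout.tma_async import TMATensorTile, create_tma_tile
from layout.tma_async import TensorMapSwizzle  # assumed import path

fn make_tile(ctx: DeviceContext, tensor: LayoutTensor) raises:
    # Create a 64x64 TMA descriptor over `tensor` with 128-byte swizzling.
    # The last dimension (64 elements) must stay within the 128B swizzle
    # limit, so this assumes a 2-byte element type such as bfloat16.
    var tile = create_tma_tile[
        64, 64, swizzle_mode = TensorMapSwizzle.SWIZZLE_128B
    ](ctx, tensor)
    # `tile` can now drive asynchronous global-to-shared memory copies
    # from device kernels.
```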