Mojo function

create_tma_tile

create_tma_tile[*tile_sizes: Int, *, swizzle_mode: TensorMapSwizzle = TensorMapSwizzle.SWIZZLE_NONE](ctx: DeviceContext, tensor: LayoutTensor[dtype, layout, origin, address_space=address_space, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment]) -> TMATensorTile[dtype, Layout.row_major(_to_int_tuple[tile_sizes]())]

Creates a TMATensorTile with specified tile dimensions and swizzle mode.

This function creates a hardware-accelerated Tensor Memory Access (TMA) descriptor for efficient asynchronous data transfers between global memory and shared memory. It configures the tile dimensions and memory access patterns based on the provided parameters.

Constraints:

  • The last dimension's size in bytes must not exceed the swizzle mode's byte limit (32B for SWIZZLE_32B, 64B for SWIZZLE_64B, 128B for SWIZZLE_128B).
  • Only supports 2D tensors in this overload.
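The byte-limit constraint is simple arithmetic on the last dimension. As an illustration (the helper below is hypothetical, not part of the API):

```mojo
# Hypothetical helper (not part of this API): checks whether a tile's
# last dimension fits within a swizzle mode's byte limit.
fn fits_swizzle(last_dim: Int, elem_bytes: Int, swizzle_bytes: Int) -> Bool:
    # The last dimension's size in bytes must not exceed the swizzle
    # mode's byte limit (32 for SWIZZLE_32B, 64 for SWIZZLE_64B,
    # 128 for SWIZZLE_128B).
    return last_dim * elem_bytes <= swizzle_bytes

# Example: a float32 tile (4 bytes per element) with a last dimension of
# 32 elements occupies 128 bytes, so it satisfies SWIZZLE_128B but is
# too wide for SWIZZLE_64B or SWIZZLE_32B.
```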

Parameters:

  • *tile_sizes (Int): The dimensions of the tile to be transferred. For 2D tensors, this should be [height, width]. The dimensions determine the shape of data transferred in each TMA operation.
  • swizzle_mode (TensorMapSwizzle): The swizzling mode to use for memory access optimization. Swizzling can improve memory access patterns for specific hardware configurations.

Args:

  • ctx (DeviceContext): The CUDA device context used to create the TMA descriptor.
  • tensor (LayoutTensor): The source tensor from which data will be transferred. This defines the global memory layout and data type.

Returns:

TMATensorTile: A TMATensorTile configured with the specified tile dimensions and swizzle mode, ready for use in asynchronous data transfer operations.
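As a minimal sketch of calling this overload (module paths and the tensor's exact `LayoutTensor` parameters are assumptions based on recent Mojo releases; kernel-side usage is elided):

```mojo
from gpu.host import DeviceContext
from layout import Layout, LayoutTensor
from layout.tma_async import create_tma_tile

fn make_tile(
    ctx: DeviceContext,
    tensor: LayoutTensor[
        DType.float32, Layout.row_major(256, 256), MutableAnyOrigin
    ],
) raises:
    # Describe 64x64 tiles of the 256x256 global tensor; each TMA
    # operation then transfers one such tile between global and
    # shared memory.
    var tile = create_tma_tile[64, 64](ctx, tensor)
    # `tile` can now be passed to a GPU kernel and used in
    # asynchronous copy operations into shared memory.
```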

create_tma_tile[dtype: DType, rank: Int, //, tile_shape: IndexList[rank], /, k_major_tma: Bool = True, swizzle_mode: TensorMapSwizzle = TensorMapSwizzle.SWIZZLE_NONE, *, __tile_layout: Layout = Layout.row_major(tile_shape.__getitem__[rank, DType.int64, Int](0), tile_shape.__getitem__[rank, DType.int64, Int](1)), __desc_layout: Layout = _tma_desc_tile_layout[dtype, rank, tile_shape, swizzle_mode]()](ctx: DeviceContext, tensor: LayoutTensor[dtype, layout, origin, address_space=address_space, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment]) -> TMATensorTile[dtype, __tile_layout, __desc_layout, k_major_tma]

Creates a TMATensorTile with advanced configuration options for 2D, 3D, 4D, or 5D tensors.

This overload provides more control over the TMA descriptor creation, allowing specification of data type, rank, and layout orientation. It supports 2D, 3D, 4D, and 5D tensors and provides fine-grained control over the memory access patterns.

Constraints:

  • Only supports 2D, 3D, 4D, and 5D tensors (rank must be 2, 3, 4, or 5).
  • For non-SWIZZLE_NONE modes, the K dimension size in bytes must be a multiple of the swizzle mode's byte size.
  • For MN-major layout, only SWIZZLE_128B is supported.
  • For 3D, 4D, and 5D tensors, only K-major layout is supported.

Parameters:

  • dtype (DType): The data type of the tensor elements.
  • rank (Int): The dimensionality of the tensor (must be 2, 3, 4, or 5).
  • tile_shape (IndexList[rank]): The shape of the tile to be transferred.
  • k_major_tma (Bool): Whether the TMA should copy the descriptor tile into shared memory following a column-major (if True) or row-major (if False) pattern. Defaults to True.
  • swizzle_mode (TensorMapSwizzle): The swizzling mode to use for memory access optimization. Defaults to TensorMapSwizzle.SWIZZLE_NONE.
  • __tile_layout (Layout): Internal parameter for the tile layout in shared memory. Defaults to Layout.row_major(tile_shape[0], tile_shape[1]).
  • __desc_layout (Layout): Internal parameter for the descriptor layout, which may differ from the tile layout to accommodate hardware requirements. Defaults to _tma_desc_tile_layout[dtype, rank, tile_shape, swizzle_mode]().

Args:

  • ctx (DeviceContext): The CUDA device context used to create the TMA descriptor.
  • tensor (LayoutTensor): The source tensor from which data will be transferred. This defines the global memory layout and must match the specified data type.

Returns:

TMATensorTile: A TMATensorTile configured with the specified parameters, ready for use in asynchronous data transfer operations.
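Since `dtype` and `rank` are infer-only parameters, they are deduced from the arguments. A hedged sketch of this overload (module paths and the tensor's `LayoutTensor` parameters are assumptions; the tile shape is chosen so the swizzle constraint holds):

```mojo
from gpu.host import DeviceContext
from gpu.memory import TensorMapSwizzle
from layout import Layout, LayoutTensor
from layout.tma_async import create_tma_tile
from utils.index import Index

fn make_swizzled_tile(
    ctx: DeviceContext,
    a: LayoutTensor[
        DType.bfloat16, Layout.row_major(1024, 1024), MutableAnyOrigin
    ],
) raises:
    # K-major 128x64 tile: with bfloat16 (2 bytes per element) the K
    # dimension spans 64 * 2 = 128 bytes, a multiple of the
    # SWIZZLE_128B byte size, satisfying the constraint above.
    var a_tma = create_tma_tile[
        Index(128, 64), swizzle_mode = TensorMapSwizzle.SWIZZLE_128B
    ](ctx, a)
```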