Mojo struct

LayoutTensorMHAOperand

struct LayoutTensorMHAOperand[origin: ImmutOrigin, scale_origin: ImmutOrigin, //, dtype_: DType, layout: Layout, scale_dtype_: DType = DType.float32, scale_layout: Layout = Layout()]

An MHAOperand implementation that wraps LayoutTensor arguments for multi-head attention (MHA) kernels.

Fields

  • buffer (LayoutTensor[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, layout, origin]): The underlying tensor viewed by this operand.
  • scale_buffer (LayoutTensor[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].scale_dtype, scale_layout, scale_origin]): Tensor of quantization scales; defaults to an empty tensor when quantization is disabled.

Implemented traits

AnyType, Copyable, DevicePassable, ImplicitlyCopyable, ImplicitlyDestructible, MHAOperand, Movable, RegisterPassable, TrivialRegisterPassable

comptime members

device_type

comptime device_type = LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout]

dtype

comptime dtype = dtype_

layout_dim

comptime layout_dim = layout.shape[(LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].layout_rank - 1)].value()

layout_rank

comptime layout_rank = layout.rank()

page_size

comptime page_size = 0

quantization_enabled

comptime quantization_enabled = (scale_layout.rank() != 0)

quantization_granularity

comptime quantization_granularity = ceildiv(LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].layout_dim, LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].scale_dim)
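The granularity is the ceiling division of the last layout dimension by the last scale dimension, i.e. how many consecutive elements share one quantization scale. A minimal Python sketch of that arithmetic (the dimension values below are hypothetical, not from the API):

```python
import math

def quantization_granularity(layout_dim: int, scale_dim: int) -> int:
    # ceildiv: number of elements covered by each quantization scale
    return math.ceil(layout_dim / scale_dim)

# e.g. a last dimension of 128 with 4 scales per row -> 32 elements per scale
print(quantization_granularity(128, 4))  # 32
```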

scale_dim

comptime scale_dim = scale_layout.shape[(LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].scale_rank - 1)].value()

scale_dtype

comptime scale_dtype = scale_dtype_

scale_rank

comptime scale_rank = scale_layout.rank()

Methods

__init__

__init__(buffer: LayoutTensor[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, layout, origin], scale_buffer: LayoutTensor[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].scale_dtype, scale_layout, scale_origin] = LayoutTensor(UnsafePointer())) -> Self

get_type_name

static get_type_name() -> String

Returns:

String

block_paged_ptr

block_paged_ptr[tile_size: Int](self, batch_idx: UInt32, start_tok_idx: UInt32, head_idx: UInt32, head_dim_idx: UInt32 = 0) -> UnsafePointer[Scalar[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype], ImmutAnyOrigin]

Returns:

UnsafePointer

scales_block_paged_ptr

scales_block_paged_ptr(self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int = 0) -> UnsafePointer[Scalar[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].scale_dtype], ImmutAnyOrigin]

Returns:

UnsafePointer

load_scale

load_scale[width: Int](self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int) -> SIMD[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].scale_dtype, width]

Returns:

SIMD

cache_length

cache_length(self, batch_idx: Int) -> Int

Returns:

Int

max_context_length

max_context_length(self) -> UInt32

Returns:

UInt32

num_kv_rows

num_kv_rows(self) -> Int

Returns the total number of virtual rows (batch * seq_len).

Returns:

Int
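For the non-paged LayoutTensor case, the virtual row count is simply the product described above. A Python sketch of that semantics (assuming a dense batch-by-sequence layout, which the API does not state explicitly):

```python
def num_kv_rows(batch_size: int, seq_len: int) -> int:
    # total virtual rows when the KV cache is viewed as one flat matrix
    return batch_size * seq_len

print(num_kv_rows(8, 1024))  # 8192
```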

row_idx

row_idx(self, batch_idx: UInt32, start_tok_idx: UInt32) -> UInt32

Returns the row idx when viewing the memory as a matrix.

Returns:

UInt32
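For a contiguous, non-paged buffer, the flattened row index is the batch offset plus the token offset. A Python sketch of that mapping (the contiguous-layout assumption is ours, not stated by the API):

```python
def row_idx(batch_idx: int, start_tok_idx: int, seq_len: int) -> int:
    # flatten (batch, token) into one row of the [batch * seq_len, depth] view
    return batch_idx * seq_len + start_tok_idx

print(row_idx(2, 5, 1024))  # 2053
```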

create_tma_tile

create_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, depth: Int, BK: Int = padded_depth[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, swizzle_mode, depth]()](self, ctx: DeviceContext, out tma: TMATensorTile[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, 3, _padded_shape[3, LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, IndexList(VariadicList(BN, 1, BK), Tuple()), swizzle_mode](), _ragged_shape[3, LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, IndexList(VariadicList(BN, 1, BK), Tuple()), swizzle_mode]()])

Creates a TMA tile for efficient GPU memory transfers.

Returns:

TMATensorTile

create_scale_tma_tile

create_scale_tma_tile[BMN: Int](self, ctx: DeviceContext, out tma: TMATensorTile[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].scale_dtype, 2, Index[Int, Int](VariadicPack(1, BMN))])

Returns:

TMATensorTile

create_ragged_tma_tile

create_ragged_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, depth: Int, BK: Int = padded_depth[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, swizzle_mode, depth]()](self, ctx: DeviceContext, out tma: RaggedTMA3DTile[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, swizzle_mode, BN, BK])

Returns:

RaggedTMA3DTile

create_rope_tma_tile

create_rope_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, BK: Int, padded_depth: Int](self, ctx: DeviceContext, out tma: TMATensorTile[DType.bfloat16, 3, _padded_shape[3, DType.bfloat16, IndexList(VariadicList(BN, 1, BK), Tuple()), swizzle_mode](), _ragged_shape[3, DType.bfloat16, IndexList(VariadicList(BN, 1, BK), Tuple()), swizzle_mode]()])

Not supported for LayoutTensorMHAOperand.

Returns:

TMATensorTile

create_gather4_tma_tile

create_gather4_tma_tile[row_width: Int, swizzle_mode: TensorMapSwizzle = TensorMapSwizzle.SWIZZLE_NONE](self, ctx: DeviceContext, out tma: TMATensorTile[LayoutTensorMHAOperand[dtype_, layout, scale_dtype_, scale_layout].dtype, 2, IndexList(VariadicList(4, row_width), Tuple()), IndexList(VariadicList(1, row_width), Tuple())])

Creates a 2D TMA gather4 descriptor for this LayoutTensor operand.

Returns:

TMATensorTile

scales_raw_ptr

scales_raw_ptr(self) -> UnsafePointer[Float32, MutAnyOrigin]

Returns a null pointer. LayoutTensor operands do not support quantization.

Returns:

UnsafePointer
