Mojo struct
RaggedMHAOperand
struct RaggedMHAOperand[origin: ImmutOrigin, cache_origin: ImmutOrigin, //, dtype_: DType, layout: Layout, cache_layout: Layout, scale_dtype_: DType = DType.invalid, scale_layout: Layout = Layout()]
An implementation for ragged LayoutTensor arguments to MHA kernels.
Fields
- buffer (LayoutTensor[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, layout, origin])
- scale_buffer (LayoutTensor[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].scale_dtype, scale_layout, ImmutAnyOrigin])
- cache_row_offsets (LayoutTensor[DType.uint32, cache_layout, cache_origin])
Implemented traits
AnyType,
Copyable,
DevicePassable,
ImplicitlyCopyable,
ImplicitlyDestructible,
MHAOperand,
Movable,
RegisterPassable,
TrivialRegisterPassable
comptime members
device_type
comptime device_type = RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout]
dtype
comptime dtype = dtype_
page_size
comptime page_size = 0
quantization_enabled
comptime quantization_enabled = False
quantization_granularity
comptime quantization_granularity = 0
scale_dtype
comptime scale_dtype = scale_dtype_
Methods
__init__
__init__(buffer: LayoutTensor[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, layout, origin], cache_row_offsets: LayoutTensor[DType.uint32, cache_layout, cache_origin]) -> Self
__init__(buffer: LayoutTensor[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, layout, origin], scale_buffer: LayoutTensor[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].scale_dtype, scale_layout, ImmutAnyOrigin], cache_row_offsets: LayoutTensor[DType.uint32, cache_layout, cache_origin]) -> Self
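A minimal construction sketch based on the two overloads above. The tensor setup is elided, and the variable names (`kv_buffer`, `scale_buffer`, `cache_row_offsets`) are illustrative, not from the source:

```mojo
# Hypothetical sketch: constructing a RaggedMHAOperand from pre-built
# LayoutTensors. `kv_buffer` holds the ragged KV entries (element type
# matching `dtype_`); `cache_row_offsets` is a LayoutTensor[DType.uint32, ...]
# holding the starting row of each batch element's sequence.

# Overload 1: unquantized operand, buffer plus row offsets only.
var operand = RaggedMHAOperand(kv_buffer, cache_row_offsets)

# Overload 2: additionally pass a scale buffer. Note that the comptime
# member `quantization_enabled` is still False for this operand type.
var scaled_operand = RaggedMHAOperand(kv_buffer, scale_buffer, cache_row_offsets)
```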
get_type_name
block_paged_ptr
block_paged_ptr[tile_size: Int](self, batch_idx: UInt32, start_tok_idx: UInt32, head_idx: UInt32, head_dim_idx: UInt32 = UInt32(0)) -> UnsafePointer[Scalar[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype], ImmutAnyOrigin]
Returns:
UnsafePointer[Scalar[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype], ImmutAnyOrigin]
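A hedged usage sketch for `block_paged_ptr`, following the signature above. The tile size, indices, and the `operand` variable are illustrative assumptions:

```mojo
# Hypothetical usage: get a raw pointer to the KV data for one batch
# element, parameterized on a compile-time tile size of 64 rows.
# All index values here are made up for illustration.
var ptr = operand.block_paged_ptr[64](
    batch_idx=UInt32(0),       # which batch element
    start_tok_idx=UInt32(128), # first token of the tile within that sequence
    head_idx=UInt32(3),        # attention head
)                              # head_dim_idx defaults to UInt32(0)
```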
scales_block_paged_ptr
scales_block_paged_ptr(self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int = 0) -> UnsafePointer[Scalar[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].scale_dtype], ImmutAnyOrigin]
Returns:
UnsafePointer[Scalar[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].scale_dtype], ImmutAnyOrigin]
load_scale
load_scale[width: Int](self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int) -> SIMD[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].scale_dtype, width]
Returns:
SIMD[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].scale_dtype, width]
cache_length
max_context_length
num_kv_rows
row_idx
row_idx(self, batch_idx: UInt32, start_tok_idx: UInt32) -> UInt32
Returns the row idx when viewing the memory as a matrix.
Returns:
UInt32
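The description says `row_idx` returns the row when the buffer is viewed as a matrix. A plausible computation for a ragged layout, stated here as an assumption rather than the confirmed implementation:

```mojo
# Assumed sketch (not taken from the actual implementation): in a ragged
# layout the rows of each batch element are packed contiguously, so the
# flat matrix row is the batch's starting offset plus the in-batch token
# index. `start_offset` would come from cache_row_offsets[batch_idx].
fn row_idx_sketch(start_offset: UInt32, start_tok_idx: UInt32) -> UInt32:
    return start_offset + start_tok_idx
```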
get_tma_row
get_tma_row(self, encoded_index: Int32) -> Int32
Convert an encoded sparse index to a physical TMA row.
For a non-paged operand this is the identity mapping: no paging translation is needed.
Returns:
Int32
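Since the description states this operand performs no paging translation, the behavior can be sketched as a trivial pass-through (an illustration of the documented contract, not the source itself):

```mojo
# For RaggedMHAOperand, get_tma_row is documented as the identity:
# the encoded sparse index already is the physical TMA row.
fn get_tma_row_sketch(encoded_index: Int32) -> Int32:
    return encoded_index  # no paging translation needed
```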
create_tma_tile
create_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, depth: Int, BK: Int = padded_depth[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, swizzle_mode, depth]()](self, ctx: DeviceContext, out tma: TMATensorTile[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, 3, _padded_shape[3, RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, IndexList(BN, 1, BK, __list_literal__=NoneType(None)), swizzle_mode](), _ragged_shape[3, RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, IndexList(BN, 1, BK, __list_literal__=NoneType(None)), swizzle_mode]()])
Creates a TMA tile for efficient GPU memory transfers.
Returns:
create_scale_tma_tile
create_scale_tma_tile[BMN: Int](self, ctx: DeviceContext, out tma: TMATensorTile[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].scale_dtype, 2, Index[Int, Int](1, BMN)])
Returns:
create_ragged_tma_tile
create_ragged_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, depth: Int, BK: Int = padded_depth[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, swizzle_mode, depth]()](self, ctx: DeviceContext, out tma: RaggedTMA3DTile[RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, swizzle_mode, BM=BN, BN=BK])
Returns:
create_rope_tma_tile
create_rope_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, BK: Int, padded_depth: Int](self, ctx: DeviceContext, out tma: TMATensorTile[DType.bfloat16, 3, _padded_shape[3, DType.bfloat16, IndexList(BN, 1, BK, __list_literal__=NoneType(None)), swizzle_mode](), _ragged_shape[3, DType.bfloat16, IndexList(BN, 1, BK, __list_literal__=NoneType(None)), swizzle_mode]()])
Not supported for RaggedMHAOperand.
Returns:
create_gather4_tma_tile
create_gather4_tma_tile[tile_width: Int, tile_stride: Int = tile_width, swizzle_mode: TensorMapSwizzle = TensorMapSwizzle.SWIZZLE_NONE, tile_height: Int = 4, tma_dtype: DType = RaggedMHAOperand[dtype_, layout, cache_layout, scale_dtype_, scale_layout].dtype, l2_promotion: TensorMapL2Promotion = TensorMapL2Promotion.NONE](self, ctx: DeviceContext, out tma: TMATensorTile[tma_dtype, 2, IndexList(tile_height, _gather4_box_width[tma_dtype, tile_width, swizzle_mode](), __list_literal__=NoneType(None)), IndexList(1, _gather4_box_width[tma_dtype, tile_width, swizzle_mode](), __list_literal__=NoneType(None))])
Creates a 2D TMA gather4 descriptor for this ragged operand.
Returns:
create_rope_gather4_tma_tile
create_rope_gather4_tma_tile[tile_width: Int, padded_depth: Int, swizzle_mode: TensorMapSwizzle = TensorMapSwizzle.SWIZZLE_NONE, tile_height: Int = 4, l2_promotion: TensorMapL2Promotion = TensorMapL2Promotion.NONE](self, ctx: DeviceContext, out tma: TMATensorTile[DType.bfloat16, 2, IndexList(tile_height, _gather4_box_width[DType.bfloat16, tile_width, swizzle_mode](), __list_literal__=NoneType(None)), IndexList(1, _gather4_box_width[DType.bfloat16, tile_width, swizzle_mode](), __list_literal__=NoneType(None))])
Not supported for RaggedMHAOperand.
Returns:
scales_raw_ptr
scales_raw_ptr(self) -> UnsafePointer[Float32, MutAnyOrigin]
Returns a dangling pointer. Ragged operands do not support quantization.
Returns:
UnsafePointer[Float32, MutAnyOrigin]