Mojo struct
RaggedMHAOperand
@register_passable(trivial)
struct RaggedMHAOperand[dtype_: DType, layout: Layout, cache_layout: Layout]
An implementation for ragged LayoutTensor arguments to MHA kernels.
Fields
- buffer (LayoutTensor[RaggedMHAOperand[dtype_, layout, cache_layout].dtype, layout, ImmutAnyOrigin]):
- cache_row_offsets (LayoutTensor[DType.uint32, cache_layout, ImmutAnyOrigin]):
Implemented traits
AnyType,
Copyable,
DevicePassable,
ImplicitlyCopyable,
ImplicitlyDestructible,
MHAOperand,
Movable,
RegisterPassable,
TrivialRegisterPassable
comptime members
__copy_ctor_is_trivial
comptime __copy_ctor_is_trivial = True
__del__is_trivial
comptime __del__is_trivial = True
__move_ctor_is_trivial
comptime __move_ctor_is_trivial = True
device_type
comptime device_type = RaggedMHAOperand[dtype_, layout, cache_layout]
dtype
comptime dtype = dtype_
page_size
comptime page_size = 0
quantization_enabled
comptime quantization_enabled = False
quantization_granularity
comptime quantization_granularity = 0
scale_dtype
comptime scale_dtype = DType.invalid
Methods
__init__
__init__(buffer: LayoutTensor[RaggedMHAOperand[dtype_, layout, cache_layout].dtype, layout, ImmutAnyOrigin], cache_row_offsets: LayoutTensor[DType.uint32, cache_layout, ImmutAnyOrigin]) -> Self
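A minimal construction sketch (the tensor names `k_buffer` and `k_row_offsets` are hypothetical, and their layouts must already match the struct's `layout` and `cache_layout` parameters):

```mojo
# Hypothetical sketch: wrapping an existing ragged K cache as an MHA operand.
# k_buffer and k_row_offsets are assumed to be pre-built LayoutTensors with
# ImmutAnyOrigin, matching the struct's layout/cache_layout parameters.
var k_operand = RaggedMHAOperand(
    buffer=k_buffer,
    cache_row_offsets=k_row_offsets,
)
```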
get_type_name
block_paged_ptr
block_paged_ptr[tile_size: Int](self, batch_idx: UInt32, start_tok_idx: UInt32, head_idx: UInt32, head_dim_idx: UInt32 = 0) -> UnsafePointer[Scalar[RaggedMHAOperand[dtype_, layout, cache_layout].dtype], ImmutAnyOrigin]
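A hedged sketch of how a kernel might use this accessor (the surrounding names and the tile size are assumptions, not part of this API page):

```mojo
# Hypothetical sketch: inside an MHA kernel, fetch the base pointer of the
# tile_size-row tile that starts at kv_row for the current batch and head.
comptime BN = 64  # assumed tile size
var tile_ptr = k_operand.block_paged_ptr[BN](batch_idx, kv_row, head_idx)
```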
scales_block_paged_ptr
scales_block_paged_ptr(self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int = 0) -> UnsafePointer[Scalar[DType.invalid], MutAnyOrigin]
load_scale
load_scale[width: Int](self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int) -> SIMD[DType.invalid, width]
cache_length
max_context_length
row_idx
row_idx(self, batch_idx: UInt32, start_tok_idx: UInt32) -> UInt32
Returns the row index when viewing the memory as a matrix.
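The ragged row computation can be sketched as follows (an assumption based on the `cache_row_offsets` field, not stated on this page):

```mojo
# Hypothetical sketch of the ragged row computation. Assumption:
# cache_row_offsets stores the cumulative starting row of each batch
# entry's sequence in the flattened buffer, so:
#
#   row_idx(batch_idx, start_tok_idx)
#       == cache_row_offsets[batch_idx] + start_tok_idx
```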
create_tma_tile
create_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, depth: Int, BK: Int = padded_depth[RaggedMHAOperand[dtype_, layout, cache_layout].dtype, swizzle_mode, depth]()](self, ctx: DeviceContext, out tma: TMATensorTile[RaggedMHAOperand[dtype_, layout, cache_layout].dtype, _split_last_layout[RaggedMHAOperand[dtype_, layout, cache_layout].dtype](IndexList(BN, 1, BK, Tuple()), swizzle_mode, True), _ragged_desc_layout[RaggedMHAOperand[dtype_, layout, cache_layout].dtype](IndexList(BN, 1, BK, Tuple()), swizzle_mode)])
Creates a TMA tile for efficient GPU memory transfers.
create_ragged_tma_tile
create_ragged_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, depth: Int, BK: Int = padded_depth[RaggedMHAOperand[dtype_, layout, cache_layout].dtype, swizzle_mode, depth]()](self, ctx: DeviceContext, out tma: RaggedTMA3DTile[RaggedMHAOperand[dtype_, layout, cache_layout].dtype, swizzle_mode, BN, BK])
scales_raw_ptr
scales_raw_ptr(self) -> UnsafePointer[Float32, MutAnyOrigin]
Returns a null pointer. Ragged operands do not support quantization.