Mojo struct
LayoutTensorMHAOperand
@register_passable(trivial)
struct LayoutTensorMHAOperand[dtype_: DType, layout: Layout]
An implementation of the MHAOperand trait for LayoutTensor arguments to MHA kernels.
Fields
- buffer (LayoutTensor[dtype_, layout, MutableAnyOrigin]): The wrapped LayoutTensor that holds the operand's data.
Implemented traits
AnyType, Copyable, DevicePassable, ImplicitlyCopyable, MHAOperand, Movable, UnknownDestructibility
Aliases
__copyinit__is_trivial
alias __copyinit__is_trivial = LayoutTensor[dtype_, layout, MutableAnyOrigin].__copyinit__is_trivial
__del__is_trivial
alias __del__is_trivial = LayoutTensor[dtype_, layout, MutableAnyOrigin].__del__is_trivial
__moveinit__is_trivial
alias __moveinit__is_trivial = LayoutTensor[dtype_, layout, MutableAnyOrigin].__moveinit__is_trivial
device_type
alias device_type = LayoutTensorMHAOperand[dtype_, layout]
dtype
alias dtype = dtype_
Methods
__init__
__init__(buffer: LayoutTensor[dtype_, layout, MutableAnyOrigin]) -> Self
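A minimal construction sketch (not from this page): the operand simply wraps an existing LayoutTensor, and the struct's type parameters are taken from that tensor. The import path for LayoutTensorMHAOperand is an assumption, since this page does not state its module.

```mojo
from layout import Layout, LayoutTensor
# Assumed import path; this struct lives in the internal MHA kernel sources.
from nn.mha_operand import LayoutTensorMHAOperand

fn wrap_for_mha[
    dtype: DType, layout: Layout
](kv: LayoutTensor[dtype, layout, MutableAnyOrigin]) -> LayoutTensorMHAOperand[dtype, layout]:
    # The constructor stores the tensor handle directly; no element data is copied.
    return LayoutTensorMHAOperand[dtype, layout](kv)
```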
get_type_name
get_device_type_name
block_paged_ptr
block_paged_ptr[tile_size: Int](self, batch_idx: UInt32, start_tok_idx: UInt32, head_idx: UInt32, head_dim_idx: UInt32 = 0) -> UnsafePointer[Scalar[dtype_]]
Returns:
A pointer into the operand's memory at the block starting at start_tok_idx for the given batch, head, and head-dimension offset.
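A hedged usage sketch for block_paged_ptr: the index values and the tile size of 128 are placeholders, and the import path for LayoutTensorMHAOperand is an assumption.

```mojo
from layout import Layout
from memory import UnsafePointer
from nn.mha_operand import LayoutTensorMHAOperand  # assumed import path

fn first_block_ptr[
    dtype: DType, layout: Layout
](operand: LayoutTensorMHAOperand[dtype, layout]) -> UnsafePointer[Scalar[dtype]]:
    # Pointer to the block that starts at token 0 of batch 0, head 0.
    # The pointer aliases the wrapped tensor's memory and is only valid
    # while that tensor is alive.
    return operand.block_paged_ptr[128](
        batch_idx=0, start_tok_idx=0, head_idx=0, head_dim_idx=0
    )
```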
cache_length
max_context_length
row_idx
row_idx(self, batch_idx: UInt32, start_tok_idx: UInt32) -> UInt32
Returns the row index when viewing the memory as a matrix.
Returns:
The row index corresponding to the given batch and starting token.
col_idx
col_idx(self, head_idx: UInt32) -> UInt32
Returns the column index when viewing the memory as a matrix.
Returns:
The column index corresponding to the given head.
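A small sketch of the matrix view that row_idx and col_idx describe: rows enumerate (batch, token) positions and columns enumerate positions along the head dimension. The argument values are illustrative only, and the import path is again an assumption.

```mojo
from layout import Layout
from nn.mha_operand import LayoutTensorMHAOperand  # assumed import path

fn show_matrix_coords[
    dtype: DType, layout: Layout
](operand: LayoutTensorMHAOperand[dtype, layout]):
    # Row for batch 1 at starting token 16; column for head 3.
    var row = operand.row_idx(batch_idx=1, start_tok_idx=16)
    var col = operand.col_idx(head_idx=3)
    print("matrix view coordinates:", row, col)
```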
create_tma_tile
create_tma_tile[tile_m: Int, tile_n: Int, swizzle_mode: TensorMapSwizzle, *, is_k_major: Bool](self, ctx: DeviceContext) -> TMATensorTile[dtype_, tile_layout_k_major[dtype_, tile_m, tile_n, swizzle_mode]() if is_k_major else tile_layout_mn_major[dtype_, tile_n, tile_m, swizzle_mode](), _tma_desc_tile_layout[dtype_, 2, IndexList[2, DType.int64](tile_m, tile_n, Tuple[]()), is_k_major, swizzle_mode](), is_k_major]
Creates a TMA tile for efficient GPU memory transfers.
Returns:
A TMATensorTile configured for the requested tile shape, swizzle mode, and major ordering.
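A hedged sketch of calling create_tma_tile: the 64 x 128 tile shape and the 128-byte swizzle are placeholder choices, and the import paths for TensorMapSwizzle and LayoutTensorMHAOperand are assumptions.

```mojo
from gpu.host import DeviceContext
from layout import Layout
from layout.tma_async import TensorMapSwizzle  # assumed import path
from nn.mha_operand import LayoutTensorMHAOperand  # assumed import path

fn build_tma_tile[
    dtype: DType, layout: Layout
](operand: LayoutTensorMHAOperand[dtype, layout], ctx: DeviceContext) raises:
    # Placeholder tile shape and swizzle mode; a real kernel derives these
    # from its block tiling configuration.
    var tma_tile = operand.create_tma_tile[
        tile_m=64,
        tile_n=128,
        swizzle_mode = TensorMapSwizzle.SWIZZLE_128B,
        is_k_major=True,
    ](ctx)
    # tma_tile would now be passed to the GPU kernel to drive asynchronous
    # global-to-shared memory copies.
    _ = tma_tile
```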