Mojo struct

ContinuousBatchingKVCache

@register_passable(trivial) struct ContinuousBatchingKVCache[dtype_: DType, kv_params_: KVCacheStaticParams]

Wrapper for the ContinuousKVCache of a given layer in the transformer model.

This abstracts the pointer indirection needed to access the ContinuousKVCache for a given batch entry.

This is the type that is passed to the KV projection and flash attention kernels.

Parameters

  • dtype_ (DType): The dtype of the elements stored in the cache.
  • kv_params_ (KVCacheStaticParams): Static KV cache parameters; kv_params_.num_heads and kv_params_.head_size fix the last two dimensions of the block buffer.

Fields

  • blocks (NDBuffer[dtype_, 4, MutableAnyOrigin, DimList(Dim(-31337), Dim(-31337), Dim(kv_params_.num_heads), Dim(kv_params_.head_size)), _strides_from_shape[::DimList,::Int]()]): The 4D buffer of KV cache blocks; the head count and head size dimensions are fixed by kv_params_.
  • cache_lengths (NDBuffer[uint32, 1, MutableAnyOrigin]): The number of tokens already cached for each request.
  • lookup_table (NDBuffer[uint32, 1, MutableAnyOrigin]): Maps a batch index to its block in blocks.
  • max_seq_length (SIMD[uint32, 1]): The maximum prompt sequence length in the current batch.
  • max_cache_length (SIMD[uint32, 1]): The maximum cache length used in the current batch.
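How these fields work together can be sketched with a small Python model (hypothetical names and dimensions, not the Mojo API): lookup_table resolves a batch index to a block in blocks, and cache_lengths tracks how many tokens each request has already cached.

```python
import numpy as np

# Hypothetical model of the continuous-batching KV cache layout:
# blocks has shape [num_blocks, max_seq_len, num_heads, head_size].
num_blocks, max_seq_len, num_heads, head_size = 4, 8, 2, 16
blocks = np.zeros((num_blocks, max_seq_len, num_heads, head_size), dtype=np.float32)

# Two active requests, mapped to non-contiguous blocks by the lookup table.
lookup_table = np.array([2, 0], dtype=np.uint32)   # batch_idx -> block index
cache_lengths = np.array([5, 3], dtype=np.uint32)  # tokens cached per request

def cache_length(batch_idx: int) -> int:
    # Mirrors ContinuousBatchingKVCache.cache_length(batch_idx).
    return int(cache_lengths[batch_idx])

def empty_cache() -> bool:
    # Mirrors empty_cache: True iff no request has any cached tokens.
    return bool((cache_lengths == 0).all())

print(cache_length(0))  # 5
print(empty_cache())    # False
```

The indirection is what makes batching "continuous": requests can come and go independently, each keeping its own block, without repacking the whole buffer.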

Implemented traits

AnyType, Copyable, ExplicitlyCopyable, KVCacheT, Movable, UnknownDestructibility

Aliases

blocks_shape

alias blocks_shape = DimList(Dim(-31337), Dim(-31337), Dim(kv_params_.num_heads), Dim(kv_params_.head_size))

blocks_stride

alias blocks_stride = _strides_from_shape[::DimList,::Int]()

blocks_type

alias blocks_type = NDBuffer[dtype_, 4, MutableAnyOrigin, DimList(Dim(-31337), Dim(-31337), Dim(kv_params_.num_heads), Dim(kv_params_.head_size)), _strides_from_shape[::DimList,::Int]()]

dtype

alias dtype = dtype_

kv_params

alias kv_params = kv_params_

Methods

__init__

__init__(blocks: NDBuffer[dtype_, 4, MutableAnyOrigin, DimList(Dim(-31337), Dim(-31337), Dim(kv_params_.num_heads), Dim(kv_params_.head_size)), _strides_from_shape[::DimList,::Int]()], cache_lengths: NDBuffer[uint32, 1, MutableAnyOrigin], lookup_table: NDBuffer[uint32, 1, MutableAnyOrigin], max_seq_length: SIMD[uint32, 1], max_cache_length: SIMD[uint32, 1]) -> Self

max_tile_size

static max_tile_size() -> Int

Returns the maximum tile size for the KVCache.

Returns:

Int

cache_lengths_nd

cache_lengths_nd(self) -> NDBuffer[uint32, 1, MutableAnyOrigin]

Returns the cache lengths for all requests as a 1D buffer.

Returns:

NDBuffer

cache_length

cache_length(self, batch_idx: Int) -> Int

Returns the length of the cache for the given batch entry.

Returns:

Int

load

load[width: Int](self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int) -> SIMD[dtype_, width]

Loads width elements from the cache at the given batch, head, token, and head-dimension indices.

Returns:

SIMD

store

store(self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int, val: SIMD[dtype_, size])

Stores val into the cache at the given batch, head, token, and head-dimension indices.
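As a rough illustration (a Python model with hypothetical helpers, not the Mojo API), load and store resolve the batch index through the lookup table and then index into the 4D block buffer:

```python
import numpy as np

# Same hypothetical layout: [num_blocks, max_seq_len, num_heads, head_size].
num_blocks, max_seq_len, num_heads, head_size = 4, 8, 2, 16
blocks = np.zeros((num_blocks, max_seq_len, num_heads, head_size), dtype=np.float32)
lookup_table = np.array([2, 0], dtype=np.uint32)  # batch_idx -> block index

def store(bs, head_idx, tok_idx, head_dim_idx, val):
    # Resolve the request's block, then write a vector of values.
    block = int(lookup_table[bs])
    blocks[block, tok_idx, head_idx, head_dim_idx:head_dim_idx + len(val)] = val

def load(width, bs, head_idx, tok_idx, head_dim_idx):
    # Read back a SIMD-style slice of `width` contiguous elements.
    block = int(lookup_table[bs])
    return blocks[block, tok_idx, head_idx, head_dim_idx:head_dim_idx + width]

store(0, 1, 3, 0, np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32))
print(load(4, 0, 1, 3, 0))  # [1. 2. 3. 4.]
```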

empty_cache

empty_cache(self) -> Bool

Returns True if cache_lengths is 0 for all requests, and False otherwise.

Returns:

Bool

max_prompt_length

max_prompt_length(self) -> SIMD[uint32, 1]

Returns the maximum prompt sequence length across all requests in the current batch.

Returns:

SIMD

max_context_length

max_context_length(self) -> SIMD[uint32, 1]

Returns the maximum cache length used across all requests in the current batch.

Returns:

SIMD

row_idx

row_idx(self, batch_idx: SIMD[uint32, 1], tok_idx: SIMD[uint32, 1]) -> SIMD[uint32, 1]

Returns the row index when viewing the memory as a matrix.

Returns:

SIMD

col_idx

col_idx(self, head_idx: SIMD[uint32, 1]) -> SIMD[uint32, 1]

Returns the column index when viewing the memory as a matrix.

Returns:

SIMD

create_tma_tile

create_tma_tile[tile_m: Int, tile_n: Int, swizzle_mode: TensorMapSwizzle, *, is_k_major: Bool](self, ctx: DeviceContext) -> TMATensorTile[dtype_, tile_layout_k_major[::DType,::Int,::Int,::TensorMapSwizzle]() if is_k_major else tile_layout_mn_major[::DType,::Int,::Int,::TensorMapSwizzle](), _tma_desc_tile_layout[::DType,::Int,::IndexList[$1, ::DType(), is_k_major]

Creates a TMA tile for this KV cache.

Returns:

TMATensorTile

block_paged_ptr

block_paged_ptr[tile_size: Int](self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int = 0) -> UnsafePointer[SIMD[dtype_, 1]]

Returns an unsafe pointer into the block for the given batch entry, starting at the given token, head, and head-dimension indices.

Returns:

UnsafePointer
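The result can be modeled as a zero-copy view over tile_size tokens of the resolved block (an illustrative Python sketch with hypothetical dimensions; the Mojo method returns a raw UnsafePointer rather than a view):

```python
import numpy as np

# Hypothetical layout: [num_blocks, max_seq_len, num_heads, head_size].
num_blocks, max_seq_len, num_heads, head_size = 4, 8, 2, 16
blocks = np.arange(
    num_blocks * max_seq_len * num_heads * head_size, dtype=np.float32
).reshape(num_blocks, max_seq_len, num_heads, head_size)
lookup_table = np.array([2, 0], dtype=np.uint32)  # batch_idx -> block index

def block_paged_ptr(tile_size, batch_idx, start_tok_idx, head_idx, head_dim_idx=0):
    # Resolve the block for this request and return a view starting at
    # (start_tok_idx, head_idx, head_dim_idx) and spanning tile_size tokens.
    block = int(lookup_table[batch_idx])
    return blocks[block, start_tok_idx:start_tok_idx + tile_size,
                  head_idx, head_dim_idx:]

tile = block_paged_ptr(4, 0, 2, 1)
print(tile.shape)  # (4, 16)
```

Tiled kernels use such a pointer to stream tile_size tokens of one head without re-resolving the lookup table per element.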
