Mojo struct

ContinuousBatchingKVCache

struct ContinuousBatchingKVCache[dtype_: DType, kv_params_: KVCacheStaticParams]

Wrapper for the ContinuousKVCache of a given layer in the transformer model.

This abstracts the pointer indirection needed to access the ContinuousKVCache for a given batch entry.

This is the type that is passed to the KV projection and flash attention kernels.
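The indirection can be modeled in plain Python (illustrative only: the field names follow this page, but the exact block layout `[block][token][head][head_dim]` is an assumption, not taken from the source):

```python
# Illustrative Python model of the continuous-batching KV cache indirection.
# Assumption: blocks is laid out [block][token][head][head_dim] and
# lookup_table maps each batch entry to its block index.
num_blocks, max_seq_len, num_heads, head_size = 4, 8, 2, 16
blocks = [[[[0.0] * head_size for _ in range(num_heads)]
           for _ in range(max_seq_len)] for _ in range(num_blocks)]
lookup_table = [2, 0, 3]    # batch entry -> block index
cache_lengths = [5, 0, 3]   # tokens already cached per batch entry

def load(bs, head_idx, tok_idx, head_dim_idx):
    # Resolve the batch entry's block via the lookup table, then index it.
    return blocks[lookup_table[bs]][tok_idx][head_idx][head_dim_idx]

def store(bs, head_idx, tok_idx, head_dim_idx, val):
    blocks[lookup_table[bs]][tok_idx][head_idx][head_dim_idx] = val

store(0, 1, 4, 7, 3.5)
print(load(0, 1, 4, 7))  # -> 3.5
```

The point of the lookup table is that kernels never need to know which physical block a request landed in; they index by batch entry and the wrapper resolves the block.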

Parameters

  • dtype_ (DType): The data type of the stored KV cache entries.
  • kv_params_ (KVCacheStaticParams): Static KV cache parameters, including the number of heads and the head size.

Fields

  • blocks (ContinuousBatchingKVCache[dtype_, kv_params_].blocks_tt_type): The backing tensor of KV cache blocks.
  • cache_lengths (ContinuousBatchingKVCache[dtype_, kv_params_].cache_lengths_tt_type): The current cache length for each batch entry.
  • lookup_table (ContinuousBatchingKVCache[dtype_, kv_params_].lookup_table_tt_type): Maps each batch entry to its block in `blocks`.
  • max_seq_length (UInt32): The maximum sequence length across the current batch.
  • max_cache_length (UInt32): The maximum cache length across the current batch.

Implemented traits

AnyType, Copyable, DevicePassable, ImplicitlyCopyable, ImplicitlyDestructible, KVCacheT, Movable, RegisterPassable, TrivialRegisterPassable

comptime members

blocks_layout

comptime blocks_layout = Layout.row_major(ContinuousBatchingKVCache[dtype_, kv_params_].blocks_shape)

blocks_shape

comptime blocks_shape = IntTuple(-1, -1, Int[UInt](ContinuousBatchingKVCache[dtype_, kv_params_].kv_params.num_heads), Int[UInt](ContinuousBatchingKVCache[dtype_, kv_params_].kv_params.head_size))

blocks_tt_layout

comptime blocks_tt_layout = Layout[#kgen.variadic.reduce(#kgen.variadic.tabulate(len[IntTuple](ContinuousBatchingKVCache[dtype_, kv_params_].blocks_layout.shape), [idx: __mlir_type.index] _int_to_dim(ContinuousBatchingKVCache[dtype_, kv_params_].blocks_layout.shape[idx].value())), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[Dim], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, ComptimeInt[VA[idx]._value_or_missing] if (VA[idx] != -31337) else RuntimeInt[DType.int64])), #kgen.variadic.reduce(#kgen.variadic.tabulate(len[IntTuple](ContinuousBatchingKVCache[dtype_, kv_params_].blocks_layout.stride), [idx: __mlir_type.index] _int_to_dim(ContinuousBatchingKVCache[dtype_, kv_params_].blocks_layout.stride[idx].value())), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[Dim], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, ComptimeInt[VA[idx]._value_or_missing] if (VA[idx] != -31337) else RuntimeInt[DType.int64]))]

blocks_tt_type

comptime blocks_tt_type = TileTensor[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, ContinuousBatchingKVCache[dtype_, kv_params_].blocks_tt_layout, MutAnyOrigin]

cache_lengths_tt_layout

comptime cache_lengths_tt_layout = Layout[RuntimeInt[DType.int64], ComptimeInt[1]]

cache_lengths_tt_type

comptime cache_lengths_tt_type = TileTensor[DType.uint32, ContinuousBatchingKVCache[dtype_, kv_params_].cache_lengths_tt_layout, ImmutAnyOrigin]

device_type

comptime device_type = ContinuousBatchingKVCache[dtype_, kv_params_]

dtype

comptime dtype = dtype_

kv_params

comptime kv_params = kv_params_

lookup_table_tt_layout

comptime lookup_table_tt_layout = Layout[RuntimeInt[DType.int64], ComptimeInt[1]]

lookup_table_tt_type

comptime lookup_table_tt_type = TileTensor[DType.uint32, ContinuousBatchingKVCache[dtype_, kv_params_].lookup_table_tt_layout, ImmutAnyOrigin]

page_size_

comptime page_size_ = 0

quantization_enabled

comptime quantization_enabled = False

quantization_granularity

comptime quantization_granularity = 1

scale_dtype

comptime scale_dtype = DType.float32

Methods

__init__

__init__(blocks: TileTensor[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, ContinuousBatchingKVCache[dtype_, kv_params_].blocks_tt_layout, MutAnyOrigin], cache_lengths: TileTensor[DType.uint32, ContinuousBatchingKVCache[dtype_, kv_params_].cache_lengths_tt_layout, ImmutAnyOrigin], lookup_table: TileTensor[DType.uint32, ContinuousBatchingKVCache[dtype_, kv_params_].lookup_table_tt_layout, ImmutAnyOrigin], max_seq_length: UInt32, max_cache_length: UInt32) -> Self

get_type_name

static get_type_name() -> String

Returns:

String

max_tile_size

static max_tile_size() -> Int

Returns the maximum tile size for the KVCache.

Returns:

Int

cache_lengths_nd

cache_lengths_nd(self) -> ContinuousBatchingKVCache[dtype_, kv_params_].cache_lengths_tt_type

Returns:

ContinuousBatchingKVCache[dtype_, kv_params_].cache_lengths_tt_type

cache_length

cache_length(self, batch_idx: Int) -> Int

Returns:

Int

load

load[width: Int, output_dtype: DType = ContinuousBatchingKVCache[dtype_, kv_params_].dtype](self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int) -> SIMD[output_dtype, width]

Returns:

SIMD

store

store(self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int, val: SIMD[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, val.size])

load_scale

load_scale[width: Int](self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int) -> SIMD[DType.float32, width]

Loads a quantization scale from the given index.

Note: ContinuousBatchingKVCache does not support KVCache quantization.

Returns:

SIMD

store_scale

store_scale(self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int, scales: SIMD[DType.float32, scales.size])

Stores the quantization scales at the given index.

Note: ContinuousBatchingKVCache does not support KVCache quantization.

load_quantized

load_quantized[width: Int](self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int) -> SIMD[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, width]

Loads a quantized element from the given index.

Note: ContinuousBatchingKVCache does not support KVCache quantization.

Returns:

SIMD

empty_cache

empty_cache(self) -> Bool

Returns `True` if the cache lengths for all requests are 0, `False` otherwise.

Returns:

Bool
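In effect (illustrative Python, assuming `cache_lengths` holds the per-request lengths):

```python
def empty_cache(cache_lengths):
    # True only when no request in the batch has any cached tokens.
    return all(length == 0 for length in cache_lengths)

print(empty_cache([0, 0, 0]))  # -> True
print(empty_cache([5, 0, 3]))  # -> False
```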

max_prompt_length

max_prompt_length(self) -> UInt32

Returns the maximum prompt length across all requests in the current batch.

Returns:

UInt32

max_context_length

max_context_length(self) -> UInt32

Returns the maximum cache length in use across all requests in the current batch.

Returns:

UInt32

num_kv_rows

num_kv_rows(self) -> Int

Returns the total number of virtual rows in this KV cache view.

Returns:

Int

row_idx

row_idx(self, batch_idx: UInt32, tok_idx: UInt32) -> UInt32

Returns the row index when the cache memory is viewed as a 2D matrix.

Returns:

UInt32
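One plausible formulation (illustrative Python; the block-major layout where each block spans `max_seq_length` rows is an assumption, not taken from the source):

```python
def row_idx(lookup_table, max_seq_length, batch_idx, tok_idx):
    # Map (batch entry, token) to a flat row index: the start row of the
    # entry's block plus the token position within that block.
    return lookup_table[batch_idx] * max_seq_length + tok_idx

print(row_idx([2, 0], 8, 0, 3))  # block 2 starts at row 16; token 3 -> 19
```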

create_tma_tile

create_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, BK: Int = padded_depth[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, swizzle_mode, Int[UInt](ContinuousBatchingKVCache[dtype_, kv_params_].kv_params.head_size)]()](self, ctx: DeviceContext) -> TMATensorTile[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, 3, _padded_shape[3, ContinuousBatchingKVCache[dtype_, kv_params_].dtype, IndexList(BN, 1, BK, __list_literal__=Tuple()), swizzle_mode](), _ragged_shape[3, ContinuousBatchingKVCache[dtype_, kv_params_].dtype, IndexList(BN, 1, BK, __list_literal__=Tuple()), swizzle_mode]()]

Creates a TMA tile for this KV cache.

Returns:

TMATensorTile

create_gather4_tma_tile

create_gather4_tma_tile[row_width: Int, swizzle_mode: TensorMapSwizzle = TensorMapSwizzle.SWIZZLE_NONE](self, ctx: DeviceContext) -> TMATensorTile[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, 2, IndexList(4, row_width, __list_literal__=Tuple()), IndexList(1, row_width, __list_literal__=Tuple())]

Creates a 2D TMA gather4 descriptor for this KV cache.

The descriptor views the KV cache as a flat 2D matrix of [num_kv_rows, row_width] and is configured for gather4 operations that load 4 non-contiguous rows per TMA instruction.

Parameters:

  • row_width (Int): Number of elements per row (innermost dimension).
  • swizzle_mode (TensorMapSwizzle): TMA swizzle mode for shared memory access pattern. Defaults to SWIZZLE_NONE.

Args:

  • ctx (DeviceContext): The CUDA device context used to create the TMA descriptor.

Returns:

TMATensorTile: A TMATensorTile with tile_shape=(4, row_width) and desc_shape=(1, row_width).
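Conceptually, gather4 does the following (illustrative Python; the real operation is a single hardware TMA instruction, not a loop):

```python
def gather4(matrix, row_indices):
    # One gather4 operation loads 4 arbitrary, non-contiguous rows of the
    # flat [num_kv_rows, row_width] view into one contiguous tile.
    assert len(row_indices) == 4
    return [matrix[r] for r in row_indices]

# KV cache viewed as a flat matrix: 6 rows, row_width 4.
kv = [[r * 4 + c for c in range(4)] for r in range(6)]
tile = gather4(kv, (5, 0, 3, 1))
print(tile[0])  # -> [20, 21, 22, 23]
```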

create_ragged_tma_tile

create_ragged_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, BK: Int = padded_depth[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, swizzle_mode, Int[UInt](ContinuousBatchingKVCache[dtype_, kv_params_].kv_params.head_size)]()](self, ctx: DeviceContext, out tma: RaggedTMA3DTile[ContinuousBatchingKVCache[dtype_, kv_params_].dtype, swizzle_mode, BN, BK])

Returns:

RaggedTMA3DTile

create_rope_tma_tile

create_rope_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, BK: Int, padded_depth: Int](self, ctx: DeviceContext, out tma: TMATensorTile[DType.bfloat16, 3, _padded_shape[3, DType.bfloat16, IndexList(BN, 1, BK, __list_literal__=Tuple()), swizzle_mode](), _ragged_shape[3, DType.bfloat16, IndexList(BN, 1, BK, __list_literal__=Tuple()), swizzle_mode]()])

Not supported for ContinuousBatchingKVCache.

Returns:

TMATensorTile

block_paged_ptr

block_paged_ptr[tile_size: Int](self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int = 0) -> UnsafePointer[Scalar[ContinuousBatchingKVCache[dtype_, kv_params_].dtype], MutAnyOrigin]

Returns:

UnsafePointer

scales_block_paged_ptr

scales_block_paged_ptr(self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int = 0) -> UnsafePointer[Float32, MutAnyOrigin]

Returns a pointer to the scales block at the requested indices.

Note: ContinuousBatchingKVCache does not support KVCache quantization. This function returns a NULL pointer.

Returns:

UnsafePointer

scales_raw_ptr

scales_raw_ptr(self) -> UnsafePointer[Float32, MutAnyOrigin]

Returns a null pointer. ContinuousBatchingKVCache does not support quantization.

Returns:

UnsafePointer
