
Mojo struct

PagedKVCache

struct PagedKVCache[dtype_: DType, kv_params_: KVCacheStaticParams, page_size: Int, scale_dtype_: DType = DType.invalid, quantization_granularity_: Int = 1]

The PagedKVCache is a wrapper around the KVCache blocks for a given layer; it is used to access those blocks for PagedAttention.

Note: This struct represents a 4D view of a 6D PagedKVCacheCollection tensor. The compile-time layout has UNKNOWN_VALUE for stride[0] because the actual stride depends on num_layers from the parent tensor, which is only known at runtime. This ensures offset calculations use the correct runtime strides rather than incorrect compile-time values.
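To build intuition for why the leading stride must be a runtime value, the flat-offset math can be sketched as follows. This is a hedged Python illustration only; the assumed 6D collection shape (a K/V axis, blocks, layers, pages, heads, head dim) and the dimension ordering are assumptions, not the actual Mojo implementation.

```python
# Hypothetical illustration: a 4D per-layer view into a 6D paged KV collection.
# Assumed collection shape: [num_blocks, 2 (K/V), num_layers, page_size, num_heads, head_size].
# The 4D view is [block, page_size, num_heads, head_size]; its stride[0]
# (elements skipped per block) depends on num_layers, known only at runtime.

def view_offset(block, tok, head, dim, *, num_layers, page_size, num_heads, head_size):
    # The inner strides are fixed at compile time:
    stride_tok = num_heads * head_size
    stride_head = head_size
    # stride[0] must come from the runtime num_layers of the parent tensor:
    stride_block = 2 * num_layers * page_size * num_heads * head_size
    return block * stride_block + tok * stride_tok + head * stride_head + dim

# Example with illustrative sizes: num_layers=4, page_size=16, num_heads=8, head_size=64
offset = view_offset(1, 0, 0, 0, num_layers=4, page_size=16, num_heads=8, head_size=64)
```

Using a compile-time guess for `stride_block` here would silently address the wrong block whenever `num_layers` differed, which is exactly what the UNKNOWN_VALUE placeholder prevents.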

Parameters

  • dtype_ (DType): The dtype of the kv-cache.
  • kv_params_ (KVCacheStaticParams): The kv-cache static parameters.
  • page_size (Int): The size of the page.
  • scale_dtype_ (DType): Dtype of the quantization scales (if quantization enabled).
  • quantization_granularity_ (Int): Block size used for quantization (e.g. 128).

Fields

  • blocks (PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_tt_type):
  • cache_lengths (PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].cache_lengths_tt_type):
  • lookup_table (PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].lookup_table_tt_type):
  • max_seq_length (UInt32):
  • max_cache_length (UInt32):
  • scales (OptionalReg[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scales_tt_type]):

Implemented traits

AnyType, Copyable, DevicePassable, ImplicitlyCopyable, ImplicitlyDestructible, KVCacheT, Movable, RegisterPassable, TrivialRegisterPassable

comptime members

blocks_layout

comptime blocks_layout = Layout(PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_shape, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_strides)

blocks_shape

comptime blocks_shape = IntTuple(VariadicList(-1, page_size, Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.num_heads), Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.head_size)))

blocks_strides

comptime blocks_strides = IntTuple(VariadicList(-1, (Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.num_heads) * Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.head_size)), Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.head_size), 1))

blocks_tt_layout

comptime blocks_tt_layout = Layout[#kgen.variadic.reduce(#kgen.variadic.tabulate(len[IntTuple](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_layout.shape), [idx: __mlir_type.index] _int_to_dim(PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_layout.shape[idx].value())), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[Dim], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, ComptimeInt[VA[idx]._value_or_missing] if (VA[idx] != -31337) else RuntimeInt[DType.int64])), #kgen.variadic.reduce(#kgen.variadic.tabulate(len[IntTuple](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_layout.stride), [idx: __mlir_type.index] _int_to_dim(PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_layout.stride[idx].value())), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[Dim], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, ComptimeInt[VA[idx]._value_or_missing] if (VA[idx] != -31337) else RuntimeInt[DType.int64]))]

blocks_tt_type

comptime blocks_tt_type = TileTensor[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_tt_layout, MutAnyOrigin]

blocks_type

comptime blocks_type = LayoutTensor[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_layout, MutAnyOrigin]

cache_lengths_tt_layout

comptime cache_lengths_tt_layout = Layout[RuntimeInt[DType.int64], ComptimeInt[1]]

cache_lengths_tt_type

comptime cache_lengths_tt_type = TileTensor[DType.uint32, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].cache_lengths_tt_layout, ImmutAnyOrigin]

device_type

comptime device_type = PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_]

dtype

comptime dtype = dtype_

head_dim_granularity

comptime head_dim_granularity = ceildiv(Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.head_size), PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].quantization_granularity)

kv_params

comptime kv_params = kv_params_

lookup_table_tt_layout

comptime lookup_table_tt_layout = Layout[RuntimeInt[DType.int64], RuntimeInt[DType.int64], RuntimeInt[DType.int64], ComptimeInt[1]]

lookup_table_tt_type

comptime lookup_table_tt_type = TileTensor[DType.uint32, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].lookup_table_tt_layout, ImmutAnyOrigin]

page_size_

comptime page_size_ = page_size

quantization_enabled

comptime quantization_enabled = (scale_dtype_ != DType.invalid)

quantization_granularity

comptime quantization_granularity = quantization_granularity_

scale_dtype

comptime scale_dtype = scale_dtype_

scales_block_type

comptime scales_block_type = LayoutTensor[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scale_dtype, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scales_layout, MutAnyOrigin]

scales_layout

comptime scales_layout = Layout.row_major(PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scales_shape)

scales_shape

comptime scales_shape = IntTuple(VariadicList(-1, page_size, Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.num_heads), PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].head_dim_granularity))

scales_tt_layout

comptime scales_tt_layout = Layout[RuntimeInt[DType.int64], ComptimeInt[page_size], ComptimeInt[Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.num_heads)], ComptimeInt[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].head_dim_granularity], ComptimeInt[(ComptimeInt[page_size].static_value * ComptimeInt[(ComptimeInt[Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.num_heads)].static_value * ComptimeInt[(ComptimeInt[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].head_dim_granularity].static_value * ComptimeInt[1].static_value)].static_value)].static_value)], ComptimeInt[(ComptimeInt[Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.num_heads)].static_value * ComptimeInt[(ComptimeInt[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].head_dim_granularity].static_value * ComptimeInt[1].static_value)].static_value)], ComptimeInt[(ComptimeInt[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].head_dim_granularity].static_value * ComptimeInt[1].static_value)], ComptimeInt[1]]

scales_tt_type

comptime scales_tt_type = TileTensor[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scale_dtype, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scales_tt_layout, MutAnyOrigin]

Methods

__init__

__init__(blocks: LayoutTensor[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].blocks_layout, MutAnyOrigin], cache_lengths: LayoutTensor[DType.uint32, Layout(IntTuple(-1)), ImmutAnyOrigin], lookup_table: LayoutTensor[DType.uint32, Layout.row_major[2](), ImmutAnyOrigin], max_seq_length: UInt32, max_cache_length: UInt32, scales: OptionalReg[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scales_block_type] = None) -> Self

get_type_name

static get_type_name() -> String

Returns:

String

max_tile_size

static max_tile_size() -> Int

Returns the maximum tile size for the KVCache.

Returns:

Int

cache_lengths_nd

cache_lengths_nd(self) -> PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].cache_lengths_tt_type

Returns:

PagedKVCache

cache_length

cache_length(self, batch_idx: Int) -> Int

Returns the length of the cache for a given batch index.

Returns:

Int

num_kv_rows

num_kv_rows(self) -> Int

Returns the total number of virtual rows in this KV cache view.

Returns:

Int

row_idx

row_idx(self, batch_idx: UInt32, tok_idx: UInt32) -> UInt32

Returns the row index when viewing the memory as a matrix.

Returns:

UInt32
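For intuition, a paged row-index computation of this kind typically maps a (batch, token) pair through the lookup table to a physical page. The following Python sketch is an assumption about the semantics, not the actual Mojo code.

```python
# Hypothetical sketch of a paged row-index lookup (not the actual Mojo code).
# Assumed: lookup_table[batch][logical_page] gives the physical page id
# assigned to that request's logical page.

PAGE_SIZE = 16  # assumed page_size

def row_idx(lookup_table, batch_idx, tok_idx, page_size=PAGE_SIZE):
    logical_page = tok_idx // page_size
    physical_page = lookup_table[batch_idx][logical_page]
    # Row in the flattened (rows x head_size) matrix view:
    return physical_page * page_size + tok_idx % page_size

table = [[3, 7], [0, 5]]   # two requests, two logical pages each
r = row_idx(table, 0, 17)  # token 17 of request 0 lives on physical page 7
```

Because pages are indirected through the table, consecutive tokens of one request can land on non-contiguous physical pages, which is the point of paged KV caching.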

create_tma_tile

create_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, BK: Int = padded_depth[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, swizzle_mode, Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.head_size)]()](self, ctx: DeviceContext) -> TMATensorTile[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, 3, _padded_shape[3, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, IndexList(VariadicList(BN, 1, BK), Tuple()), swizzle_mode](), _ragged_shape[3, PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, IndexList(VariadicList(BN, 1, BK), Tuple()), swizzle_mode]()]

Creates a TMA tile for this KV cache.

Returns:

TMATensorTile

create_ragged_tma_tile

create_ragged_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, BK: Int = padded_depth[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, swizzle_mode, Int.__init__[UInt](PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].kv_params.head_size)]()](self, ctx: DeviceContext, out tma: RaggedTMA3DTile[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, swizzle_mode, BN, BK])

Returns:

RaggedTMA3DTile

create_rope_tma_tile

create_rope_tma_tile[swizzle_mode: TensorMapSwizzle, *, BN: Int, BK: Int, padded_depth: Int](self, ctx: DeviceContext, out tma: TMATensorTile[DType.bfloat16, 3, _padded_shape[3, DType.bfloat16, IndexList(VariadicList(BN, 1, BK), Tuple()), swizzle_mode](), _ragged_shape[3, DType.bfloat16, IndexList(VariadicList(BN, 1, BK), Tuple()), swizzle_mode]()])

Creates a BF16 TMA tile for the rope portion of the per-tensor rope-aware KV cache.

In the per-tensor rope-aware layout, each token row is laid out as: padded_depth FP8 bytes (content), followed by BK BF16 elements (rope). Total row bytes = padded_depth + BK * 2.

The TMA descriptor points at the rope data by offsetting blocks.ptr by padded_depth bytes, then reinterpreting as BF16. The global memory stride dimension (last dim of gmem_shape) is the total row size expressed in BF16 units: (padded_depth + BK * 2) // 2.

Returns:

TMATensorTile
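The row-size arithmetic described above can be checked with plain integer math. A minimal sketch, assuming illustrative values for padded_depth and BK:

```python
# Illustration of the rope-aware row layout arithmetic described above.
# Each token row: padded_depth FP8 bytes (content) followed by BK BF16 elements (rope).

def rope_row_stats(padded_depth: int, BK: int):
    total_row_bytes = padded_depth + BK * 2  # BF16 is 2 bytes per element
    stride_in_bf16 = total_row_bytes // 2    # last dim of gmem_shape, in BF16 units
    rope_byte_offset = padded_depth          # rope data starts after the FP8 content
    return total_row_bytes, stride_in_bf16, rope_byte_offset

# Example: padded_depth=128 FP8 bytes, BK=64 rope elements
stats = rope_row_stats(128, 64)  # (256, 128, 128)
```

Note that `total_row_bytes` is always even (padded_depth FP8 bytes plus an even number of BF16 bytes would need padded_depth even for the reinterpret cast to be aligned), so the `// 2` division is exact for the layouts this method targets.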

load

load[width: Int, output_dtype: DType = PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype](self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int) -> SIMD[output_dtype, width]

Loads an element from the given index.

Returns:

SIMD

store

store(self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int, val: SIMD[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, val.size])

Stores an element at the given index.

load_scale

load_scale[width: Int](self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int) -> SIMD[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scale_dtype, width]

Loads a quantization scale from the given index.

Returns:

SIMD

store_scale

store_scale(self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int, scales: SIMD[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scale_dtype, scales.size])

Stores the quantization scales at the given index.

load_quantized

load_quantized[width: Int](self, bs: Int, head_idx: Int, tok_idx: Int, head_dim_idx: Int) -> SIMD[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype, width]

Loads a quantized element from the given index.

Returns:

SIMD
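Block-wise dequantization of this kind typically multiplies each quantized element by the scale covering its head-dim block. The Python sketch below assumes the granularity semantics implied by quantization_granularity and head_dim_granularity; it is an illustration, not the Mojo implementation.

```python
import math

# Hypothetical sketch of block-wise dequantization (an assumption, not the Mojo code).
GRANULARITY = 128  # assumed quantization_granularity

def dequantize(quantized_row, scales_row, head_dim_idx, width, granularity=GRANULARITY):
    # One scale covers `granularity` consecutive head-dim elements, so a row of
    # head_size elements needs ceildiv(head_size, granularity) scales.
    out = []
    for i in range(width):
        d = head_dim_idx + i
        scale = scales_row[d // granularity]
        out.append(quantized_row[d] * scale)
    return out

head_size = 256
num_scales = math.ceil(head_size / GRANULARITY)  # head_dim_granularity would be 2 here
```

A load that straddles a granularity boundary (e.g. width 4 starting at head_dim_idx 126 with granularity 128) picks up two different scales, which is why the scales tensor is indexed per head-dim block rather than per row.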

empty_cache

empty_cache(self) -> Bool

Returns true if the cache length for every request is 0, false otherwise.

Returns:

Bool

max_prompt_length

max_prompt_length(self) -> UInt32

Returns the maximum sequence length across all batches of the current request.

Returns:

UInt32

max_context_length

max_context_length(self) -> UInt32

Returns the maximum cache length used across all batches of the current request.

Returns:

UInt32

block_paged_ptr

block_paged_ptr[tile_size: Int](self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int = 0) -> UnsafePointer[Scalar[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].dtype], MutAnyOrigin]

Returns:

UnsafePointer

scales_block_paged_ptr

scales_block_paged_ptr(self, batch_idx: Int, start_tok_idx: Int, head_idx: Int, head_dim_idx: Int = 0) -> UnsafePointer[Scalar[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scale_dtype], MutAnyOrigin]

Returns a pointer to the scales block at the requested indices.

Returns:

UnsafePointer

scales_raw_ptr

scales_raw_ptr(self) -> UnsafePointer[Scalar[PagedKVCache[dtype_, kv_params_, page_size, scale_dtype_, quantization_granularity_].scale_dtype], MutAnyOrigin]

Returns the base pointer to the scales tensor, or null if scales are not set.

Returns:

UnsafePointer
