Mojo function

generic_fused_qkv_matmul_kv_cache_paged_ragged

```mojo
generic_fused_qkv_matmul_kv_cache_paged_ragged[
    type: DType,
    weight_type: DType,
    target: StringSlice[StaticConstantOrigin] = "cpu",
    group_size: OptionalReg[Int] = OptionalReg[Int](None),
    has_zp: OptionalReg[Bool] = OptionalReg[Bool](None),
](
    hidden_state: NDBuffer[type, 2, origin, shape],
    input_row_offsets: NDBuffer[uint32, 1, origin, shape, strides],
    weight: NDBuffer[weight_type, 2, origin, shape],
    kv_collection: PagedKVCacheCollection[type_, kv_params_, page_size],
    layer_idx: SIMD[uint32, 1],
    output: NDBuffer[type, 2, origin, shape],
    ctx: DeviceContextPtr,
)
```

Performs a fused QKV matmul over ragged (unpadded) input. Q projections are written to the output argument, while K and V projections are written in place into the k_cache and v_cache stored in the paged kv_collection.
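As a rough mental model (not this kernel's implementation), the sketch below shows the semantics of a fused QKV projection for a single token in plain Mojo: one matmul against a concatenated weight, whose output columns are then split into Q (returned via output) and K/V (which this kernel instead scatters into the paged KV cache). The dimensions and the (hidden_dim, q_dim + 2 * kv_dim) fused-weight layout here are illustrative assumptions, not the layout this function requires.

```mojo
# A minimal sketch of fused-QKV semantics, assuming a row-major fused weight
# of shape (hidden_dim, q_dim + 2 * kv_dim). Illustrative only.
fn main():
    alias hidden_dim = 4
    alias q_dim = 4   # assumed num_heads * head_size
    alias kv_dim = 2  # assumed num_kv_heads * head_size
    alias out_dim = q_dim + 2 * kv_dim

    # One token's hidden vector and a dummy fused weight, filled with ones.
    var hidden = List[Float32]()
    for _ in range(hidden_dim):
        hidden.append(1.0)
    var weight = List[Float32]()
    for _ in range(hidden_dim * out_dim):
        weight.append(1.0)

    # A single fused matmul produces Q, K, and V in one pass over hidden.
    var fused = List[Float32]()
    for j in range(out_dim):
        var acc: Float32 = 0.0
        for i in range(hidden_dim):
            acc += hidden[i] * weight[i * out_dim + j]
        fused.append(acc)

    # Split the fused result: Q goes to the output buffer; K and V would be
    # written in place into the paged k_cache and v_cache.
    var q = List[Float32]()
    var k = List[Float32]()
    var v = List[Float32]()
    for j in range(out_dim):
        if j < q_dim:
            q.append(fused[j])
        elif j < q_dim + kv_dim:
            k.append(fused[j])
        else:
            v.append(fused[j])
    print(len(q), len(k), len(v))  # 4 2 2
```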

Args:

  • hidden_state (NDBuffer[type, 2, origin, shape]): Tensor with shape (sum(seq_lens), num_heads * head_size).
  • input_row_offsets (NDBuffer[uint32, 1, origin, shape, strides]): Tensor with shape (batch_size + 1,). The value at each index is the start_idx of the corresponding batch in hidden_state (see the sketch after this list).
  • weight (NDBuffer[weight_type, 2, origin, shape]): Tensor with shape (num_heads * head_size, num_kv_heads * head_size).
  • kv_collection (PagedKVCacheCollection[type_, kv_params_, page_size]): The object storing the KVCache for this layer.
  • layer_idx (SIMD[uint32, 1]): The current layer, used to retrieve the KVCache object from kv_collection.
  • output (NDBuffer[type, 2, origin, shape]): The pre-allocated output buffer for Q projections. K and V projections are written in-place to k_cache and v_cache. Shape: (sum(seq_lens), num_heads * head_size).
  • ctx (DeviceContextPtr): The call context pointer, passed by the graph compiler.
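Because hidden_state is ragged (sequences are packed with no padding rows), the rows belonging to each batch are located through input_row_offsets. Below is a minimal sketch, using made-up sequence lengths, of how such offsets relate to the packed layout; it is not code from the library.

```mojo
fn main():
    # Hypothetical batch: three sequences of lengths 4, 2, and 7.
    var seq_lens = List[Int](4, 2, 7)

    # input_row_offsets has batch_size + 1 entries: entry b is the row in
    # hidden_state where batch b starts; the final entry is sum(seq_lens).
    var input_row_offsets = List[UInt32]()
    var offset: UInt32 = 0
    input_row_offsets.append(offset)
    for i in range(len(seq_lens)):
        offset += UInt32(seq_lens[i])
        input_row_offsets.append(offset)

    # Rows for batch b are [input_row_offsets[b], input_row_offsets[b + 1]).
    for b in range(len(input_row_offsets)):
        print(input_row_offsets[b])  # 0, 4, 6, 13
```

With these lengths, hidden_state would have sum(seq_lens) = 13 rows, and batch 1 would occupy rows [4, 6).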
