
Mojo function

generic_fused_qkv_matmul_kv_cache_bshd_continuous_batch

generic_fused_qkv_matmul_kv_cache_bshd_continuous_batch[type: DType, target: StringSlice[StaticConstantOrigin] = __init__[__mlir_type.!kgen.string]("cpu")](hidden_state: NDBuffer[type, 3, origin, shape], weight: NDBuffer[type, 2, origin, shape], kv_collection: ContinuousBatchingKVCacheCollection[type_, kv_params_], layer_idx: SIMD[uint32, 1], output: NDBuffer[type, 3, origin, shape], ctx: DeviceContextPtr)

Performs a fused QKV matmul: the query (Q) projections are written to the output argument, while the key (K) and value (V) projections are written in place into the k_cache and v_cache buffers held by kv_collection.

Args:

  • hidden_state (NDBuffer[type, 3, origin, shape]): Tensor with shape (batch_size, seq_len, num_heads * head_size).
  • weight (NDBuffer[type, 2, origin, shape]): Tensor with shape (num_heads * head_size, num_kv_heads * head_size).
  • kv_collection (ContinuousBatchingKVCacheCollection[type_, kv_params_]): The historical KVCache for keys and values. The KVCache for this layer is retrieved via layer_idx.
  • layer_idx (SIMD[uint32, 1]): The index of the layer being executed. Used to retrieve the KVCache for the given layer from kv_collection.
  • output (NDBuffer[type, 3, origin, shape]): The pre-allocated output buffer for the Q projections. K and V projections are written in place to k_cache and v_cache.
  • ctx (DeviceContextPtr): The call context pointer, passed by the graph compiler.
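
To make the shape relationships above concrete, here is a minimal, self-contained Mojo sketch that only prints the buffer shapes implied by the argument descriptions. The hyperparameter values (batch size, sequence length, head counts, head size) are illustrative assumptions, not defaults of this kernel, and the sketch does not call the kernel itself.

```mojo
def main():
    # Illustrative hyperparameters -- assumptions for this sketch only.
    var batch_size = 2
    var seq_len = 16
    var num_heads = 8      # query heads
    var num_kv_heads = 2   # key/value heads
    var head_size = 64

    var q_dim = num_heads * head_size
    var kv_dim = num_kv_heads * head_size

    # hidden_state: (batch_size, seq_len, num_heads * head_size)
    print("hidden_state:", batch_size, "x", seq_len, "x", q_dim)

    # weight: (num_heads * head_size, num_kv_heads * head_size), as documented above
    print("weight:", q_dim, "x", kv_dim)

    # output holds only the Q projections:
    # (batch_size, seq_len, num_heads * head_size)
    print("output (Q):", batch_size, "x", seq_len, "x", q_dim)

    # K and V projections are written in place into the per-layer caches
    # retrieved from kv_collection via layer_idx; each token contributes
    # num_kv_heads * head_size values to k_cache and the same to v_cache.
    print("K per token:", kv_dim)
    print("V per token:", kv_dim)
```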
