Mojo function
generic_fused_qkv_matmul_kv_cache_bshd_paged
```mojo
generic_fused_qkv_matmul_kv_cache_bshd_paged[dtype: DType, target: StringSlice[StaticConstantOrigin] = "cpu"](hidden_state: LayoutTensor[dtype, hidden_state.layout, hidden_state.origin, element_layout=hidden_state.element_layout, layout_int_type=hidden_state.layout_int_type, linear_idx_type=hidden_state.linear_idx_type, masked=hidden_state.masked, alignment=hidden_state.alignment], weight: LayoutTensor[dtype, weight.layout, weight.origin, element_layout=weight.element_layout, layout_int_type=weight.layout_int_type, linear_idx_type=weight.linear_idx_type, masked=weight.masked, alignment=weight.alignment], kv_collection: PagedKVCacheCollection[kv_collection.dtype_, kv_collection.kv_params_, kv_collection.page_size, kv_collection.scale_dtype_, kv_collection.quantization_granularity_], layer_idx: UInt32, valid_lengths: LayoutTensor[DType.uint32, Layout.row_major(-1), ImmutAnyOrigin], output: LayoutTensor[dtype, output.layout, output.origin, address_space=output.address_space, element_layout=output.element_layout, layout_int_type=output.layout_int_type, linear_idx_type=output.linear_idx_type, masked=output.masked, alignment=output.alignment], ctx: DeviceContextPtr)
```
Performs a fused QKV matmul. Q outputs are written to the `output` argument, while K and V outputs are written in-place into `k_cache` and `v_cache`. Only positions within `valid_lengths` are written to the KV cache.
Args:

- `hidden_state` (LayoutTensor): Tensor with shape (batch_size, seq_len, num_heads * head_size).
- `weight` (LayoutTensor): Tensor with shape (num_heads * head_size, num_kv_heads * head_size).
- `kv_collection` (PagedKVCacheCollection): The historical KVCache for keys and values. The KVCache for this layer is retrieved via `layer_idx`.
- `layer_idx` (UInt32): The index of the layer being executed. Used to retrieve the KVCache for the given layer from `kv_collection`.
- `valid_lengths` (LayoutTensor): Tensor of shape [batch] containing the valid length for each sequence. K and V are only written to cache for positions within these lengths.
- `output` (LayoutTensor): The pre-allocated output buffer for Q projections. K and V projections are written in-place to `k_cache` and `v_cache`.
- `ctx` (DeviceContextPtr): The call context pointer, passed by the graph compiler.
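The semantics above can be illustrated with a conceptual NumPy sketch. This is not the Mojo kernel itself, and the packed Q/K/V weight layout, dimensions, and dense cache buffers here are assumptions for illustration only; it shows the fused projection producing Q, K, and V in one matmul, with Q going to the output buffer and K/V written to cache only for positions within `valid_lengths`.

```python
import numpy as np

# Hypothetical dimensions for illustration.
batch, seq_len, num_heads, num_kv_heads, head_size = 2, 4, 8, 2, 16
q_dim = num_heads * head_size        # 128
kv_dim = num_kv_heads * head_size    # 32

rng = np.random.default_rng(0)
hidden_state = rng.random((batch, seq_len, q_dim), dtype=np.float32)
# Assumed fused layout: Q, K, and V projection weights packed side by side.
weight = rng.random((q_dim, q_dim + 2 * kv_dim), dtype=np.float32)
valid_lengths = np.array([3, 2], dtype=np.uint32)

# One matmul produces Q, K, and V together ("fused QKV matmul").
qkv = hidden_state @ weight          # (batch, seq_len, q_dim + 2*kv_dim)
q, k, v = np.split(qkv, [q_dim, q_dim + kv_dim], axis=-1)

# Q projections go to the pre-allocated output buffer.
output = q

# K and V are written to the cache only for positions within valid_lengths
# (dense buffers stand in for the paged KV cache here).
k_cache = np.zeros((batch, seq_len, kv_dim), dtype=np.float32)
v_cache = np.zeros_like(k_cache)
for b, n in enumerate(valid_lengths):
    k_cache[b, :n] = k[b, :n]
    v_cache[b, :n] = v[b, :n]
```

Positions at or beyond each sequence's valid length are left untouched in the cache, matching the note that only valid positions are written.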