
Mojo function

generic_fused_qkv_matmul_kv_cache_bshd_paged

```mojo
generic_fused_qkv_matmul_kv_cache_bshd_paged[
    dtype: DType,
    target: StringSlice[StaticConstantOrigin] = StringSlice("cpu"),
](
    hidden_state: LayoutTensor[dtype, element_layout=hidden_state.element_layout, layout_int_type=hidden_state.layout_int_type, linear_idx_type=hidden_state.linear_idx_type, masked=hidden_state.masked, alignment=hidden_state.alignment],
    weight: LayoutTensor[dtype, element_layout=weight.element_layout, layout_int_type=weight.layout_int_type, linear_idx_type=weight.linear_idx_type, masked=weight.masked, alignment=weight.alignment],
    kv_collection: PagedKVCacheCollection,
    layer_idx: UInt32,
    valid_lengths: LayoutTensor[DType.uint32, Layout.row_major(-1), ImmutAnyOrigin],
    output: LayoutTensor[dtype, address_space=output.address_space, element_layout=output.element_layout, layout_int_type=output.layout_int_type, linear_idx_type=output.linear_idx_type, masked=output.masked, alignment=output.alignment],
    ctx: DeviceContextPtr,
)
```

Performs a fused QKV matmul. The Q projection is written to the `output` argument, while the K and V projections are written in-place into `k_cache` and `v_cache`.

Only positions within `valid_lengths` are written to the KV cache.
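The write pattern above can be sketched in NumPy. This is an illustrative sketch only, not the Mojo kernel: the shapes, the per-batch cache layout, and the names `q_dim`/`k_dim` are hypothetical assumptions, and the fused weight is assumed to stack the Q, K, and V projection columns side by side.

```python
import numpy as np

def fused_qkv_matmul(hidden_state, weight, k_cache, v_cache, output,
                     valid_lengths, seq_len, q_dim, k_dim):
    # Hypothetical shapes: hidden_state is (batch*seq_len, hidden), and
    # weight fuses the three projections as (hidden, q_dim + k_dim + v_dim).
    # One matmul computes Q, K, and V in a single pass over hidden_state.
    qkv = hidden_state @ weight
    q = qkv[:, :q_dim]
    k = qkv[:, q_dim:q_dim + k_dim]
    v = qkv[:, q_dim + k_dim:]

    # Q goes to the output buffer.
    output[:] = q

    # K and V are written in-place into the caches, but only for token
    # positions within each sequence's valid length; the rest of the
    # cache is left untouched.
    for b, n in enumerate(valid_lengths):
        start = b * seq_len
        k_cache[b, :n] = k[start:start + n]
        v_cache[b, :n] = v[start:start + n]
```

Fusing the three projections into one matmul reads `hidden_state` once instead of three times, which is why the kernel takes a single fused `weight` rather than separate Q, K, and V weights.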

Args: