Mojo function

generic_fused_qkv_matmul_kv_cache_paged_ragged_scale_float4

generic_fused_qkv_matmul_kv_cache_paged_ragged_scale_float4[dtype: DType, weight_dtype: DType, output_dtype: DType, scale_dtype: DType, a_layout: Layout, b_layout: Layout, sfa_layout: Layout, sfb_layout: Layout, SF_VECTOR_SIZE: Int, target: StringSlice[StaticConstantOrigin] = "cpu"](hidden_state: LayoutTensor[dtype, a_layout, MutAnyOrigin], input_row_offsets: LayoutTensor[DType.uint32, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment], weight: LayoutTensor[weight_dtype, b_layout, MutAnyOrigin], input_scale: LayoutTensor[scale_dtype, sfa_layout, MutAnyOrigin], weight_scale: LayoutTensor[scale_dtype, sfb_layout, MutAnyOrigin], tensor_sf: Float32, kv_collection: PagedKVCacheCollection[dtype_, kv_params_, page_size, scale_dtype_], layer_idx: UInt32, output: LayoutTensor[output_dtype, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment], ctx: DeviceContextPtr)

Performs a fused QKV matmul. Q projections are written to the output argument, while K and V projections are written in-place into the K and V caches retrieved from kv_collection.

Args:

  • hidden_state (LayoutTensor): Tensor with shape (sum(seq_lens), num_heads * head_size // 2).
  • input_row_offsets (LayoutTensor): Tensor with shape (batch_size + 1,). The value at each index is the start_idx of the corresponding batch in hidden_state.
  • weight (LayoutTensor): Tensor with shape (num_heads * head_size, num_kv_heads * head_size // 2).
  • input_scale (LayoutTensor): 5D blockwise scale tensor applied to the input Tensor.
  • weight_scale (LayoutTensor): 5D blockwise scale tensor applied to the weight Tensor.
  • tensor_sf (Float32): Per-tensor scaling factor.
  • kv_collection (PagedKVCacheCollection): The object storing the KVCache for this layer.
  • layer_idx (UInt32): The current layer, used to retrieve the KVCache object from kv_collection.
  • output (LayoutTensor): The pre-allocated output buffer for Q projections. K and V projections are written in-place to k_cache and v_cache. Shape: (sum(seq_lens), num_heads * head_size).
  • ctx (DeviceContextPtr): The call context pointer, passed by the graph compiler.
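To make the ragged indexing concrete: a minimal sketch of how input_row_offsets maps each batch to its rows in the packed hidden_state tensor. This is an illustration only; rows_for_batch is a hypothetical helper (not part of this API), and the offsets are shown as a plain List[UInt32] rather than a LayoutTensor.

```mojo
# Hypothetical illustration of the ragged layout used by hidden_state and
# input_row_offsets. For a batch of 3 sequences with lengths 2, 4, and 3,
# hidden_state packs all tokens contiguously (sum(seq_lens) = 9 rows) and
# input_row_offsets = [0, 2, 6, 9], so batch i occupies rows
# input_row_offsets[i] ..< input_row_offsets[i + 1].
fn rows_for_batch(input_row_offsets: List[UInt32], batch_idx: Int) -> (Int, Int):
    # Returns the (start, end) row range of batch_idx within hidden_state.
    return (Int(input_row_offsets[batch_idx]), Int(input_row_offsets[batch_idx + 1]))
```

Note that input_row_offsets has batch_size + 1 entries, so the final offset equals sum(seq_lens) and the length of batch i is always input_row_offsets[i + 1] - input_row_offsets[i].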
