Mojo function
unfused_qkv_matmul_ragged_paged_gguf_quantized
unfused_qkv_matmul_ragged_paged_gguf_quantized[dtype: DType, params: KVCacheStaticParams, page_size: Int, //, quantization_encoding_q: StringSlice[StaticConstantOrigin], quantization_encoding_k: StringSlice[StaticConstantOrigin], quantization_encoding_v: StringSlice[StaticConstantOrigin]](hidden_state: LayoutTensor[DType.float32, element_layout=hidden_state.element_layout, layout_int_type=hidden_state.layout_int_type, linear_idx_type=hidden_state.linear_idx_type, masked=hidden_state.masked, alignment=hidden_state.alignment], input_row_offsets: LayoutTensor[DType.uint32, element_layout=input_row_offsets.element_layout, layout_int_type=input_row_offsets.layout_int_type, linear_idx_type=input_row_offsets.linear_idx_type, masked=input_row_offsets.masked, alignment=input_row_offsets.alignment], q_weight: LayoutTensor[DType.uint8, element_layout=q_weight.element_layout, layout_int_type=q_weight.layout_int_type, linear_idx_type=q_weight.linear_idx_type, masked=q_weight.masked, alignment=q_weight.alignment], k_weight: LayoutTensor[DType.uint8, element_layout=k_weight.element_layout, layout_int_type=k_weight.layout_int_type, linear_idx_type=k_weight.linear_idx_type, masked=k_weight.masked, alignment=k_weight.alignment], v_weight: LayoutTensor[DType.uint8, element_layout=v_weight.element_layout, layout_int_type=v_weight.layout_int_type, linear_idx_type=v_weight.linear_idx_type, masked=v_weight.masked, alignment=v_weight.alignment], kv_collection: PagedKVCacheCollection[dtype, params, page_size], layer_idx: UInt32, output: LayoutTensor[DType.float32, element_layout=output.element_layout, layout_int_type=output.layout_int_type, linear_idx_type=output.linear_idx_type, masked=output.masked, alignment=output.alignment], ctx: DeviceContextPtr)
Performs a quantized matmul, writing the output into a mutable PagedKVCacheCollection object.
Unlike the unquantized version (kv_matmul_ragged_continuous_batching), this implementation does not concatenate the q, k, and v weights together. Instead, it performs three separate matmuls. This allows the q, k, and v weights to have different quantization encodings.
This is only supported on CPU.
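The unfused pattern described above can be sketched in plain Python (not the Mojo kernel itself): rather than one matmul against a concatenated [Wq; Wk; Wv] weight, three independent matmuls run so each weight may carry its own quantization encoding. The `dequantize` helper and the encoding names are hypothetical stand-ins, not part of this API.

```python
# Illustrative sketch only, in plain Python. The real kernel operates on
# LayoutTensors and decodes GGUF-quantized uint8 weights internally.

def matmul(a, b):
    # Naive dense matmul over nested lists.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def dequantize(w, encoding):
    # Hypothetical placeholder: decode quantized blocks to floats
    # according to `encoding`. Identity here for the sketch.
    return w

hidden = [[1.0, 2.0]]              # (sum(seq_lens), hidden_dim)
wq = [[1.0, 0.0], [0.0, 1.0]]      # pretend-quantized weights; each may
wk = [[2.0, 0.0], [0.0, 2.0]]      # use a different encoding
wv = [[0.5, 0.0], [0.0, 0.5]]

# Three separate matmuls instead of one fused QKV matmul.
q = matmul(hidden, dequantize(wq, "q4_k"))
k = matmul(hidden, dequantize(wk, "q4_k"))
v = matmul(hidden, dequantize(wv, "q6_k"))
```

In the actual kernel, q is written to the output buffer while k and v are written directly into the PagedKVCacheCollection for the given layer.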
Args:
- hidden_state (LayoutTensor[DType.float32, element_layout=hidden_state.element_layout, layout_int_type=hidden_state.layout_int_type, linear_idx_type=hidden_state.linear_idx_type, masked=hidden_state.masked, alignment=hidden_state.alignment]): Tensor with shape (sum(seq_lens), num_heads * head_size).
- input_row_offsets (LayoutTensor[DType.uint32, element_layout=input_row_offsets.element_layout, layout_int_type=input_row_offsets.layout_int_type, linear_idx_type=input_row_offsets.linear_idx_type, masked=input_row_offsets.masked, alignment=input_row_offsets.alignment]): Tensor with shape (batch_size + 1,) denoting the start of each sequence along the seq_len dimension.
- q_weight (LayoutTensor[DType.uint8, element_layout=q_weight.element_layout, layout_int_type=q_weight.layout_int_type, linear_idx_type=q_weight.linear_idx_type, masked=q_weight.masked, alignment=q_weight.alignment]): Tensor with shape (num_heads * head_size, num_kv_heads * head_size).
- k_weight (LayoutTensor[DType.uint8, element_layout=k_weight.element_layout, layout_int_type=k_weight.layout_int_type, linear_idx_type=k_weight.linear_idx_type, masked=k_weight.masked, alignment=k_weight.alignment]): Tensor with shape (num_heads * head_size, num_kv_heads * head_size).
- v_weight (LayoutTensor[DType.uint8, element_layout=v_weight.element_layout, layout_int_type=v_weight.layout_int_type, linear_idx_type=v_weight.linear_idx_type, masked=v_weight.masked, alignment=v_weight.alignment]): Tensor with shape (num_heads * head_size, num_kv_heads * head_size).
- kv_collection (PagedKVCacheCollection[dtype, params, page_size]): The Collection object storing KVCache entries.
- layer_idx (UInt32): The index of the layer being executed. Used to retrieve the KVCache for the given layer from kv_collection.
- output (LayoutTensor[DType.float32, element_layout=output.element_layout, layout_int_type=output.layout_int_type, linear_idx_type=output.linear_idx_type, masked=output.masked, alignment=output.alignment]): Tensor with shape (sum(seq_lens), num_kv_heads * head_size). This is the output buffer for the Q matmul.
- ctx (DeviceContextPtr): The call context pointer, passed by the graph compiler.
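The ragged layout encoded by input_row_offsets can be illustrated with a small plain-Python sketch (hypothetical sizes; the real arguments are LayoutTensors): the offsets are prefix sums of the per-sequence lengths, so row ranges of the packed hidden_state map back to individual sequences.

```python
# Illustrative sketch of the ragged layout, not the Mojo kernel.
seq_lens = [3, 1, 2]  # hypothetical per-sequence lengths in the batch

# input_row_offsets has shape (batch_size + 1,): prefix sums of seq_lens,
# marking where each sequence starts along the packed seq_len dimension.
input_row_offsets = [0]
for n in seq_lens:
    input_row_offsets.append(input_row_offsets[-1] + n)

# hidden_state is packed as (sum(seq_lens), num_heads * head_size); the
# last offset equals the total number of packed rows.
total_rows = input_row_offsets[-1]

# Rows belonging to sequence i are input_row_offsets[i]..input_row_offsets[i+1]-1.
row_ranges = [(input_row_offsets[i], input_row_offsets[i + 1])
              for i in range(len(seq_lens))]
```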