Mojo module
kv_cache_ragged
Functions
- generic_cross_attention_kv_cache
- generic_flare_mla_decode_kv_cache_ragged
- generic_flare_mla_decompress_k_cache_ragged_paged
- generic_flare_mla_prefill_kv_cache_ragged
- generic_flare_mla_prefill_ragged_paged_plan
- generic_flash_attention_kv_cache_ragged
- generic_flash_attention_kv_cache_ragged_sink
- generic_fused_qk_rope_bshd_continuous_batch_ragged: Performs a fused RoPE projection for the Q and K projections (continuous-batching KV cache).
- generic_fused_qk_rope_bshd_paged_ragged: Performs a fused RoPE projection for the Q and K projections (paged KV cache).
- generic_fused_qkv_matmul_kv_cache_cont_batch_ragged: Performs a fused QKV matmul. Q outputs are written to the output argument, while K and V outputs are written in-place into k_cache and v_cache (see the sketch after this list).
- generic_fused_qkv_matmul_kv_cache_paged_ragged: Performs a fused QKV matmul. Q outputs are written to the output argument, while K and V outputs are written in-place into k_cache and v_cache.
- generic_fused_qkv_matmul_kv_cache_paged_ragged_bias: Performs a fused QKV matmul. Q outputs are written to the output argument, while K and V outputs are written in-place into k_cache and v_cache.
- generic_fused_qkv_matmul_kv_cache_paged_ragged_scale: Performs a fused QKV matmul. Q outputs are written to the output argument, while K and V outputs are written in-place into k_cache and v_cache.
- generic_kv_cache_radd_dispatch
- k_matmul_ragged_paged: Performs a matmul, writing the output into a mutable PagedKVCacheCollection object.
- kv_cache_store_ragged
- kv_matmul_ragged_paged: Performs a matmul, writing the output into a mutable PagedKVCacheCollection object.
- unfused_qkv_matmul_ragged_paged_gguf_quantized: Performs a quantized matmul, writing the output into a mutable PagedKVCacheCollection object.
- valid_length_managed_tensor_slice_to_ndbuffer
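
All of the functions above operate on ragged batches: instead of padding every sequence in a batch to a common length, tokens from all sequences are packed contiguously and addressed through a row-offsets array. The following is a minimal sketch of that layout in plain Mojo; seq_lens and row_offsets are illustrative names, not this module's API:

```mojo
fn main():
    # Three sequences of lengths 2, 4, and 1, packed into one token buffer
    # with no padding -- the "ragged" layout these kernels operate on.
    var seq_lens = List[Int](2, 4, 1)

    # Exclusive prefix sum: row_offsets[i] is where sequence i starts in
    # the packed buffer, and the final entry is the total token count.
    var row_offsets = List[Int](0)
    var total = 0
    for i in range(len(seq_lens)):
        total += seq_lens[i]
        row_offsets.append(total)

    # Token t of sequence i lives at flat index row_offsets[i] + t.
    for i in range(len(seq_lens)):
        print("sequence", i, "spans rows", row_offsets[i], "to", row_offsets[i + 1] - 1)
```

This prefix-sum indexing is what lets the kernels take per-sequence offsets instead of a rectangular batch dimension and skip padded positions entirely.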
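
The fused QKV matmul variants share one write pattern: a single matmul produces the concatenated [Q | K | V] columns, after which the Q slice goes to the output argument and the K and V slices are written in-place into their caches. Below is a hypothetical toy in Mojo showing just that column split; split_qkv_row and its parameters are illustrative, not this module's signatures, and mut follows current Mojo argument conventions:

```mojo
# Split one row of a fused [Q | K | V] matmul result: columns [0, q_dim)
# are Q, the next kv_dim columns are K, and the final kv_dim columns are V.
fn split_qkv_row(
    qkv_row: List[Float64],
    q_dim: Int,
    kv_dim: Int,
    mut q_out: List[Float64],
    mut k_cache: List[Float64],
    mut v_cache: List[Float64],
):
    for j in range(q_dim):
        q_out.append(qkv_row[j])  # Q is returned through the output buffer
    for j in range(kv_dim):
        k_cache.append(qkv_row[q_dim + j])           # K written into cache
        v_cache.append(qkv_row[q_dim + kv_dim + j])  # V written into cache

fn main():
    # One row of a toy fused result with q_dim=4 and kv_dim=2:
    # columns 0-3 -> Q, 4-5 -> K, 6-7 -> V.
    var row = List[Float64](1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0)
    var q = List[Float64]()
    var k = List[Float64]()
    var v = List[Float64]()
    split_qkv_row(row, 4, 2, q, k, v)
    print(len(q), len(k), len(v))  # 4 2 2
```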