Mojo function
flare_mla_decoding
flare_mla_decoding[rank: Int, cache_t: KVCacheT, mask_t: MHAMask, dtype: DType, //, config: MHAConfig[dtype], ragged: Bool = False, decoding_warp_split_k: Bool = False, per_token_scale_rope_aware: Bool = False](output: TileTensor[output.dtype, output.LayoutType, output.origin, linear_idx_type=output.linear_idx_type, element_size=output.element_size], q: TileTensor[dtype, q.LayoutType, q.origin, linear_idx_type=q.linear_idx_type, element_size=q.element_size], k: cache_t, mask_functor: mask_t, valid_length: TileTensor[DType.uint32, valid_length.LayoutType, valid_length.origin, linear_idx_type=valid_length.linear_idx_type, element_size=valid_length.element_size], scale: Float32, ctx: DeviceContext, scalar_args_buf: TileTensor[DType.int64, scalar_args_buf.LayoutType, scalar_args_buf.origin, linear_idx_type=scalar_args_buf.linear_idx_type, element_size=scalar_args_buf.element_size], q_max_seq_len: OptionalReg[Int] = None, kv_input_row_offsets: OptionalReg[LayoutTensor[DType.uint32, Layout.row_major(VariadicList(-1)), ImmutAnyOrigin]] = None, num_partitions: Optional[Int] = None, q_scale_ptr: UnsafePointer[Float32, MutAnyOrigin] = UnsafePointer())
MLA decoding kernel that is only called from the optimized compute graph.
The Q input has a shape of [seq_len, num_heads, depth]. The K input has a shape of [seq_len, 1, depth]. The V tensor is derived by reusing K, where V = K[:, :, :depth_v].
Specifically, for DeepSeek V2/3, depth = 576 and depth_v = 512.
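The V-reuse described above can be sketched in NumPy. This is a hypothetical illustration, not the Mojo kernel: `num_heads` and `kv_len` are made-up values, while `depth` and `depth_v` match the DeepSeek V2/3 figures given in the text.

```python
import numpy as np

# Hypothetical sketch of one MLA decode step, illustrating V = K[:, :, :depth_v].
# depth/depth_v match DeepSeek V2/3; num_heads and kv_len are illustrative.
depth, depth_v, num_heads, kv_len = 576, 512, 16, 8

rng = np.random.default_rng(0)
q = rng.standard_normal((1, num_heads, depth), dtype=np.float32)  # [seq_len=1, num_heads, depth]
k = rng.standard_normal((kv_len, 1, depth), dtype=np.float32)     # [kv_len, 1, depth]
v = k[:, :, :depth_v]  # V is a view into K's storage: no separate copy/load

scale = 1.0 / np.sqrt(depth)
# K's single head is shared across all query heads.
scores = np.einsum('shd,lkd->hl', q, k) * scale            # [num_heads, kv_len]
probs = np.exp(scores - scores.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)                  # softmax over kv_len
out = np.einsum('hl,lkd->hd', probs, v)                    # [num_heads, depth_v]
```

Because `v` is a basic slice of `k`, the attention output is computed without materializing a second buffer for V.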
When per_token_scale_rope_aware is True, Q and the KV cache use an interleaved FP8+BF16 layout: FP8 content (512 bytes) + BF16 rope (128 bytes) = 640 bytes per row. Q's last dimension is 640 (counted in FP8 elements) but represents 576 logical dimensions (512 nope + 64 rope).
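The row-size arithmetic above can be checked directly. A minimal sketch, assuming 1 byte per FP8 element and 2 bytes per BF16 element:

```python
# Arithmetic check of the interleaved FP8+BF16 row layout described above.
# Assumption: FP8 elements are 1 byte each, BF16 elements are 2 bytes each.
nope_dims, rope_dims = 512, 64

fp8_bytes = nope_dims * 1          # FP8 content: 512 bytes
bf16_rope_bytes = rope_dims * 2    # BF16 rope: 128 bytes
row_bytes = fp8_bytes + bf16_rope_bytes  # 640 bytes per row

# Viewed as 1-byte FP8 elements, the last dimension is 640,
# while the logical dimension count is 512 nope + 64 rope = 576.
last_dim_fp8 = row_bytes // 1
logical_dims = nope_dims + rope_dims
```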
This kernel computes attention without needing to load V twice, since V reuses K's storage. It handles decoding requests only, where q_max_seq_len = 1.
This kernel handles batches with different valid lengths (i.e., lengths before padding), which are passed in the valid_length argument.
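The per-sequence valid lengths can be pictured as a boolean mask over KV positions. A hypothetical sketch (the lengths and batch size are made up) showing how positions past each sequence's valid length are excluded from the softmax:

```python
import numpy as np

# Hypothetical ragged-batch masking: each request attends only to its first
# valid_length[i] KV positions; padded positions are excluded via -inf.
valid_length = np.array([3, 7, 5], dtype=np.uint32)  # illustrative lengths
max_len = 8

positions = np.arange(max_len)
valid = positions[None, :] < valid_length[:, None]   # [batch, max_len] bool
scores = np.zeros((3, max_len), dtype=np.float32)    # dummy decode-step scores
masked = np.where(valid, scores, -np.inf)            # exp(-inf) -> 0 weight
probs = np.exp(masked) / np.exp(masked).sum(axis=1, keepdims=True)
```

Each row of `probs` distributes all attention weight over that request's valid positions only.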
flare_mla_decoding[mask_t: MHAMask, dtype: DType, q_layout: Layout, //, config: MHAConfig[dtype] = MHAConfig(SIMD(Int[IntTuple](q_layout.shape[2])), SIMD(Int[IntTuple](q_layout.shape[3])), Optional(None), Optional(None), Optional(None), Optional(None), Optional(None), SIMD(4), SIMD(1), FlashAttentionAlgorithm(-1), TensorMapSwizzle.SWIZZLE_128B), decoding_warp_split_k: Bool = False](output: LayoutTensor[output.dtype, output.layout, output.origin, element_layout=output.element_layout, layout_int_type=output.layout_int_type, linear_idx_type=output.linear_idx_type, masked=output.masked, alignment=output.alignment], q: LayoutTensor[dtype, q_layout, q.origin, element_layout=q.element_layout, layout_int_type=q.layout_int_type, linear_idx_type=q.linear_idx_type, masked=q.masked, alignment=q.alignment], k: LayoutTensor[k.dtype, k.layout, k.origin, element_layout=k.element_layout, layout_int_type=k.layout_int_type, linear_idx_type=k.linear_idx_type, masked=k.masked, alignment=k.alignment], mask_functor: mask_t, scale: Float32, ctx: DeviceContext, scalar_args_buf: LayoutTensor[DType.int64, Layout.row_major(VariadicList(3)), MutAnyOrigin], num_partitions: Optional[Int] = None)