
Mojo function

mla_prefill_branch_bf16

```mojo
mla_prefill_branch_bf16[
    collection_t: KVCollectionT, //,
    mask_str: StringSlice[StaticConstantOrigin],
    kv_input_fn: def[width: Int](IndexList[2]) capturing -> SIMD[DType.bfloat16, width],
    target: StringSlice[StaticConstantOrigin] = StringSlice("cpu"),
](
    output: TileTensor[DType.bfloat16, output.LayoutType, output.origin, linear_idx_type=output.linear_idx_type, element_size=output.element_size],
    q: TileTensor[DType.bfloat16, q.LayoutType, q.origin, linear_idx_type=q.linear_idx_type, element_size=q.element_size],
    input_row_offsets: TileTensor[DType.uint32, input_row_offsets.LayoutType, input_row_offsets.origin, linear_idx_type=input_row_offsets.linear_idx_type, element_size=input_row_offsets.element_size],
    freqs_cis: TileTensor[freqs_cis.dtype, freqs_cis.LayoutType, freqs_cis.origin, linear_idx_type=freqs_cis.linear_idx_type, element_size=freqs_cis.element_size],
    kv_norm_gamma: TileTensor[kv_norm_gamma.dtype, kv_norm_gamma.LayoutType, kv_norm_gamma.origin, linear_idx_type=kv_norm_gamma.linear_idx_type, element_size=kv_norm_gamma.element_size],
    kv_collection: collection_t,
    layer_idx: UInt32,
    scale: Float32,
    epsilon: Float32,
    buffer_row_offsets: TileTensor[DType.uint32, buffer_row_offsets.LayoutType, buffer_row_offsets.origin, linear_idx_type=buffer_row_offsets.linear_idx_type, element_size=buffer_row_offsets.element_size],
    cache_offsets: TileTensor[DType.uint32, cache_offsets.LayoutType, cache_offsets.origin, linear_idx_type=cache_offsets.linear_idx_type, element_size=cache_offsets.element_size],
    buffer_length: Int,
    w_k: TileTensor[DType.bfloat16, w_k.LayoutType, w_k.origin, linear_idx_type=w_k.linear_idx_type, element_size=w_k.element_size],
    w_uv: TileTensor[DType.bfloat16, w_uv.LayoutType, w_uv.origin, linear_idx_type=w_uv.linear_idx_type, element_size=w_uv.element_size],
    ctx: DeviceContext,
)
```

BF16 Multi-head Latent Attention (MLA) prefill path.

Applies RoPE (using `freqs_cis`) and RMSNorm (using `kv_norm_gamma` and `epsilon`) to the incoming KV values, up-projects the latent KV through `w_k` and `w_uv` to full K and V, then runs prefill attention scaled by `scale`.
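The data flow above can be sketched in NumPy. This is an illustrative, simplified single-head model of the math, not the kernel itself: it assumes RoPE is applied across the whole head dimension, a standard causal mask, and omits the KV cache, ragged batching (`input_row_offsets`, `buffer_row_offsets`, `cache_offsets`), and bfloat16 storage.

```python
import numpy as np

def rms_norm(x, gamma, eps):
    # RMSNorm: x / sqrt(mean(x^2) + eps), scaled by the learned gamma.
    return x / np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps) * gamma

def rope(x, freqs_cis):
    # Rotate (even, odd) dim pairs by precomputed complex frequencies.
    xc = (x[..., 0::2] + 1j * x[..., 1::2]) * freqs_cis
    out = np.empty_like(x)
    out[..., 0::2] = xc.real
    out[..., 1::2] = xc.imag
    return out

def mla_prefill(q, kv_latent, gamma, w_k, w_uv, freqs_cis, scale, eps=1e-6):
    # 1. Normalize the latent KV entries.
    latent = rms_norm(kv_latent, gamma, eps)
    # 2. Up-project latent KV to full K and V (hypothetical 2-D weight shapes).
    k = latent @ w_k
    v = latent @ w_uv
    # 3. Apply RoPE to queries and keys.
    q = rope(q, freqs_cis)
    k = rope(k, freqs_cis)
    # 4. Causal (prefill) softmax attention.
    scores = scale * (q @ k.T)
    scores += np.triu(np.full(scores.shape, -np.inf), k=1)
    p = np.exp(scores - scores.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    return p @ v
```

Because the mask is causal, the first output row attends only to the first key, so it equals the first up-projected V row.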
