Mojo function
generic_fused_qk_rope_bshd_paged_ragged
generic_fused_qk_rope_bshd_paged_ragged[dtype: DType, freq_dtype: DType, //, *, interleaved: Bool, has_position_ids: Bool, target: StringSlice[StaticConstantOrigin], mrope_types: Variadic[CoordLike] = , mrope_section: Optional[Coord[mrope_types]] = None](q_proj: TileTensor[dtype, LayoutType, origin, linear_idx_type=linear_idx_type, element_shape_types=element_shape_types], input_row_offsets: TileTensor[DType.uint32, LayoutType, origin, linear_idx_type=linear_idx_type, element_shape_types=element_shape_types], kv_collection: PagedKVCacheCollection[dtype_, kv_params_, page_size, scale_dtype_], freqs_cis: TileTensor[freq_dtype, LayoutType, origin, linear_idx_type=linear_idx_type, element_shape_types=element_shape_types], position_ids: TileTensor[DType.uint32, LayoutType, origin, linear_idx_type=linear_idx_type, element_shape_types=element_shape_types], layer_idx: UInt32, output: TileTensor[dtype, LayoutType, origin, linear_idx_type=linear_idx_type, element_shape_types=element_shape_types], context: DeviceContextPtr = DeviceContextPtr())
Applies fused RoPE to the Q and K projections.
We have a manually fused QKV projection with mo.opaque types in our Llama model. Due to a limitation in custom op definitions, we can't declare both a tensor and an opaque type as outputs of a custom kernel. This forces us to declare only Q_proj as an output of the QKV projection. If we immediately follow the QKV projection kernel with a RoPE kernel applied to K, we get a race condition because the graph compiler has no visibility into the dependency between these kernels in the graph definition. Here we fuse the RoPE kernel for Q_proj with the one for K_proj, so the K_proj RoPE only executes after the QKV projection completes.
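As a rough illustration only (not this kernel's implementation, which operates on paged, ragged KV-cache tensors), the sketch below shows the interleaved RoPE rotation that is applied to each (even, odd) pair of a Q or K head vector; the per-pair angles correspond to what `freqs_cis` encodes for a given token position. All names and values in the sketch are hypothetical.

```mojo
from collections import List
from math import cos, sin

fn apply_rope_interleaved(x: List[Float32], thetas: List[Float32]) -> List[Float32]:
    # Rotate consecutive (even, odd) pairs of one head vector by the
    # per-pair angles in `thetas` (illustrative stand-in for one row of freqs_cis).
    var out = List[Float32]()
    for i in range(len(thetas)):
        var x0 = x[2 * i]
        var x1 = x[2 * i + 1]
        var c = cos(thetas[i])
        var s = sin(thetas[i])
        out.append(x0 * c - x1 * s)
        out.append(x0 * s + x1 * c)
    return out

fn main():
    # Toy head vector with head_dim = 4, so two (even, odd) pairs and two angles.
    var q = List[Float32](1.0, 0.0, 0.5, -0.5)
    var thetas = List[Float32](0.0, 0.1)
    var rotated = apply_rope_interleaved(q, thetas)
    for i in range(len(rotated)):
        print(rotated[i])
```

The fused kernel performs this rotation for both the Q and K projections in one launch, which is what guarantees the K-side RoPE cannot run before the QKV projection has produced K_proj.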