Mojo function

generic_flash_attention_kv_cache_ragged_sink

generic_flash_attention_kv_cache_ragged_sink[
    collection_t: KVCollectionT,
    dtype: DType, //,
    *,
    target: StringSlice[StaticConstantOrigin],
    mask_str: StringSlice[StaticConstantOrigin],
    score_mod_str: StringSlice[StaticConstantOrigin],
    local_window_size: Int = -1,
](
    q: LayoutTensor[dtype, q.layout, q.origin, element_layout=q.element_layout, layout_int_type=q.layout_int_type, linear_idx_type=q.linear_idx_type, masked=q.masked, alignment=q.alignment],
    input_row_offsets: LayoutTensor[DType.uint32, Layout.row_major(-1), ImmutAnyOrigin],
    kv_collection: collection_t,
    layer_idx: UInt32,
    scale: Float32,
    output: LayoutTensor[dtype, output.layout, output.origin, element_layout=output.element_layout, layout_int_type=output.layout_int_type, linear_idx_type=output.linear_idx_type, masked=output.masked, alignment=output.alignment],
    context: DeviceContextPtr,
    sink_weights: LayoutTensor[dtype, sink_weights.layout, sink_weights.origin, element_layout=sink_weights.element_layout, layout_int_type=sink_weights.layout_int_type, linear_idx_type=sink_weights.linear_idx_type, masked=sink_weights.masked, alignment=sink_weights.alignment],
)
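The signature takes flattened "ragged" tensors: `q` holds all tokens of all sequences in one leading dimension, with `input_row_offsets` marking where each sequence starts and ends, and `sink_weights` supplying learned attention-sink logits. As a rough illustration of the semantics (an assumption about the sink mechanism, not taken from this page), the following minimal single-head NumPy sketch treats the sink weight as an extra softmax logit that absorbs attention mass without contributing a value vector:

```python
import numpy as np

def softmax_with_sink(scores, sink):
    # Append the learned sink logit, softmax jointly, then drop the sink
    # column: the sink soaks up probability mass but has no value vector.
    logits = np.concatenate([scores, [sink]])
    logits -= logits.max()          # numerical stability
    p = np.exp(logits)
    p /= p.sum()
    return p[:-1]

def ragged_attention_sink(q, k, v, input_row_offsets, scale, sink_weight):
    # q, k, v: [total_tokens, head_dim], flattened over a ragged batch;
    # input_row_offsets[i] : input_row_offsets[i+1] delimits sequence i.
    # This is a naive causal reference, not a flash-attention kernel.
    out = np.zeros_like(q)
    for i in range(len(input_row_offsets) - 1):
        s, e = input_row_offsets[i], input_row_offsets[i + 1]
        for t in range(s, e):
            scores = (k[s:t + 1] @ q[t]) * scale   # keys up to position t
            probs = softmax_with_sink(scores, sink_weight)
            out[t] = probs @ v[s:t + 1]
    return out
```

The names `softmax_with_sink` and `ragged_attention_sink` are hypothetical helpers for illustration only; the real kernel fuses this computation, applies the configured mask and score modifier, and reads keys/values from `kv_collection` at `layer_idx`.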