Mojo function
generic_flash_attention_kv_cache_padded
```mojo
generic_flash_attention_kv_cache_padded[
    collection_t: KVCollectionT,
    dtype: DType, //,
    *,
    target: StringSlice[StaticConstantOrigin],
    mask_str: StringSlice[StaticConstantOrigin],
    score_mod_str: StringSlice[StaticConstantOrigin],
    local_window_size: Int = -1,
    num_heads: Int = -1,
](
    q: LayoutTensor[dtype, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    kv_collection: collection_t,
    layer_idx: UInt32,
    valid_lengths: ManagedTensorSlice[io_spec, static_spec=static_spec],
    scale: Float32,
    output: LayoutTensor[dtype, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    context: DeviceContextPtr,
    sink_weights: OptionalReg[LayoutTensor[dtype, Layout.row_major(-1), MutableAnyOrigin]] = None,
)
```
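The signature declares compile-time parameters (the target device string, the mask and score-modifier selectors, and optional local window size and head count) and runtime arguments (the query tensor, a KV cache collection, the layer index, per-sequence valid lengths for padded batches, the softmax scale, the output tensor, a device context, and optional attention-sink weights). The sketch below shows only the call shape implied by that signature; the string parameter values (`"gpu"`, `"causal"`, `"identity"`) and the argument names are hypothetical placeholders, and constructing the tensors and cache is omitted.

```mojo
# Hypothetical call-shape sketch, not a runnable example from the docs.
# All argument names and parameter string values below are placeholders.
generic_flash_attention_kv_cache_padded[
    target="gpu",              # assumed device-target selector
    mask_str="causal",         # assumed attention-mask selector
    score_mod_str="identity",  # assumed score-modifier selector
](
    q_dev,      # query LayoutTensor for this layer
    kv_cache,   # KV cache collection (collection_t) shared across layers
    layer_idx,  # UInt32 index of the layer whose cache to use
    lengths,    # ManagedTensorSlice of valid (unpadded) sequence lengths
    scale,      # Float32 softmax scale, commonly 1 / sqrt(head_dim)
    out_dev,    # output LayoutTensor with the same layout as q_dev
    ctx,        # DeviceContextPtr for the target device
    # sink_weights is optional and defaults to None, so it is omitted here
)
```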