Mojo function
block_scaled_matmul_with_epilogue
block_scaled_matmul_with_epilogue[c_type: DType, a_type: DType, b_type: DType, scales_dtype: DType, c_layout: Layout, a_layout: Layout, b_layout: Layout, sfa_layout: Layout, sfb_layout: Layout, //, *, SF_VECTOR_SIZE: Int, transpose_b: Bool = True, elementwise_lambda_fn: OptionalReg[fn[dtype: DType, width: Int, *, alignment: Int = 1](IndexList[2], SIMD[dtype, width]) capturing -> None] = None](c: LayoutTensor[c_type, c_layout, MutAnyOrigin], a: LayoutTensor[a_type, a_layout, MutAnyOrigin], b: LayoutTensor[b_type, b_layout, MutAnyOrigin], a_scales: LayoutTensor[scales_dtype, sfa_layout, MutAnyOrigin], b_scales: LayoutTensor[scales_dtype, sfb_layout, MutAnyOrigin], tensor_sf: Float32, ctx: DeviceContext)
Our sm100 block scaled matmul kernel does not yet support fusion of elementwise operations. This is a temporary implementation that runs our sm100 block scaled matmul kernel and then dispatches a separate epilogue kernel to apply the elementwise operations.
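The sketch below illustrates what a function matching the `elementwise_lambda_fn` parameter signature might look like. The flat `out_ptr` buffer, the row-major indexing, and the wrapper function `make_relu_epilogue_example` are hypothetical stand-ins chosen for this example; in practice the closure would typically write into the output tensor `c` directly.

```mojo
from utils.index import IndexList


# Hypothetical wrapper holding the values the epilogue closure captures.
fn make_relu_epilogue_example(
    out_ptr: UnsafePointer[Scalar[DType.float32]], num_cols: Int
):
    @parameter
    @always_inline
    fn relu_epilogue[
        dtype: DType, width: Int, *, alignment: Int = 1
    ](idx: IndexList[2], val: SIMD[dtype, width]) capturing -> None:
        # `val` is a vector of matmul outputs for row idx[0] starting at
        # column idx[1]; clamp negatives to zero before writing it out.
        var activated = max(val, SIMD[dtype, width](0))

        # Illustrative write into a flat row-major buffer; the real epilogue
        # would store into the output LayoutTensor instead.
        @parameter
        for i in range(width):
            out_ptr[idx[0] * num_cols + idx[1] + i] = activated[i].cast[
                DType.float32
            ]()

    # `relu_epilogue` would then be supplied as the `elementwise_lambda_fn`
    # parameter when instantiating `block_scaled_matmul_with_epilogue`.
```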