Mojo function

blockwise_scaled_fp8_with_epilogue

blockwise_scaled_fp8_with_epilogue[c_type: DType, a_type: DType, b_type: DType, a_scales_type: DType, b_scales_type: DType, //, *, scales_granularity_mnk: IndexList[3], transpose_b: Bool = False, elementwise_lambda_fn: Optional[elementwise_epilogue_type] = None](c: LayoutTensor[c_type, c.layout, c.origin, element_layout=c.element_layout, layout_int_type=c.layout_int_type, linear_idx_type=c.linear_idx_type, masked=c.masked, alignment=c.alignment], a: LayoutTensor[a_type, a.layout, a.origin, element_layout=a.element_layout, layout_int_type=a.layout_int_type, linear_idx_type=a.linear_idx_type, masked=a.masked, alignment=a.alignment], b: LayoutTensor[b_type, b.layout, b.origin, element_layout=b.element_layout, layout_int_type=b.layout_int_type, linear_idx_type=b.linear_idx_type, masked=b.masked, alignment=b.alignment], a_scales: LayoutTensor[a_scales_type, a_scales.layout, a_scales.origin, element_layout=a_scales.element_layout, layout_int_type=a_scales.layout_int_type, linear_idx_type=a_scales.linear_idx_type, masked=a_scales.masked, alignment=a_scales.alignment], b_scales: LayoutTensor[b_scales_type, b_scales.layout, b_scales.origin, element_layout=b_scales.element_layout, layout_int_type=b_scales.layout_int_type, linear_idx_type=b_scales.linear_idx_type, masked=b_scales.masked, alignment=b_scales.alignment], ctx: DeviceContext)

Our sm100 blockwise scaled fp8 matmul kernel does not yet support fusing elementwise operations. This is a temporary implementation that runs the sm100 blockwise scaled fp8 matmul kernel and then dispatches a separate epilogue kernel to apply the elementwise operations. On non-B200 GPUs, we fall back to the naive blockwise scaled fp8 matmul, which supports a standard epilogue natively.
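As a rough illustration of how this entry point is parameterized, the sketch below shows only the compile-time parameters and arguments listed in the signature above. The tensors, device context, epilogue closure (`my_epilogue`), and the `(1, 128, 128)` scale granularity are all assumed example values, not part of the documented API.

```mojo
# Minimal sketch, assuming `c`, `a`, `b`, `a_scales`, and `b_scales` are
# pre-built LayoutTensors on the device and `ctx` is a DeviceContext.
# `my_epilogue` is a hypothetical closure matching `elementwise_epilogue_type`.
blockwise_scaled_fp8_with_epilogue[
    # Assumed granularity: per-row scales for A, 128x128 blocks for B.
    scales_granularity_mnk = IndexList[3](1, 128, 128),
    transpose_b=True,
    elementwise_lambda_fn=my_epilogue,
](c, a, b, a_scales, b_scales, ctx)
```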