Mojo function

quantize_and_bmm_fp8_helper

quantize_and_bmm_fp8_helper[
    dtype: DType,
    fp8_dtype: DType,
    fp8_scale_dtype: DType,
    m_scale_granularity: Int,
    n_scale_granularity: Int,
    k_scale_granularity: Int,
    target: StringSlice[StaticConstantOrigin] = "cpu",
](
    c: TileTensor[dtype, c.LayoutType, c.origin, linear_idx_type=c.linear_idx_type, element_shape_types=c.element_shape_types],
    a: TileTensor[dtype, a.LayoutType, a.origin, linear_idx_type=a.linear_idx_type, element_shape_types=a.element_shape_types],
    b: TileTensor[fp8_dtype, b.LayoutType, b.origin, linear_idx_type=b.linear_idx_type, element_shape_types=b.element_shape_types],
    b_scales: TileTensor[fp8_scale_dtype, b_scales.LayoutType, b_scales.origin, linear_idx_type=b_scales.linear_idx_type, element_shape_types=b_scales.element_shape_types],
    ctx: DeviceContext,
)

Helper function that quantizes the input tensor a to FP8 and performs a batched matrix multiplication against the pre-quantized FP8 tensor b, applying the per-block scaling factors in b_scales and writing the result to c. This function uses the transposed view of the input tensor a.
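The following is a minimal NumPy sketch of what such a quantize-and-bmm helper does conceptually: compute per-block scales for a so its values fit the FP8 E4M3 range (max magnitude 448), multiply block-by-block along k against an already-quantized b, and fold the a and b scales back in when accumulating into c. The function names, the block layout, and the choice to quantize only along k are illustrative assumptions, not the Mojo API; real FP8 rounding to the e4m3 grid is also omitted for simplicity.

```python
import numpy as np

FP8_E4M3_MAX = 448.0  # largest finite magnitude representable in e4m3

def quantize_fp8(x, block_k):
    """Quantize along the last axis in blocks of block_k (illustrative).

    Returns the scaled values (kept as float; real FP8 rounding omitted)
    and one scale per block such that x ~= q * scale.
    """
    *lead, k = x.shape
    assert k % block_k == 0
    blocks = x.reshape(*lead, k // block_k, block_k)
    amax = np.abs(blocks).max(axis=-1, keepdims=True)
    scale = np.maximum(amax, 1e-12) / FP8_E4M3_MAX  # avoid divide-by-zero
    q = np.clip(blocks / scale, -FP8_E4M3_MAX, FP8_E4M3_MAX)
    return q.reshape(x.shape), scale.squeeze(-1)

def bmm_fp8(a, b_q, b_scales, block_k):
    """Batched matmul: quantize a on the fly, dequantize per k-block.

    a:        (batch, m, k) full-precision input, quantized here
    b_q:      (batch, k, n) pre-quantized FP8-range values
    b_scales: (batch, k // block_k, n) per-block scales for b
    """
    a_q, a_scales = quantize_fp8(a, block_k)
    batch, m, k = a.shape
    _, _, n = b_q.shape
    c = np.zeros((batch, m, n))
    for blk in range(k // block_k):
        s = slice(blk * block_k, (blk + 1) * block_k)
        partial = a_q[:, :, s] @ b_q[:, s, :]
        # fold a's per-row and b's per-column scales back into the product
        c += partial * a_scales[:, :, blk:blk + 1] * b_scales[:, blk:blk + 1, :]
    return c
```

Because this sketch only rescales (it does not round to the discrete FP8 grid), the dequantized result matches the exact product up to floating-point error; a real implementation would additionally incur FP8 rounding error per element.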