Mojo function

quantize_and_bmm_fp8_helper

quantize_and_bmm_fp8_helper[
    dtype: DType,
    fp8_dtype: DType,
    fp8_scale_dtype: DType,
    m_scale_granularity: Int,
    n_scale_granularity: Int,
    k_scale_granularity: Int,
    target: StringSlice[StaticConstantOrigin] = StringSlice("cpu")
](
    c: TileTensor[dtype, c.LayoutType, c.origin, linear_idx_type=c.linear_idx_type, element_size=c.element_size],
    a: TileTensor[dtype, a.LayoutType, a.origin, linear_idx_type=a.linear_idx_type, element_size=a.element_size],
    b: TileTensor[fp8_dtype, b.LayoutType, b.origin, linear_idx_type=b.linear_idx_type, element_size=b.element_size],
    b_scales: TileTensor[fp8_scale_dtype, b_scales.LayoutType, b_scales.origin, linear_idx_type=b_scales.linear_idx_type, element_size=b_scales.element_size],
    ctx: DeviceContext
)

Helper function that quantizes its input and performs a batched matrix multiplication (BMM) in FP8. This function uses the transposed view of the input tensor a.
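The signature above does not spell out the quantization scheme, so as a rough illustration the NumPy sketch below shows the general pattern such a helper tends to follow: dynamically quantize a to an FP8-like range with per-row scales, multiply against a pre-quantized b whose scales arrive separately (here modeled as one scale per output column), and fold both scale factors back into the result. Every name, the e4m3 maximum of 448, and the chosen scale granularities are assumptions for illustration, not the actual Mojo implementation.

```python
import numpy as np

FP8_MAX = 448.0  # largest finite float8_e4m3 value (assumed FP8 format)

def quantize_rows(x):
    """Symmetric per-row quantization of `x` to an fp8-like integer range.

    Returns rounded, clamped values plus one scale per row, emulating a
    dynamic-quantization step ahead of an FP8 matmul (illustrative only).
    """
    scales = np.abs(x).max(axis=-1, keepdims=True) / FP8_MAX
    scales = np.maximum(scales, 1e-12)            # guard against all-zero rows
    x_q = np.clip(np.round(x / scales), -FP8_MAX, FP8_MAX)
    return x_q, scales

def bmm_fp8_sketch(a, b_q, b_scales):
    """Quantize `a` on the fly, then batched matmul with pre-quantized `b`.

    `b_q` holds fp8-like integer values and `b_scales` one scale per
    (batch, output column) — a hypothetical scale granularity.
    """
    a_q, a_scales = quantize_rows(a)              # (B, M, K), (B, M, 1)
    c = a_q @ b_q                                 # accumulate in full precision
    return c * a_scales * b_scales                # rescale back to real values

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 4, 8)).astype(np.float32)
b = rng.standard_normal((2, 8, 3)).astype(np.float32)

# Pre-quantize b per output column (quantize rows of its transposed view).
b_q, b_s = quantize_rows(b.swapaxes(-1, -2))
b_q, b_s = b_q.swapaxes(-1, -2), b_s.swapaxes(-1, -2)   # (B, K, N), (B, 1, N)

c = bmm_fp8_sketch(a, b_q, b_s)
ref = a @ b
print(np.max(np.abs(c - ref)))    # maximum absolute quantization error
```

Because both operands keep only ~3 significant digits after quantization, the result matches the full-precision reference up to a small elementwise error, which is the trade-off FP8 BMM kernels make for bandwidth and throughput.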