Mojo function

quantize_and_bmm_fp8_helper

quantize_and_bmm_fp8_helper[
    dtype: DType,
    fp8_dtype: DType,
    fp8_scale_dtype: DType,
    m_scale_granularity: Int,
    n_scale_granularity: Int,
    k_scale_granularity: Int,
    target: StringSlice[StaticConstantOrigin] = "cpu",
](
    c: LayoutTensor[dtype, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    a: LayoutTensor[dtype, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    b: LayoutTensor[fp8_dtype, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    b_scales: LayoutTensor[fp8_scale_dtype, layout, origin, element_layout=element_layout, layout_int_type=layout_int_type, linear_idx_type=linear_idx_type, masked=masked, alignment=alignment],
    ctx: DeviceContext,
)

Helper function that quantizes the input and performs a batched matrix multiplication (BMM) against the FP8 operand `b`, using the per-block scales in `b_scales` and writing the result to `c`.
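A minimal usage sketch based only on the signature above. The dtypes, scale granularities, and tensor setup are illustrative assumptions, not values prescribed by this API; the `LayoutTensor` arguments are elided because their construction depends on your buffers and layouts:

```mojo
# Hypothetical parameter choices for illustration only.
alias dtype = DType.bfloat16            # assumed activation dtype
alias fp8_dtype = DType.float8_e4m3fn   # assumed FP8 weight dtype
alias fp8_scale_dtype = DType.float32   # assumed scale dtype

# c, a, b, and b_scales are LayoutTensor values prepared elsewhere;
# ctx is the DeviceContext the kernel should run on.
quantize_and_bmm_fp8_helper[
    dtype,
    fp8_dtype,
    fp8_scale_dtype,
    m_scale_granularity=1,    # assumed: per-row scales along M
    n_scale_granularity=128,  # assumed: 128-wide scale blocks along N
    k_scale_granularity=128,  # assumed: 128-wide scale blocks along K
    target="gpu",             # default is "cpu"
](c, a, b, b_scales, ctx)
```

Note that `target` defaults to `"cpu"`; pass `"gpu"` (as sketched here) only when `ctx` refers to a GPU device.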