Mojo function

matmul_dynamic_scaled_fp8

matmul_dynamic_scaled_fp8[
    c_type: DType,
    a_type: DType,
    b_type: DType,
    a_scales_type: DType,
    b_scales_type: DType, //,
    input_scale_granularity: StringSlice[StaticConstantOrigin],
    weight_scale_granularity: StringSlice[StaticConstantOrigin],
    m_scale_granularity: Int,
    n_scale_granularity: Int,
    k_scale_granularity: Int,
    transpose_b: Bool = False,
    target: StringSlice[StaticConstantOrigin] = "cpu",
](
    c: TileTensor[c_type, c.LayoutType, c.origin, linear_idx_type=c.linear_idx_type, element_size=c.element_size],
    a: TileTensor[a_type, a.LayoutType, a.origin, linear_idx_type=a.linear_idx_type, element_size=a.element_size],
    b: TileTensor[b_type, b.LayoutType, b.origin, linear_idx_type=b.linear_idx_type, element_size=b.element_size],
    a_scales: TileTensor[a_scales_type, a_scales.LayoutType, a_scales.origin, linear_idx_type=a_scales.linear_idx_type, element_size=a_scales.element_size],
    b_scales: TileTensor[b_scales_type, b_scales.LayoutType, b_scales.origin, linear_idx_type=b_scales.linear_idx_type, element_size=b_scales.element_size],
    ctx: DeviceContext,
)

Primary implementation of dynamic scaled FP8 matrix multiplication, operating on `TileTensor` arguments.

matmul_dynamic_scaled_fp8[
    c_type: DType,
    a_type: DType,
    b_type: DType,
    a_scales_type: DType,
    b_scales_type: DType, //,
    input_scale_granularity: StringSlice[StaticConstantOrigin],
    weight_scale_granularity: StringSlice[StaticConstantOrigin],
    m_scale_granularity: Int,
    n_scale_granularity: Int,
    k_scale_granularity: Int,
    transpose_b: Bool = False,
    target: StringSlice[StaticConstantOrigin] = "cpu",
](
    c: NDBuffer[c_type, c.origin, c.shape, c.strides],
    a: NDBuffer[a_type, a.origin, a.shape, DimList.create_unknown[2]()],
    b: NDBuffer[b_type, b.origin, b.shape, DimList.create_unknown[2]()],
    a_scales: NDBuffer[a_scales_type, a_scales.origin, a_scales.shape, a_scales.strides],
    b_scales: NDBuffer[b_scales_type, b_scales.origin, b_scales.shape, b_scales.strides],
    ctx: DeviceContext,
)

Overload of dynamic scaled FP8 matrix multiplication operating on `NDBuffer` arguments.
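To illustrate the semantics these overloads compute, the sketch below emulates a dynamic scaled FP8 matmul in plain Python. This is a hedged illustration, not the kernel's implementation: FP8 values are stood in for by ordinary floats, and the scale layout assumes one per-row scale for `a` and one per-column scale for `b`, which corresponds to a common "rowwise"/"colwise" granularity choice; the actual shapes of `a_scales` and `b_scales` depend on the `*_scale_granularity` parameters. The `transpose_b` flag mirrors the kernel's convention of accepting `b` stored as (N, K).

```python
def matmul_dynamic_scaled(a, b, a_scales, b_scales, transpose_b=False):
    """Emulate C = (a * a_scales) @ (b * b_scales) with float stand-ins
    for FP8 values. a_scales has one scale per row of a; b_scales has
    one scale per column of b (assumed granularity for this sketch)."""
    if transpose_b:
        # b was stored as (N, K); view it as (K, N) for the product.
        b = [list(col) for col in zip(*b)]
    m, k = len(a), len(a[0])
    n = len(b[0])
    c = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            acc = 0.0
            for p in range(k):
                acc += a[i][p] * b[p][j]
            # Dequantize: the per-row and per-column scales factor out
            # of the inner product, so they can be applied after it.
            c[i][j] = acc * a_scales[i] * b_scales[j]
    return c

# Small example: unscaled product of a with the identity is a itself,
# so the result is a with rows scaled by a_scales and columns by b_scales.
a = [[1.0, 2.0], [3.0, 4.0]]
identity = [[1.0, 0.0], [0.0, 1.0]]
c = matmul_dynamic_scaled(a, identity, a_scales=[2.0, 1.0], b_scales=[1.0, 3.0])
# → [[2.0, 12.0], [3.0, 12.0]]
```

The factoring used here (scales applied after the accumulation) is why hardware FP8 matmuls can accumulate the raw quantized products in higher precision and apply the dynamic scales at the epilogue.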