Mojo function

mxfp4_block_scaled_matmul_amd

mxfp4_block_scaled_matmul_amd(c: TileTensor[c.dtype, c.LayoutType, c.origin, address_space=c.address_space, linear_idx_type=c.linear_idx_type, element_size=c.element_size], a: TileTensor[a.dtype, a.LayoutType, a.origin, address_space=a.address_space, linear_idx_type=a.linear_idx_type, element_size=a.element_size], b: TileTensor[b.dtype, b.LayoutType, b.origin, address_space=b.address_space, linear_idx_type=b.linear_idx_type, element_size=b.element_size], a_scales: TileTensor[a_scales.dtype, a_scales.LayoutType, a_scales.origin, address_space=a_scales.address_space, linear_idx_type=a_scales.linear_idx_type, element_size=a_scales.element_size], b_scales: TileTensor[b_scales.dtype, b_scales.LayoutType, b_scales.origin, address_space=b_scales.address_space, linear_idx_type=b_scales.linear_idx_type, element_size=b_scales.element_size], ctx: DeviceContext)

Launch native MXFP4 block-scaled matmul on AMD CDNA4.

Uses cdna4_block_scaled_mfma with FLOAT4_E2M1 directly, with no intermediate dequantization to FP8. Both A and B must be packed uint8 (two MXFP4 elements per byte) with E8M0 scaling factors, one scale per 32-element block along K.

Args:

  • c (TileTensor): Output [M, N] float32.
  • a (TileTensor): Packed A [M, K//2] uint8 (two MXFP4 elements per byte).
  • b (TileTensor): Packed B [N, K//2] uint8 (transposed, two MXFP4 per byte).
  • a_scales (TileTensor): A scales [M, K//32] float8_e8m0fnu (one scale per 32-element block along K).
  • b_scales (TileTensor): B scales [N, K//32] float8_e8m0fnu (one scale per 32-element block along K).
  • ctx (DeviceContext): Device context for kernel launch.
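To make the packed layout concrete, here is a minimal Python sketch of the MXFP4 number semantics the kernel consumes: each uint8 holds two FP4 E2M1 values (1 sign, 2 exponent, 1 mantissa bit), and each float8_e8m0fnu byte is a power-of-two scale applied to a 32-element block. This is an illustration of the data format only, not the MFMA kernel; the low-nibble-first ordering within a byte is an assumption.

```python
def decode_e2m1(nibble: int) -> float:
    """Decode a 4-bit FP4 E2M1 value (sign, 2-bit exponent, 1-bit mantissa)."""
    sign = -1.0 if (nibble >> 3) & 1 else 1.0
    exp = (nibble >> 1) & 0b11
    man = nibble & 1
    if exp == 0:
        return sign * 0.5 * man            # subnormal: 0.0 or 0.5
    return sign * (2.0 ** (exp - 1)) * (1.0 + 0.5 * man)

def decode_e8m0(byte: int) -> float:
    """Decode an 8-bit E8M0 scale: a pure power of two with bias 127."""
    return float("nan") if byte == 255 else 2.0 ** (byte - 127)

def dequantize_row(packed: bytes, scales: bytes, block: int = 32) -> list[float]:
    """Expand one packed row [K//2] uint8 into K floats, applying block scales.

    Assumes the low nibble holds the even-indexed element (illustrative only).
    """
    out: list[float] = []
    for byte in packed:
        for nib in (byte & 0xF, byte >> 4):
            k = len(out)                   # logical element index along K
            out.append(decode_e2m1(nib) * decode_e8m0(scales[k // block]))
    return out
```

For example, `dequantize_row(bytes([0x12]), bytes([127]))` decodes nibbles `0x2` (1.0) and `0x1` (0.5) with a unit scale (2^0), yielding `[1.0, 0.5]`. A reference check like this is useful when validating host-side packing before launching the kernel.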
