Mojo function

grouped_matmul_dynamic_scaled_nvfp4

grouped_matmul_dynamic_scaled_nvfp4[transpose_b: Bool = True, target: StringSlice[StaticConstantOrigin] = "cpu"](c: TileTensor[c.dtype, c.LayoutType, c.origin, address_space=c.address_space, linear_idx_type=c.linear_idx_type, element_shape_types=c.element_shape_types], a: TileTensor[a.dtype, a.LayoutType, a.origin, address_space=a.address_space, linear_idx_type=a.linear_idx_type, element_shape_types=a.element_shape_types], b: TileTensor[b.dtype, b.LayoutType, b.origin, address_space=b.address_space, linear_idx_type=b.linear_idx_type, element_shape_types=b.element_shape_types], a_scales: TileTensor[a_scales.dtype, a_scales.LayoutType, a_scales.origin, address_space=a_scales.address_space, linear_idx_type=a_scales.linear_idx_type, element_shape_types=a_scales.element_shape_types], b_scales: TileTensor[b_scales.dtype, b_scales.LayoutType, b_scales.origin, address_space=b_scales.address_space, linear_idx_type=b_scales.linear_idx_type, element_shape_types=b_scales.element_shape_types], a_offsets: TileTensor[a_offsets.dtype, a_offsets.LayoutType, a_offsets.origin, address_space=a_offsets.address_space, linear_idx_type=a_offsets.linear_idx_type, element_shape_types=a_offsets.element_shape_types], a_scale_offsets: TileTensor[a_scale_offsets.dtype, a_scale_offsets.LayoutType, a_scale_offsets.origin, address_space=a_scale_offsets.address_space, linear_idx_type=a_scale_offsets.linear_idx_type, element_shape_types=a_scale_offsets.element_shape_types], expert_ids: TileTensor[expert_ids.dtype, expert_ids.LayoutType, expert_ids.origin, address_space=expert_ids.address_space, linear_idx_type=expert_ids.linear_idx_type, element_shape_types=expert_ids.element_shape_types], expert_scales: TileTensor[expert_scales.dtype, expert_scales.LayoutType, expert_scales.origin, address_space=expert_scales.address_space, linear_idx_type=expert_scales.linear_idx_type, element_shape_types=expert_scales.element_shape_types], num_active_experts: Int, ctx: DeviceContext)

Performs grouped matrix multiplication with NVFP4 quantization.

This is a compatibility wrapper that creates the default config and calls the structured kernel implementation.

Computes C = A @ B^T for multiple expert groups in a Mixture of Experts (MoE) layer. Inputs A and B are NVFP4-quantized (4-bit floating point) and packed as uint8, two FP4 values per byte, with float8_e4m3fn scale factors.
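To make the grouped-matmul semantics concrete, here is a minimal NumPy reference sketch, not the GPU kernel itself. It assumes already-dequantized float inputs, and it assumes `a_offsets` holds `num_active_experts + 1` row boundaries so that group `g` covers rows `a_offsets[g]` to `a_offsets[g+1]`; that layout is an illustration, not a statement about the actual kernel's offset encoding.

```python
import numpy as np

def grouped_matmul_ref(a, b, a_offsets, expert_ids, expert_scales,
                       num_active_experts):
    """Reference semantics sketch (dequantized floats), not the GPU kernel.

    a:             (total_tokens, K)   rows for all groups, concatenated
    b:             (num_experts, N, K) per-expert weights (transpose_b=True)
    a_offsets:     assumed (num_active_experts + 1,) row boundaries per group
    expert_ids:    which expert serves each active group
    expert_scales: per-group output scaling
    """
    total_tokens, _ = a.shape
    n = b.shape[1]
    c = np.zeros((total_tokens, n), dtype=a.dtype)
    for g in range(num_active_experts):
        start, stop = a_offsets[g], a_offsets[g + 1]
        e = expert_ids[g]
        # C[start:stop] = (A[start:stop] @ B[e]^T) * expert_scales[g]
        c[start:stop] = (a[start:stop] @ b[e].T) * expert_scales[g]
    return c
```

The loop over active groups mirrors how each expert's weight matrix is applied only to the contiguous slice of tokens routed to it.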

Parameters:

  • transpose_b (Bool): Whether B is transposed (must be True).
  • target (StringSlice): Target device (ignored, always runs on GPU).

Args:

  • c (TileTensor): Output tensor (total_tokens, N).
  • a (TileTensor): Input A tensor (total_tokens, K).
  • b (TileTensor): Weight tensor B (num_experts, N, K).
  • a_scales (TileTensor): Scale factors for A.
  • b_scales (TileTensor): Scale factors for B.
  • a_offsets (TileTensor): Per-expert token offsets.
  • a_scale_offsets (TileTensor): Per-expert scale offsets.
  • expert_ids (TileTensor): Active expert IDs.
  • expert_scales (TileTensor): Per-expert output scaling.
  • num_active_experts (Int): Number of active experts.
  • ctx (DeviceContext): Device context.
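Since A and B arrive as uint8 bytes holding two 4-bit values each, a small decode sketch may help. FP4 in the E2M1 layout (1 sign bit, 2 exponent bits, 1 mantissa bit) has the eight non-negative magnitudes 0, 0.5, 1, 1.5, 2, 3, 4, and 6. The low-nibble-first byte order below is an assumption for illustration; the kernel's actual packing order is not specified here, and block-wise application of the float8_e4m3fn scale factors is omitted.

```python
import numpy as np

# FP4 E2M1 magnitude table, indexed by the low 3 bits of each code.
_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])

def unpack_nvfp4(packed: np.ndarray) -> np.ndarray:
    """Decode uint8 bytes (two FP4 E2M1 values each) into floats.

    Assumes the low nibble holds the first value -- an illustrative
    choice, not the kernel's documented byte order. Scale factors
    (float8_e4m3fn, applied per block) are not handled here.
    """
    lo = packed & 0x0F
    hi = packed >> 4
    # Interleave nibbles so each byte expands to two consecutive codes.
    codes = np.stack([lo, hi], axis=-1).reshape(*packed.shape[:-1], -1)
    sign = np.where(codes & 0x8, -1.0, 1.0)  # top bit of the nibble is sign
    return sign * _E2M1[codes & 0x7]
```

Dequantizing a full NVFP4 tensor would additionally multiply each block of decoded values by its float8_e4m3fn scale factor.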