Mojo function
grouped_matmul_nvfp4_dispatch
```mojo
grouped_matmul_nvfp4_dispatch[
    transpose_b: Bool = True,
    target: StringSlice[StaticConstantOrigin] = StringSlice("cpu"),
    override: Bool = False,
    AB_swapped: Bool = True,
    mma_bn: Int = 8,
    cta_group: Int = 1,
    num_pipeline_stages: Int = -1,
](
    c: TileTensor[c.dtype, c.LayoutType, c.origin, address_space=c.address_space, linear_idx_type=c.linear_idx_type, element_size=c.element_size],
    a: TileTensor[a.dtype, a.LayoutType, a.origin, address_space=a.address_space, linear_idx_type=a.linear_idx_type, element_size=a.element_size],
    b: TileTensor[b.dtype, b.LayoutType, b.origin, address_space=b.address_space, linear_idx_type=b.linear_idx_type, element_size=b.element_size],
    a_scales: TileTensor[a_scales.dtype, a_scales.LayoutType, a_scales.origin, address_space=a_scales.address_space, linear_idx_type=a_scales.linear_idx_type, element_size=a_scales.element_size],
    b_scales: TileTensor[b_scales.dtype, b_scales.LayoutType, b_scales.origin, address_space=b_scales.address_space, linear_idx_type=b_scales.linear_idx_type, element_size=b_scales.element_size],
    a_offsets: TileTensor[a_offsets.dtype, a_offsets.LayoutType, a_offsets.origin, address_space=a_offsets.address_space, linear_idx_type=a_offsets.linear_idx_type, element_size=a_offsets.element_size],
    a_scale_offsets: TileTensor[a_scale_offsets.dtype, a_scale_offsets.LayoutType, a_scale_offsets.origin, address_space=a_scale_offsets.address_space, linear_idx_type=a_scale_offsets.linear_idx_type, element_size=a_scale_offsets.element_size],
    expert_ids: TileTensor[expert_ids.dtype, expert_ids.LayoutType, expert_ids.origin, address_space=expert_ids.address_space, linear_idx_type=expert_ids.linear_idx_type, element_size=expert_ids.element_size],
    expert_scales: TileTensor[expert_scales.dtype, expert_scales.LayoutType, expert_scales.origin, address_space=expert_scales.address_space, linear_idx_type=expert_scales.linear_idx_type, element_size=expert_scales.element_size],
    num_active_experts: Int,
    estimated_total_m: Int,
    ctx: DeviceContext,
)
```
Dispatch grouped NVFP4 matmul with shape-tuned configuration.
When `override=False` (the default, used in production), kernel parameters are selected from a tuning table keyed on (N, K); the caller's `AB_swapped`, `mma_bn`, `cta_group`, and `num_pipeline_stages` values are ignored.

When `override=True` (for ablation or benchmarking), the caller's parameter values are used directly; `num_pipeline_stages=-1` requests an auto-computed pipeline depth.
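A minimal sketch of the two dispatch modes, assuming the operand `TileTensor`s and the `DeviceContext` have already been constructed elsewhere; the override values below are illustrative placeholders, not tuned settings:

```mojo
# Production path: override defaults to False, so the kernel configuration
# is looked up in the tuning table keyed on (N, K).
grouped_matmul_nvfp4_dispatch(
    c, a, b, a_scales, b_scales,
    a_offsets, a_scale_offsets, expert_ids, expert_scales,
    num_active_experts, estimated_total_m, ctx,
)

# Ablation/benchmarking path: override=True forwards the caller's
# configuration parameters verbatim (the values here are hypothetical).
grouped_matmul_nvfp4_dispatch[
    override=True,
    AB_swapped=False,
    mma_bn=16,
    cta_group=2,
    num_pipeline_stages=-1,  # -1: auto-compute pipeline depth
](
    c, a, b, a_scales, b_scales,
    a_offsets, a_scale_offsets, expert_ids, expert_scales,
    num_active_experts, estimated_total_m, ctx,
)
```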
Parameters:
- transpose_b (`Bool`): Whether B is transposed. Must be True.
- target (`StringSlice`): Target device. Unused; kept for MOGG interface compatibility.
- override (`Bool`): If True, use the caller's config params directly; if False, use the tuning table.
- AB_swapped (`Bool`): Whether A and B are swapped. Only used when override=True.
- mma_bn (`Int`): MMA tile N dimension. Only used when override=True.
- cta_group (`Int`): CTA group size. Only used when override=True.
- num_pipeline_stages (`Int`): Pipeline depth; -1 means auto-compute. Only used when override=True.
Args:
- c (`TileTensor`): Output tensor of shape (total_tokens, N).
- a (`TileTensor`): Input tensor A of shape (total_tokens, K//2), packed.
- b (`TileTensor`): Weight tensor B of shape (num_experts, N, K//2), packed.
- a_scales (`TileTensor`): Scale factors for A (5D).
- b_scales (`TileTensor`): Scale factors for B (6D).
- a_offsets (`TileTensor`): Per-expert token offsets (num_active_experts + 1 entries).
- a_scale_offsets (`TileTensor`): Per-expert scale offsets (num_active_experts entries).
- expert_ids (`TileTensor`): Active expert IDs (num_active_experts entries).
- expert_scales (`TileTensor`): Per-expert output scaling (num_experts entries).
- num_active_experts (`Int`): Number of active experts.
- estimated_total_m (`Int`): Estimated total number of non-padded tokens.
- ctx (`DeviceContext`): Device context.
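As a worked example of how the ragged inputs relate, here is one plausible layout. It assumes `a_offsets` holds exclusive prefix sums of per-expert token counts, which its num_active_experts + 1 length suggests; the concrete numbers are hypothetical:

```mojo
# Hypothetical routing: 3 of 8 experts active with 128, 64, and 32 tokens.
#
#   num_active_experts = 3
#   expert_ids         = [5, 0, 2]            # IDs of the active experts
#   a_offsets          = [0, 128, 192, 224]   # num_active_experts + 1 entries
#   total_tokens       = 224                  # row count of `a` and `c`
#
# Under this reading, rows a_offsets[i]..a_offsets[i+1] of `a` are multiplied
# by the weights of expert expert_ids[i] in `b`, scaled by expert_scales, and
# written to the same row range of `c`.
```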