Mojo function

grouped_matmul_block_scaled_dispatch

grouped_matmul_block_scaled_dispatch[
    transpose_b: Bool = True,
    target: StringSlice[StaticConstantOrigin] = StringSlice("cpu")
](
    c: TileTensor[c.dtype, c.LayoutType, c.origin, address_space=c.address_space, linear_idx_type=c.linear_idx_type, element_size=c.element_size],
    a: TileTensor[a.dtype, a.LayoutType, a.origin, address_space=a.address_space, linear_idx_type=a.linear_idx_type, element_size=a.element_size],
    b: TileTensor[b.dtype, b.LayoutType, b.origin, address_space=b.address_space, linear_idx_type=b.linear_idx_type, element_size=b.element_size],
    a_scales: TileTensor[a_scales.dtype, a_scales.LayoutType, a_scales.origin, address_space=a_scales.address_space, linear_idx_type=a_scales.linear_idx_type, element_size=a_scales.element_size],
    b_scales: TileTensor[b_scales.dtype, b_scales.LayoutType, b_scales.origin, address_space=b_scales.address_space, linear_idx_type=b_scales.linear_idx_type, element_size=b_scales.element_size],
    a_offsets: TileTensor[a_offsets.dtype, a_offsets.LayoutType, a_offsets.origin, address_space=a_offsets.address_space, linear_idx_type=a_offsets.linear_idx_type, element_size=a_offsets.element_size],
    a_scale_offsets: TileTensor[a_scale_offsets.dtype, a_scale_offsets.LayoutType, a_scale_offsets.origin, address_space=a_scale_offsets.address_space, linear_idx_type=a_scale_offsets.linear_idx_type, element_size=a_scale_offsets.element_size],
    expert_ids: TileTensor[expert_ids.dtype, expert_ids.LayoutType, expert_ids.origin, address_space=expert_ids.address_space, linear_idx_type=expert_ids.linear_idx_type, element_size=expert_ids.element_size],
    expert_scales: TileTensor[expert_scales.dtype, expert_scales.LayoutType, expert_scales.origin, address_space=expert_scales.address_space, linear_idx_type=expert_scales.linear_idx_type, element_size=expert_scales.element_size],
    num_active_experts: Int,
    estimated_total_m: Int,
    ctx: DeviceContext
)

Dispatch a grouped block-scaled matmul to the format-specific implementation.

Currently, the NVFP4, MXFP4, and MXFP8 formats are supported on SM100 GPUs. See grouped_matmul_block_scaled_sm100_dispatch for parameter documentation.
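As a rough sketch of the call shape (not a complete program), the following assumes the operand TileTensors and a DeviceContext have already been constructed elsewhere; the import path and the "gpu" target string are assumptions and may differ in your version.

    from gpu.host import DeviceContext

    # Sketch only: operand construction is omitted, and the names below are
    # placeholders for TileTensors prepared by the caller.
    #   c, a, b                    - output and input matrices
    #   a_scales, b_scales         - per-block scale factors for a and b
    #   a_offsets, a_scale_offsets - per-group row offsets into a and a_scales
    #   expert_ids, expert_scales  - expert index and scale for each group
    grouped_matmul_block_scaled_dispatch[transpose_b=True, target="gpu"](
        c, a, b,
        a_scales, b_scales,
        a_offsets, a_scale_offsets,
        expert_ids, expert_scales,
        num_active_experts,
        estimated_total_m,
        ctx,
    )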
