Mojo function

naive_grouped_matmul

naive_grouped_matmul[
    c_type: DType, c_shape: DimList[c_shape.values],
    a_type: DType, a_shape: DimList[a_shape.values],
    b_type: DType, b_shape: DimList[b_shape.values], //,
    *,
    transpose_b: Bool = True,
    elementwise_lambda_fn: Optional[elementwise_epilogue_type] = None
](
    c: NDBuffer[c_type, MutAnyOrigin, c_shape, DimList.create_unknown[2]()],
    a: NDBuffer[a_type, ImmutAnyOrigin, a_shape, DimList.create_unknown[2]()],
    b: NDBuffer[b_type, ImmutAnyOrigin, b_shape, DimList.create_unknown[3]()],
    a_offsets: NDBuffer[DType.uint32, ImmutAnyOrigin, DimList.create_unknown[1](), DimList.create_unknown[1]()],
    expert_ids: NDBuffer[DType.int32, ImmutAnyOrigin, DimList.create_unknown[1](), DimList.create_unknown[1]()],
    max_num_tokens_per_expert: Int,
    num_active_experts: Int,
    ctx: DeviceContext
)

NDBuffer implementation of naive grouped matmul.

naive_grouped_matmul[
    *,
    transpose_b: Bool = True,
    elementwise_lambda_fn: Optional[elementwise_epilogue_type] = None
](
    c: TileTensor[c.dtype, c.LayoutType, c.origin, linear_idx_type=c.linear_idx_type, element_size=c.element_size],
    a: TileTensor[a.dtype, a.LayoutType, a.origin, linear_idx_type=a.linear_idx_type, element_size=a.element_size],
    b: TileTensor[b.dtype, b.LayoutType, b.origin, linear_idx_type=b.linear_idx_type, element_size=b.element_size],
    a_offsets: TileTensor[DType.uint32, a_offsets.LayoutType, a_offsets.origin, linear_idx_type=a_offsets.linear_idx_type, element_size=a_offsets.element_size],
    expert_ids: TileTensor[DType.int32, expert_ids.LayoutType, expert_ids.origin, linear_idx_type=expert_ids.linear_idx_type, element_size=expert_ids.element_size],
    max_num_tokens_per_expert: Int,
    num_active_experts: Int,
    ctx: DeviceContext
)

Primary implementation of naive_grouped_matmul, operating on TileTensor inputs.
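To make the signatures above concrete, here is a minimal pure-Python sketch of the semantics a grouped matmul like this typically implements for mixture-of-experts workloads. It is an assumption-based reference, not the actual kernel (which runs on a device via `DeviceContext`): it assumes `a_offsets` holds `num_active_experts + 1` row offsets into `a`, `expert_ids` selects the weight slice of `b` for each group, and with `transpose_b = True` each expert's weights have shape `[N, K]`.

```python
def naive_grouped_matmul_ref(a, b, a_offsets, expert_ids):
    """Hypothetical reference for the assumed grouped-matmul semantics.

    a:          [total_tokens][K]   input rows (tokens), grouped by expert
    b:          [num_experts][N][K] per-expert weights (transpose_b=True layout)
    a_offsets:  [num_active_experts + 1] row offsets delimiting each group in a
    expert_ids: [num_active_experts]     which expert's weights each group uses
    returns c:  [total_tokens][N]
    """
    K = len(a[0])
    N = len(b[0])
    c = [[0.0] * N for _ in range(len(a))]
    for g in range(len(expert_ids)):
        w = b[expert_ids[g]]  # [N][K] weights for this group's expert
        # Rows a_offsets[g] .. a_offsets[g+1]-1 all use the same expert.
        for row in range(a_offsets[g], a_offsets[g + 1]):
            for n in range(N):
                # transpose_b=True: c[row][n] = sum_k a[row][k] * w[n][k]
                c[row][n] = sum(a[row][k] * w[n][k] for k in range(K))
    return c
```

For example, with two active groups where group 0 (rows 0..1) uses an identity expert and group 1 (row 2) uses a doubling expert, the first two output rows equal the inputs and the last row is the input scaled by 2. The `elementwise_lambda_fn` epilogue in the real signature would be applied per output element after this accumulation.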