
Mojo function

dispatch_gemv

dispatch_gemv[c_type: DType, a_type: DType, b_type: DType, //, transpose_b: Bool = False, elementwise_lambda_fn: Optional[def[dtype: DType, width: Int, *, alignment: Int = 1](IndexList[2], SIMD[dtype, width]) capturing -> None] = None, elementwise_lambda_wrapper: Optional[def[dtype: DType, width: Int, *, alignment: Int = 1](IndexList[2], SIMD[dtype, width]) capturing -> None] = None, elementwise_compute_lambda_fn: Optional[def[dtype: DType, width: Int, *, alignment: Int = 1](IndexList[2], SIMD[dtype, width]) capturing -> SIMD[dtype, width]] = None, pdl_level: PDLLevel = PDLLevel()](c: TileTensor[c_type, c.LayoutType, c.origin, address_space=c.address_space, linear_idx_type=c.linear_idx_type, element_size=c.element_size], a: TileTensor[a_type, a.LayoutType, a.origin, address_space=a.address_space, linear_idx_type=a.linear_idx_type, element_size=a.element_size], b: TileTensor[b_type, b.LayoutType, b.origin, address_space=b.address_space, linear_idx_type=b.linear_idx_type, element_size=b.element_size], ctx: DeviceContext)

Dispatches an M=1 (or N=1) matmul to either the GEMV kernel or the SM100 GEMM kernel, based on (N, K).

GEMV is preferred for most M=1 shapes, but for certain large (N, K) combinations the SM100 GEMM kernel achieves higher throughput. Add new (N, K) pairs to SM100_GEMV_SHAPES as they are identified through benchmarking.

N=1 always routes to GEMV, because SM100 TMA requires N * sizeof(c_type) % 16 == 0.
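The routing rule above can be sketched as follows. This is a minimal Python illustration, not the actual Mojo implementation: the element size and the (N, K) shape set are hypothetical stand-ins (the real kernel consults SM100_GEMV_SHAPES, populated via benchmarking).

```python
# Hedged sketch of the dispatch decision, assuming a 2-byte c_type
# (e.g. bfloat16) and a hypothetical benchmarked shape list.
C_TYPE_SIZE = 2  # bytes per element of c_type (assumption)

# Hypothetical (N, K) pairs; stands in for SM100_GEMV_SHAPES.
SM100_GEMM_SHAPES = {(8192, 8192), (16384, 8192)}

def use_sm100_gemm(n: int, k: int) -> bool:
    """Return True if the (n, k) shape should route to SM100 GEMM."""
    # SM100 TMA requires N * sizeof(c_type) % 16 == 0; shapes that
    # fail this (including N=1 for small element types) must use GEMV.
    if (n * C_TYPE_SIZE) % 16 != 0:
        return False
    # Otherwise, only benchmarked-faster shapes take the GEMM path.
    return (n, k) in SM100_GEMM_SHAPES
```

Every shape that fails the TMA alignment check or is absent from the shape list falls through to GEMV, matching the "GEMV is preferred by default" policy described above.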
