Mojo function

matmul

matmul[use_tf32: Bool = False](ctx: DeviceContext, c: NDBuffer[type, 2, origin, shape], a: NDBuffer[type, 2, origin, shape], b: NDBuffer[type, 2, origin, shape], *, c_row_major: Bool = False, transpose_a: Bool = False, transpose_b: Bool = False, alpha: SIMD[float32, 1] = 1.0, beta: SIMD[float32, 1] = 0.0)

Matrix multiplication using the vendor BLAS library, relying on the library's global handle. Computes the standard GEMM `c = alpha * a @ b + beta * c`, with optional transposition of `a` and `b`.

matmul[use_tf32: Bool = False](ctx: DeviceContext, handle: Handle[backend], c: NDBuffer[type, 2, origin, shape], a: NDBuffer[type, 2, origin, shape], b: NDBuffer[type, 2, origin, shape], *, c_row_major: Bool = False, transpose_a: Bool = False, transpose_b: Bool = False, alpha: SIMD[float32, 1] = 1.0, beta: SIMD[float32, 1] = 0.0)
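
This second overload is the same operation, but takes an explicit `Handle[backend]` as its second argument instead of relying on the global handle.

Below is a minimal usage sketch of the global-handle overload. It is not taken from this page: the import paths (`linalg.vendor_blas`, `gpu.host`, `buffer`), the buffer setup, and the use of `MutableAnyOrigin` are assumptions that may need adjusting for your MAX/Mojo version.

```mojo
from buffer import NDBuffer
from buffer.dimlist import DimList      # assumed import path
from gpu.host import DeviceContext
from linalg.vendor_blas import matmul   # assumed import path


fn run_matmul(ctx: DeviceContext) raises:
    alias M = 128
    alias N = 128
    alias K = 128
    alias dtype = DType.float32

    # Allocate device memory for A (M x K), B (K x N), and C (M x N).
    var a_dev = ctx.enqueue_create_buffer[dtype](M * K)
    var b_dev = ctx.enqueue_create_buffer[dtype](K * N)
    var c_dev = ctx.enqueue_create_buffer[dtype](M * N)

    # View the device allocations as 2-D NDBuffers with static shapes.
    var a = NDBuffer[dtype, 2, MutableAnyOrigin, DimList(M, K)](a_dev.unsafe_ptr())
    var b = NDBuffer[dtype, 2, MutableAnyOrigin, DimList(K, N)](b_dev.unsafe_ptr())
    var c = NDBuffer[dtype, 2, MutableAnyOrigin, DimList(M, N)](c_dev.unsafe_ptr())

    # ... fill a and b with data ...

    # c = 1.0 * a @ b + 0.0 * c via the vendor BLAS library, using its
    # global handle. c_row_major=True tells the wrapper that C is stored
    # row-major.
    matmul(ctx, c, a, b, c_row_major=True)

    ctx.synchronize()
```

For the handle-based overload, you would construct a `Handle` for the desired backend and pass it as the second argument; the rest of the call is unchanged.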
