Mojo function
matmul
matmul[use_tf32: Bool = False](ctx: DeviceContext, c: NDBuffer[dtype, 2, origin, shape], a: NDBuffer[dtype, 2, origin, shape], b: NDBuffer[dtype, 2, origin, shape], *, c_row_major: Bool = False, transpose_a: Bool = False, transpose_b: Bool = False, alpha: SIMD[float32, 1] = 1, beta: SIMD[float32, 1] = 0)
Matmul using the vendor BLAS library. The overloads that do not take a handle parameter use a global handle; the overloads below accept an explicit Handle[backend].
matmul[c_type: DType, a_type: DType, b_type: DType, c_layout: Layout, a_layout: Layout, b_layout: Layout, use_tf32: Bool = False](ctx: DeviceContext, c_tensor: LayoutTensor[c_type, c_layout, origin], a_tensor: LayoutTensor[a_type, a_layout, origin], b_tensor: LayoutTensor[b_type, b_layout, origin], *, c_row_major: Bool = False, transpose_a: Bool = False, transpose_b: Bool = False, alpha: SIMD[float32, 1] = 1, beta: SIMD[float32, 1] = 0)
matmul[c_type: DType, a_type: DType, b_type: DType, c_layout: Layout, a_layout: Layout, b_layout: Layout, use_tf32: Bool = False](ctx: DeviceContext, handle: Handle[backend], c_tensor: LayoutTensor[c_type, c_layout, origin], a_tensor: LayoutTensor[a_type, a_layout, origin], b_tensor: LayoutTensor[b_type, b_layout, origin], *, c_row_major: Bool = False, transpose_a: Bool = False, transpose_b: Bool = False, alpha: SIMD[float32, 1] = 1, beta: SIMD[float32, 1] = 0)
matmul[use_tf32: Bool = False](ctx: DeviceContext, handle: Handle[backend], c: NDBuffer[dtype, 2, origin, shape], a: NDBuffer[dtype, 2, origin, shape], b: NDBuffer[dtype, 2, origin, shape], *, c_row_major: Bool = False, transpose_a: Bool = False, transpose_b: Bool = False, alpha: SIMD[float32, 1] = 1, beta: SIMD[float32, 1] = 0)
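Given the alpha and beta parameters in the signatures above, the call presumably follows the standard GEMM convention, C = alpha * op(A) * op(B) + beta * C, where op applies the optional transpose. A minimal usage sketch of the NDBuffer overloads, assuming ctx, c, a, and b are an already-initialized DeviceContext and device-resident 2-D NDBuffers of compatible shapes (all names here are illustrative, and the import path is an assumption, not taken from this page):

```mojo
# Assumed import path for the vendor BLAS wrapper (illustrative).
from linalg.vendor_blas import matmul

# Plain GEMM with the global handle: c = a @ b
# (alpha defaults to 1, beta to 0, so any prior contents of c are overwritten).
matmul(ctx, c, a, b, c_row_major=True)

# Accumulating variant: c = 0.5 * a @ b^T + 1.0 * c
matmul(ctx, c, a, b, transpose_b=True, alpha=0.5, beta=1.0)

# With an explicit vendor handle and TF32 tensor-core math enabled,
# assuming `handle` is a previously created Handle[backend].
matmul[use_tf32=True](ctx, handle, c, a, b, c_row_major=True)
```

The use_tf32 parameter trades precision for speed on hardware that supports TF32 math; leaving it at the default False keeps full float32 precision.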