Mojo function

matmul

matmul[
    transpose_a: Bool = False,
    transpose_b: Bool = False,
    b_packed: Bool = False,
    elementwise_lambda_fn: Optional[elementwise_epilogue_type] = None,
    elementwise_compute_lambda_fn: Optional[elementwise_compute_lambda_type] = None,
    saturated_vnni: Bool = False,
    _trace_description: StringSlice[StaticConstantOrigin] = StringSlice(""),
    target: StringSlice[StaticConstantOrigin] = StringSlice("cpu"),
](
    c: TileTensor[c.dtype, c.LayoutType, c.origin, linear_idx_type=c.linear_idx_type, element_size=c.element_size],
    a: TileTensor[a.dtype, a.LayoutType, a.origin, linear_idx_type=a.linear_idx_type, element_size=a.element_size],
    b: TileTensor[b.dtype, b.LayoutType, b.origin, linear_idx_type=b.linear_idx_type, element_size=b.element_size],
    ctx: DeviceContextPtr = DeviceContextPtr(),
)

Overload of matmul for TileTensor arguments that accepts a DeviceContextPtr.

matmul[
    transpose_a: Bool = False,
    transpose_b: Bool = False,
    b_packed: Bool = False,
    elementwise_lambda_fn: Optional[elementwise_epilogue_type] = None,
    elementwise_compute_lambda_fn: Optional[elementwise_compute_lambda_type] = None,
    saturated_vnni: Bool = False,
    _trace_description: StringSlice[StaticConstantOrigin] = StringSlice(""),
    target: StringSlice[StaticConstantOrigin] = StringSlice("cpu"),
](
    c: TileTensor[c.dtype, c.LayoutType, c.origin, linear_idx_type=c.linear_idx_type, element_size=c.element_size],
    a: TileTensor[a.dtype, a.LayoutType, a.origin, linear_idx_type=a.linear_idx_type, element_size=a.element_size],
    b: TileTensor[b.dtype, b.LayoutType, b.origin, linear_idx_type=b.linear_idx_type, element_size=b.element_size],
    ctx: Optional[DeviceContext],
)

Primary TileTensor matmul implementation. Routes GPU targets directly and delegates the CPU path to cpu.matmul.
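A minimal call sketch, assuming `c`, `a`, and `b` are TileTensors already allocated with compatible shapes (here `c = a @ b^T`); the tensor names and shapes are hypothetical, only the parameter names come from the signatures above:

```mojo
# Hypothetical usage: compile-time options go in square brackets,
# runtime tensors in parentheses. With `target = "cpu"` (the default)
# the ctx argument can be left at its DeviceContextPtr() default.
matmul[transpose_b=True](c, a, b)
```

Epilogue hooks such as `elementwise_lambda_fn` are likewise passed as compile-time parameters inside the square brackets.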
