Mojo function

dispatch_1x1x1_matmul_conv3d

dispatch_1x1x1_matmul_conv3d[input_type: DType, filter_type: DType, output_type: DType, filter_is_fcrs: Bool, maybe_epilogue_func: Optional[def[dtype: DType, rank: Int, width: Int](IndexList[rank], SIMD[dtype, width]) capturing -> None] = None](input: TileTensor[input_type, input.LayoutType, input.origin, address_space=input.address_space, linear_idx_type=input.linear_idx_type, element_size=input.element_size], filter: TileTensor[filter_type, filter.LayoutType, filter.origin, address_space=filter.address_space, linear_idx_type=filter.linear_idx_type, element_size=filter.element_size], output: TileTensor[output_type, output.LayoutType, output.origin, address_space=output.address_space, linear_idx_type=output.linear_idx_type, element_size=output.element_size], stride: IndexList[3], dilation: IndexList[3], symmetric_padding: IndexList[3], num_groups: Int, ctx: DeviceContext) -> Bool

Tries to dispatch a 1x1x1 3D convolution directly as a single _matmul_gpu call.

Returns True if the conv was handled; False if the caller should fall back to another implementation.

Falls back (returns False) for any of: a non-bf16 dtype, grouped convolution, dilation != 1, stride != 1, non-zero padding, a kernel size other than 1x1x1, or K (= C_in) below the matmul's minimum useful size.
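The dispatch is valid because a 1x1x1 kernel with stride 1, dilation 1, and no padding touches each spatial location independently: flattening the input from [N, D, H, W, C_in] to [N*D*H*W, C_in] and multiplying by the [C_in, C_out] filter matrix yields exactly the convolution output. A minimal sketch of this equivalence in plain Python (not the Mojo implementation; all names here are illustrative):

```python
def conv3d_1x1x1(inp, filt):
    """Direct 1x1x1 conv. inp: nested [N][D][H][W][C_in] lists,
    filt: [C_in][C_out]. Stride 1, dilation 1, no padding."""
    cin, cout = len(filt), len(filt[0])

    def apply(px):
        # Each output pixel is a dot product of the input pixel's
        # channel vector with every filter column.
        return [sum(px[c] * filt[c][f] for c in range(cin))
                for f in range(cout)]

    return [[[[apply(px) for px in row]
              for row in plane]
             for plane in vol]
            for vol in inp]


def conv3d_as_matmul(inp, filt):
    """Same computation expressed as one matmul over flattened input."""
    n, d = len(inp), len(inp[0])
    h, w = len(inp[0][0]), len(inp[0][0][0])
    cin, cout = len(filt), len(filt[0])

    # Flatten [N, D, H, W, C_in] -> [N*D*H*W, C_in].
    rows = [px for vol in inp for plane in vol
            for row in plane for px in row]

    # [N*D*H*W, C_in] @ [C_in, C_out] -> [N*D*H*W, C_out].
    prod = [[sum(r[c] * filt[c][f] for c in range(cin))
             for f in range(cout)]
            for r in rows]

    # Reshape back to [N, D, H, W, C_out].
    it = iter(prod)
    return [[[[next(it) for _ in range(w)]
              for _ in range(h)]
             for _ in range(d)]
            for _ in range(n)]
```

Because both paths reduce each output pixel to the same dot product over C_in, the results match element for element, which is why the conditions above (stride 1, dilation 1, zero padding, 1x1x1 kernel, no grouping) are exactly the conditions under which the matmul dispatch is legal.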

Returns:

Bool: True if the convolution was dispatched as a matmul, False if the caller should fall back.
