Mojo struct
DeviceAttribute
@register_passable(trivial)
struct DeviceAttribute
Aliases
MAX_THREADS_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](1))
: Maximum number of threads per block.

MAX_BLOCK_DIM_X = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](2))
: Maximum block dimension X.

MAX_BLOCK_DIM_Y = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](3))
: Maximum block dimension Y.

MAX_BLOCK_DIM_Z = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](4))
: Maximum block dimension Z.

MAX_GRID_DIM_X = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](5))
: Maximum grid dimension X.

MAX_GRID_DIM_Y = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](6))
: Maximum grid dimension Y.

MAX_GRID_DIM_Z = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](7))
: Maximum grid dimension Z.

MAX_SHARED_MEMORY_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](8))
: Maximum shared memory available per block in bytes.

SHARED_MEMORY_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](8))
: Deprecated; use MAX_SHARED_MEMORY_PER_BLOCK instead.

TOTAL_CONSTANT_MEMORY = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](9))
: Memory available on device for constant variables in a CUDA C kernel in bytes.

WARP_SIZE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](10))
: Warp size in threads.

MAX_PITCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](11))
: Maximum pitch in bytes allowed by memory copies.

MAX_REGISTERS_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](12))
: Maximum number of 32-bit registers available per block.

REGISTERS_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](12))
: Deprecated; use MAX_REGISTERS_PER_BLOCK instead.

CLOCK_RATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](13))
: Typical clock frequency in kilohertz.

TEXTURE_ALIGNMENT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](14))
: Alignment requirement for textures.

GPU_OVERLAP = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](15))
: Device can possibly copy memory and execute a kernel concurrently. Deprecated; use ASYNC_ENGINE_COUNT instead.

MULTIPROCESSOR_COUNT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](16))
: Number of multiprocessors on device.

KERNEL_EXEC_TIMEOUT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](17))
: Specifies whether there is a run time limit on kernels.

INTEGRATED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](18))
: Device is integrated with host memory.

CAN_MAP_HOST_MEMORY = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](19))
: Device can map host memory into CUDA address space.

COMPUTE_MODE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](20))
: Compute mode (see ::CUcomputemode for details).

MAXIMUM_TEXTURE1D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](21))
: Maximum 1D texture width.

MAXIMUM_TEXTURE2D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](22))
: Maximum 2D texture width.

MAXIMUM_TEXTURE2D_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](23))
: Maximum 2D texture height.

MAXIMUM_TEXTURE3D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](24))
: Maximum 3D texture width.

MAXIMUM_TEXTURE3D_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](25))
: Maximum 3D texture height.

MAXIMUM_TEXTURE3D_DEPTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](26))
: Maximum 3D texture depth.

MAXIMUM_TEXTURE2D_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](27))
: Maximum 2D layered texture width.

MAXIMUM_TEXTURE2D_LAYERED_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](28))
: Maximum 2D layered texture height.

MAXIMUM_TEXTURE2D_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](29))
: Maximum layers in a 2D layered texture.

MAXIMUM_TEXTURE2D_ARRAY_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](27))
: Deprecated; use MAXIMUM_TEXTURE2D_LAYERED_WIDTH instead.

MAXIMUM_TEXTURE2D_ARRAY_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](28))
: Deprecated; use MAXIMUM_TEXTURE2D_LAYERED_HEIGHT instead.

MAXIMUM_TEXTURE2D_ARRAY_NUMSLICES = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](29))
: Deprecated; use MAXIMUM_TEXTURE2D_LAYERED_LAYERS instead.

SURFACE_ALIGNMENT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](30))
: Alignment requirement for surfaces.

CONCURRENT_KERNELS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](31))
: Device can possibly execute multiple kernels concurrently.

ECC_ENABLED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](32))
: Device has ECC support enabled.

PCI_BUS_ID = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](33))
: PCI bus ID of the device.

PCI_DEVICE_ID = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](34))
: PCI device ID of the device.

TCC_DRIVER = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](35))
: Device is using TCC driver model.

MEMORY_CLOCK_RATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](36))
: Peak memory clock frequency in kilohertz.

GLOBAL_MEMORY_BUS_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](37))
: Global memory bus width in bits.

L2_CACHE_SIZE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](38))
: Size of L2 cache in bytes.

MAX_THREADS_PER_MULTIPROCESSOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](39))
: Maximum resident threads per multiprocessor.

ASYNC_ENGINE_COUNT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](40))
: Number of asynchronous engines.

UNIFIED_ADDRESSING = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](41))
: Device shares a unified address space with the host.

MAXIMUM_TEXTURE1D_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](42))
: Maximum 1D layered texture width.

MAXIMUM_TEXTURE1D_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](43))
: Maximum layers in a 1D layered texture.

CAN_TEX2D_GATHER = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](44))
: Deprecated; do not use.

MAXIMUM_TEXTURE2D_GATHER_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](45))
: Maximum 2D texture width if CUDA_ARRAY3D_TEXTURE_GATHER is set.

MAXIMUM_TEXTURE2D_GATHER_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](46))
: Maximum 2D texture height if CUDA_ARRAY3D_TEXTURE_GATHER is set.

MAXIMUM_TEXTURE3D_WIDTH_ALTERNATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](47))
: Alternate maximum 3D texture width.

MAXIMUM_TEXTURE3D_HEIGHT_ALTERNATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](48))
: Alternate maximum 3D texture height.

MAXIMUM_TEXTURE3D_DEPTH_ALTERNATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](49))
: Alternate maximum 3D texture depth.

PCI_DOMAIN_ID = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](50))
: PCI domain ID of the device.

TEXTURE_PITCH_ALIGNMENT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](51))
: Pitch alignment requirement for textures.

MAXIMUM_TEXTURECUBEMAP_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](52))
: Maximum cubemap texture width/height.

MAXIMUM_TEXTURECUBEMAP_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](53))
: Maximum cubemap layered texture width/height.

MAXIMUM_TEXTURECUBEMAP_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](54))
: Maximum layers in a cubemap layered texture.

MAXIMUM_SURFACE1D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](55))
: Maximum 1D surface width.

MAXIMUM_SURFACE2D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](56))
: Maximum 2D surface width.

MAXIMUM_SURFACE2D_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](57))
: Maximum 2D surface height.

MAXIMUM_SURFACE3D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](58))
: Maximum 3D surface width.

MAXIMUM_SURFACE3D_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](59))
: Maximum 3D surface height.

MAXIMUM_SURFACE3D_DEPTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](60))
: Maximum 3D surface depth.

MAXIMUM_SURFACE1D_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](61))
: Maximum 1D layered surface width.

MAXIMUM_SURFACE1D_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](62))
: Maximum layers in a 1D layered surface.

MAXIMUM_SURFACE2D_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](63))
: Maximum 2D layered surface width.

MAXIMUM_SURFACE2D_LAYERED_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](64))
: Maximum 2D layered surface height.

MAXIMUM_SURFACE2D_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](65))
: Maximum layers in a 2D layered surface.

MAXIMUM_SURFACECUBEMAP_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](66))
: Maximum cubemap surface width.

MAXIMUM_SURFACECUBEMAP_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](67))
: Maximum cubemap layered surface width.

MAXIMUM_SURFACECUBEMAP_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](68))
: Maximum layers in a cubemap layered surface.

MAXIMUM_TEXTURE1D_LINEAR_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](69))
: Deprecated; do not use. Use cudaDeviceGetTexture1DLinearMaxWidth() or cuDeviceGetTexture1DLinearMaxWidth() instead.

MAXIMUM_TEXTURE2D_LINEAR_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](70))
: Maximum 2D linear texture width.

MAXIMUM_TEXTURE2D_LINEAR_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](71))
: Maximum 2D linear texture height.

MAXIMUM_TEXTURE2D_LINEAR_PITCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](72))
: Maximum 2D linear texture pitch in bytes.

MAXIMUM_TEXTURE2D_MIPMAPPED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](73))
: Maximum mipmapped 2D texture width.

MAXIMUM_TEXTURE2D_MIPMAPPED_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](74))
: Maximum mipmapped 2D texture height.

COMPUTE_CAPABILITY_MAJOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](75))
: Major compute capability version number.

COMPUTE_CAPABILITY_MINOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](76))
: Minor compute capability version number.

MAXIMUM_TEXTURE1D_MIPMAPPED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](77))
: Maximum mipmapped 1D texture width.

STREAM_PRIORITIES_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](78))
: Device supports stream priorities.

GLOBAL_L1_CACHE_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](79))
: Device supports caching globals in L1.

LOCAL_L1_CACHE_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](80))
: Device supports caching locals in L1.

MAX_SHARED_MEMORY_PER_MULTIPROCESSOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](81))
: Maximum shared memory available per multiprocessor in bytes.

MAX_REGISTERS_PER_MULTIPROCESSOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](82))
: Maximum number of 32-bit registers available per multiprocessor.

MANAGED_MEMORY = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](83))
: Device can allocate managed memory on this system.

MULTI_GPU_BOARD = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](84))
: Device is on a multi-GPU board.

MULTI_GPU_BOARD_GROUP_ID = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](85))
: Unique ID for a group of devices on the same multi-GPU board.

HOST_NATIVE_ATOMIC_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](86))
: Link between the device and the host supports native atomic operations (this is a placeholder attribute and is not supported on any current hardware).

SINGLE_TO_DOUBLE_PRECISION_PERF_RATIO = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](87))
: Ratio of single precision performance (in floating-point operations per second) to double precision performance.

PAGEABLE_MEMORY_ACCESS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](88))
: Device supports coherently accessing pageable memory without calling cudaHostRegister on it.

CONCURRENT_MANAGED_ACCESS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](89))
: Device can coherently access managed memory concurrently with the CPU.

COMPUTE_PREEMPTION_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](90))
: Device supports compute preemption.

CAN_USE_HOST_POINTER_FOR_REGISTERED_MEM = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](91))
: Device can access host registered memory at the same virtual address as the CPU.

CAN_USE_STREAM_MEM_OPS_V1 = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](92))
: Deprecated along with the v1 MemOps API. ::cuStreamBatchMemOp and related APIs are supported.

CAN_USE_64_BIT_STREAM_MEM_OPS_V1 = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](93))
: Deprecated along with the v1 MemOps API. 64-bit operations are supported in ::cuStreamBatchMemOp and related APIs.

CAN_USE_STREAM_WAIT_VALUE_NOR_V1 = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](94))
: Deprecated along with the v1 MemOps API. ::CU_STREAM_WAIT_VALUE_NOR is supported.

COOPERATIVE_LAUNCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](95))
: Device supports launching cooperative kernels via ::cuLaunchCooperativeKernel.

COOPERATIVE_MULTI_DEVICE_LAUNCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](96))
: Deprecated; ::cuLaunchCooperativeKernelMultiDevice is deprecated.

MAX_SHARED_MEMORY_PER_BLOCK_OPTIN = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](97))
: Maximum opt-in shared memory per block.

CAN_FLUSH_REMOTE_WRITES = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](98))
: The ::CU_STREAM_WAIT_VALUE_FLUSH flag and the ::CU_STREAM_MEM_OP_FLUSH_REMOTE_WRITES MemOp are supported on the device. See ::CUDA_MEMOP for additional details.

HOST_REGISTER_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](99))
: Device supports host memory registration via ::cudaHostRegister.

PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](100))
: Device accesses pageable memory via the host's page tables.

DIRECT_MANAGED_MEM_ACCESS_FROM_HOST = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](101))
: The host can directly access managed memory on the device without migration.

VIRTUAL_ADDRESS_MANAGEMENT_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](102))
: Deprecated; use VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED instead.

VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](102))
: Device supports virtual memory management APIs like ::cuMemAddressReserve, ::cuMemCreate, ::cuMemMap and related APIs.

HANDLE_TYPE_POSIX_FILE_DESCRIPTOR_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](103))
: Device supports exporting memory to a POSIX file descriptor with ::cuMemExportToShareableHandle, if requested via ::cuMemCreate.

HANDLE_TYPE_WIN32_HANDLE_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](104))
: Device supports exporting memory to a Win32 NT handle with ::cuMemExportToShareableHandle, if requested via ::cuMemCreate.

HANDLE_TYPE_WIN32_KMT_HANDLE_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](105))
: Device supports exporting memory to a Win32 KMT handle with ::cuMemExportToShareableHandle, if requested via ::cuMemCreate.

MAX_BLOCKS_PER_MULTIPROCESSOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](106))
: Maximum number of blocks per multiprocessor.

GENERIC_COMPRESSION_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](107))
: Device supports compression of memory.

MAX_PERSISTING_L2_CACHE_SIZE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](108))
: Maximum L2 persisting lines capacity setting in bytes.

MAX_ACCESS_POLICY_WINDOW_SIZE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](109))
: Maximum value of CUaccessPolicyWindow::num_bytes.

GPU_DIRECT_RDMA_WITH_CUDA_VMM_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](110))
: Device supports specifying the GPUDirect RDMA flag with ::cuMemCreate.

RESERVED_SHARED_MEMORY_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](111))
: Shared memory reserved by CUDA driver per block in bytes.

SPARSE_CUDA_ARRAY_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](112))
: Device supports sparse CUDA arrays and sparse CUDA mipmapped arrays.

READ_ONLY_HOST_REGISTER_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](113))
: Device supports using the ::cuMemHostRegister flag ::CU_MEMHOSTREGISTER_READ_ONLY to register memory that must be mapped as read-only to the GPU.

TIMELINE_SEMAPHORE_INTEROP_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](114))
: External timeline semaphore interop is supported on the device.

MEMORY_POOLS_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](115))
: Device supports using the ::cuMemAllocAsync and ::cuMemPool family of APIs.

GPU_DIRECT_RDMA_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](116))
: Device supports GPUDirect RDMA APIs, like nvidia_p2p_get_pages (see https://docs.nvidia.com/cuda/gpudirect-rdma for more information).

GPU_DIRECT_RDMA_FLUSH_WRITES_OPTIONS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](117))
: The returned attribute shall be interpreted as a bitmask, where the individual bits are described by the ::CUflushGPUDirectRDMAWritesOptions enum.

GPU_DIRECT_RDMA_WRITES_ORDERING = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](118))
: GPUDirect RDMA writes to the device do not need to be flushed for consumers within the scope indicated by the returned attribute. See ::CUGPUDirectRDMAWritesOrdering for the numerical values returned here.

MEMPOOL_SUPPORTED_HANDLE_TYPES = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](119))
: Handle types supported with mempool-based IPC.

CLUSTER_LAUNCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](120))
: Indicates device supports cluster launch.

DEFERRED_MAPPING_CUDA_ARRAY_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](121))
: Device supports deferred mapping CUDA arrays and CUDA mipmapped arrays.

CAN_USE_64_BIT_STREAM_MEM_OPS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](122))
: 64-bit operations are supported in ::cuStreamBatchMemOp and related MemOp APIs.

CAN_USE_STREAM_WAIT_VALUE_NOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](123))
: ::CU_STREAM_WAIT_VALUE_NOR is supported by MemOp APIs.

DMA_BUF_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](124))
: Device supports buffer sharing with the dma_buf mechanism.

IPC_EVENT_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](125))
: Device supports IPC events.

MEM_SYNC_DOMAIN_COUNT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](126))
: Number of memory domains the device supports.

TENSOR_MAP_ACCESS_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](127))
: Device supports accessing memory using Tensor Map.

UNIFIED_FUNCTION_POINTERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](129))
: Device supports unified function pointers.

MULTICAST_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](132))
: Device supports switch multicast and reduction operations.
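These aliases mirror the CUDA driver's CUdevice_attribute codes and are typically consumed by querying a device context. As a hedged sketch only, this assumes a `get_attribute()` accessor on `gpu.host.DeviceContext` and the module path `gpu.host`; the exact accessor name and import path may differ between Mojo versions:

```mojo
from gpu.host import DeviceContext
from gpu.host import DeviceAttribute

def main():
    # Acquire the default GPU device context (assumed constructor).
    var ctx = DeviceContext()
    # Query two of the limits listed above; `get_attribute` is an
    # assumed accessor name, not confirmed by this page.
    var max_threads = ctx.get_attribute(DeviceAttribute.MAX_THREADS_PER_BLOCK)
    var warp = ctx.get_attribute(DeviceAttribute.WARP_SIZE)
    print("max threads per block:", max_threads)
    print("warp size:", warp)
```

A query like this is the usual way to size launch configurations (block dimensions, shared memory) portably instead of hard-coding device limits.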
Implemented traits
AnyType, Copyable, ExplicitlyCopyable, Movable, UnknownDestructibility
Methods
__init__
@implicit
__init__(value: SIMD[int32, 1]) -> Self
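Because the constructor is marked `@implicit`, an `Int32` raw attribute code converts to a `DeviceAttribute` without an explicit call. A minimal sketch (the specific code used here, 1, matches MAX_THREADS_PER_BLOCK in the alias list above; passing any other raw value assumes it is a valid CUDA attribute code):

```mojo
# Implicit conversion: an Int32 literal binds directly to a
# DeviceAttribute parameter or variable.
var attr: DeviceAttribute = Int32(1)  # same code as MAX_THREADS_PER_BLOCK
```

Prefer the named aliases over raw codes; the implicit constructor mainly exists so values returned from lower-level driver calls can be wrapped without ceremony.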