Mojo struct

DeviceAttribute

@register_passable(trivial) struct DeviceAttribute

Aliases

  • MAX_THREADS_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](1)): Maximum number of threads per block
  • MAX_BLOCK_DIM_X = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](2)): Maximum block dimension X
  • MAX_BLOCK_DIM_Y = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](3)): Maximum block dimension Y
  • MAX_BLOCK_DIM_Z = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](4)): Maximum block dimension Z
  • MAX_GRID_DIM_X = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](5)): Maximum grid dimension X
  • MAX_GRID_DIM_Y = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](6)): Maximum grid dimension Y
  • MAX_GRID_DIM_Z = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](7)): Maximum grid dimension Z
  • MAX_SHARED_MEMORY_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](8)): Maximum shared memory available per block in bytes
  • SHARED_MEMORY_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](8)): Deprecated, use alias MAX_SHARED_MEMORY_PER_BLOCK
  • TOTAL_CONSTANT_MEMORY = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](9)): Memory available on device for constant variables in a CUDA C kernel in bytes
  • WARP_SIZE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](10)): Warp size in threads
  • MAX_PITCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](11)): Maximum pitch in bytes allowed by memory copies
  • MAX_REGISTERS_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](12)): Maximum number of 32-bit registers available per block
  • REGISTERS_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](12)): Deprecated, use alias MAX_REGISTERS_PER_BLOCK
  • CLOCK_RATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](13)): Typical clock frequency in kilohertz
  • TEXTURE_ALIGNMENT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](14)): Alignment requirement for textures
  • GPU_OVERLAP = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](15)): Device can possibly copy memory and execute a kernel concurrently. Deprecated; use alias ASYNC_ENGINE_COUNT instead
  • MULTIPROCESSOR_COUNT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](16)): Number of multiprocessors on device
  • KERNEL_EXEC_TIMEOUT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](17)): Specifies whether there is a run time limit on kernels
  • INTEGRATED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](18)): Device is integrated with host memory
  • CAN_MAP_HOST_MEMORY = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](19)): Device can map host memory into CUDA address space
  • COMPUTE_MODE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](20)): Compute mode (see ::CUcomputemode for details)
  • MAXIMUM_TEXTURE1D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](21)): Maximum 1D texture width
  • MAXIMUM_TEXTURE2D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](22)): Maximum 2D texture width
  • MAXIMUM_TEXTURE2D_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](23)): Maximum 2D texture height
  • MAXIMUM_TEXTURE3D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](24)): Maximum 3D texture width
  • MAXIMUM_TEXTURE3D_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](25)): Maximum 3D texture height
  • MAXIMUM_TEXTURE3D_DEPTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](26)): Maximum 3D texture depth
  • MAXIMUM_TEXTURE2D_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](27)): Maximum 2D layered texture width
  • MAXIMUM_TEXTURE2D_LAYERED_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](28)): Maximum 2D layered texture height
  • MAXIMUM_TEXTURE2D_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](29)): Maximum layers in a 2D layered texture
  • MAXIMUM_TEXTURE2D_ARRAY_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](27)): Deprecated, use alias MAXIMUM_TEXTURE2D_LAYERED_WIDTH
  • MAXIMUM_TEXTURE2D_ARRAY_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](28)): Deprecated, use alias MAXIMUM_TEXTURE2D_LAYERED_HEIGHT
  • MAXIMUM_TEXTURE2D_ARRAY_NUMSLICES = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](29)): Deprecated, use alias MAXIMUM_TEXTURE2D_LAYERED_LAYERS
  • SURFACE_ALIGNMENT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](30)): Alignment requirement for surfaces
  • CONCURRENT_KERNELS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](31)): Device can possibly execute multiple kernels concurrently
  • ECC_ENABLED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](32)): Device has ECC support enabled
  • PCI_BUS_ID = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](33)): PCI bus ID of the device
  • PCI_DEVICE_ID = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](34)): PCI device ID of the device
  • TCC_DRIVER = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](35)): Device is using TCC driver model
  • MEMORY_CLOCK_RATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](36)): Peak memory clock frequency in kilohertz
  • GLOBAL_MEMORY_BUS_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](37)): Global memory bus width in bits
  • L2_CACHE_SIZE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](38)): Size of L2 cache in bytes
  • MAX_THREADS_PER_MULTIPROCESSOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](39)): Maximum resident threads per multiprocessor
  • ASYNC_ENGINE_COUNT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](40)): Number of asynchronous engines
  • UNIFIED_ADDRESSING = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](41)): Device shares a unified address space with the host
  • MAXIMUM_TEXTURE1D_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](42)): Maximum 1D layered texture width
  • MAXIMUM_TEXTURE1D_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](43)): Maximum layers in a 1D layered texture
  • CAN_TEX2D_GATHER = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](44)): Deprecated, do not use.
  • MAXIMUM_TEXTURE2D_GATHER_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](45)): Maximum 2D texture width if CUDA_ARRAY3D_TEXTURE_GATHER is set
  • MAXIMUM_TEXTURE2D_GATHER_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](46)): Maximum 2D texture height if CUDA_ARRAY3D_TEXTURE_GATHER is set
  • MAXIMUM_TEXTURE3D_WIDTH_ALTERNATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](47)): Alternate maximum 3D texture width
  • MAXIMUM_TEXTURE3D_HEIGHT_ALTERNATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](48)): Alternate maximum 3D texture height
  • MAXIMUM_TEXTURE3D_DEPTH_ALTERNATE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](49)): Alternate maximum 3D texture depth
  • PCI_DOMAIN_ID = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](50)): PCI domain ID of the device
  • TEXTURE_PITCH_ALIGNMENT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](51)): Pitch alignment requirement for textures
  • MAXIMUM_TEXTURECUBEMAP_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](52)): Maximum cubemap texture width/height
  • MAXIMUM_TEXTURECUBEMAP_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](53)): Maximum cubemap layered texture width/height
  • MAXIMUM_TEXTURECUBEMAP_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](54)): Maximum layers in a cubemap layered texture
  • MAXIMUM_SURFACE1D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](55)): Maximum 1D surface width
  • MAXIMUM_SURFACE2D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](56)): Maximum 2D surface width
  • MAXIMUM_SURFACE2D_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](57)): Maximum 2D surface height
  • MAXIMUM_SURFACE3D_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](58)): Maximum 3D surface width
  • MAXIMUM_SURFACE3D_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](59)): Maximum 3D surface height
  • MAXIMUM_SURFACE3D_DEPTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](60)): Maximum 3D surface depth
  • MAXIMUM_SURFACE1D_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](61)): Maximum 1D layered surface width
  • MAXIMUM_SURFACE1D_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](62)): Maximum layers in a 1D layered surface
  • MAXIMUM_SURFACE2D_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](63)): Maximum 2D layered surface width
  • MAXIMUM_SURFACE2D_LAYERED_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](64)): Maximum 2D layered surface height
  • MAXIMUM_SURFACE2D_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](65)): Maximum layers in a 2D layered surface
  • MAXIMUM_SURFACECUBEMAP_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](66)): Maximum cubemap surface width
  • MAXIMUM_SURFACECUBEMAP_LAYERED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](67)): Maximum cubemap layered surface width
  • MAXIMUM_SURFACECUBEMAP_LAYERED_LAYERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](68)): Maximum layers in a cubemap layered surface
  • MAXIMUM_TEXTURE1D_LINEAR_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](69)): Deprecated, do not use. Use cudaDeviceGetTexture1DLinearMaxWidth() or cuDeviceGetTexture1DLinearMaxWidth() instead.
  • MAXIMUM_TEXTURE2D_LINEAR_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](70)): Maximum 2D linear texture width
  • MAXIMUM_TEXTURE2D_LINEAR_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](71)): Maximum 2D linear texture height
  • MAXIMUM_TEXTURE2D_LINEAR_PITCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](72)): Maximum 2D linear texture pitch in bytes
  • MAXIMUM_TEXTURE2D_MIPMAPPED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](73)): Maximum mipmapped 2D texture width
  • MAXIMUM_TEXTURE2D_MIPMAPPED_HEIGHT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](74)): Maximum mipmapped 2D texture height
  • COMPUTE_CAPABILITY_MAJOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](75)): Major compute capability version number
  • COMPUTE_CAPABILITY_MINOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](76)): Minor compute capability version number
  • MAXIMUM_TEXTURE1D_MIPMAPPED_WIDTH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](77)): Maximum mipmapped 1D texture width
  • STREAM_PRIORITIES_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](78)): Device supports stream priorities
  • GLOBAL_L1_CACHE_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](79)): Device supports caching globals in L1
  • LOCAL_L1_CACHE_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](80)): Device supports caching locals in L1
  • MAX_SHARED_MEMORY_PER_MULTIPROCESSOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](81)): Maximum shared memory available per multiprocessor in bytes
  • MAX_REGISTERS_PER_MULTIPROCESSOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](82)): Maximum number of 32-bit registers available per multiprocessor
  • MANAGED_MEMORY = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](83)): Device can allocate managed memory on this system
  • MULTI_GPU_BOARD = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](84)): Device is on a multi-GPU board
  • MULTI_GPU_BOARD_GROUP_ID = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](85)): Unique id for a group of devices on the same multi-GPU board
  • HOST_NATIVE_ATOMIC_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](86)): Link between the device and the host supports native atomic operations (this is a placeholder attribute, and is not supported on any current hardware).
  • SINGLE_TO_DOUBLE_PRECISION_PERF_RATIO = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](87)): Ratio of single precision performance (in floating-point operations per second) to double precision performance.
  • PAGEABLE_MEMORY_ACCESS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](88)): Device supports coherently accessing pageable memory without calling cudaHostRegister on it.
  • CONCURRENT_MANAGED_ACCESS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](89)): Device can coherently access managed memory concurrently with the CPU
  • COMPUTE_PREEMPTION_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](90)): Device supports compute preemption.
  • CAN_USE_HOST_POINTER_FOR_REGISTERED_MEM = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](91)): Device can access host registered memory at the same virtual address as the CPU
  • CAN_USE_STREAM_MEM_OPS_V1 = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](92)): Deprecated along with the v1 MemOps API. ::cuStreamBatchMemOp and related APIs are supported.
  • CAN_USE_64_BIT_STREAM_MEM_OPS_V1 = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](93)): Deprecated along with the v1 MemOps API. 64-bit operations are supported in ::cuStreamBatchMemOp and related APIs.
  • CAN_USE_STREAM_WAIT_VALUE_NOR_V1 = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](94)): Deprecated along with the v1 MemOps API. ::CU_STREAM_WAIT_VALUE_NOR is supported.
  • COOPERATIVE_LAUNCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](95)): Device supports launching cooperative kernels via ::cuLaunchCooperativeKernel
  • COOPERATIVE_MULTI_DEVICE_LAUNCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](96)): Deprecated; ::cuLaunchCooperativeKernelMultiDevice is deprecated.
  • MAX_SHARED_MEMORY_PER_BLOCK_OPTIN = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](97)): Maximum opt-in shared memory per block
  • CAN_FLUSH_REMOTE_WRITES = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](98)): The ::CU_STREAM_WAIT_VALUE_FLUSH flag and the ::CU_STREAM_MEM_OP_FLUSH_REMOTE_WRITES MemOp are supported on the device. See CUDA_MEMOP for additional details.
  • HOST_REGISTER_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](99)): Device supports host memory registration via ::cudaHostRegister.
  • PAGEABLE_MEMORY_ACCESS_USES_HOST_PAGE_TABLES = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](100)): Device accesses pageable memory via the host's page tables.
  • DIRECT_MANAGED_MEM_ACCESS_FROM_HOST = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](101)): The host can directly access managed memory on the device without migration.
  • VIRTUAL_ADDRESS_MANAGEMENT_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](102)): Deprecated, use alias VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED
  • VIRTUAL_MEMORY_MANAGEMENT_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](102)): Device supports virtual memory management APIs like ::cuMemAddressReserve, ::cuMemCreate, ::cuMemMap and related APIs
  • HANDLE_TYPE_POSIX_FILE_DESCRIPTOR_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](103)): Device supports exporting memory to a posix file descriptor with ::cuMemExportToShareableHandle, if requested via ::cuMemCreate
  • HANDLE_TYPE_WIN32_HANDLE_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](104)): Device supports exporting memory to a Win32 NT handle with ::cuMemExportToShareableHandle, if requested via ::cuMemCreate
  • HANDLE_TYPE_WIN32_KMT_HANDLE_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](105)): Device supports exporting memory to a Win32 KMT handle with ::cuMemExportToShareableHandle, if requested via ::cuMemCreate
  • MAX_BLOCKS_PER_MULTIPROCESSOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](106)): Maximum number of blocks per multiprocessor
  • GENERIC_COMPRESSION_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](107)): Device supports compression of memory
  • MAX_PERSISTING_L2_CACHE_SIZE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](108)): Maximum L2 persisting lines capacity setting in bytes.
  • MAX_ACCESS_POLICY_WINDOW_SIZE = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](109)): Maximum value of CUaccessPolicyWindow::num_bytes.
  • GPU_DIRECT_RDMA_WITH_CUDA_VMM_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](110)): Device supports specifying the GPUDirect RDMA flag with ::cuMemCreate
  • RESERVED_SHARED_MEMORY_PER_BLOCK = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](111)): Shared memory reserved by CUDA driver per block in bytes
  • SPARSE_CUDA_ARRAY_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](112)): Device supports sparse CUDA arrays and sparse CUDA mipmapped arrays
  • READ_ONLY_HOST_REGISTER_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](113)): Device supports using the ::cuMemHostRegister flag ::CU_MEMHOSTREGISTER_READ_ONLY to register memory that must be mapped as read-only to the GPU
  • TIMELINE_SEMAPHORE_INTEROP_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](114)): External timeline semaphore interop is supported on the device
  • MEMORY_POOLS_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](115)): Device supports using the ::cuMemAllocAsync and ::cuMemPool family of APIs
  • GPU_DIRECT_RDMA_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](116)): Device supports GPUDirect RDMA APIs, like nvidia_p2p_get_pages (see https://docs.nvidia.com/cuda/gpudirect-rdma for more information)
  • GPU_DIRECT_RDMA_FLUSH_WRITES_OPTIONS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](117)): The returned attribute shall be interpreted as a bitmask, where the individual bits are described by the ::CUflushGPUDirectRDMAWritesOptions enum
  • GPU_DIRECT_RDMA_WRITES_ORDERING = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](118)): GPUDirect RDMA writes to the device do not need to be flushed for consumers within the scope indicated by the returned attribute. See ::CUGPUDirectRDMAWritesOrdering for the numerical values returned here.
  • MEMPOOL_SUPPORTED_HANDLE_TYPES = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](119)): Handle types supported with mempool based IPC
  • CLUSTER_LAUNCH = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](120)): Indicates device supports cluster launch
  • DEFERRED_MAPPING_CUDA_ARRAY_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](121)): Device supports deferred mapping CUDA arrays and CUDA mipmapped arrays
  • CAN_USE_64_BIT_STREAM_MEM_OPS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](122)): 64-bit operations are supported in ::cuStreamBatchMemOp and related MemOp APIs.
  • CAN_USE_STREAM_WAIT_VALUE_NOR = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](123)): ::CU_STREAM_WAIT_VALUE_NOR is supported by MemOp APIs.
  • DMA_BUF_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](124)): Device supports buffer sharing with dma_buf mechanism.
  • IPC_EVENT_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](125)): Device supports IPC Events.
  • MEM_SYNC_DOMAIN_COUNT = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](126)): Number of memory domains the device supports.
  • TENSOR_MAP_ACCESS_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](127)): Device supports accessing memory using Tensor Map.
  • UNIFIED_FUNCTION_POINTERS = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](129)): Device supports unified function pointers.
  • MULTICAST_SUPPORTED = DeviceAttribute(__init__[__mlir_type.!kgen.int_literal](132)): Device supports switch multicast and reduction operations.
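
These values are normally passed to a device query API rather than used on their own. The snippet below is a minimal sketch of querying a few limits; it assumes a `DeviceContext.get_attribute()` method in the `gpu.host` package that takes a `DeviceAttribute` and returns the attribute's value as an `Int`. Check the `DeviceContext` documentation for the exact signature.

```mojo
from gpu.host import DeviceContext, DeviceAttribute

def main():
    # Sketch: open the default accelerator and query a few of its limits.
    # Assumes DeviceContext.get_attribute(DeviceAttribute) -> Int is available.
    var ctx = DeviceContext()
    var max_threads = ctx.get_attribute(DeviceAttribute.MAX_THREADS_PER_BLOCK)
    var sm_count = ctx.get_attribute(DeviceAttribute.MULTIPROCESSOR_COUNT)
    var warp_size = ctx.get_attribute(DeviceAttribute.WARP_SIZE)
    print("max threads per block:", max_threads)
    print("multiprocessors:", sm_count)
    print("warp size:", warp_size)
```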

Implemented traits

AnyType, Copyable, ExplicitlyCopyable, Movable, UnknownDestructibility

Methods

__init__

@implicit __init__(value: SIMD[int32, 1]) -> Self
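
Because this constructor is marked `@implicit`, an `Int32` attribute code converts to a `DeviceAttribute` at a call site. A minimal sketch, reusing the assumed `get_attribute()` call from the example above; the raw value 16 corresponds to `MULTIPROCESSOR_COUNT` in the alias list.

```mojo
from gpu.host import DeviceContext, DeviceAttribute

def main():
    var ctx = DeviceContext()
    # The @implicit constructor lets a raw Int32 attribute code convert to a
    # DeviceAttribute; 16 is the code for MULTIPROCESSOR_COUNT (see aliases above).
    var attr: DeviceAttribute = Int32(16)
    # Assumes DeviceContext.get_attribute(DeviceAttribute) -> Int is available.
    print("multiprocessors:", ctx.get_attribute(attr))
```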