DeviceMapping
class max.experimental.sharding.DeviceMapping
Bases: ABC
Abstract base for all sharding specifications.
A DeviceMapping pairs a DeviceMesh with a description of how
tensor data is distributed across that mesh. Two concrete implementations
exist:
- PlacementMapping: mesh-axis-indexed, for eager per-op dispatch.
- NamedMapping: tensor-dim-indexed, for future full-graph sharding search (for example, a Python-level transform over an op trace).
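To make the contract concrete, here is a minimal sketch of an abstract base with this shape. The `DeviceMesh` stand-in and all member signatures are illustrative assumptions, not the actual `max.experimental.sharding` API:

```python
from abc import ABC, abstractmethod


class DeviceMesh:
    """Hypothetical stand-in for the real DeviceMesh type."""

    def __init__(self, shape):
        self.shape = tuple(shape)  # one entry per mesh axis


class DeviceMapping(ABC):
    """Sketch of the abstract contract described above."""

    @property
    @abstractmethod
    def mesh(self) -> DeviceMesh:
        """The device mesh this sharding is defined over."""

    @property
    @abstractmethod
    def is_fully_replicated(self) -> bool:
        """Whether every device holds a complete copy of the tensor."""

    @property
    @abstractmethod
    def is_fully_resolved(self) -> bool:
        """Whether this spec can be used in eager dispatch."""

    @abstractmethod
    def to_named_sharding(self, tensor_rank: int):
        """Convert to a tensor-dim-indexed spec."""

    @abstractmethod
    def to_placements(self):
        """Convert to mesh-axis-indexed placements."""
```

Because every member is abstract, the base cannot be instantiated directly; only concrete subclasses such as the two listed above can be.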
is_fully_replicated
abstract property is_fully_replicated: bool
Whether every device holds a complete copy of the tensor.
Returns True if no dimension is sharded and there are no
pending reductions.
is_fully_resolved
abstract property is_fully_resolved: bool
Whether this spec can be used in eager dispatch.
Returns False if the spec contains compiler-only annotations
(for example, priorities) that cannot be resolved without a compiler.
mesh
abstract property mesh: DeviceMesh
The device mesh this sharding is defined over.
to_named_sharding()
abstract to_named_sharding(tensor_rank)
Converts to tensor-dim-indexed spec for compiler lowering.
Parameters:
tensor_rank (int) – The number of dimensions in the tensor. Required because the spec must have one entry per tensor dim.
Raises:
ConversionError – If the spec contains custom placements that have no NamedMapping equivalent.
Return type:
NamedMapping
to_placements()
abstract to_placements()
Converts to mesh-axis-indexed placements for eager dispatch.
Returns one Placement per mesh axis.
Raises:
ConversionError – If the spec contains features that cannot be represented as placements (for example, priorities or custom placement types without a standard equivalent).
Return type:
list[Placement]
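The two conversions move between the same information in different indexings: per mesh axis versus per tensor dim. A minimal sketch of the mesh-axis-to-tensor-dim direction, with a hypothetical `ConversionError` and tuple-based placements standing in for the real types:

```python
class ConversionError(Exception):
    """Hypothetical stand-in for the error described above."""


def to_named_sharding(axis_placements, tensor_rank):
    """Convert mesh-axis-indexed placements to a tensor-dim-indexed view.

    axis_placements: one entry per mesh axis, either ("shard", dim)
    or ("replicate",). Returns one list per tensor dim naming the
    mesh axes that shard it.
    """
    named = [[] for _ in range(tensor_rank)]
    for axis, placement in enumerate(axis_placements):
        if placement[0] == "shard":
            named[placement[1]].append(axis)
        elif placement[0] != "replicate":
            # Custom placements with no tensor-dim-indexed equivalent
            # cannot be converted.
            raise ConversionError(f"no NamedMapping equivalent for {placement[0]!r}")
    return named
```

For example, a rank-2 tensor sharded on dim 0 by mesh axis 0 and replicated on mesh axis 1 converts to `[[0], []]`: dim 0 is split by mesh axis 0, dim 1 by no axis. This is why `tensor_rank` is required — the output needs one entry per tensor dim even for dims no axis shards.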