Python class
TensorLayout
class max.experimental.sharding.TensorLayout(dtype, shape, mapping)
Bases: DeviceMapping
Metadata snapshot of a distributed tensor for rule evaluation.
Bundles the tensor's dtype, shape, and distribution mapping. The mapping
stays abstract (DeviceMapping) so rules work with any concrete
mapping type, such as PlacementMapping or NamedMapping.
The shape is a Shape (list[Dim]), supporting both
static and symbolic dimensions for graph compilation compatibility.
This class implements DeviceMapping, so sharding rules can return input TensorLayouts directly.
Parameters:

- dtype (DType)
- shape (Shape)
- mapping (DeviceMapping)
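As an illustration of what such a metadata snapshot looks like, here is a minimal self-contained sketch. This is not the MAX implementation: `TensorLayoutSketch` is a hypothetical stand-in, and plain strings, tuples, and dicts are used in place of the real `DType`, `Shape`, and `DeviceMapping` types.

```python
from dataclasses import dataclass

# Illustrative sketch only; not the real max.experimental.sharding types.
@dataclass(frozen=True)
class TensorLayoutSketch:
    dtype: str    # element type, e.g. "float32" (stand-in for DType)
    shape: tuple  # global shape; may mix ints and symbolic dims (stand-in for Shape)
    mapping: dict # tensor dim -> mesh axis, e.g. {0: "dp"} (stand-in for DeviceMapping)

    @property
    def rank(self) -> int:
        # Number of tensor dimensions: the length of the global shape.
        return len(self.shape)

layout = TensorLayoutSketch("float32", (8, "seq", 64), {0: "dp"})
print(layout.rank)  # 3
```

A frozen dataclass mirrors the "snapshot" role described above: the layout records dtype, shape, and mapping at a point in time and is not mutated by rule evaluation.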
dtype
dtype: DType
The element data type of the tensor.
is_fully_replicated
property is_fully_replicated: bool
Whether every device holds a complete copy of the tensor.
Returns True if no dimension is sharded and there are no
pending reductions.
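The condition can be illustrated with a hypothetical per-mesh-axis placement list. The placement encodings below ("replicate", ("shard", dim), ("partial", op)) are assumptions for illustration, not the MAX API.

```python
# Hypothetical placements, one per mesh axis:
#   "replicate"        -> this axis holds a full copy
#   ("shard", dim)     -> tensor dim `dim` is split over this axis
#   ("partial", op)    -> a pending reduction (e.g. an unsummed partial result)
def is_fully_replicated(placements) -> bool:
    # True only when no axis shards a dimension and no reduction is pending,
    # i.e. every device holds a complete copy of the tensor.
    return all(p == "replicate" for p in placements)

print(is_fully_replicated(["replicate", "replicate"]))   # True
print(is_fully_replicated([("shard", 0), "replicate"]))  # False
print(is_fully_replicated([("partial", "sum")]))         # False
```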
is_fully_resolved
property is_fully_resolved: bool
Whether this spec can be used in eager dispatch.
Returns False if the spec contains compiler-only annotations
(e.g. priorities) that cannot be resolved without a compiler.
mapping
mapping: DeviceMapping
The distribution mapping over the device mesh.
mesh
property mesh: DeviceMesh
The device mesh derived from the mapping.
rank
property rank: int
The number of tensor dimensions (the length of shape).
shape
shape: Shape
The global shape of the tensor.
to_named_sharding()
to_named_sharding(tensor_rank)
Converts to a tensor-dim-indexed spec for compiler lowering.

Parameters:

- tensor_rank (int)

Return type:
to_placements()
to_placements()
Converts to mesh-axis-indexed placements for eager dispatch.
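The difference between the two views can be sketched with plain dictionaries (an illustration, not the MAX types): a tensor-dim-indexed spec maps each tensor dimension to the mesh axis it is split over, while mesh-axis-indexed placements list, for each mesh axis in order, what that axis does.

```python
# Sketch: convert a tensor-dim-indexed spec ({tensor_dim: mesh_axis})
# into mesh-axis-indexed placements, one entry per mesh axis.
# The dict/list encodings here are illustrative assumptions.
def to_placements(named: dict, mesh_axes: list):
    axis_to_dim = {axis: dim for dim, axis in named.items()}
    return [("shard", axis_to_dim[a]) if a in axis_to_dim else "replicate"
            for a in mesh_axes]

# Tensor dim 0 is sharded over mesh axis "dp"; axis "tp" replicates.
print(to_placements({0: "dp"}, ["dp", "tp"]))
# [('shard', 0), 'replicate']
```

Eager dispatch wants the mesh-axis-indexed form because each device's slice is determined axis by axis over the mesh, whereas compiler lowering works from the tensor-dim-indexed spec.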