Mojo struct
Layout
struct Layout[shape_types: Variadic[CoordLike], stride_types: Variadic[CoordLike]]
A layout that supports mixed compile-time and runtime dimensions.
This layout provides a unified interface for layouts where some dimensions are known at compile time and others are determined at runtime. It enables more ergonomic layout definitions while maintaining performance.
A Layout's shape and strides must be non-negative.
Parameters
- shape_types (Variadic): The types for the shape dimensions.
- stride_types (Variadic): The types for the stride dimensions.
Implemented traits
AnyType,
Copyable,
ImplicitlyCopyable,
ImplicitlyDestructible,
Movable,
RegisterPassable,
TensorLayout,
TrivialRegisterPassable
comptime members
all_dims_known
comptime all_dims_known = Layout[shape_types, stride_types].shape_known and Layout[shape_types, stride_types].stride_known
Whether all shape and stride dimensions are known at compile time.
flat_rank
comptime flat_rank = Variadic.size[CoordLike](#kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(shape_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])))
The number of dimensions after flattening nested coordinates.
rank
comptime rank = Variadic.size[CoordLike](shape_types)
The number of dimensions in the layout.
shape_known
comptime shape_known = Coord[shape_types].all_dims_known
Whether all shape dimensions are known at compile time.
static_cosize
comptime static_cosize = #kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(shape_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=1, reducer=[PrevV: Variadic[Int], VA: Variadic[CoordLike], idx: __mlir_type.index] (((VA[idx].static_value - 1) * #kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(stride_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx]))[idx].static_value) + PrevV[0]))[0]
The compile-time size of the memory region spanned by the layout.
static_product
comptime static_product = Coord[shape_types].static_product
The compile-time product of all shape dimensions.
static_shape
comptime static_shape[i: Int] = #kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(shape_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx]))[i].static_value
Returns the compile-time value of the i-th flattened shape dimension.
Parameters
- i (Int): The dimension index.
static_stride
comptime static_stride[i: Int] = #kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(#kgen.variadic.reduce(stride_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx])), base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[idx].VariadicType if VA[idx].is_tuple else VA[idx]))[i].static_value
Returns the compile-time value of the i-th flattened stride dimension.
Parameters
- i (Int): The dimension index.
stride_known
comptime stride_known = Coord[stride_types].all_dims_known
Whether all stride dimensions are known at compile time.
Methods
__init__
__init__(shape: Coord[shape_types], stride: Coord[stride_types]) -> Self
Initialize a layout with shape and stride.
Args:
- shape (Coord): The shape coordinates.
- stride (Coord): The stride coordinates.
__call__
__call__[index_type: CoordLike, *, linear_idx_type: DType = DType.int64](self, index: index_type) -> Scalar[linear_idx_type]
Maps a logical coordinate to a linear memory index.
Supports hierarchical indexing where the coordinate structure can differ from the shape structure. For a layout with shape (4, (3, 2)):
- (1, (1, 1)): exact structure match, each element maps directly.
- (1, 1): rank-matching, the scalar 1 is decomposed within the nested (3, 2) sub-dimension.
- (1): scalar index decomposed across all dimensions.
Parameters:
- index_type (CoordLike): The coordinate type.
- linear_idx_type (DType): The data type for the returned linear index.
Args:
- index (index_type): The logical coordinates to map.
Returns:
Scalar: The linear memory index corresponding to the given coordinates.
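The exact-structure case of this mapping is a (possibly nested) dot product of coordinate and stride. As a rough Python sketch (not Mojo, and covering only the exact structure match, not the rank-matching or scalar-decomposition modes; the helper name and the row-major strides are assumptions for illustration):

```python
# Hypothetical sketch of the exact-structure crd2idx mapping behind
# __call__: the linear index is the dot product of the nested
# coordinate with the matching nested stride structure.
def crd2idx(coord, stride):
    """Map a logical coordinate to a linear index (exact structure match)."""
    if isinstance(coord, tuple):
        # Recurse pairwise into nested sub-coordinates and sub-strides.
        return sum(crd2idx(c, s) for c, s in zip(coord, stride))
    return coord * stride

# Shape (4, (3, 2)) with assumed row-major strides (6, (2, 1)):
# coordinate (1, (1, 1)) maps to 1*6 + 1*2 + 1*1 = 9.
print(crd2idx((1, (1, 1)), (6, (2, 1))))  # → 9
```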
idx2crd
idx2crd[*, out_dtype: DType = DType.int64](self, idx: Int) -> Coord[#kgen.variadic.tabulate(Layout[shape_types, stride_types], [idx: __mlir_type.index] RuntimeInt[out_dtype])]
Maps a linear memory index back to logical coordinates.
This is the inverse of __call__ (crd2idx). Given a linear index,
it computes the corresponding multi-dimensional coordinates using
the per-element formula: coord[i] = (idx // stride[i]) % shape[i].
Examples: For a layout with shape (3, 4) and row-major strides:
- layout.idx2crd(0) returns (0, 0).
- layout.idx2crd(5) returns (1, 1).
- layout.idx2crd(11) returns (2, 3).
Parameters:
- out_dtype (DType): The data type for the output coordinate values.
Args:
- idx (Int): The linear memory index to convert to coordinates.
Returns:
Coord: A Coord containing the logical coordinates corresponding to the linear index.
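The per-element formula above can be modeled directly in Python (a sketch for flat, non-nested layouts; the helper name is hypothetical):

```python
# Sketch of idx2crd for a flat layout, applying the documented
# per-element formula: coord[i] = (idx // stride[i]) % shape[i].
def idx2crd(idx, shape, stride):
    return tuple((idx // st) % sh for sh, st in zip(shape, stride))

shape, stride = (3, 4), (4, 1)  # 3x4 layout with row-major strides
print(idx2crd(0, shape, stride))   # → (0, 0)
print(idx2crd(5, shape, stride))   # → (1, 1)
print(idx2crd(11, shape, stride))  # → (2, 3)
```

This reproduces the three examples above for the shape-(3, 4) row-major layout.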
product
product(self) -> Int
Returns the total number of elements in the layout's domain.
For a layout with shape (m, n), this returns m * n, representing the total number of valid coordinates in the layout.
Returns:
Int: The total number of elements in the layout.
size
size(self) -> Int
Returns the total number of elements in the layout's domain.
Alias for product(). Compatible with the legacy Layout API.
Returns:
Int: The total number of elements in the layout.
cosize
cosize[linear_idx_type: DType = DType.int64](self) -> Scalar[linear_idx_type]
Returns the size of the memory region spanned by the layout.
For a layout with shape (m, n) and stride (r, s), this returns
(m-1)*r + (n-1)*s + 1, representing the memory footprint.
Parameters:
- linear_idx_type (DType): The data type for the returned size value.
Returns:
Scalar: The size of the memory region required by the layout.
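The footprint formula generalizes to any rank as "one past the largest reachable linear index". A Python sketch of the flat case (helper name and the padded-stride example are assumptions for illustration):

```python
# Sketch of cosize for a flat layout: sum of (dim-1)*stride over all
# dimensions, plus one, i.e. the largest reachable index plus one.
def cosize(shape, stride):
    return sum((sh - 1) * st for sh, st in zip(shape, stride)) + 1

print(cosize((3, 4), (4, 1)))  # dense row-major 3x4 → 12
print(cosize((3, 4), (8, 1)))  # hypothetical padded rows (stride 8) → 20
```

The second call shows why cosize can exceed product(): a padded layout spans more memory than it has elements.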
to_layout
to_layout(self) -> Layout
Converts this mixed layout to a legacy Layout using IntTuple.
Returns:
Layout: A legacy Layout with the same shape and stride.
reverse
reverse(self) -> Layout[#kgen.variadic.reduce(shape_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[(add (mul idx, -1), len(VA), -1)])), #kgen.variadic.reduce(stride_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[(add (mul idx, -1), len(VA), -1)]))]
Reverse the order of dimensions in the layout.
Turns row-major into column-major ordering where the stride-1 dimension comes first, enabling coalesced scalar iteration.
Returns:
Layout: A new Layout with shape and stride Coords reversed.
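The effect on a flat shape/stride pair can be sketched in Python (helper name is hypothetical):

```python
# Sketch of reverse: flip the order of both shape and stride, so the
# stride-1 dimension moves to the front of the layout.
def reverse(shape, stride):
    return tuple(reversed(shape)), tuple(reversed(stride))

# Row-major 3x4 has its stride-1 dimension last; after reversing,
# iterating the leading dimension walks contiguous memory.
print(reverse((3, 4), (4, 1)))  # → ((4, 3), (1, 4))
```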
transpose
transpose(self) -> Layout[#kgen.variadic.reduce(shape_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[(add (mul idx, -1), len(VA), -1)])), #kgen.variadic.reduce(stride_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, VA[(add (mul idx, -1), len(VA), -1)]))]
Transposes the layout by reversing the order of dimensions.
For an n-dimensional layout, this reverses the order of both shapes and strides. For 2D layouts, this swaps rows and columns, converting row-major to column-major and vice versa.
Example:
from layout.tile_layout import row_major
var layout = row_major[3, 4]() # shape (3,4), stride (4,1)
var transposed = layout.transpose() # shape (4,3), stride (1,4)
Returns:
Layout: A new Layout with transposed dimensions.
make_dynamic
make_dynamic[dtype: DType](self) -> Layout[#kgen.variadic.reduce(shape_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, RuntimeInt[dtype])), #kgen.variadic.reduce(stride_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, RuntimeInt[dtype]))]
Convert all elements in shape and stride to RuntimeInt[dtype].
Examples:
from layout.tile_layout import row_major
var layout = row_major[3, 4]() # All compile-time
var dynamic = layout.make_dynamic[DType.int64]()
# dynamic has RuntimeInt[DType.int64] for all dimensions
Parameters:
- dtype (DType): The data type for the resulting RuntimeInt values.
Returns:
Layout: A new Layout where all elements in shape and stride are converted to RuntimeInt[dtype].
shape
shape[i: Int](self) -> shape_types[i]
Returns the i-th shape dimension.
Parameters:
- i (Int): The dimension index.
Returns:
shape_types[i]: The shape value for dimension i.
stride
stride[i: Int](self) -> stride_types[i]
Returns the i-th stride dimension.
Parameters:
- i (Int): The dimension index.
Returns:
stride_types[i]: The stride value for dimension i.
shape_coord
shape_coord(self) -> Coord[shape_types]
Returns the full shape as a Coord.
Returns:
Coord: A Coord containing all shape dimensions.
stride_coord
stride_coord(self) -> Coord[stride_types]
Returns the full stride as a Coord.
Returns:
Coord: A Coord containing all stride dimensions.