Mojo function
upcast
```mojo
upcast[LayoutType: TensorLayout, factor: Int, //](layout: LayoutType) -> Layout[#kgen.variadic.reduce(LayoutType._shape_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, ComptimeInt[_comptime_shape_div(LayoutType._shape_types[idx].static_value, _comptime_shape_div(factor, LayoutType._stride_types[idx].static_value))] if LayoutType._shape_types[idx].is_static_value and LayoutType._stride_types[idx].is_static_value else RuntimeInt[LayoutType._shape_types[idx].DTYPE])), #kgen.variadic.reduce(LayoutType._stride_types, base=, reducer=[PrevV: Variadic[CoordLike], VA: Variadic[CoordLike], idx: __mlir_type.index] #kgen.variadic.concat(PrevV, ComptimeInt[_comptime_shape_div(LayoutType._stride_types[idx].static_value, factor)] if LayoutType._stride_types[idx].is_static_value else RuntimeInt[LayoutType._stride_types[idx].DTYPE]))]
```
Fuses consecutive elements in a layout to create a coarser layout.
This is useful for converting between different data type granularities.
For example, if a layout describes byte-level offsets and you want to
treat every 2 bytes as one bf16 element, use `upcast[factor=2](layout)`.
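As a rough illustration of the granularity change (plain Python, not the Mojo API): pairing bytes into bf16 elements halves the extent and rescales the offsets.

```python
# Hypothetical illustration: byte-level offsets vs. bf16-element offsets.
# A row of 8 bf16 values stored contiguously occupies 16 bytes.
byte_offsets = [i * 2 for i in range(8)]       # byte offset of each bf16 element
elem_offsets = [b // 2 for b in byte_offsets]  # same positions in element units
print(byte_offsets)  # [0, 2, 4, 6, 8, 10, 12, 14]
print(elem_offsets)  # [0, 1, 2, 3, 4, 5, 6, 7]
```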
For each dimension with shape `s` and stride `d`:

```
new_stride = shape_div(d, factor)
new_shape  = shape_div(s, shape_div(factor, d))
```

where `shape_div(a, b)` returns `a // b` if `a` is divisible by
`b`, otherwise `signum(a * b)` (i.e., 1 for positive values).
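The per-dimension arithmetic can be modeled in plain Python. This is a sketch of the documented semantics, not the Mojo implementation; the helper names `signum`, `shape_div`, and `upcast_dim` are illustrative.

```python
def signum(x: int) -> int:
    # 1 for positive, -1 for negative, 0 for zero
    return (x > 0) - (x < 0)

def shape_div(a: int, b: int) -> int:
    # a // b when evenly divisible, else signum(a * b)
    return a // b if b != 0 and a % b == 0 else signum(a * b)

def upcast_dim(shape: int, stride: int, factor: int) -> tuple[int, int]:
    new_stride = shape_div(stride, factor)
    new_shape = shape_div(shape, shape_div(factor, stride))
    return new_shape, new_stride

# 4x8 row-major layout, strides (8, 1), upcast by factor 2
print([upcast_dim(s, d, 2) for s, d in [(4, 8), (8, 1)]])
# -> [(4, 4), (4, 1)]: shapes (4, 4), strides (4, 1)
```

Note how `shape_div` clamps to 1 instead of failing when the divisor exceeds the dividend, so a unit-stride dimension keeps stride 1 after upcasting.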
Example:

```mojo
from layout.tile_layout import row_major, upcast

# 4x8 row-major, strides (8, 1)
var layout = row_major[4, 8]()

# Upcast by 2: treat pairs as single elements
var coarser = upcast[factor=2](layout)
# Result: shape (4, 4), strides (4, 1)
```

Parameters:

- LayoutType (`TensorLayout`): The type of the input layout.
- factor (`Int`): The number of consecutive elements to fuse into one.

Args:

- layout (`LayoutType`): The layout to upcast.

Returns:

`Layout`: A new layout with adjusted shape and stride for the coarser
granularity.