Mojo function
lt_to_tt
```mojo
lt_to_tt[
    dtype: DType,
    lt_layout: Layout, //,
    ResultLayout: TensorLayout = LTToTTLayout[lt_layout],
](lt: LayoutTensor[dtype, lt_layout, lt.origin, address_space=lt.address_space, element_layout=lt.element_layout, layout_int_type=lt.layout_int_type, linear_idx_type=lt.linear_idx_type, masked=lt.masked, alignment=lt.alignment]) -> TileTensor[dtype, Layout[ResultLayout._shape_types, ResultLayout._stride_types], lt.origin]
```
Convert a LayoutTensor to a TileTensor.
Static dimensions (known at compile time) are preserved as ComptimeInt. Dynamic dimensions (UNKNOWN_VALUE) become RuntimeInt, filled from the LayoutTensor's runtime layout. Works for any flat rank.
By default, the TileTensor layout is derived automatically from the LayoutTensor's legacy layout. Pass an explicit `ResultLayout` to override which dimensions are static versus runtime.
Parameters:
- `dtype` (`DType`): Element type of the tensor.
- `lt_layout` (`Layout`): The legacy `Layout` of the `LayoutTensor`.
- `ResultLayout` (`TensorLayout`): The target `TileTensor` layout type. Defaults to `LTToTTLayout[lt_layout]`.
Args:
- `lt` (`LayoutTensor`): The `LayoutTensor` to convert.
Returns:
`TileTensor`: A `TileTensor` viewing the same data through an equivalent layout.
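A minimal usage sketch. The import path, the `Layout.row_major` factory, and the `LayoutTensor` constructor shown here are assumptions and may differ across versions of the layout package:

```mojo
from layout import Layout, LayoutTensor

fn demo():
    # A fully static 4x8 row-major layout: both dimensions are known
    # at compile time, so they are preserved as ComptimeInt in the
    # resulting TileTensor layout.
    alias layout = Layout.row_major(4, 8)
    var storage = InlineArray[Scalar[DType.float32], layout.size()](fill=0)
    var lt = LayoutTensor[DType.float32, layout](storage)

    # Derive the TileTensor layout automatically from lt_layout.
    # A layout containing UNKNOWN_VALUE dims would instead produce
    # RuntimeInt entries filled from lt's runtime layout.
    var tt = lt_to_tt(lt)
```

Passing an explicit `ResultLayout` parameter instead of relying on the default lets you force specific dimensions to be treated as runtime values even when the legacy layout knows them statically.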