Python module

functional

Provides functional APIs for tensor operations.

This module provides functional-style tensor operations that work seamlessly with both MAX Graph construction and eager Tensor execution. All operations are wrapped versions of the core graph operations and automatically handle either execution context.
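
For example, here is a minimal sketch of eager usage; the shapes and values are purely illustrative:

from max import functional as F, Tensor

x = Tensor.ones([2, 3])   # eager Tensor input
y = F.add(x, x)           # functional ops execute immediately on Tensor inputs
z = F.relu(y)             # the same calls build symbolic values when used inside a Graph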

CustomExtensionType

max.functional.CustomExtensionType: TypeAlias = str | pathlib.Path

Type alias for custom extension paths, matching engine.CustomExtensionsType.

abs()

max.functional.abs(x)

Computes the absolute value element-wise. See max.graph.ops.abs() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

add()

max.functional.add(lhs, rhs)

Adds two tensors element-wise. See max.graph.ops.add() for details.

Parameters:

Return type:

TensorValue

allgather()

max.functional.allgather(inputs, signal_buffers, axis=0)

Concatenates values from multiple devices. See max.graph.ops.allgather() for details.

Parameters:

Return type:

list[TensorValue]

allreduce_sum()

max.functional.allreduce_sum(inputs, signal_buffers)

Sums values from multiple devices. See max.graph.ops.allreduce.sum() for details.

Parameters:

Return type:

list[TensorValue]

arange()

max.functional.arange(start, stop, step=1, out_dim=None, *, dtype, device)

Creates a tensor with evenly spaced values. See max.graph.ops.range() for details.

Parameters:

Return type:

TensorValue
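
As an illustrative sketch (assuming the eager path accepts a driver device such as CPU(), as in the custom() example below):

from max import functional as F
from max.dtype import DType
from max.driver import CPU

# Values 0, 2, 4, 6, 8 as a float32 tensor on the CPU
x = F.arange(0, 10, step=2, dtype=DType.float32, device=CPU())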

argmax()

max.functional.argmax(x, axis=-1)

Returns the indices of the maximum values along an axis.

Parameters:

Returns:

A tensor containing the indices of the maximum values.

Return type:

TensorValue
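
A minimal sketch of a reduction over the last axis (shapes are illustrative):

from max import functional as F, Tensor

x = Tensor.ones([2, 5])
idx = F.argmax(x, axis=-1)   # index of the largest value in each row

argmin() below follows the same pattern.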

argmin()

max.functional.argmin(x, axis=-1)

Returns the indices of the minimum values along an axis.

Parameters:

Returns:

A tensor containing the indices of the minimum values.

Return type:

TensorValue

argsort()

max.functional.argsort(x, ascending=True)

Returns the indices that would sort a tensor along an axis. See max.graph.ops.argsort() for details.

Parameters:

Return type:

TensorValue

as_interleaved_complex()

max.functional.as_interleaved_complex(x)

Converts a tensor to interleaved complex representation. See max.graph.ops.as_interleaved_complex() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

atanh()

max.functional.atanh(x)

Computes the inverse hyperbolic tangent element-wise. See max.graph.ops.atanh() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

avg_pool2d()

max.functional.avg_pool2d(input, kernel_size, stride=1, dilation=1, padding=0, ceil_mode=False, count_boundary=True)

Applies 2D average pooling. See max.graph.ops.avg_pool2d() for details.

Parameters:

Return type:

TensorValue

band_part()

max.functional.band_part(x, num_lower=None, num_upper=None, exclude=False)

Copies a tensor, setting everything outside a central band to zero. See max.graph.ops.band_part() for details.

Parameters:

Return type:

TensorValue

broadcast_to()

max.functional.broadcast_to(x, shape, out_dims=None)

Broadcasts a tensor to a new shape. See max.graph.ops.broadcast_to() for details.

Parameters:

Return type:

TensorValue

buffer_store()

max.functional.buffer_store(destination, source)

Sets a tensor buffer to new values. See max.graph.ops.buffer_store() for details.

Parameters:

Return type:

None

buffer_store_slice()

max.functional.buffer_store_slice(destination, source, indices)

Sets a slice of a tensor buffer to new values. See max.graph.ops.buffer_store_slice() for details.

Parameters:

Return type:

None

cast()

max.functional.cast(x, dtype)

Casts a tensor to a different data type. See max.graph.ops.cast() for details.

Parameters:

Return type:

TensorValue

chunk()

max.functional.chunk(x, chunks, axis=0)

Splits a tensor into chunks along a dimension. See max.graph.ops.chunk() for details.

Parameters:

Return type:

list[TensorValue]

complex_mul()

max.functional.complex_mul(lhs, rhs)

Multiplies two complex-valued tensors. See max.graph.ops.complex.mul() for details.

Parameters:

Return type:

TensorValue

concat()

max.functional.concat(original_vals, axis=0)

Concatenates a list of tensors along an axis. See max.graph.ops.concat() for details.

Parameters:

Return type:

TensorValue

constant()

max.functional.constant(value, dtype=None, device=None)

Creates a constant tensor. See max.graph.ops.constant() for details.

Parameters:

Return type:

TensorValue

constant_external()

max.functional.constant_external(name, type)

Creates a constant tensor from external data. See max.graph.ops.constant_external() for details.

Parameters:

Return type:

TensorValue

conv2d()

max.functional.conv2d(x, filter, stride=(1, 1), dilation=(1, 1), padding=(0, 0, 0, 0), groups=1, bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.RSCF)

Applies 2D convolution. See max.graph.ops.conv2d() for details.

Parameters:

Return type:

TensorValue

conv2d_transpose()

max.functional.conv2d_transpose(x, filter, stride=(1, 1), dilation=(1, 1), padding=(0, 0, 0, 0), output_paddings=(0, 0), bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.RSCF)

Applies 2D transposed convolution. See max.graph.ops.conv2d_transpose() for details.

Parameters:

Return type:

TensorValue

conv3d()

max.functional.conv3d(x, filter, stride=(1, 1, 1), dilation=(1, 1, 1), padding=(0, 0, 0, 0, 0, 0), groups=1, bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.QRSCF)

Applies 3D convolution. See max.graph.ops.conv3d() for details.

Parameters:

Return type:

TensorValue

cos()

max.functional.cos(x)

Computes the cosine element-wise. See max.graph.ops.cos() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

cumsum()

max.functional.cumsum(x, axis=-1, exclusive=False, reverse=False)

Computes the cumulative sum along an axis. See max.graph.ops.cumsum() for details.

Parameters:

Return type:

TensorValue

custom()

max.functional.custom(name, device, values, out_types, parameters=None, custom_extensions=None)

Applies a custom operation with optional custom extension loading.

Creates a node to execute a custom graph operation. The custom op should be registered by annotating a Mojo function with the @compiler.register decorator.

This function extends max.graph.ops.custom() with automatic loading of custom extension libraries, eliminating the need to manually import kernels before use.

Example:

from max import functional as F, Tensor
from max.dtype import DType
from max.driver import CPU

x = Tensor.full([10], 10, dtype=DType.float32, device=CPU())
y = Tensor.ones([10], dtype=DType.float32, device=CPU())

result = F.custom(
    "vector_sum",
    device=x.device,
    values=[x, y],
    out_types=[x.type],
    custom_extensions="ops.mojopkg"
)[0]

Parameters:

  • name (str) – The op name provided to @compiler.register.
  • device (driver.Device | DeviceRef) – Device that the op is assigned to. This becomes a target parameter to the kernel.
  • values (Sequence[Value[Any]]) – The op function’s arguments.
  • out_types (Sequence[Type[Any]]) – The list of the op function’s return types.
  • parameters (Mapping[str, bool | int | str | DType] | None) – Dictionary of extra parameters expected by the kernel.
  • custom_extensions (CustomExtensionsType | None) – Paths to custom extension libraries (.mojopkg files or Mojo source directories). Extensions are automatically loaded into the current graph if not already present.

Returns:

Symbolic values representing the outputs of the op in the graph. These correspond 1:1 with the types passed as out_types.

Return type:

list[Value[Any]]

div()

max.functional.div(lhs, rhs)

Divides two tensors element-wise. See max.graph.ops.div() for details.

Parameters:

Return type:

TensorValue

equal()

max.functional.equal(lhs, rhs)

Computes element-wise equality comparison. See max.graph.ops.equal() for details.

Parameters:

Return type:

TensorValue

erf()

max.functional.erf(x)

Computes the error function element-wise. See max.graph.ops.erf() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

exp()

max.functional.exp(x)

Computes the exponential element-wise. See max.graph.ops.exp() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

flatten()

max.functional.flatten(x, start_dim=0, end_dim=-1)

Flattens a tensor. See max.graph.ops.flatten() for details.

Parameters:

Return type:

TensorValue

floor()

max.functional.floor(x)

Computes the floor element-wise. See max.graph.ops.floor() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

fold()

max.functional.fold(input, output_size, kernel_size, stride=1, dilation=1, padding=0)

Performs tensor folding operation. See max.graph.ops.fold() for details.

Parameters:

Return type:

TensorValue

functional()

max.functional.functional(op)

Decorator that converts a graph operation to support multiple tensor types.

Parameters:

op (Callable[[...], Any])

Return type:

Callable[[…], Any]
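
As a hedged sketch, the decorator can wrap an existing graph op so that the wrapped version also accepts eager Tensor inputs; ops.abs here is only an illustration, since max.functional already exposes abs():

from max import functional as F
from max.graph import ops

# Wrap a core graph op; the returned callable dispatches on the input type.
my_abs = F.functional(ops.abs)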

gather()

max.functional.gather(input, indices, axis)

Gathers values along an axis specified by indices. See max.graph.ops.gather() for details.

Parameters:

Return type:

TensorValue

gather_nd()

max.functional.gather_nd(input, indices, batch_dims=0)

Gathers values using multi-dimensional indices. See max.graph.ops.gather_nd() for details.

Parameters:

Return type:

TensorValue

gelu()

max.functional.gelu(x, approximate='none')

Applies the Gaussian Error Linear Unit (GELU) activation. See max.graph.ops.gelu() for details.

Parameters:

Return type:

TensorValue

greater()

max.functional.greater(lhs, rhs)

Computes element-wise greater-than comparison. See max.graph.ops.greater() for details.

Parameters:

Return type:

TensorValue

greater_equal()

max.functional.greater_equal(lhs, rhs)

Computes element-wise greater-than-or-equal comparison. See max.graph.ops.greater_equal() for details.

Parameters:

Return type:

TensorValue

hann_window()

max.functional.hann_window(window_length, device, periodic=True, dtype=float32)

Creates a Hann window. See max.graph.ops.hann_window() for details.

Parameters:

Return type:

TensorValue

in_graph_context()

max.functional.in_graph_context()

Checks whether the caller is inside a Graph context.

Returns:

True if inside a with Graph(...): block, False otherwise.

Return type:

bool
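
A minimal sketch (assuming a Graph can be constructed with just a name):

from max import functional as F
from max.graph import Graph

print(F.in_graph_context())       # False outside any graph

with Graph("example") as graph:
    print(F.in_graph_context())   # True inside the with block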

inplace_custom()

max.functional.inplace_custom(name, device, values, out_types=None, parameters=None, custom_extensions=None)

Applies an in-place custom operation with optional custom extension loading.

Creates a node to execute an in-place custom graph operation. The custom op should be registered by annotating a Mojo function with the @compiler.register decorator.

This function extends max.graph.ops.inplace_custom() with automatic loading of custom extension libraries, eliminating the need to manually import kernels before use.

Example:

from max import functional as F, Tensor
from max.dtype import DType
from max.driver import CPU

# Create a buffer for in-place modification
data = Tensor.zeros([10], dtype=DType.float32, device=CPU())

# Use in-place custom op with inline extension loading
F.inplace_custom(
    "my_inplace_op",
    device=data.device,
    values=[data],
    custom_extensions="ops.mojopkg"
)

Parameters:

  • name (str) – The op name provided to @compiler.register.
  • device (driver.Device | DeviceRef) – Device that the op is assigned to. This becomes a target parameter to the kernel.
  • values (Sequence[Value[Any]]) – The op function’s arguments. At least one must be a BufferValue or _OpaqueValue.
  • out_types (Sequence[Type[Any]] | None) – The list of the op function’s return types. Can be None if the operation has no outputs.
  • parameters (dict[str, bool | int | str | DType] | None) – Dictionary of extra parameters expected by the kernel.
  • custom_extensions (CustomExtensionsType | None) – Paths to custom extension libraries (.mojopkg files or Mojo source directories). Extensions are automatically loaded into the current graph if not already present.

Returns:

Symbolic values representing the outputs of the op in the graph.

Return type:

list[Value[Any]]

irfft()

max.functional.irfft(input_tensor, n=None, axis=-1, normalization=Normalization.BACKWARD, input_is_complex=False, buffer_size_mb=512)

Computes the inverse real FFT. See max.graph.ops.irfft() for details.

Parameters:

  • input_tensor (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue)
  • n (int | None)
  • axis (int)
  • normalization (Normalization | str)
  • input_is_complex (bool)
  • buffer_size_mb (int)

is_inf()

max.functional.is_inf(x)

Checks for infinite values element-wise. See max.graph.ops.is_inf() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

is_nan()

max.functional.is_nan(x)

Checks for NaN values element-wise. See max.graph.ops.is_nan() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

layer_norm()

max.functional.layer_norm(input, gamma, beta, epsilon)

Applies layer normalization. See max.graph.ops.layer_norm() for details.

Parameters:

Return type:

TensorValue

lazy()

max.functional.lazy()

Context manager for lazy tensor evaluation.

Within this context, tensor operations are recorded but not executed. Tensors remain unrealized until explicitly awaited via await tensor.realize or until their values are needed (e.g., by calling .item()).

This is particularly useful for creating tensors that may never be used. Lazy tensors that aren’t used never allocate memory or perform any operations.

Yields:

None

Return type:

Generator[None]

from max import functional as F
from max.tensor import Tensor
from max.nn import Linear

with F.lazy():
    model = Linear(2, 3)

print(model)  # Lazy weights not initialized
# Executing the model would be fine! The weights would be created
# on first use.
# output = model(Tensor.ones([5, 2]))

# Load pretrained weights, never creating the original random weights
weights = {
    "weight": Tensor.zeros([3, 2]),
    "bias": Tensor.zeros([3]),
}
model.load_state_dict(weights)

log()

max.functional.log(x)

Computes the natural logarithm element-wise. See max.graph.ops.log() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

log1p()

max.functional.log1p(x)

Computes log(1 + x) element-wise. See max.graph.ops.log1p() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

logical_and()

max.functional.logical_and(lhs, rhs)

Computes element-wise logical AND. See max.graph.ops.logical_and() for details.

Parameters:

Return type:

TensorValue

logical_not()

max.functional.logical_not(x)

Computes element-wise logical NOT. See max.graph.ops.logical_not() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

logical_or()

max.functional.logical_or(lhs, rhs)

Computes element-wise logical OR. See max.graph.ops.logical_or() for details.

Parameters:

Return type:

TensorValue

logical_xor()

max.functional.logical_xor(lhs, rhs)

Computes element-wise logical XOR. See max.graph.ops.logical_xor() for details.

Parameters:

Return type:

TensorValue

logsoftmax()

max.functional.logsoftmax(value, axis=-1)

Applies the log softmax function. See max.graph.ops.logsoftmax() for details.

Parameters:

Return type:

TensorValue

masked_scatter()

max.functional.masked_scatter(input, mask, updates, out_dim)

Scatters values according to a mask. See max.graph.ops.masked_scatter() for details.

Parameters:

Return type:

TensorValue

matmul()

max.functional.matmul(lhs, rhs)

Performs matrix multiplication. See max.graph.ops.matmul() for details.

Parameters:

Return type:

TensorValue

max()

max.functional.max(x, y=None, /, axis=-1)

Returns the maximum values along an axis, or the element-wise maximum of two tensors.

Parameters:

Returns:

A tensor containing the maximum values.

Return type:

TensorValue
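
A minimal sketch of both modes (shapes are illustrative):

from max import functional as F, Tensor

x = Tensor.ones([2, 3])
y = Tensor.zeros([2, 3])

row_max = F.max(x, axis=-1)   # reduction: maximum along the last axis
pair_max = F.max(x, y)        # element-wise maximum of two tensors

min() below supports the same two modes.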

max_pool2d()

max.functional.max_pool2d(input, kernel_size, stride=1, dilation=1, padding=0, ceil_mode=False)

Applies 2D max pooling. See max.graph.ops.max_pool2d() for details.

Parameters:

Return type:

TensorValue

mean()

max.functional.mean(x, axis=-1)

Computes the mean along the specified axes.

Parameters:

Returns:

A tensor containing the mean values.

Return type:

TensorValue
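
A minimal sketch of a reduction along the last axis; sum() follows the same pattern:

from max import functional as F, Tensor

x = Tensor.ones([4, 8])
m = F.mean(x, axis=-1)   # mean over the last axis of each row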

min()

max.functional.min(x, y=None, /, axis=-1)

Returns the minimum values along an axis, or the element-wise minimum of two tensors.

Parameters:

Returns:

A tensor containing the minimum values.

Return type:

TensorValue

mod()

max.functional.mod(lhs, rhs)

Computes the modulo operation element-wise. See max.graph.ops.mod() for details.

Parameters:

Return type:

TensorValue

mul()

max.functional.mul(lhs, rhs)

Multiplies two tensors element-wise. See max.graph.ops.mul() for details.

Parameters:

Return type:

TensorValue

negate()

max.functional.negate(x)

Negates a tensor element-wise. See max.graph.ops.negate() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

nonzero()

max.functional.nonzero(x, out_dim)

Returns the indices of non-zero elements. See max.graph.ops.nonzero() for details.

Parameters:

Return type:

TensorValue

not_equal()

max.functional.not_equal(lhs, rhs)

Computes element-wise inequality comparison. See max.graph.ops.not_equal() for details.

Parameters:

Return type:

TensorValue

outer()

max.functional.outer(lhs, rhs)

Computes the outer product of two vectors. See max.graph.ops.outer() for details.

Parameters:

Return type:

TensorValue

pad()

max.functional.pad(input, paddings, mode='constant', value=0)

Pads a tensor. See max.graph.ops.pad() for details.

Parameters:

Return type:

TensorValue

permute()

max.functional.permute(x, dims)

Permutes the dimensions of a tensor. See max.graph.ops.permute() for details.

Parameters:

Return type:

TensorValue

pow()

max.functional.pow(lhs, rhs)

Raises tensor elements to a power. See max.graph.ops.pow() for details.

Parameters:

Return type:

TensorValue

relu()

max.functional.relu(x)

Applies the ReLU activation function. See max.graph.ops.relu() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

repeat_interleave()

max.functional.repeat_interleave(x, repeats, axis=None, out_dim=None)

Repeats elements of a tensor. See max.graph.ops.repeat_interleave() for details.

Parameters:

Return type:

TensorValue

reshape()

max.functional.reshape(x, shape)

Reshapes a tensor to a new shape. See max.graph.ops.reshape() for details.

Parameters:

Return type:

TensorValue

round()

max.functional.round(x)

Rounds tensor values element-wise. See max.graph.ops.round() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

rsqrt()

max.functional.rsqrt(x)

Computes the reciprocal square root element-wise. See max.graph.ops.rsqrt() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

scatter()

max.functional.scatter(input, updates, indices, axis=-1)

Scatters values along an axis. See max.graph.ops.scatter() for details.

Parameters:

Return type:

TensorValue

scatter_nd()

max.functional.scatter_nd(input, updates, indices)

Scatters values using multi-dimensional indices. See max.graph.ops.scatter_nd() for details.

Parameters:

Return type:

TensorValue

sigmoid()

max.functional.sigmoid(x)

Applies the sigmoid activation function. See max.graph.ops.sigmoid() for details.

Parameters:

x (TensorValue)

Return type:

TensorValue

silu()

max.functional.silu(x)

Applies the SiLU (Swish) activation function. See max.graph.ops.silu() for details.

Parameters:

x (TensorValue)

Return type:

TensorValue

sin()

max.functional.sin(x)

Computes the sine element-wise. See max.graph.ops.sin() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

slice_tensor()

max.functional.slice_tensor(x, indices)

Slices a tensor along specified dimensions. See max.graph.ops.slice_tensor() for details.

Parameters:

Return type:

TensorValue

softmax()

max.functional.softmax(value, axis=-1)

Applies the softmax function. See max.graph.ops.softmax() for details.

Parameters:

Return type:

TensorValue

split()

max.functional.split(x, split_size_or_sections, axis=0)

Splits a tensor into multiple tensors along a given dimension.

This function supports two modes, matching PyTorch’s behavior:

  • If split_size_or_sections is an int, splits into chunks of that size (the last chunk may be smaller if the dimension is not evenly divisible).
  • If split_size_or_sections is a list of ints, splits into chunks with exactly those sizes (must sum to the dimension size).

from max import functional as F, Tensor

x = Tensor.ones([10, 4])

# Split into chunks of size 3 (last chunk is size 1)
chunks = F.split(x, 3, axis=0)  # shapes: [3,4], [3,4], [3,4], [1,4]

# Split into exact sizes
chunks = F.split(x, [2, 3, 5], axis=0)  # shapes: [2,4], [3,4], [5,4]

Parameters:

  • x (Tensor | TensorValue) – The input tensor to split.
  • split_size_or_sections (int | list[int]) – Either an int (chunk size) or a list of ints (exact sizes for each output tensor).
  • axis (int) – The dimension along which to split. Defaults to 0.

Returns:

A list of tensors resulting from the split.

Return type:

list[Tensor] | list[TensorValue]

sqrt()

max.functional.sqrt(x)

Computes the square root element-wise. See max.graph.ops.sqrt() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

squeeze()

max.functional.squeeze(x, axis)

Removes dimensions of size 1. See max.graph.ops.squeeze() for details.

Parameters:

Return type:

TensorValue

stack()

max.functional.stack(values, axis=0)

Stacks tensors along a new dimension. See max.graph.ops.stack() for details.

Parameters:

Return type:

TensorValue
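
A minimal sketch contrasting stack() with concat(): stack() inserts a new dimension, while concat() joins along an existing one (shapes are illustrative):

from max import functional as F, Tensor

a = Tensor.ones([2, 3])
b = Tensor.zeros([2, 3])

s = F.stack([a, b], axis=0)    # shape [2, 2, 3]: new dimension inserted at axis 0
c = F.concat([a, b], axis=0)   # shape [4, 3]: joined along the existing axis 0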

sub()

max.functional.sub(lhs, rhs)

Subtracts two tensors element-wise. See max.graph.ops.sub() for details.

Parameters:

Return type:

TensorValue

sum()

max.functional.sum(x, axis=-1)

Computes the sum along the specified axes.

Parameters:

Returns:

A tensor containing the sum values.

Return type:

TensorValue

tanh()

max.functional.tanh(x)

Computes the hyperbolic tangent element-wise. See max.graph.ops.tanh() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

tile()

max.functional.tile(x, repeats)

Tiles a tensor by repeating it. See max.graph.ops.tile() for details.

Parameters:

Return type:

TensorValue

top_k()

max.functional.top_k(input, k, axis=-1)

Returns the k largest elements along an axis. See max.graph.ops.top_k() for details.

Parameters:

Return type:

tuple[TensorValue, TensorValue]
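
A minimal sketch; the values and their indices come back as a pair (shapes are illustrative):

from max import functional as F, Tensor

x = Tensor.ones([4, 8])
values, indices = F.top_k(x, k=3, axis=-1)   # 3 largest entries per row and their positions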

transfer_to()

max.functional.transfer_to(x, device)

Transfers a tensor to a specified device. See max.graph.ops.transfer_to() for details.

Parameters:

Return type:

TensorValue

transpose()

max.functional.transpose(x, axis_1, axis_2)

Transposes a tensor. See max.graph.ops.transpose() for details.

Parameters:

Return type:

TensorValue

trunc()

max.functional.trunc(x)

Truncates tensor values element-wise. See max.graph.ops.trunc() for details.

Parameters:

x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)

Return type:

TensorValue

unsqueeze()

max.functional.unsqueeze(x, axis)

Adds dimensions of size 1. See max.graph.ops.unsqueeze() for details.

Parameters:

Return type:

TensorValue

where()

max.functional.where(condition, x, y)

Selects elements from two tensors based on a condition. See max.graph.ops.where() for details.

Parameters:

Return type:

TensorValue
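
A minimal sketch of conditional selection (shapes are illustrative):

from max import functional as F, Tensor

x = Tensor.ones([2, 3])
y = Tensor.zeros([2, 3])

mask = F.greater(x, y)    # boolean condition tensor
z = F.where(mask, x, y)   # takes from x where mask is True, otherwise from y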
