Python module
functional
Provides functional APIs for tensor operations.
This module provides functional-style tensor operations that work seamlessly with both MAX Graph construction and eager Tensor execution. All operations are wrapped versions of the core graph operations and automatically handle either execution context, so the same code works during graph construction and in eager mode.
CustomExtensionType
max.functional.CustomExtensionType: TypeAlias = str | pathlib.Path
Type alias for custom extensions paths, matching engine.CustomExtensionsType.
abs()
max.functional.abs(x)
Computes the absolute value element-wise.
See max.graph.ops.abs() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
add()
max.functional.add(lhs, rhs)
Adds two tensors element-wise.
See max.graph.ops.add() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
allgather()
max.functional.allgather(inputs, signal_buffers, axis=0)
Concatenate values from multiple devices.
See max.graph.ops.allgather() for details.
-
Parameters:
-
- inputs (Iterable[Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray])
- signal_buffers (Iterable[BufferValue | HasBufferValue])
- axis (int)
-
Return type:
allreduce_sum()
max.functional.allreduce_sum(inputs, signal_buffers)
Sum values from multiple devices.
See max.graph.ops.allreduce.sum() for details.
-
Parameters:
-
- inputs (Iterable[Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray])
- signal_buffers (Iterable[BufferValue | HasBufferValue])
-
Return type:
arange()
max.functional.arange(start, stop, step=1, out_dim=None, *, dtype, device)
Creates a tensor with evenly spaced values.
See max.graph.ops.range() for details.
-
Parameters:
-
- start (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- stop (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- step (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- out_dim (int | str | Dim | integer[Any] | None)
- dtype (DType)
- device (Device | DeviceRef)
-
Return type:
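The half-open interval convention (start included, stop excluded) is the same one NumPy uses; as an illustrative sketch of the semantics (NumPy here, not the MAX API itself):

```python
import numpy as np

# Half-open interval: start is included, stop is excluded.
vals = np.arange(0, 10, 2, dtype=np.float32)
print(vals)  # [0. 2. 4. 6. 8.]
```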
argmax()
max.functional.argmax(x, axis=-1)
Returns the indices of the maximum values along an axis.
-
Parameters:
-
Returns:
-
A tensor containing the indices of the maximum values.
-
Return type:
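A sketch of the reduction semantics, using NumPy for illustration rather than the MAX API: with the default axis=-1, one index is returned per slice along the last axis.

```python
import numpy as np

x = np.array([[1, 9, 3],
              [7, 2, 5]])
# One index of the maximum per row (last axis).
idx = np.argmax(x, axis=-1)
print(idx)  # [1 0]
```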
argmin()
max.functional.argmin(x, axis=-1)
Returns the indices of the minimum values along an axis.
-
Parameters:
-
Returns:
-
A tensor containing the indices of the minimum values.
-
Return type:
argsort()
max.functional.argsort(x, ascending=True)
Returns the indices that would sort a tensor along an axis.
See max.graph.ops.argsort() for details.
-
Parameters:
-
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue)
- ascending (bool)
-
Return type:
as_interleaved_complex()
max.functional.as_interleaved_complex(x)
Converts a tensor to interleaved complex representation.
See max.graph.ops.as_interleaved_complex() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
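The interleaved convention treats the last dimension as alternating (real, imag) pairs. A NumPy sketch of the same pairing (illustrative only; the MAX op's exact output shape may differ):

```python
import numpy as np

# Four floats interleaving (real, imag) pairs...
x = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float32)
# ...reinterpreted as the complex numbers 1+2j and 3+4j.
c = x.view(np.complex64)
print(c)  # [1.+2.j 3.+4.j]
```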
atanh()
max.functional.atanh(x)
Computes the inverse hyperbolic tangent element-wise.
See max.graph.ops.atanh() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
avg_pool2d()
max.functional.avg_pool2d(input, kernel_size, stride=1, dilation=1, padding=0, ceil_mode=False, count_boundary=True)
Applies 2D average pooling.
See max.graph.ops.avg_pool2d() for details.
-
Parameters:
-
- input (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- kernel_size (tuple[int | str | Dim | integer[Any], int | str | Dim | integer[Any]])
- stride (int | tuple[int, int])
- dilation (int | tuple[int, int])
- padding (int | tuple[int, int])
- ceil_mode (bool)
- count_boundary (bool)
-
Return type:
band_part()
max.functional.band_part(x, num_lower=None, num_upper=None, exclude=False)
Copies a tensor setting everything outside a central band to zero.
See max.graph.ops.band_part() for details.
-
Parameters:
-
Return type:
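A NumPy sketch of the band semantics (illustrative only; the real op's exclude flag, not modeled here, inverts the mask): element (i, j) is kept when it lies within num_lower subdiagonals and num_upper superdiagonals of the main diagonal, with None keeping that whole side.

```python
import numpy as np

def band_part(x, num_lower=None, num_upper=None):
    # Keep (i, j) if i - j <= num_lower and j - i <= num_upper;
    # None disables the bound on that side.
    i, j = np.indices(x.shape)
    keep = np.ones(x.shape, dtype=bool)
    if num_lower is not None:
        keep &= (i - j) <= num_lower
    if num_upper is not None:
        keep &= (j - i) <= num_upper
    return np.where(keep, x, 0)

x = np.arange(1, 10).reshape(3, 3)
# num_lower=0 zeroes everything below the diagonal: upper-triangular.
print(band_part(x, num_lower=0))
```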
broadcast_to()
max.functional.broadcast_to(x, shape, out_dims=None)
Broadcasts a tensor to a new shape.
See max.graph.ops.broadcast_to() for details.
buffer_store()
max.functional.buffer_store(destination, source)
Sets a tensor buffer to new values.
See max.graph.ops.buffer_store() for details.
-
Parameters:
-
- destination (BufferValue | HasBufferValue)
- source (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
-
None
buffer_store_slice()
max.functional.buffer_store_slice(destination, source, indices)
Sets a slice of a tensor buffer to new values.
See max.graph.ops.buffer_store_slice() for details.
-
Parameters:
-
Return type:
-
None
cast()
max.functional.cast(x, dtype)
Casts a tensor to a different data type.
See max.graph.ops.cast() for details.
-
Parameters:
-
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue)
- dtype (DType)
-
Return type:
chunk()
max.functional.chunk(x, chunks, axis=0)
Splits a tensor into chunks along a dimension.
See max.graph.ops.chunk() for details.
-
Parameters:
-
Return type:
complex_mul()
max.functional.complex_mul(lhs, rhs)
Multiply two complex-valued tensors.
See max.graph.ops.complex.mul() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
concat()
max.functional.concat(original_vals, axis=0)
Concatenates a list of tensors along an axis.
See max.graph.ops.concat() for details.
-
Parameters:
-
Return type:
constant()
max.functional.constant(value, dtype=None, device=None)
Creates a constant tensor.
See max.graph.ops.constant() for details.
constant_external()
max.functional.constant_external(name, type)
Creates a constant tensor from external data.
See max.graph.ops.constant_external() for details.
-
Parameters:
-
- name (str)
- type (TensorType)
-
Return type:
conv2d()
max.functional.conv2d(x, filter, stride=(1, 1), dilation=(1, 1), padding=(0, 0, 0, 0), groups=1, bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.RSCF)
Applies 2D convolution.
See max.graph.ops.conv2d() for details.
-
Parameters:
-
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- filter (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- stride (tuple[int, int])
- dilation (tuple[int, int])
- padding (tuple[int, int, int, int])
- groups (int)
- bias (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray | None)
- input_layout (ConvInputLayout)
- filter_layout (FilterLayout)
-
Return type:
conv2d_transpose()
max.functional.conv2d_transpose(x, filter, stride=(1, 1), dilation=(1, 1), padding=(0, 0, 0, 0), output_paddings=(0, 0), bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.RSCF)
Applies 2D transposed convolution.
See max.graph.ops.conv2d_transpose() for details.
-
Parameters:
-
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- filter (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- stride (tuple[int, int])
- dilation (tuple[int, int])
- padding (tuple[int, int, int, int])
- output_paddings (tuple[int, int])
- bias (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray | None)
- input_layout (ConvInputLayout)
- filter_layout (FilterLayout)
-
Return type:
conv3d()
max.functional.conv3d(x, filter, stride=(1, 1, 1), dilation=(1, 1, 1), padding=(0, 0, 0, 0, 0, 0), groups=1, bias=None, input_layout=ConvInputLayout.NHWC, filter_layout=FilterLayout.QRSCF)
Applies 3D convolution.
See max.graph.ops.conv3d() for details.
-
Parameters:
-
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- filter (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- stride (tuple[int, int, int])
- dilation (tuple[int, int, int])
- padding (tuple[int, int, int, int, int, int])
- groups (int)
- bias (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray | None)
- input_layout (ConvInputLayout)
- filter_layout (FilterLayout)
-
Return type:
cos()
max.functional.cos(x)
Computes the cosine element-wise.
See max.graph.ops.cos() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
cumsum()
max.functional.cumsum(x, axis=-1, exclusive=False, reverse=False)
Computes the cumulative sum along an axis.
See max.graph.ops.cumsum() for details.
-
Parameters:
-
Return type:
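The exclusive and reverse flags can be illustrated with NumPy (a sketch of the semantics, not the MAX API itself):

```python
import numpy as np

x = np.array([1, 2, 3, 4])
inclusive = np.cumsum(x)                           # [ 1  3  6 10]
# exclusive=True shifts the sums right, starting from zero:
exclusive = np.concatenate(([0], inclusive[:-1]))  # [0 1 3 6]
# reverse=True accumulates from the end of the axis:
reverse = np.cumsum(x[::-1])[::-1]                 # [10  9  7  4]
print(inclusive, exclusive, reverse)
```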
custom()
max.functional.custom(name, device, values, out_types, parameters=None, custom_extensions=None)
Applies a custom operation with optional custom extension loading.
Creates a node to execute a custom graph operation. The custom op should be
registered by annotating a Mojo function with the @compiler.register
decorator.
This function extends max.graph.ops.custom() with automatic loading
of custom extension libraries, eliminating the need to manually import
kernels before use.
Example:
from max import functional as F, Tensor
from max.dtype import DType
from max.driver import CPU

x = Tensor.full([10], 10, dtype=DType.float32, device=CPU())
y = Tensor.ones([10], dtype=DType.float32, device=CPU())

result = F.custom(
    "vector_sum",
    device=x.device,
    values=[x, y],
    out_types=[x.type],
    custom_extensions="ops.mojopkg",
)[0]
-
Parameters:
-
- name (str) – The op name provided to @compiler.register.
- device (driver.Device | DeviceRef) – Device that the op is assigned to. This becomes a target parameter to the kernel.
- values (Sequence[Value[Any]]) – The op function’s arguments.
- out_types (Sequence[Type[Any]]) – The list of op function’s return types.
- parameters (Mapping[str, bool | int | str | DType] | None) – Dictionary of extra parameters expected by the kernel.
- custom_extensions (CustomExtensionsType | None) – Paths to custom extension libraries (.mojopkg files or Mojo source directories). Extensions are automatically loaded into the current graph if not already present.
-
Returns:
-
Symbolic values representing the outputs of the op in the graph. These correspond 1:1 with the types passed as out_types.
-
Return type:
div()
max.functional.div(lhs, rhs)
Divides two tensors element-wise.
See max.graph.ops.div() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
equal()
max.functional.equal(lhs, rhs)
Computes element-wise equality comparison.
See max.graph.ops.equal() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
erf()
max.functional.erf(x)
Computes the error function element-wise.
See max.graph.ops.erf() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
exp()
max.functional.exp(x)
Computes the exponential element-wise.
See max.graph.ops.exp() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
flatten()
max.functional.flatten(x, start_dim=0, end_dim=-1)
Flattens a tensor.
See max.graph.ops.flatten() for details.
-
Parameters:
-
Return type:
floor()
max.functional.floor(x)
Computes the floor element-wise.
See max.graph.ops.floor() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
fold()
max.functional.fold(input, output_size, kernel_size, stride=1, dilation=1, padding=0)
Performs tensor folding operation.
See max.graph.ops.fold() for details.
-
Parameters:
-
- input (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- output_size (tuple[int | str | Dim | integer[Any], int | str | Dim | integer[Any]])
- kernel_size (tuple[int | str | Dim | integer[Any], int | str | Dim | integer[Any]])
- stride (int | tuple[int, int])
- dilation (int | tuple[int, int])
- padding (int | tuple[int, int])
-
Return type:
functional()
max.functional.functional(op)
Decorator that converts a graph operation to support multiple tensor types.
gather()
max.functional.gather(input, indices, axis)
Gathers values along an axis specified by indices.
See max.graph.ops.gather() for details.
-
Parameters:
-
Return type:
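A sketch of the axis-gather semantics with 1-D indices, using NumPy for illustration rather than the MAX API (MAX's index broadcasting rules may be richer):

```python
import numpy as np

x = np.array([[10, 20, 30],
              [40, 50, 60]])
# Gathering indices [2, 0] along axis=1 selects those columns, in order.
out = np.take(x, [2, 0], axis=1)
print(out)  # [[30 10]
            #  [60 40]]
```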
gather_nd()
max.functional.gather_nd(input, indices, batch_dims=0)
Gathers values using multi-dimensional indices.
See max.graph.ops.gather_nd() for details.
-
Parameters:
-
Return type:
gelu()
max.functional.gelu(x, approximate='none')
Applies the Gaussian Error Linear Unit (GELU) activation.
See max.graph.ops.gelu() for details.
-
Parameters:
-
- x (TensorValue)
- approximate (str)
greater()
max.functional.greater(lhs, rhs)
Computes element-wise greater-than comparison.
See max.graph.ops.greater() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
greater_equal()
max.functional.greater_equal(lhs, rhs)
Computes element-wise greater-than-or-equal comparison.
See max.graph.ops.greater_equal() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
hann_window()
max.functional.hann_window(window_length, device, periodic=True, dtype=float32)
Creates a Hann window.
See max.graph.ops.hann_window() for details.
-
Parameters:
-
Return type:
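A sketch of the standard Hann formula and the periodic flag (NumPy illustration under the usual convention; the MAX op's edge-case behavior may differ):

```python
import numpy as np

def hann_window(window_length, periodic=True):
    # periodic=True drops the final sample of an (N+1)-point symmetric
    # window, the usual choice for STFT framing.
    n = np.arange(window_length)
    denom = window_length if periodic else window_length - 1
    return 0.5 * (1.0 - np.cos(2.0 * np.pi * n / denom))

print(hann_window(4, periodic=False))  # [0.   0.75 0.75 0.  ]
```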
in_graph_context()
max.functional.in_graph_context()
Checks whether the caller is inside a Graph context.
-
Returns:
-
True if inside a with Graph(...): block, False otherwise.
-
Return type:
inplace_custom()
max.functional.inplace_custom(name, device, values, out_types=None, parameters=None, custom_extensions=None)
Applies an in-place custom operation with optional custom extension loading.
Creates a node to execute an in-place custom graph operation. The custom op
should be registered by annotating a Mojo function with the
@compiler.register decorator.
This function extends max.graph.ops.inplace_custom() with automatic
loading of custom extension libraries, eliminating the need to manually
import kernels before use.
Example:
from max import functional as F, Tensor
from max.dtype import DType
from max.driver import CPU

# Create a buffer for in-place modification
data = Tensor.zeros([10], dtype=DType.float32, device=CPU())

# Use in-place custom op with inline extension loading
F.inplace_custom(
    "my_inplace_op",
    device=data.device,
    values=[data],
    custom_extensions="ops.mojopkg",
)
-
Parameters:
-
- name (str) – The op name provided to @compiler.register.
- device (driver.Device | DeviceRef) – Device that the op is assigned to. This becomes a target parameter to the kernel.
- values (Sequence[Value[Any]]) – The op function’s arguments. At least one must be a BufferValue or _OpaqueValue.
- out_types (Sequence[Type[Any]] | None) – The list of op function’s return types. Can be None if the operation has no outputs.
- parameters (dict[str, bool | int | str | DType] | None) – Dictionary of extra parameters expected by the kernel.
- custom_extensions (CustomExtensionsType | None) – Paths to custom extension libraries (.mojopkg files or Mojo source directories). Extensions are automatically loaded into the current graph if not already present.
-
Returns:
-
Symbolic values representing the outputs of the op in the graph.
-
Return type:
irfft()
max.functional.irfft(input_tensor, n=None, axis=-1, normalization=Normalization.BACKWARD, input_is_complex=False, buffer_size_mb=512)
Computes the inverse real FFT.
See max.graph.ops.irfft() for details.
is_inf()
max.functional.is_inf(x)
Checks for infinite values element-wise.
See max.graph.ops.is_inf() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
is_nan()
max.functional.is_nan(x)
Checks for NaN values element-wise.
See max.graph.ops.is_nan() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
layer_norm()
max.functional.layer_norm(input, gamma, beta, epsilon)
Applies layer normalization.
See max.graph.ops.layer_norm() for details.
-
Parameters:
-
- input (TensorValue)
- gamma (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- beta (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- epsilon (float)
-
Return type:
lazy()
max.functional.lazy()
Context manager for lazy tensor evaluation.
Within this context, tensor operations are recorded but not executed.
Tensors remain unrealized until explicitly awaited via await tensor.realize
or until their values are needed (e.g., by calling .item()).
This is particularly useful for creating tensors that may never be used: lazy tensors that are never realized allocate no memory and perform no operations.
-
Yields:
-
None
-
Return type:
-
Generator[None]
from max import functional as F
from max.tensor import Tensor
from max.nn import Linear

with F.lazy():
    model = Linear(2, 3)
    print(model)  # Lazy weights not initialized

    # Executing the model would be fine! The weights would be created
    # on first use.
    # output = model(Tensor.ones([5, 2]))

    # Load pretrained weights, never creating the original random weights
    weights = {
        "weight": Tensor.zeros([3, 2]),
        "bias": Tensor.zeros([3]),
    }
    model.load_state_dict(weights)
log()
max.functional.log(x)
Computes the natural logarithm element-wise.
See max.graph.ops.log() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
log1p()
max.functional.log1p(x)
Computes log(1 + x) element-wise.
See max.graph.ops.log1p() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
logical_and()
max.functional.logical_and(lhs, rhs)
Computes element-wise logical AND.
See max.graph.ops.logical_and() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
logical_not()
max.functional.logical_not(x)
Computes element-wise logical NOT.
See max.graph.ops.logical_not() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
logical_or()
max.functional.logical_or(lhs, rhs)
Computes element-wise logical OR.
See max.graph.ops.logical_or() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
logical_xor()
max.functional.logical_xor(lhs, rhs)
Computes element-wise logical XOR.
See max.graph.ops.logical_xor() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
logsoftmax()
max.functional.logsoftmax(value, axis=-1)
Applies the log softmax function.
See max.graph.ops.logsoftmax() for details.
-
Parameters:
-
Return type:
masked_scatter()
max.functional.masked_scatter(input, mask, updates, out_dim)
Scatters values according to a mask.
See max.graph.ops.masked_scatter() for details.
-
Parameters:
-
- input (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- mask (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- updates (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- out_dim (int | str | Dim | integer[Any])
-
Return type:
matmul()
max.functional.matmul(lhs, rhs)
Performs matrix multiplication.
See max.graph.ops.matmul() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
max()
max.functional.max(x, y=None, /, axis=-1)
Returns the maximum values along an axis, or elementwise maximum of two tensors.
-
Parameters:
-
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray) – The input tensor.
- y (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray | None) – Optional second tensor for elementwise maximum.
- axis (int | None) – The axis along which to compute the maximum (only for reduction). If None, computes the maximum across all elements (flattened).
-
Returns:
-
A tensor containing the maximum values.
-
Return type:
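The two modes described above can be sketched with NumPy (illustration of the semantics, not the MAX API itself):

```python
import numpy as np

x = np.array([[1, 5, 3],
              [4, 2, 6]])
# Reduction form: one maximum per slice along the axis.
print(np.max(x, axis=-1))   # [5 6]

# Elementwise form: maximum of two tensors, position by position.
y = np.array([[2, 2, 2],
              [9, 9, 9]])
print(np.maximum(x, y))     # [[2 5 3]
                            #  [9 9 9]]
```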
max_pool2d()
max.functional.max_pool2d(input, kernel_size, stride=1, dilation=1, padding=0, ceil_mode=False)
Applies 2D max pooling.
See max.graph.ops.max_pool2d() for details.
-
Parameters:
-
- input (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- kernel_size (tuple[int | str | Dim | integer[Any], int | str | Dim | integer[Any]])
- stride (int | tuple[int, int])
- dilation (int | tuple[int, int])
- padding (int | tuple[int, int])
- ceil_mode (bool)
-
Return type:
mean()
max.functional.mean(x, axis=-1)
Computes the mean along specified axes.
-
Parameters:
-
Returns:
-
A tensor containing the mean values.
-
Return type:
min()
max.functional.min(x, y=None, /, axis=-1)
Returns the minimum values along an axis, or elementwise minimum of two tensors.
-
Parameters:
-
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray) – The input tensor.
- y (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray | None) – Optional second tensor for elementwise minimum.
- axis (int | None) – The axis along which to compute the minimum (only for reduction). If None, computes the minimum across all elements (flattened).
-
Returns:
-
A tensor containing the minimum values.
-
Return type:
mod()
max.functional.mod(lhs, rhs)
Computes the modulo operation element-wise.
See max.graph.ops.mod() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
mul()
max.functional.mul(lhs, rhs)
Multiplies two tensors element-wise.
See max.graph.ops.mul() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
negate()
max.functional.negate(x)
Negates a tensor element-wise.
See max.graph.ops.negate() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
nonzero()
max.functional.nonzero(x, out_dim)
Returns the indices of non-zero elements.
See max.graph.ops.nonzero() for details.
not_equal()
max.functional.not_equal(lhs, rhs)
Computes element-wise inequality comparison.
See max.graph.ops.not_equal() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
outer()
max.functional.outer(lhs, rhs)
Computes the outer product of two vectors.
See max.graph.ops.outer() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
pad()
max.functional.pad(input, paddings, mode='constant', value=0)
Pads a tensor.
See max.graph.ops.pad() for details.
-
Parameters:
-
- input (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- paddings (Iterable[int])
- mode (Literal['constant'])
- value (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
permute()
max.functional.permute(x, dims)
Permutes the dimensions of a tensor.
See max.graph.ops.permute() for details.
-
Parameters:
-
Return type:
pow()
max.functional.pow(lhs, rhs)
Raises tensor elements to a power.
See max.graph.ops.pow() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
relu()
max.functional.relu(x)
Applies the ReLU activation function.
See max.graph.ops.relu() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
repeat_interleave()
max.functional.repeat_interleave(x, repeats, axis=None, out_dim=None)
Repeats elements of a tensor.
See max.graph.ops.repeat_interleave() for details.
-
Parameters:
-
Return type:
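A NumPy sketch of the repeat semantics with and without an axis (illustrative only, not the MAX API):

```python
import numpy as np

x = np.array([[1, 2],
              [3, 4]])
# axis=None flattens first, then repeats each element:
print(np.repeat(x, 2))          # [1 1 2 2 3 3 4 4]
# With an axis, each slice along it is repeated in place:
print(np.repeat(x, 2, axis=0))  # [[1 2]
                                #  [1 2]
                                #  [3 4]
                                #  [3 4]]
```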
reshape()
max.functional.reshape(x, shape)
Reshapes a tensor to a new shape.
See max.graph.ops.reshape() for details.
round()
max.functional.round(x)
Rounds tensor values element-wise.
See max.graph.ops.round() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
rsqrt()
max.functional.rsqrt(x)
Computes the reciprocal square root element-wise.
See max.graph.ops.rsqrt() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
scatter()
max.functional.scatter(input, updates, indices, axis=-1)
Scatters values along an axis.
See max.graph.ops.scatter() for details.
-
Parameters:
-
- input (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- updates (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- indices (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- axis (int)
-
Return type:
scatter_nd()
max.functional.scatter_nd(input, updates, indices)
Scatters values using multi-dimensional indices.
See max.graph.ops.scatter_nd() for details.
-
Parameters:
-
- input (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- updates (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- indices (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
sigmoid()
max.functional.sigmoid(x)
Applies the sigmoid activation function.
See max.graph.ops.sigmoid() for details.
-
Parameters:
-
x (TensorValue)
-
Return type:
silu()
max.functional.silu(x)
Applies the SiLU (Swish) activation function.
See max.graph.ops.silu() for details.
-
Parameters:
-
x (TensorValue)
sin()
max.functional.sin(x)
Computes the sine element-wise.
See max.graph.ops.sin() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
slice_tensor()
max.functional.slice_tensor(x, indices)
Slices a tensor along specified dimensions.
See max.graph.ops.slice_tensor() for details.
-
Parameters:
-
- x (TensorValue)
- indices (SliceIndices)
-
Return type:
softmax()
max.functional.softmax(value, axis=-1)
Applies the softmax function.
See max.graph.ops.softmax() for details.
-
Parameters:
-
Return type:
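The standard numerically stable formulation can be sketched in NumPy (an illustration of the math, not the MAX implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtracting the per-slice max is safe because softmax is
    # shift-invariant, and it prevents overflow in exp().
    z = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(z)
    return e / np.sum(e, axis=axis, keepdims=True)

p = softmax(np.array([1.0, 2.0, 3.0]))
print(p)        # probabilities, increasing with the inputs
print(p.sum())  # sums to 1 along the reduction axis
```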
split()
max.functional.split(x, split_size_or_sections, axis=0)
Splits a tensor into multiple tensors along a given dimension.
This function supports two modes, matching PyTorch’s behavior:
- If
split_size_or_sectionsis an int, splits into chunks of that size (the last chunk may be smaller if the dimension is not evenly divisible). - If
split_size_or_sectionsis a list of ints, splits into chunks with exactly those sizes (must sum to the dimension size).
from max import functional as F, Tensor

x = Tensor.ones([10, 4])

# Split into chunks of size 3 (last chunk is size 1)
chunks = F.split(x, 3, axis=0)  # shapes: [3,4], [3,4], [3,4], [1,4]

# Split into exact sizes
chunks = F.split(x, [2, 3, 5], axis=0)  # shapes: [2,4], [3,4], [5,4]
-
Parameters:
-
Returns:
-
A list of tensors resulting from the split.
-
Return type:
-
list[Tensor] | list[TensorValue]
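Both modes have direct NumPy analogues, which can serve as a sketch of the chunk-size behavior (illustration only, not the MAX API):

```python
import numpy as np

x = np.ones((10, 4))
# Int mode: chunks of size 3; the trailing chunk may be smaller.
even = np.array_split(x, range(3, 10, 3), axis=0)
print([a.shape[0] for a in even])   # [3, 3, 3, 1]

# List mode: exact sizes, which must sum to the dimension (2+3+5 == 10).
exact = np.split(x, np.cumsum([2, 3, 5])[:-1], axis=0)
print([a.shape[0] for a in exact])  # [2, 3, 5]
```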
sqrt()
max.functional.sqrt(x)
Computes the square root element-wise.
See max.graph.ops.sqrt() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
squeeze()
max.functional.squeeze(x, axis)
Removes dimensions of size 1.
See max.graph.ops.squeeze() for details.
-
Parameters:
-
Return type:
stack()
max.functional.stack(values, axis=0)
Stacks tensors along a new dimension.
See max.graph.ops.stack() for details.
-
Parameters:
-
Return type:
sub()
max.functional.sub(lhs, rhs)
Subtracts two tensors element-wise.
See max.graph.ops.sub() for details.
-
Parameters:
-
- lhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- rhs (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
sum()
max.functional.sum(x, axis=-1)
Computes the sum along specified axes.
-
Parameters:
-
Returns:
-
A tensor containing the sum values.
-
Return type:
tanh()
max.functional.tanh(x)
Computes the hyperbolic tangent element-wise.
See max.graph.ops.tanh() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
tile()
max.functional.tile(x, repeats)
Tiles a tensor by repeating it.
See max.graph.ops.tile() for details.
top_k()
max.functional.top_k(input, k, axis=-1)
Returns the k largest elements along an axis.
See max.graph.ops.top_k() for details.
-
Parameters:
-
Return type:
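A sketch of the values-and-indices semantics via NumPy sorting (illustrative only; MAX returns both as op outputs):

```python
import numpy as np

x = np.array([3, 1, 4, 1, 5, 9, 2])
k = 3
# Sort descending and keep the first k values with their indices.
idx = np.argsort(-x)[:k]
print(x[idx])  # [9 5 4]
print(idx)     # [5 4 2]
```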
transfer_to()
max.functional.transfer_to(x, device)
Transfers a tensor to a specified device.
See max.graph.ops.transfer_to() for details.
-
Parameters:
-
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue)
- device (Device | DeviceRef)
-
Return type:
transpose()
max.functional.transpose(x, axis_1, axis_2)
Transposes a tensor.
See max.graph.ops.transpose() for details.
-
Parameters:
-
Return type:
trunc()
max.functional.trunc(x)
Truncates tensor values element-wise.
See max.graph.ops.trunc() for details.
-
Parameters:
-
x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
unsqueeze()
max.functional.unsqueeze(x, axis)
Adds dimensions of size 1.
See max.graph.ops.unsqueeze() for details.
-
Parameters:
-
Return type:
where()
max.functional.where(condition, x, y)
Selects elements from two tensors based on a condition.
See max.graph.ops.where() for details.
-
Parameters:
-
- condition (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- x (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
- y (Value[TensorType] | TensorValue | Shape | Dim | HasTensorValue | int | float | integer[Any] | floating[Any] | DLPackArray)
-
Return type:
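The ternary-select semantics can be illustrated with NumPy, which follows the same convention (not the MAX API itself):

```python
import numpy as np

cond = np.array([True, False, True])
x = np.array([1, 2, 3])
y = np.array([10, 20, 30])
# Take from x where cond holds, otherwise from y.
print(np.where(cond, x, y))  # [ 1 20  3]
```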