
MAX changelog

This page describes all the changes in MAX.

See how to update MAX with magic.

v24.6 (2024-12-17)

This is a huge update that offers a first look at our serving library for MAX on GPUs!

Also check out our blog post introducing MAX 24.6.

✨ Highlights

  • MAX Engine on GPUs preview

    We’re excited to share a preview of MAX Engine on GPUs. We’ve created a few tutorials that demonstrate MAX’s ability to run GenAI models with our next-generation MAX graph compiler on NVIDIA GPU architectures (including A100, A10, L4, and L40 GPUs). You can experience it today by deploying Llama 3 on an A100 GPU.

  • MAX Serve preview

    This release also includes an all-new serving interface called MAX Serve. It's a Python-based serving layer that supports both native MAX models when you want a high-performance deployment, and off-the-shelf PyTorch LLMs from Hugging Face when you want to explore and experiment—all with GPU support. It provides an OpenAI-compatible REST endpoint for inference requests, and a Prometheus-compatible metrics endpoint. You can use a magic command to start a local server, or use our ready-to-deploy MAX container to start an endpoint in the cloud. Try it now with an LLM from Hugging Face.

  • Upgraded MAX models

    As we continue to build our Python-based MAX Graph API that allows you to build high-performance GenAI models, we’ve made a ton of performance improvements to the existing models and added a few new models to our GitHub repo. All the Python-based MAX models now support GPUs and broad model architectures. For example, llama3 adds compatibility for the LlamaForCausalLM family, which includes over 20,000 model variants and weights on Hugging Face.

Documentation

New tutorials:

Other new docs:

Also, our documentation is now available for MAX nightly builds! If you’re building with a MAX nightly release, you can switch to see the nightly docs using a toggle to the right of the search bar.

MAX Serve

This release includes a preview of our Python-based serving library called MAX Serve. It simplifies the process of deploying your own inference server with consistent and reliable performance.

MAX Serve currently includes the following features:

  • Deploys locally and to the cloud with our MAX container image, or with the magic CLI.

  • An OpenAI-compatible server with streaming /chat/completion and /completion endpoints for LLM inference requests (see the client sketch below).

  • Prometheus-compatible metrics endpoint with LLM KPIs (TTFT and ITL) for monitoring and evaluating performance.

  • Supports most TextGeneration Hugging Face Hub models.

  • Multiprocess HTTP/model worker architecture to maximize CPU core utilization by distributing multiple incoming requests across multiple processes, ensuring both high throughput and responsiveness.

  • Continuous heterogeneous batching to combine multiple incoming requests into a single inference (no waiting to fill a batch size) and improve total throughput.

There’s much more still in the works for MAX Serve, but you can try it today with our tutorials to Deploy Llama 3 on GPU with MAX Serve and Deploy a PyTorch model from Hugging Face.
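
Because the server is OpenAI-compatible, you can exercise a local endpoint with the standard OpenAI Python client. The snippet below is a minimal sketch, assuming a MAX Serve instance is already running locally; the port, the /v1 path prefix, and the model name are placeholders rather than values fixed by MAX Serve.

    from openai import OpenAI

    # Placeholder endpoint and model name; substitute whatever your local
    # MAX Serve instance reports when it starts.
    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    response = client.chat.completions.create(
        model="meta-llama/Meta-Llama-3-8B-Instruct",
        messages=[{"role": "user", "content": "Write a haiku about GPUs."}],
    )
    print(response.choices[0].message.content)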

Known issues:

  • While this release is enough to support typical chatbot applications, it does not yet support the function-calling portion of the OpenAI API specification needed to enable robust agentic workflows.

  • Sampling is still limited and doesn’t currently respect temperature or other sampling-related API request input.

  • Structured generation is not supported.

  • Support for multi-modal models is still nascent.

MAX models

All of our Python-based GenAI models on GitHub now support GPUs!

As we add more models, we’re also building a robust set of libraries and infrastructure that makes it easier to build and deploy a growing library of LLMs. Some of it is available in the new max.pipelines package, and some of it lives alongside the models on GitHub. Here are just some of the highlights:

  • Deep integration with the Hugging Face ecosystem for a quick-to-deploy experience, such as using HF Model Hub tools to fetch config files, support for weights in safetensor format, support for HF tokenizers, and more. (We also support GGUF weight formats.)

  • Expanded set of model abstractions for use by different LLM architectures:

    • Attention layers (including highly optimized implementations with configurable masking, like AttentionWithRope). The optimized attention layers include variants that accept an attention mask. More memory-efficient variants that don’t take a mask instead take a “mask functor” argument to the kernel, which implements masking without materializing a mask by computing a mask value from input coordinates on the fly.

    • Transformers such as Transformer and TransformerBlock. These include an initial implementation of ragged tensors—tensors for which each dimension can have a different size, avoiding the use of padding tokens by flattening a batch of sequences of differing lengths.

    • Common layers such as RMSNorm, Embedding, and Sequential.

    • KV cache management helpers, like ContinuousBatchingKVCacheManager.

    • Low-level wrappers over optimized kernels like fused_qk_ragged_rope. These are custom fused kernels that update the KV cache in place. Although they are custom, they reuse the underlying kernel implementation by passing in lambda functions used to retrieve inputs and write to outputs in place.

  • Added generalized interfaces for text generation such as TokenGenerator and PipelineModel, which provide modularity within the models and serving infrastructure. Also added a plug-in mechanism (PipelineRegistry) to more quickly define new models, tokenizers, and other reusable components. For example, anything that conforms to TokenGenerator can be served using the LLM infrastructure within MAX Serve. We then used this interface to create the following:

    • An optimized TextGenerationPipeline that can be combined with any compatible graph and has powerful performance features like graph-based multi-step scheduling, sampling, KV cache management, ragged tensor support, and more.

    • A generic HFTextGenerationPipeline that can run any Hugging Face model for which we don’t yet have an optimized implementation in eager mode.

  • Models now accept weights via a weights registry, which is passed to the session.load() method’s weights_registry argument (see the sketch at the end of this list). Decoupling the weights from the model architecture makes it possible to implement all of the different fine-tunes of a given model with the same graph. Furthermore, because the underlying design is decoupled, we can later expose the ability to compile a model once and swap weights on the fly, without recompiling the model.

  • Added generic implementations of common kernels, which allow you to plug in different batching strategies (ragged or padded), KV cache management approaches (continuous batching), masking (causal, sliding window, etc.), and position encoding (RoPE or ALiBi) without having to rewrite any kernel code. (More about this in a future release.)

  • Multi-step scheduling to run multiple token-generation steps on GPU before synchronizing to the CPU.
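
To make the weights registry concrete, here is a minimal sketch: it stages a single named weight with Graph.add_weight() and supplies its value at load time through the weights_registry argument of session.load(). The graph-construction details (module paths and the TensorType and Weight signatures) are assumptions for illustration and may differ from the shipped APIs.

    import numpy as np
    from max import engine
    from max.dtype import DType
    from max.graph import Graph, TensorType, Weight

    # Sketch of a graph that scales its input by a named weight.
    with Graph("scale", input_types=[TensorType(DType.float32, ["batch", 4])]) as graph:
        # add_weight() stages a placeholder op that is populated by name
        # from the weights registry when the model is loaded.
        scale = graph.add_weight(Weight(name="scale", dtype=DType.float32, shape=[4]))
        graph.output(graph.inputs[0] * scale)

    session = engine.InferenceSession()

    # The registry is a plain mapping from weight names to arrays; NumPy
    # arrays are accepted zero-copy via DLPack. Swapping in a different
    # fine-tune's weights reuses the same graph.
    model = session.load(
        graph,
        weights_registry={"scale": np.ones(4, dtype=np.float32)},
    )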

Updated models:

  • Significant performance upgrades for Llama 3, and expanded compatibility with the LlamaForCausalLM model family. For example, it now also supports the Llama 3.2 1B and 3B text models.

New models:

Known issues:

  • The Q4 quantized models currently work on CPU only.

  • Using a large setting for top-k with the Llama 3.1 model may lead to segmentation faults for certain workloads when run on NVIDIA GPUs. This should be resolved in the latest nightly MAX builds.

  • The models currently use a smaller default context window than the max_seq_len specified in the Hugging Face configuration files for a given model. This can be manually adjusted by setting the --max-length parameter to the desired context length when serving a model.

  • Some variants of the supported core models (like LlamaForCausalLM with a different number of heads, head sizes, etc.) might not be fully optimized yet. We plan to fully generalize our implementations in a future release.

MAX Engine

MAX Engine includes much of the core infrastructure that enables MAX to accelerate AI models on any hardware, such as the graph compiler, runtime, kernels, and the APIs to interact with them. It all works without external dependencies such as PyTorch or CUDA.

This release includes a bunch of performance upgrades to our graph compiler and runtime. We’ve added support for NVIDIA GPU architectures (including A100, A10, L4, and L40 GPUs), and built out new infrastructure so we can quickly add support for other GPU hardware.

Engine API changes:

  • InferenceSession now accepts a custom_extensions constructor argument, same as load(), to specify model extension libraries.

  • The Model object is now callable to run an inference.

Breaking changes:

  • Model.execute() signature changed to support GPUs.

    • The execute() function currently doesn’t accept keyword arguments. Instead, you can pass tensors as a driver.Tensor, int, float, bool, np.generic, or DLPackArray (DLPack). Note that both PyTorch and NumPy arrays implement the DLPack protocol, which means you can also pass either of those types to execute() (see the sketch below).

    • execute_legacy() preserves the semantics of execute() with support for keyword arguments to help with migration, but will be removed in a future release. execute_legacy() doesn't support GPUs.

    • Calling execute() with positional arguments still works the same.
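
Continuing the hedged weights-registry sketch from the MAX models section above, the new calling convention looks roughly like this (the input shape simply matches that example):

    import numpy as np

    # `model` is the Model returned by session.load() in the earlier sketch.
    x = np.ones((2, 4), dtype=np.float32)  # NumPy arrays implement DLPack

    outputs = model.execute(x)  # positional inputs only; no keyword arguments
    outputs = model(x)          # Model is now callable; equivalent to execute()
    print(outputs)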

Driver APIs

MAX Driver (the max.driver module) is a new component of MAX Engine that’s still a work in progress. It provides primitives for working with heterogeneous hardware systems (GPUs and CPUs), such as allocating on-device memory, transferring data between host and device, querying device stats, and more. It’s a foundation on which other components of MAX Engine operate (for example, InferenceSession now uses driver.Tensor to handle model inputs and outputs).

Driver API changes:

  • Added CUDA() device to open an NVIDIA GPU.

  • Added support for fp16 and bfloat16 dtypes.

  • Expanded functionality for max.driver.Device, with new class methods and properties. We are still working on building this out to support more accelerator features.

  • driver.Tensor (and the InferenceSession.load() argument weights_registry) now supports zero-copy interoperability with NumPy arrays and PyTorch tensors, using DLPack / DLPackArray.

  • driver.Tensor has new methods, such as from_dlpack(), element_size(), to(), to_numpy(), view(), zeros(), and more (see the sketch below).

MAX Driver APIs are still changing rapidly and not yet ready for general use. We’ll publish more documentation in a future release.
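
As a rough illustration of these primitives (treat the exact signatures as assumptions, since the Driver APIs are still in flux):

    import numpy as np
    from max import driver

    # Open devices. CUDA() requires an NVIDIA GPU to be present.
    cpu = driver.CPU()
    # gpu = driver.CUDA()

    arr = np.arange(6, dtype=np.float32).reshape(2, 3)
    t = driver.Tensor.from_dlpack(arr)  # zero-copy view over the NumPy buffer

    # t_gpu = t.to(gpu)                 # host-to-device transfer
    back = t.to_numpy()                 # back to NumPy, again without copying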

Known issues:

  • MAX Driver is currently limited to managing just one NVIDIA GPU at a time (it does not yet support multi-GPU). It also does not yet support remote devices.

  • DLPack support is not complete. For example, streams are not yet supported.

Graph compiler

When you load a model into MAX Engine, the graph compiler is the component that inspects and optimizes all graph operations (ops) to deliver the best run time performance on each device.

This release includes various graph compiler improvements:

  • Major extensions to support NVIDIA GPUs (and other devices in the future), including async copies and caching of JIT’d kernels.

  • The runtime now performs scheduling to enable GPU compute overlap with the CPU.

  • New transformations to the Mojo kernels to enable a number of optimizations, including specialization on tensor dimensions, specialization on target hardware, specialization on non-tensor dimension input to kernels, automatic kernel fusion between operators, and more.

  • New algebraic simplifications and algorithms for ops such as horizontal fusion of matrix multiplications.

  • New CPU-side primitives for device management that are automatically transformed and optimized to reduce overhead (MAX does not need to use things like CUDA Graphs).

  • Updated memory planning to preallocate device memory (hoist computation from inference runtime to initialization time) and reduce per-inference overhead.

Graph APIs

The graph compiler is also exposed through the MAX Graph APIs (the max.graph package), which allow you to build high-performance GenAI models in Python.

Graph API changes:

  • Python stack traces from model execution failures now include a trace to the original op-creation, allowing for easier debugging during development.

  • The max.graph APIs now include preliminary support for symbolic algebraic expressions using AlgebraicDim, enabling more powerful support for checked dynamic shapes. This allows expressions such as -Dim("x") - 4. Furthermore, the algebraic expressions simplify to a canonical form, so that, for example, -Dim("x") - 4 == -(Dim("x") + 4) holds (see the sketch at the end of this section).

  • More advanced dtype promotion now allows TensorValue math operators to just work when used with NumPy arrays and Python primitives.

  • TensorValue has new methods, such as broadcast_to(), cast(), flatten(), permute(), and more.

  • Added BufferValue, which allows for device-resident tensors that are read and mutated within the graph.

  • DType has new methods and properties: align, size_in_bytes, and is_float().

  • Value constructor accepts more types for value.

  • TensorValue constructor accepts more types for value.

  • TensorValue.rebind() accepts a new message argument.

Breaking changes:

  • Graph.add_weight() now accepts Weight and returns TensorValue. Weight is essentially a named placeholder for a tensor that knows its name, dtype, shape, and optionally device and quantization encoding. Graph.add_weight() stages an op in the graph that is populated by a named weight in the weights registry passed to session.load.

  • The Weight constructor arguments changed; added align, dtype, and shape; removed assign, filepath, offset, and value.

  • The ops.scalar() method was removed along with the is_static() and is_symbolic() methods from all graph.type objects.

    • Instead of ops.scalar(), use ops.constant().

    • Instead of is_static() and is_symbolic(), use isinstance(dim, SymbolicDim) and isinstance(dim, StaticDim).

The MAX Graph APIs are not ready for general use, but you can experiment with them now by following this tutorial. We'll add more documentation when we finish some API redesigns.
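
As a small sketch of the dimension algebra and the new isinstance-based checks (this assumes Dim, StaticDim, and SymbolicDim are importable from max.graph; the exact import paths may differ):

    from max.graph import Dim, StaticDim, SymbolicDim

    x = Dim("x")

    # Algebraic expressions over symbolic dims simplify to a canonical form.
    assert -x - 4 == -(x + 4)

    # isinstance() checks replace the removed is_static()/is_symbolic() helpers.
    print(isinstance(Dim("batch"), SymbolicDim))  # True for a named dimension
    print(isinstance(Dim(16), StaticDim))         # True for a fixed size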

Custom op registration

Although the APIs for writing custom operators (ops) aren’t ready for general use, this release includes a significant redesign that lays the groundwork. You might notice some associated APIs in this release and more APIs in the nightlies, so here’s a little about the work in progress:

  • The custom op APIs will allow you to extend MAX Engine with new ops written in Mojo, providing full composability and extensibility for your models. It’s the exact same API we use to write MAX Engine’s built-in ops such as matmul. That means your custom ops can benefit from all our compiler optimization features such as kernel fusion—your ops are treated the same as all the ops included “in the box.”

  • The new API requires far less adornment at the definition site to enable the MAX model compiler to optimize custom ops along with the rest of the graph (compared to our previous version that used NDBuffer).

  • Custom ops support “destination passing style” for tensors.

  • The design composes on top of Mojo’s powerful metaprogramming, as well as the kernel library’s abstractions for composable kernels.

We’ll publish more documentation when the custom op API is ready for general use. Check out the MAX repo’s nightly branch to see the latest custom op examples.

Known issues:

  • Custom ops don't have type or lifetime checking. They also don't reason about mutability. Expect lots of sharp corners and segfaults if you hold them wrong while we improve this!

Numeric kernels

The GPU kernels for MAX Engine are built from the ground up in Mojo with no dependencies on external vendor code or libraries. This release includes the following kernel improvements:

  • AttenGen: a novel way to express attention patterns that supports different attention masks, score functions, and caching strategies.

  • State-of-the-art matrix multiplication algorithms with optimizations such as the following:

    • Pipelining and double-buffering to overlap data transfer and computation and to hide memory access latency (for both global and shared memory).

    • Thread swizzling to avoid shared memory bank conflicts associated with tensor core layouts.

    • Block swizzling to increase L2 cache locality.

  • SplitK/StreamK GEMM algorithms: divide the computation along the shared K dimension into smaller partial matrix multiplications that can be executed independently across streaming multiprocessors. These algorithms are ideal for matrices with a large K dimension but a small M dimension.

  • Large context length MHA: uses SplitK/StreamK to implement the attention mechanism and eliminate the need for a huge score matrix, which drastically reduces memory usage and traffic to enable large context lengths.

  • DualGemm: accelerates the multi-layer perceptron (MLP) layers where the left-hand side (LHS) is shared between two matrix multiplications.

Known issues:

  • The MAX kernels are optimized for bfloat16 on GPUs.

  • Convolution on GPU is not performance optimized yet.

  • Although v24.6 technically runs on H100 GPUs, it doesn’t include performance-optimized kernels for that device yet, so it isn’t recommended.

Mojo

Mojo is a crucial component of the MAX stack that enables all of MAX’s performance-oriented code across hardware. For all the updates to the Mojo language, standard library, and tools, see the Mojo changelog.

v24.5 (2024-09-13)

✨ Highlights

⭐️ New

  • Added repeat_interleave graph op.

  • Added caching for MAX graph models. This means that graph compilation is cached and the executable model is retrieved from cache on the 2nd and subsequent runs. Note that the model cache is architecture specific and isn't portable across different targets.

  • Support for Python 3.12.

MAX Graph Python API

This Python API will ultimately provide the same low-level programming interface for high-performance inference graphs as the Mojo API. As with the Mojo API, it's an API for graph-building only, and it does not implement support for training.

You can take a look at how the API works in the MAX Graph Python API reference.

MAX Driver Python API

The MAX Driver API allows you to interact with devices (such as CPUs and GPUs) and allocate memory directly onto them. With this API, you interact with this memory as tensors.

Note that this API is still under development, with support for non-host devices, such as GPUs, planned for a future release.

To learn more, check out the MAX Driver Python API reference.

MAX C API

New APIs for adding torch metadata libraries:

  • M_setTorchMetadataLibraryPath
  • M_setTorchMetadataLibraryPtr

🦋 Changed

MAX Engine performance

  • Compared to v24.4, MAX Engine v24.5 generates tokens for Llama an average of 15%-48% faster.

MAX C API

Simplified the API for adding torch library paths, which now only takes one path per API call, but can be called multiple times to add paths to the config:

  • M_setTorchLibraries -> M_setTorchLibraryPath

⚠️ Deprecated

  • The max command line tool is no longer supported and will be removed in a future release.

❌ Removed

  • Dropped support for Ubuntu 20.04. If you're using Ubuntu, we currently support Ubuntu 22.04 LTS only.
  • Dropped support for Python 3.8.
  • Removed built-in PyTorch libraries from the max package. See the FAQ for information on supported torch versions.

v24.4 (2024-06-07)

🔥 Legendary

  • MAX is now available on macOS! Try it now.

  • New quantization APIs for MAX Graph. You can now build high-performance graphs in Mojo that use the latest quantization techniques, enabling even faster performance and more system compatibility for large models.

    Learn more in the guide to quantize your graph weights.

⭐️ New

MAX Mojo APIs

MAX C API

Miscellaneous new APIs:

  • M_cloneCompileConfig()
  • M_copyAsyncTensorMap()
  • M_tensorMapKeys() and M_deleteTensorMapKeys()
  • M_setTorchLibraries()

🦋 Changed

MAX Mojo API

  • EngineNumpyView.data() and EngineTensorView.data() functions that return a type-erased pointer were renamed to unsafe_ptr().

  • TensorMap now conforms to CollectionElement trait to be copyable and movable.

  • custom_nv() was removed, and its functionality moved into custom() as a function overload, so it can now output a list of tensor symbols.

v24.3 (2024-05-02)

🔥 Legendary

  • You can now write custom ops for your models with Mojo!

    Learn more about MAX extensibility.

🦋 Changed

  • Added support for named dynamic dimensions. This means you can specify when two or more dimensions in your model's input are dynamic but their sizes at run time must match each other. By specifying each of these dimension sizes with a name (instead of using None to indicate a dynamic size), the MAX Engine compiler can perform additional optimizations. See the notes below for the corresponding API changes that support named dimensions.

  • Simplified all the APIs to load input specs for models, making them more consistent.

MAX Engine performance

  • Compared to v24.2, MAX Engine v24.3 shows an average speedup of 10% on PyTorch models, and an average 20% speedup on dynamically quantized ONNX transformers.

MAX Graph API

The max.graph APIs are still changing rapidly, but starting to stabilize.

See the updated guide to build a graph with MAX Graph.

  • AnyMoType renamed to Type, MOTensor renamed to TensorType, and MOList renamed to ListType.

  • Removed ElementType in favor of using DType.

  • Removed TypeTuple in favor of using List[Type].

  • Removed the Module type so you can now start building a graph by directly instantiating a Graph.

  • Some new ops in max.ops, including support for custom ops.

    See how to create a custom op in MAX Graph.

MAX Engine Python API

  • Redesigned InferenceSession.load() to replace the confusing options argument with a custom_ops_path argument for use when loading a custom op, and an input_specs argument for use when loading TorchScript models.

    As a result, CommonLoadOptions, TorchLoadOptions, and TensorFlowLoadOptions have all been removed.

  • TorchInputSpec now supports named dynamic dimensions (previously, dynamic dimension sizes could be specified only as None). This lets you tell MAX which dynamic dimensions are required to have the same size, which helps MAX better optimize your model, as sketched below.
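
For example, here is a hedged sketch of loading a TorchScript model with two inputs whose batch dimensions must match; the file path, shapes, dtype, and exact module paths are placeholders and assumptions, not a reference:

    from max import engine
    from max.dtype import DType

    session = engine.InferenceSession()

    # Naming the dynamic dimension ("batch") tells the compiler that both
    # inputs share the same batch size; None would leave it unconstrained.
    spec = engine.TorchInputSpec(shape=["batch", 128], dtype=DType.int64)

    model = session.load(
        "model.torchscript",       # placeholder path to a TorchScript file
        input_specs=[spec, spec],  # one spec per model input
    )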

MAX Engine Mojo API

  • InferenceSession.load_model() was renamed to load().

  • Redesigned InferenceSession.load() to replace the confusing config argument with a custom_ops_path argument for use when loading a custom op, and an input_specs argument for use when loading TorchScript models.

    Doing so removed LoadOptions and introduced the new InputSpec type to define the input shape/type of a model (instead of LoadOptions).

  • New ShapeElement type to allow for named dynamic dimensions (in InputSpec).

  • max.engine.engine module was renamed to max.engine.info.

MAX Engine C API

❌ Removed

  • Removed TensorFlow support in the MAX SDK, so you can no longer load a TensorFlow SavedModel for inference. However, TensorFlow is still available for enterprise customers.

    We removed TensorFlow because industry-wide TensorFlow usage has declined significantly, especially for the latest AI innovations. Removing TensorFlow also cuts our package size by over 50% and accelerates the development of other customer-requested features. If you have a production use-case for a TensorFlow model, please contact us.

  • Removed the Python CommonLoadOptions, TorchLoadOptions, and TensorFlowLoadOptions classes. See note above about InferenceSession.load() changes.

  • Removed the Mojo LoadOptions type. See the note above about InferenceSession.load() changes.

v24.2.1 (2024-04-11)

  • You can now import more MAX Graph functions from max.graph.ops instead of using max.graph.ops.elementwise. For example:

    from max.graph import ops

    var relu = ops.relu(matmul)

v24.2 (2024-03-28)

  • MAX Engine now supports TorchScript models with dynamic input shapes.

    No matter what the input shapes are, you still need to specify the input specs for all TorchScript models.

  • The Mojo standard library is now open source!

    Read more about it in this blog post.

  • And, of course, lots of Mojo updates, including implicit traits, support for keyword arguments in Python calls, a new List type (previously DynamicVector), some refactoring that might break your code, and much more.

    For details, see the Mojo changelog.

v24.1.1 (2024-03-18)

This is a minor release that improves error reports.

v24.1 (2024-02-29)

The first release of the MAX platform is here! 🚀

This is a preview version of the MAX platform. That means it is not ready for production deployment and is designed only for local development and evaluation.

Because this is a preview, some API libraries are still in development and subject to change, and some features that we previously announced are not quite ready yet. But there is a lot that you can do in this release!

This release includes our flagship developer tools, currently for Linux only:

  • MAX Engine: Our state-of-the-art graph compiler and runtime library that executes models from PyTorch and ONNX, with incredible inference speed on a wide range of hardware.

    • API libraries in Python, C, and Mojo to run inference with your existing models. See the API references.

    • The max benchmark tool, which runs MLPerf benchmarks on any compatible model without writing any code.

    • The max visualize tool, which allows you to visualize your model in Netron after partially lowering in MAX Engine.

    • An early look at the MAX Graph API, our low-level library for building high-performance inference graphs.

  • MAX Serving: A preview of our serving wrapper for MAX Engine that provides full interoperability with existing AI serving systems (such as Triton) and that seamlessly deploys within existing container infrastructure (such as Kubernetes).

    • A Docker image that runs MAX Engine as a backend for NVIDIA Triton Inference Server. Try it now.
  • Mojo: The world's first programming language built from the ground-up for AI developers, with cutting-edge compiler technology that delivers unparalleled performance and programmability for any hardware.

    • The latest version of Mojo, the standard library, and the mojo command line tool. These are always included in MAX, so you don't need to download any separate packages.

    • The Mojo changes in each release are often quite long, so we're going to continue sharing those in the existing Mojo changelog.

Additionally, we've started a new GitHub repo for MAX, where we currently share a bunch of code examples for our API libraries, including some large model pipelines such as Stable Diffusion in Mojo and Llama2 built with MAX Graph. You can also use this repo to report issues with MAX.