MAX docs

Welcome to the MAX docs

We know from experience that deploying AI systems is a complicated process that’s full of compromises. So we built the world’s first extensible and unified AI inference platform that removes the complexity and compromises, and adds flexibility and speed. We call it MAX (Modular Accelerated Xecution).

The MAX platform includes: MAX Engine, our AI runtime that executes models from any AI framework on a wide range of hardware with industry-leading speed; MAX Serving, our deployment services for production-scale inference; and Mojo, a new language for AI development that lets you natively extend and customize your models inside MAX Engine.

Combined, these components provide a solution for large-scale AI deployment that is faster, more extensible, and compatible with more hardware than existing alternatives.

See the following docs for more details.

Talk to us on Discord
