Intro to MAX extensibility
The AI model you get from a framework like PyTorch or TensorFlow is built as a graph of connected operations ("ops"). Although most ops are simple math functions, efficiently executing a model that performs billions or trillions of operations requires a high-performance implementation (sometimes called a "kernel") for each op. However, even the fastest individual kernels aren't enough to achieve peak performance. It's also necessary to employ a graph compiler that can analyze the entire graph and optimize the computation and memory usage that span a sequence of ops.
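To see why cross-op optimization matters, consider two element-wise ops run back to back. The following Mojo sketch is a hand-written illustration (not MAX code): it contrasts a naive version, which materializes an intermediate buffer in memory between the two ops, with the fused single-pass version a graph compiler aims to produce.

```mojo
from collections import List


fn mul_then_add_unfused(x: List[Float32], scale: Float32, bias: Float32) -> List[Float32]:
    # Op 1: multiply, writing a full intermediate buffer to memory.
    var tmp = List[Float32](capacity=len(x))
    for i in range(len(x)):
        tmp.append(x[i] * scale)

    # Op 2: add, reading the intermediate buffer back from memory.
    var out = List[Float32](capacity=len(x))
    for i in range(len(tmp)):
        out.append(tmp[i] + bias)
    return out


fn mul_then_add_fused(x: List[Float32], scale: Float32, bias: Float32) -> List[Float32]:
    # Fused: one pass over the data, no intermediate buffer,
    # so each value stays in registers between the two operations.
    var out = List[Float32](capacity=len(x))
    for i in range(len(x)):
        out.append(x[i] * scale + bias)
    return out
```

The two functions compute the same result; the fused one simply halves the memory traffic. A graph compiler applies this kind of transformation automatically across the whole model.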
That's why MAX Engine is designed to be fully extensible with Mojo. Regardless of your model's format (such as PyTorch, ONNX, or MAX Graph), you can write custom ops in Mojo that the MAX Engine compiler natively analyzes and optimizes along with the rest of the model.
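For a concrete sense of what this looks like, here is a minimal sketch of an element-wise custom op, loosely modeled on the `max.extensibility` API (`Tensor`, `empty_tensor`, and the `register.op` decorator). The exact names and signatures vary by MAX version, so treat this as illustrative rather than a definitive reference:

```mojo
from max.extensibility import Tensor, empty_tensor
from max import register


# Register the op under a name that a model graph can reference.
@register.op("my_add_one")
fn my_add_one[type: DType, rank: Int](x: Tensor[type, rank]) -> Tensor[type, rank]:
    # Allocate an output tensor with the same shape as the input.
    var output = empty_tensor[type](x.shape)

    # An element-wise body the compiler can vectorize: it loads
    # `width` SIMD lanes at index `i` and returns the result lanes.
    @always_inline
    @parameter
    fn func[width: Int](i: StaticIntTuple[rank]) -> SIMD[type, width]:
        return x.simd_load[width](i) + 1

    # Apply the body across every element of the output tensor.
    output.for_each[func]()
    return output^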