Modular Inference Engine
The world’s fastest unified inference engine, supercharging any model from TensorFlow or PyTorch on a wide range of hardware.
We’re excited to share an early preview of the Modular compute and AI infrastructure stack. Although our infrastructure isn’t ready for general availability yet, we want to give you an early look at our documentation, provide early access to a number of developers, and hear your feedback.
You can see everything we’ve announced in our launch blog post.