Bring your own fine-tuned model to MAX pipelines
Learn how to use your fine-tuned model in MAX pipelines.
Learn how to deploy MAX pipelines to the cloud
Create a GPU-enabled Kubernetes cluster with the cloud provider of your choice and deploy Llama 3.1 with MAX using Helm.
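As a rough sketch of what a Helm-based deployment like this can look like — the chart location, release name, and values keys below are illustrative assumptions, not the documented MAX chart:

```shell
# Hypothetical sketch only: install a MAX serving chart into a
# GPU-enabled Kubernetes cluster. The OCI chart reference, release
# name, and --set keys are assumptions for illustration.
helm install max-llama \
  oci://registry.example.com/charts/max-serving \
  --set model="meta-llama/Llama-3.1-8B-Instruct" \
  --set resources.limits."nvidia\.com/gpu"=1

# Then check that the serving pod was scheduled onto a GPU node.
kubectl get pods -l app.kubernetes.io/instance=max-llama
```

Consult the actual chart's values for the real model and GPU settings; the pattern shown (an OCI chart plus a GPU resource limit) is the general shape of such deployments.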
Learn how to serve models with the max CLI and interact with them through OpenAI-compatible endpoints.
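Because the endpoints are OpenAI-compatible, any OpenAI client can talk to a locally served model by overriding the base URL. A minimal sketch of the request body such an endpoint accepts — the model name here is an assumption for illustration:

```python
import json

# Request body in the OpenAI chat-completions schema. An
# OpenAI-compatible server accepts this at POST /v1/chat/completions.
# The model name is an illustrative assumption.
payload = {
    "model": "meta-llama/Llama-3.1-8B-Instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is MAX?"},
    ],
    "max_tokens": 100,
}

# Serialize to JSON, as an HTTP client would before sending.
body = json.dumps(payload)
```

Point an OpenAI SDK's base URL at the local server (commonly something like `http://localhost:8000/v1`) and the same payload works unchanged.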