Start a chat endpoint

The MAX framework simplifies serving open-source models with the same API interface as OpenAI, so you can replace commercial models with alternatives from the MAX Builds site with minimal code changes.

This tutorial shows you how to serve Llama 3.1 locally with the max CLI and interact with it through REST and Python APIs. You'll learn to configure the server and make requests using the OpenAI client libraries as a drop-in replacement.

System requirements:

Set up your environment

Create a Python project to install our APIs and CLI tools:

  1. Create a project folder:
    mkdir chat-tutorial && cd chat-tutorial
  2. Create and activate a virtual environment:
    python3 -m venv .venv/chat-tutorial \
    && source .venv/chat-tutorial/bin/activate
  3. Install the modular Python package:
    pip install modular \
    --extra-index-url https://download.pytorch.org/whl/cpu \
    --extra-index-url https://dl.modular.com/public/nightly/python/simple/

Serve your model

Use the max serve command to start a local model server with the Llama 3.1 model:

max serve \
--model-path modularai/Llama-3.1-8B-Instruct-GGUF

While this example uses the Llama 3.1 model, you can replace it with any of the models listed in the MAX Builds site.

The server is ready when you see a message indicating it's running on http://0.0.0.0:8000:

Server ready on http://0.0.0.0:8000 (Press CTRL+C to quit)
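Before sending chat requests, you can confirm the endpoint is responding. The following is a quick sanity check, assuming the server exposes the OpenAI-compatible /v1/models endpoint; it prints the IDs of the models the server reports as available:

check-server.py
import json
import urllib.request

# Query the OpenAI-compatible model listing endpoint (an assumption here;
# adjust the URL if your server is configured differently)
with urllib.request.urlopen("http://0.0.0.0:8000/v1/models") as resp:
    data = json.load(resp)

for model in data.get("data", []):
    print(model["id"])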

For a complete list of max CLI commands and options, refer to the MAX CLI reference.

Interact with the model

After the server is running, you can interact with the model in several ways. The MAX endpoint supports the OpenAI REST API, so you can send requests from any HTTP client or use the openai Python library.
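Because the endpoint follows the OpenAI wire format, you can also call it directly over HTTP before installing any client library. The following sketch uses only the Python standard library; the URL and model name match the server started above, and the request body follows the standard chat completions schema:

rest-request.py
import json
import urllib.request

# Build a chat completions request in the standard OpenAI format
payload = {
    "model": "modularai/Llama-3.1-8B-Instruct-GGUF",
    "messages": [{"role": "user", "content": "Hello! Who are you?"}],
}

req = urllib.request.Request(
    "http://0.0.0.0:8000/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# The reply is nested under choices[0].message in the OpenAI response format
print(body["choices"][0]["message"]["content"])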

For a more convenient interface, you can use OpenAI's Python client to interact with the model.

To get started, install the OpenAI Python client:

pip install openai

Then, create a client and make a request to the model:

generate-text.py
from openai import OpenAI

client = OpenAI(
    base_url="http://0.0.0.0:8000/v1",
    api_key="EMPTY",  # required by the API, but not used by MAX
)

response = client.chat.completions.create(
    model="modularai/Llama-3.1-8B-Instruct-GGUF",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Who won the world series in 2020?"},
        {"role": "assistant", "content": "The LA Dodgers won in 2020."},
        {"role": "user", "content": "Where was it played?"},
    ],
)
print(response.choices[0].message.content)

In this example, the OpenAI Python client sends requests to the MAX endpoint running locally on port 8000. The client object is initialized with the base URL http://0.0.0.0:8000/v1; the API key is required by the client but ignored by MAX.

When you run this code, the model should respond with information about the 2020 World Series location:

python generate-text.py
The 2020 World Series was played at Globe Life Field in Arlington, Texas. It was a neutral site due to the COVID-19 pandemic.
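For interactive use cases, you may want tokens as they are generated instead of one final message. The OpenAI client supports this with stream=True; the sketch below assumes the MAX endpoint implements the standard OpenAI streaming protocol:

stream-text.py
from openai import OpenAI

client = OpenAI(
    base_url="http://0.0.0.0:8000/v1",
    api_key="EMPTY",  # required by the API, but not used by MAX
)

stream = client.chat.completions.create(
    model="modularai/Llama-3.1-8B-Instruct-GGUF",
    messages=[{"role": "user", "content": "Tell me a short joke."}],
    stream=True,  # receive incremental chunks instead of one final message
)

for chunk in stream:
    # Each chunk carries a delta; content can be None (e.g., role-only chunks)
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()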

For complete details on all available API endpoints and options, see the MAX Serve API documentation.

Next steps

Now that you have successfully set up MAX with OpenAI-compatible endpoints, check out these other tutorials:
