Python module

max.pipelines.architectures.gpt_oss_modulev3

GPT-OSS mixture-of-experts architecture for text generation.

GptOssConfig​

class max.pipelines.architectures.gpt_oss_modulev3.GptOssConfig(*, vocab_size, hidden_size, intermediate_size, num_hidden_layers, num_attention_heads, num_key_value_heads, head_dim, hidden_activation, max_position_embeddings, rms_norm_eps, rope_theta, attention_bias, sliding_window, num_local_experts, num_experts_per_tok, router_aux_loss_coef, layer_types, attention_dropout, rope_scaling, query_pre_attn_scalar, final_logit_softcapping, attn_logit_softcapping, swiglu_limit, dtype, devices, interleaved_rope_weights, kv_params, tie_word_embeddings=False, return_logits=ReturnLogits.LAST_TOKEN)

source

Bases: ArchConfigWithKVCache

Configuration for GPT OSS models.

Contains parameters specific to the GPT OSS architecture, typically extracted from a HuggingFace configuration object's text config.

Parameters:

attention_bias​

attention_bias: bool

source

Whether to use a bias in the query, key, value and output projection layers during self-attention.

attention_dropout​

attention_dropout: float

source

Dropout probability for attention weights.

attn_logit_softcapping​

attn_logit_softcapping: float | None

source

Softcapping value for attention logits.
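
Tanh-based soft-capping is one common formulation for bounding attention logits; the exact kernel is not shown in this reference, so the following is an illustrative sketch only:

```python
import math

def softcap(logit: float, cap: float) -> float:
    # Soft-capping smoothly bounds a value to (-cap, cap) while staying
    # approximately linear near zero: cap * tanh(logit / cap).
    return cap * math.tanh(logit / cap)
```

Near zero the function is close to the identity; large logits saturate at the cap instead of growing without bound.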

calculate_max_seq_len()​

static calculate_max_seq_len(pipeline_config, huggingface_config)

source

Calculates the maximum sequence length for the model.

Uses the max_length from the max.pipelines.config.PipelineConfig if provided, otherwise falls back to the max_position_embeddings from the HuggingFace configuration's text config.

Parameters:

  • pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
  • huggingface_config (AutoConfig) – The HuggingFace model configuration object (transformers.AutoConfig).

Returns:

The calculated maximum sequence length.

Return type:

int
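
The fallback order described above can be sketched as follows (a simplified illustration, not the actual implementation):

```python
def resolve_max_seq_len(pipeline_max_length, hf_max_position_embeddings):
    # Prefer an explicitly configured pipeline max_length; otherwise fall
    # back to the checkpoint's max_position_embeddings.
    if pipeline_max_length is not None:
        return pipeline_max_length
    return hf_max_position_embeddings
```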

construct_kv_params()​

static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

source

Constructs the KV cache parameters from configuration objects.

Parameters:

  • huggingface_config (AutoConfig) – The HuggingFace model configuration object (transformers.AutoConfig).
  • devices (list[DeviceRef]) – The list of devices the model will run on.
  • kv_cache_config (KVCacheConfig) – The MAX Engine KV cache configuration settings (max.pipelines.max_config.KVCacheConfig).
  • cache_dtype (DType) – The desired data type for the KV cache (max.dtype.DType).
  • pipeline_config (PipelineConfig)

Returns:

The configured max.pipelines.kv_cache.KVCacheParams object.

Return type:

KVCacheParams

devices​

devices: list[DeviceRef]

source

Devices to run the model with.

dtype​

dtype: DType

source

DType of the model weights and input.

final_logit_softcapping​

final_logit_softcapping: float | None

source

Softcapping value for final logits.

finalize()​

finalize(huggingface_config, state_dict, return_logits)

source

Define parameters that can't be determined just from the pipeline config.

Parameters:

  • huggingface_config (AutoConfig) – The HuggingFace model configuration object.
  • state_dict (dict[str, WeightData]) – The model's state dictionary containing weights.
  • return_logits (ReturnLogits) – Whether to return logits for the last token, all tokens, or a variable number of tokens.

Return type:

None

get_kv_params()​

get_kv_params()

source

KV cache parameters to use when running the model.

Return type:

KVCacheParams

get_max_seq_len()​

get_max_seq_len()

source

Returns the default maximum sequence length for the model.

Subclasses should determine whether this value can be overridden by setting the --max-length (pipeline_config.model.max_length) flag.

Return type:

int

get_num_layers()​

static get_num_layers(huggingface_config)

source

Retrieves the number of hidden layers from the HuggingFace configuration.

Parameters:

huggingface_config (AutoConfig) – The HuggingFace model configuration object (transformers.AutoConfig).

Returns:

The number of hidden layers specified in the configuration.

Return type:

int

head_dim​

head_dim: int

source

The attention head dimension.

hidden_activation​

hidden_activation: str

source

The non-linear activation function (function or string) in the decoder. Defaults to "gelu_tanh" if not specified; "gelu_tanh" uses an approximation of the "gelu" activation function.

hidden_size​

hidden_size: int

source

Dimension of the hidden representations.

initialize()​

classmethod initialize(pipeline_config, model_config=None)

source

Initializes a GptOssConfig instance from pipeline configuration.

This method creates a config instance with all fields that can be determined from the pipeline configuration, without needing the state_dict. Fields that depend on the state_dict (like tie_word_embeddings) should be set via the finalize() method.

Parameters:

Returns:

An initialized GptOssConfig instance.

Return type:

Self
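
The two-phase pattern (initialize() from static config, then finalize() once weights are available) can be illustrated with a hypothetical minimal config; the key name "lm_head.weight" below is an assumption for illustration, not the actual checkpoint layout:

```python
class TwoPhaseConfig:
    """Minimal sketch of construct-then-finalize configuration."""

    def __init__(self, vocab_size: int):
        self.vocab_size = vocab_size
        self.tie_word_embeddings = None  # unknown until weights are loaded

    def finalize(self, state_dict: dict) -> None:
        # Weight tying can only be detected once the state dict exists:
        # if no separate output-projection weight is present, the output
        # layer must reuse the embedding matrix.
        self.tie_word_embeddings = "lm_head.weight" not in state_dict

cfg = TwoPhaseConfig(vocab_size=32)
cfg.finalize({"embed_tokens.weight": object()})
```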

interleaved_rope_weights​

interleaved_rope_weights: bool

source

True if the rope weights are in interleaved complex format.

intermediate_size​

intermediate_size: int

source

Dimension of the MLP representations.

kv_params​

kv_params: KVCacheParams

source

KV cache parameters.

layer_types​

layer_types: list[str]

source

Type of attention for each layer ('full_attention' or 'sliding_attention').

max_position_embeddings​

max_position_embeddings: int

source

The maximum sequence length that this model might ever be used with.

num_attention_heads​

num_attention_heads: int

source

Number of attention heads for each attention layer in the Transformer decoder.

num_experts_per_tok​

num_experts_per_tok: int

source

Number of experts selected per token in MoE layers.
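
Top-k expert selection can be sketched as below; whether the router normalizes over only the selected logits (as here) or over all experts varies between models, so treat this as illustrative:

```python
import numpy as np

def top_k_experts(router_logits: np.ndarray, k: int):
    # Select the k highest-scoring experts per token, then softmax over
    # just the selected logits to obtain combination weights.
    idx = np.argsort(router_logits, axis=-1)[..., ::-1][..., :k]
    picked = np.take_along_axis(router_logits, idx, axis=-1)
    w = np.exp(picked - picked.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return idx, w
```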

num_hidden_layers​

num_hidden_layers: int

source

Number of hidden layers in the Transformer decoder.

num_key_value_heads​

num_key_value_heads: int

source

Number of key-value heads used to implement Grouped Query Attention.
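
In Grouped Query Attention, each key-value head is shared by num_attention_heads // num_key_value_heads query heads. A naive sketch of the expansion (real attention kernels avoid materializing the repeat):

```python
import numpy as np

def expand_kv_heads(kv: np.ndarray, num_attention_heads: int) -> np.ndarray:
    # kv has shape [num_kv_heads, seq_len, head_dim]; each KV head is
    # broadcast to its group of query heads.
    num_kv_heads = kv.shape[0]
    group_size = num_attention_heads // num_kv_heads
    return np.repeat(kv, group_size, axis=0)
```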

num_local_experts​

num_local_experts: int

source

Number of experts in each MoE layer.

query_pre_attn_scalar​

query_pre_attn_scalar: float | None

source

Scalar applied to queries before attention computation.

return_logits​

return_logits: ReturnLogits = 'last_token'

source

Whether to return logits for the last token, all tokens, or a variable number of tokens.

rms_norm_eps​

rms_norm_eps: float

source

The epsilon used by the rms normalization layers.
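
RMS normalization divides activations by their root-mean-square over the feature axis, with rms_norm_eps guarding against division by zero; a minimal sketch:

```python
import numpy as np

def rms_norm(x: np.ndarray, weight: np.ndarray, eps: float) -> np.ndarray:
    # Normalize by the RMS over the last axis, then apply a learned scale.
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight
```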

rope_scaling​

rope_scaling: YarnScalingParams

source

Scaling configuration for the RoPE embeddings used in global attention.

rope_theta​

rope_theta: float

source

The base period of the RoPE embeddings.
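
rope_theta sets the base of RoPE's geometric frequency schedule; the per-pair inverse frequencies are commonly computed as theta^(-2i/head_dim), sketched here for illustration:

```python
import numpy as np

def rope_inv_freq(head_dim: int, rope_theta: float) -> np.ndarray:
    # One rotation frequency per (even, odd) dimension pair; later pairs
    # rotate more slowly, encoding longer-range positional information.
    return rope_theta ** (-np.arange(0, head_dim, 2) / head_dim)
```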

router_aux_loss_coef​

router_aux_loss_coef: float

source

Coefficient for the auxiliary load balancing loss in MoE layers.

sliding_window​

sliding_window: int

source

In the GPT OSS language model, specific layers use sliding window attention. This is the size of the sliding window.
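
A sliding-window causal mask restricts each position to the most recent window tokens. A sketch follows; note that inclusive/exclusive conventions for the current token vary between implementations:

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    # Position i may attend to position j iff j <= i (causal) and
    # j > i - window (within the sliding window).
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)
```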

swiglu_limit​

swiglu_limit: float

source

Clamping limit for SwiGLU activation in MoE layers.
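
One plausible reading of the clamp: bound both the gate and linear branches before the SiLU-gated product, so activation outliers cannot explode. The exact clamping scheme is not specified in this reference, so the following is only a sketch:

```python
import numpy as np

def clamped_swiglu(gate: np.ndarray, up: np.ndarray, limit: float) -> np.ndarray:
    # Clamp both branches, then apply SiLU (x * sigmoid(x)) to the gate
    # and multiply elementwise with the linear branch.
    gate = np.clip(gate, -limit, limit)
    up = np.clip(up, -limit, limit)
    return gate / (1.0 + np.exp(-gate)) * up
```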

tie_word_embeddings​

tie_word_embeddings: bool = False

source

Whether to tie weight embeddings. When true, the output linear layer uses the same weight as the embedding layer.
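
Weight tying in a nutshell: the output projection reuses the embedding matrix transposed, so the two layers share one set of parameters. A toy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, hidden_size = 16, 4
embedding = rng.standard_normal((vocab_size, hidden_size))

def embed(token_ids: np.ndarray) -> np.ndarray:
    return embedding[token_ids]

def lm_head(hidden_states: np.ndarray) -> np.ndarray:
    # Tied output projection: the same matrix, transposed.
    return hidden_states @ embedding.T
```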

vocab_size​

vocab_size: int

source

Vocabulary size of the GPT OSS model.

GptOssInputs​

class max.pipelines.architectures.gpt_oss_modulev3.GptOssInputs(tokens, input_row_offsets, return_n_logits, *, kv_cache_inputs=None, lora_ids=None, lora_ranks=None, hidden_states=None)

source

Bases: ModelInputs

A class representing inputs for the GPT OSS model.

This class encapsulates the input tensors required for the GPT OSS model execution.

Parameters:

input_row_offsets​

input_row_offsets: Buffer

source

Buffer containing the offsets for each row in the ragged input sequence.
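
With ragged batching, all sequences are packed into one flat token buffer and input_row_offsets marks where each sequence begins and ends; a small worked example:

```python
import numpy as np

tokens = np.array([5, 6, 7, 8, 9, 10])      # three sequences packed flat
input_row_offsets = np.array([0, 2, 3, 6])  # sequence i spans [off[i], off[i+1])

sequences = [
    tokens[input_row_offsets[i]:input_row_offsets[i + 1]]
    for i in range(len(input_row_offsets) - 1)
]
```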

return_n_logits​

return_n_logits: Buffer

source

Number of logits to return.

tokens​

tokens: Buffer

source

Buffer containing the input token IDs.

GptOssModel​

class max.pipelines.architectures.gpt_oss_modulev3.GptOssModel(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN)

source

Bases: PipelineModelWithKVCache[TextContext]

A GPT OSS pipeline model for text generation.

This class integrates the GPT OSS architecture with the MAX Engine pipeline infrastructure, handling model loading, KV cache management, and input preparation for inference.

Parameters:

calculate_max_seq_len()​

static calculate_max_seq_len(pipeline_config, huggingface_config)

source

Calculates the maximum sequence length for the GPT OSS model.

Uses the max_length from the max.pipelines.config.PipelineConfig if provided, otherwise falls back to the max_position_embeddings from the HuggingFace configuration's text config.

Parameters:

  • pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
  • huggingface_config (AutoConfig) – The HuggingFace model configuration object (transformers.AutoConfig).

Returns:

The calculated maximum sequence length.

Return type:

int

execute()​

execute(model_inputs)

source

Executes the GPT OSS model with the prepared inputs.

Parameters:

model_inputs (ModelInputs) – The prepared inputs for the model execution, typically including token IDs, attention masks/offsets, and KV cache inputs.

Returns:

An object containing the output logits from the model execution.

Return type:

ModelOutputs

get_kv_params()​

classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

source

Gets the parameters required to configure the KV cache for GPT OSS.

Delegates to the GptOssConfig.construct_kv_params static method.

Parameters:

  • huggingface_config (AutoConfig) – The HuggingFace model configuration object (transformers.AutoConfig).
  • pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
  • devices (list[DeviceRef]) – The list of devices the model will run on.
  • kv_cache_config (KVCacheConfig) – The MAX Engine KV cache configuration settings (max.pipelines.max_config.KVCacheConfig).
  • cache_dtype (DType) – The desired data type for the KV cache (max.dtype.DType).

Returns:

The configured max.pipelines.kv_cache.KVCacheParams object.

Return type:

KVCacheParams

load_model()​

load_model()

source

Loads the compiled GPT OSS model into the MAX Engine session.

Returns:

The loaded MAX Engine model object.

Return type:

Callable[[…], Any]

prepare_initial_token_inputs()​

prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)

source

Prepares the initial inputs for the first execution pass of the GPT OSS model.

Parameters:

  • replica_batches (Sequence[Sequence[TextContext]]) – A sequence of sequences of TextContext objects representing the input prompts for each replica.
  • kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None) – Optional inputs required by the KV cache manager.
  • return_n_logits (int)

Returns:

The prepared ModelInputs object for the initial execution step.

Return type:

ModelInputs

prepare_next_token_inputs()​

prepare_next_token_inputs(next_tokens, prev_model_inputs)

source

Prepares the inputs for subsequent execution steps in a multi-step generation.

Parameters:

  • next_tokens (Buffer) – The tensor containing the token IDs generated in the previous step.
  • prev_model_inputs (ModelInputs) – The ModelInputs used in the previous execution step.

Returns:

The prepared ModelInputs object for the next execution step.

Return type:

ModelInputs