Python module

max.pipelines.architectures.qwen3

Qwen3 transformer architecture for text generation.

Qwen3Config

class max.pipelines.architectures.qwen3.Qwen3Config(
    *,
    hidden_size: int,
    num_attention_heads: int,
    num_key_value_heads: int,
    num_hidden_layers: int,
    rope_theta: float,
    rope_scaling_params: Llama3RopeScalingParams | None,
    max_seq_len: int,
    intermediate_size: int,
    interleaved_rope_weights: bool,
    vocab_size: int,
    dtype: DType,
    model_quantization_encoding: QuantizationEncoding | None,
    quantization_config: QuantizationConfig | None,
    kv_params: KVCacheParams,
    return_logits: ReturnLogits = ReturnLogits.LAST_TOKEN,
    norm_method: Literal['rms_norm'] | Literal['layer_norm'] = 'rms_norm',
    norm_dtype: DType | None = None,
    attention_bias: bool = False,
    rms_norm_eps: float | None = None,
    tie_word_embeddings: bool = False,
    stacked_mlp: bool = False,
    stacked_qkv: bool = False,
    attention_multiplier: float,
    embedding_multiplier: float,
    residual_multiplier: float,
    devices: list[DeviceRef],
    clip_qkv: float | None,
    quant_config: QuantConfig | None = None,
    lora_config: LoRAConfig | None = None,
    longrope_scaling_params: LongRoPEScalingParams | None = None,
    logits_scaling: float = 1.0,
    return_hidden_states: ReturnHiddenStates = ReturnHiddenStates.NONE,
    use_subgraphs: bool = True,
    data_parallel_degree: int = 1,
    num_experts: int = 0,
    num_experts_per_tok: int = 1,
    moe_intermediate_size: int = 0,
    mlp_only_layers: list[int] = <factory>,
    norm_topk_prob: bool = False,
    decoder_sparse_step: int = 1,
    ep_config: EPConfig | None = None,
)

Bases: Llama3Config

calculate_attention_multiplier()

static calculate_attention_multiplier(huggingface_config)

Computes the attention multiplier for Qwen3 models.

Uses the explicit head_dim field from the HuggingFace config rather than deriving it from hidden_size and num_attention_heads.

Parameters:

huggingface_config (AutoConfig) – The HuggingFace configuration object.

Returns:

The attention multiplier value.

Return type:

float
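
The distinction matters because some Qwen3 configs set a head_dim that does not equal hidden_size // num_attention_heads. A minimal sketch of the idea, assuming the multiplier is the conventional 1/sqrt(head_dim) attention scale (the exact formula MAX applies is not shown on this page, and the config values below are illustrative):

```python
import math

def attention_multiplier(config: dict) -> float:
    """Return the 1/sqrt(head_dim) attention scale for a model config."""
    head_dim = config.get("head_dim")
    if head_dim is None:
        # Llama-style fallback: derive head_dim from the hidden size.
        head_dim = config["hidden_size"] // config["num_attention_heads"]
    return 1.0 / math.sqrt(head_dim)

# Values loosely modeled on a small Qwen3 config, where the explicit
# head_dim (128) differs from hidden_size // num_attention_heads (64).
qwen3_like = {"hidden_size": 1024, "num_attention_heads": 16, "head_dim": 128}
assert attention_multiplier(qwen3_like) == 1.0 / math.sqrt(128)
```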

construct_kv_params()

static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

Override the default Llama3Config.construct_kv_params to use head_dim from config.

Qwen3 models have an explicit head_dim field in their configuration, unlike Llama models where it needs to be calculated.

Parameters:

  • huggingface_config (AutoConfig) – The HuggingFace configuration object.
  • pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
  • devices (list[DeviceRef]) – Devices to use for the KV cache.
  • kv_cache_config (KVCacheConfig) – Configuration for KV cache.
  • cache_dtype (DType) – Data type for the cache.

Returns:

KVCacheParams object with the correct head_dim from config.

Return type:

KVCacheParams
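
For intuition, the head_dim resolved here directly sets the KV cache footprint. A back-of-the-envelope sketch (a generic formula, not the actual MAX cache layout, which may add paging or alignment overhead; the shape values are hypothetical):

```python
def kv_cache_bytes_per_token(num_layers: int, num_kv_heads: int,
                             head_dim: int, dtype_bytes: int = 2) -> int:
    """Bytes of KV cache one token occupies across all layers.

    The leading factor of 2 covers the separate key and value tensors.
    """
    return 2 * num_layers * num_kv_heads * head_dim * dtype_bytes

# Hypothetical Qwen3-like shape: 36 layers, 8 KV heads, head_dim 128, bf16.
per_token = kv_cache_bytes_per_token(36, 8, 128, dtype_bytes=2)
assert per_token == 147_456  # 144 KiB per cached token
```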

decoder_sparse_step

decoder_sparse_step: int = 1

Sparse step for the decoder. Controls which layers use MoE.

ep_config

ep_config: EPConfig | None = None

Expert parallelism configuration. None means no EP.

initialize()

classmethod initialize(pipeline_config, model_config=None)

Initializes a Qwen3Config instance from pipeline configuration.

Parameters:

  • pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
  • model_config (MAXModelConfig | None) – The MAX Engine model configuration.

Returns:

An initialized Qwen3Config instance.

Return type:

Self

initialize_from_config()

classmethod initialize_from_config(pipeline_config, huggingface_config, model_config=None)

Initializes a Qwen3Config instance from pipeline and HuggingFace configs.

This method creates a config instance with all fields that can be determined from the pipeline configuration, without needing the state_dict.

Parameters:

  • pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
  • huggingface_config (AutoConfig) – The HuggingFace model configuration.
  • model_config (MAXModelConfig | None) – The MAX Engine model configuration.

Returns:

An initialized Qwen3Config instance.

Return type:

Self

mlp_only_layers

mlp_only_layers: list[int]

List of layer indices that use MLP instead of MoE.

moe_intermediate_size

moe_intermediate_size: int = 0

Intermediate size in the MoE layer. If 0, uses intermediate_size.

norm_topk_prob

norm_topk_prob: bool = False

Whether to use top-k probability normalization in the MoE layer.

num_experts

num_experts: int = 0

Number of experts in the MoE layer. 0 means dense model (no MoE).

num_experts_per_tok

num_experts_per_tok: int = 1

Number of experts per token in the MoE layer.
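
Taken together, num_experts, mlp_only_layers, and decoder_sparse_step determine which decoder layers are MoE layers, and moe_intermediate_size sets the expert FFN width. The sketch below mirrors the Hugging Face Qwen3-MoE convention for that selection; treat it as an assumption about how MAX interprets these fields rather than the actual implementation:

```python
def layer_uses_moe(layer_idx: int, num_experts: int,
                   mlp_only_layers: list[int], decoder_sparse_step: int) -> bool:
    """Decide whether a decoder layer is sparse (MoE) or a plain MLP."""
    if num_experts == 0:                 # dense model: no MoE anywhere
        return False
    if layer_idx in mlp_only_layers:     # explicitly pinned to plain MLP
        return False
    # Every decoder_sparse_step-th layer is sparse (step 1 => all layers).
    return (layer_idx + 1) % decoder_sparse_step == 0

def moe_ffn_size(moe_intermediate_size: int, intermediate_size: int) -> int:
    # A moe_intermediate_size of 0 falls back to the dense intermediate_size.
    return moe_intermediate_size or intermediate_size

# Dense config: never MoE.
assert not layer_uses_moe(5, num_experts=0, mlp_only_layers=[], decoder_sparse_step=1)
# MoE config with layer 0 pinned to a plain MLP.
assert not layer_uses_moe(0, num_experts=128, mlp_only_layers=[0], decoder_sparse_step=1)
assert layer_uses_moe(1, num_experts=128, mlp_only_layers=[0], decoder_sparse_step=1)
assert moe_ffn_size(0, 6144) == 6144
```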

Qwen3Model

class max.pipelines.architectures.qwen3.Qwen3Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN, return_hidden_states=ReturnHiddenStates.NONE)

Bases: AlwaysSignalBuffersMixin, LlamaModelBase

Qwen3 pipeline model supporting single-GPU, TP, and DP+EP inference.

Uses AlwaysSignalBuffersMixin since VocabParallelEmbedding and ColumnParallelLinear always require signal buffers for allreduce.

attention_bias

attention_bias: bool = False

Whether to use attention bias.

get_kv_params()

classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

Returns the KV cache params for the pipeline model.

Parameters:

  • huggingface_config (AutoConfig) – The HuggingFace configuration object.
  • pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
  • devices (list[DeviceRef]) – Devices to use for the KV cache.
  • kv_cache_config (KVCacheConfig) – Configuration for KV cache.
  • cache_dtype (DType) – Data type for the cache.

Return type:

KVCacheParams

load_model()

load_model(session)

Parameters:

session (InferenceSession)

Return type:

Model

model

model: Model

Compiled and initialized model ready for inference.

norm_method

norm_method: Literal['rms_norm'] | Literal['layer_norm'] = 'rms_norm'

The normalization method to use: 'rms_norm' or 'layer_norm'.

prepare_initial_token_inputs()

prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)

Prepare the inputs for the first pass in multistep execution.

Return type:

Llama3Inputs | Qwen3Inputs

prepare_next_token_inputs()

prepare_next_token_inputs(next_tokens, prev_model_inputs)

Prepare the inputs for the next token in multistep execution. This should avoid any device synchronization or copy operations.

Return type:

Llama3Inputs | Qwen3Inputs
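
The two methods above form a two-phase protocol for multistep execution: prepare_initial_token_inputs does the full, comparatively expensive setup once per batch, while prepare_next_token_inputs runs between decode steps and must stay cheap (no device synchronization or copies). A toy mock of the calling pattern — the class and dictionary fields here are illustrative, not the real MAX types:

```python
class ToyMultistepModel:
    """Illustrative stand-in for the multistep input-preparation protocol."""

    def prepare_initial_token_inputs(self, replica_batches,
                                     kv_cache_inputs=None, return_n_logits=1):
        # Full preparation: prompt-token layout, masks, cache handles, etc.
        return {"tokens": replica_batches, "step": 0}

    def prepare_next_token_inputs(self, next_tokens, prev_model_inputs):
        # Cheap update between steps: swap in the newly sampled tokens only.
        return {"tokens": next_tokens, "step": prev_model_inputs["step"] + 1}

model = ToyMultistepModel()
inputs = model.prepare_initial_token_inputs([[1, 2, 3]])
for _ in range(3):
    sampled = [[42]]  # would come from sampling the previous step's logits
    inputs = model.prepare_next_token_inputs(sampled, inputs)
assert inputs["step"] == 3
```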

state_dict

state_dict: dict[str, Any]

Weights to load into the model.