Python module
max.pipelines.architectures.qwen3
Qwen3 transformer architecture for text generation.
Qwen3Config
class max.pipelines.architectures.qwen3.Qwen3Config(*, hidden_size: 'int', num_attention_heads: 'int', num_key_value_heads: 'int', num_hidden_layers: 'int', rope_theta: 'float', rope_scaling_params: 'Llama3RopeScalingParams | None', max_seq_len: 'int', intermediate_size: 'int', interleaved_rope_weights: 'bool', vocab_size: 'int', dtype: 'DType', model_quantization_encoding: 'QuantizationEncoding | None', quantization_config: 'QuantizationConfig | None', kv_params: 'KVCacheParams', return_logits: 'ReturnLogits' = <ReturnLogits.LAST_TOKEN: 'last_token'>, norm_method: "Literal['rms_norm'] | Literal['layer_norm']" = 'rms_norm', norm_dtype: 'DType | None' = None, attention_bias: 'bool' = False, rms_norm_eps: 'float | None' = None, tie_word_embeddings: 'bool' = False, stacked_mlp: 'bool' = False, stacked_qkv: 'bool' = False, attention_multiplier: 'float', embedding_multiplier: 'float', residual_multiplier: 'float', devices: 'list[DeviceRef]', clip_qkv: 'float | None', quant_config: 'QuantConfig | None' = None, lora_config: 'LoRAConfig | None' = None, longrope_scaling_params: 'LongRoPEScalingParams | None' = None, logits_scaling: 'float' = 1.0, return_hidden_states: 'ReturnHiddenStates' = <ReturnHiddenStates.NONE: 'none'>, use_subgraphs: 'bool' = True, data_parallel_degree: 'int' = 1, num_experts: 'int' = 0, num_experts_per_tok: 'int' = 1, moe_intermediate_size: 'int' = 0, mlp_only_layers: 'list[int]' = <factory>, norm_topk_prob: 'bool' = False, decoder_sparse_step: 'int' = 1, ep_config: 'EPConfig | None' = None)
Bases: Llama3Config
Parameters:
- hidden_size (int)
- num_attention_heads (int)
- num_key_value_heads (int)
- num_hidden_layers (int)
- rope_theta (float)
- rope_scaling_params (Llama3RopeScalingParams | None)
- max_seq_len (int)
- intermediate_size (int)
- interleaved_rope_weights (bool)
- vocab_size (int)
- dtype (DType)
- model_quantization_encoding (QuantizationEncoding | None)
- quantization_config (QuantizationConfig | None)
- kv_params (KVCacheParams)
- return_logits (ReturnLogits)
- norm_method (Literal['rms_norm', 'layer_norm'])
- norm_dtype (DType | None)
- attention_bias (bool)
- rms_norm_eps (float | None)
- tie_word_embeddings (bool)
- stacked_mlp (bool)
- stacked_qkv (bool)
- attention_multiplier (float)
- embedding_multiplier (float)
- residual_multiplier (float)
- devices (list[DeviceRef])
- clip_qkv (float | None)
- quant_config (QuantConfig | None)
- lora_config (LoRAConfig | None)
- longrope_scaling_params (LongRoPEScalingParams | None)
- logits_scaling (float)
- return_hidden_states (ReturnHiddenStates)
- use_subgraphs (bool)
- data_parallel_degree (int)
- num_experts (int)
- num_experts_per_tok (int)
- moe_intermediate_size (int)
- mlp_only_layers (list[int])
- norm_topk_prob (bool)
- decoder_sparse_step (int)
- ep_config (EPConfig | None)
calculate_attention_multiplier()
static calculate_attention_multiplier(huggingface_config)
Computes the attention multiplier for Qwen3 models, using the explicit head_dim from the config instead of deriving it from hidden_size and num_attention_heads.
Parameters:

huggingface_config (AutoConfig) – The HuggingFace configuration object.

Returns:

The attention multiplier value.

Return type:

float
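For intuition, a minimal sketch of such a multiplier, assuming the conventional scaled-dot-product scaling of 1/sqrt(head_dim) (the exact formula MAX uses is not reproduced here):

```python
import math

def attention_multiplier(head_dim: int) -> float:
    # Conventional scaled-dot-product attention scaling: 1 / sqrt(head_dim).
    # Assumption: head_dim is read directly from the HuggingFace config,
    # not derived as hidden_size // num_attention_heads.
    return 1.0 / math.sqrt(head_dim)
```

For head_dim=128 this yields approximately 0.0884.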
construct_kv_params()
static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Override the default Llama3Config.construct_kv_params to use head_dim from config.
Qwen3 models have an explicit head_dim field in their configuration, unlike Llama models where it needs to be calculated.
Parameters:

- huggingface_config (AutoConfig) – The HuggingFace configuration object.
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- devices (list[DeviceRef]) – Devices to use for the KV cache.
- kv_cache_config (KVCacheConfig) – Configuration for the KV cache.
- cache_dtype (DType) – Data type for the cache.

Returns:

KVCacheParams object with the correct head_dim from the config.

Return type:

KVCacheParams
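A hedged sketch of the head_dim resolution this override implies; resolve_head_dim is a hypothetical helper for illustration, not a MAX API:

```python
from types import SimpleNamespace

def resolve_head_dim(hf_config) -> int:
    # Qwen3 configs expose an explicit head_dim; Llama-style configs do not,
    # so fall back to the usual hidden_size // num_attention_heads derivation.
    head_dim = getattr(hf_config, "head_dim", None)
    if head_dim is not None:
        return head_dim
    return hf_config.hidden_size // hf_config.num_attention_heads

# The explicit head_dim wins even when it differs from the derived value.
qwen3_like = SimpleNamespace(head_dim=64, hidden_size=4096, num_attention_heads=32)
llama_like = SimpleNamespace(hidden_size=4096, num_attention_heads=32)
```

Here resolve_head_dim(qwen3_like) returns 64 from the explicit field, while resolve_head_dim(llama_like) falls back to the derived 128.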
decoder_sparse_step
decoder_sparse_step: int = 1
Sparse step for the decoder. Controls which layers use MoE.
ep_config
ep_config: EPConfig | None = None
Expert parallelism configuration. None means no EP.
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initializes a Qwen3Config instance from pipeline configuration.
Parameters:

- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- model_config (MAXModelConfig | None)

Returns:

An initialized Qwen3Config instance.

Return type:

Qwen3Config
initialize_from_config()
classmethod initialize_from_config(pipeline_config, huggingface_config, model_config=None)
Initializes a Qwen3Config instance from pipeline and HuggingFace configs.
This method creates a config instance with all fields that can be determined from the pipeline configuration, without needing the state_dict.
Parameters:

- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- huggingface_config (AutoConfig) – The HuggingFace model configuration.
- model_config (MAXModelConfig | None) – The MAX Engine model configuration.

Returns:

An initialized Qwen3Config instance.

Return type:

Qwen3Config
mlp_only_layers
mlp_only_layers: list[int] = <factory>
List of layer indices that use MLP instead of MoE.
moe_intermediate_size
moe_intermediate_size: int = 0
Intermediate size in the MoE layer. If 0, uses intermediate_size.
norm_topk_prob
norm_topk_prob: bool = False
Whether to use top-k probability normalization in the MoE layer.
num_experts
num_experts: int = 0
Number of experts in the MoE layer. 0 means dense model (no MoE).
num_experts_per_tok
num_experts_per_tok: int = 1
Number of experts per token in the MoE layer.
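Taken together, the MoE fields above determine which decoder layers are sparse. As an illustration, here is the layer-selection rule used by HuggingFace's Qwen MoE implementations; whether MAX applies exactly this rule is an assumption:

```python
def layer_uses_moe(
    layer_idx: int,
    num_experts: int,
    decoder_sparse_step: int,
    mlp_only_layers: list[int],
) -> bool:
    # A layer is sparse (MoE) only when experts are configured, the layer
    # is not explicitly pinned to a dense MLP, and it falls on the sparse step.
    return (
        num_experts > 0
        and layer_idx not in mlp_only_layers
        and (layer_idx + 1) % decoder_sparse_step == 0
    )
```

With num_experts=8, decoder_sparse_step=2, and mlp_only_layers=[], odd-indexed layers (1, 3, 5, …) use MoE while even-indexed layers stay dense; with num_experts=0 every layer is dense.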
Qwen3Model
class max.pipelines.architectures.qwen3.Qwen3Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN, return_hidden_states=ReturnHiddenStates.NONE)
Bases: AlwaysSignalBuffersMixin, LlamaModelBase
Qwen3 pipeline model supporting single-GPU, TP, and DP+EP inference.
Uses AlwaysSignalBuffersMixin since VocabParallelEmbedding and ColumnParallelLinear always require signal buffers for allreduce.
Parameters:
- pipeline_config (PipelineConfig) – The configuration for this pipeline.
- session (InferenceSession) – The container for the runtime for this model.
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
- return_hidden_states (ReturnHiddenStates)
attention_bias
attention_bias: bool = False
Whether to use attention bias.
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Returns the KV cache params for the pipeline model.
Parameters:

- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)

Return type:

KVCacheParams
load_model()
load_model(session)
Parameters:

session (InferenceSession)

Return type:

Model
model
model: Model
Compiled and initialized model ready for inference.
norm_method
norm_method: Literal['rms_norm'] | Literal['layer_norm'] = 'rms_norm'
The normalization method to use.
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepare the inputs for the first pass in multistep execution.
Parameters:

- replica_batches (Sequence[Sequence[TextContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)

Return type:

Llama3Inputs | Qwen3Inputs
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepare the inputs for the next token in multistep execution. This should avoid any device synchronization or copy operations.
Parameters:

- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)

Return type:

Llama3Inputs | Qwen3Inputs
state_dict
Weights to load into the model.