Python module
max.pipelines.architectures.qwen3vl_moe
Qwen3-VL vision-language architecture for multimodal text generation.
Qwen3VLConfig
class max.pipelines.architectures.qwen3vl_moe.Qwen3VLConfig(*, devices, dtype, image_token_id, video_token_id, vision_start_token_id, spatial_merge_size, mrope_section, num_experts, num_experts_per_tok, moe_intermediate_size, mlp_only_layers, norm_topk_prob, decoder_sparse_step, vision_config, llm_config)
Bases: ArchConfigWithKVCache
Configuration for Qwen3VL models.
-
Parameters:
-
- devices (list[DeviceRef])
- dtype (DType)
- image_token_id (int)
- video_token_id (int)
- vision_start_token_id (int)
- spatial_merge_size (int)
- mrope_section (list[int])
- num_experts (int)
- num_experts_per_tok (int)
- moe_intermediate_size (int)
- mlp_only_layers (list[int])
- norm_topk_prob (bool)
- decoder_sparse_step (int)
- vision_config (VisionConfig)
- llm_config (Llama3Config)
calculate_max_seq_len()
static calculate_max_seq_len(pipeline_config, huggingface_config)
Calculate maximum sequence length for Qwen3VL.
-
Parameters:
-
- pipeline_config (PipelineConfig)
- huggingface_config (AutoConfig)
-
Return type:
construct_kv_params()
static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
-
Parameters:
-
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
-
Return type:
decoder_sparse_step
decoder_sparse_step: int
Interval, in layers, at which decoder layers use a sparse MoE MLP rather than a dense MLP.
devices
devices: list[DeviceRef]
Devices that the Qwen3VL model is parallelized over.
dtype
dtype: DType
DType of the Qwen3VL model weights.
finalize()
finalize(huggingface_config, llm_state_dict, vision_state_dict, return_logits, norm_method='rms_norm')
Finalize the Qwen3VLConfig instance with state_dict dependent fields.
-
Parameters:
-
- huggingface_config (AutoConfig) – HuggingFace model configuration.
- llm_state_dict (dict[str, WeightData]) – Language model weights dictionary.
- vision_state_dict (dict[str, WeightData]) – Vision encoder weights dictionary.
- return_logits (ReturnLogits) – Return logits configuration.
- norm_method (Literal['rms_norm', 'layer_norm']) – Normalization method.
-
Return type:
-
None
get_kv_params()
get_kv_params()
Returns the KV cache parameters from the embedded LLM config.
-
Return type:
get_max_seq_len()
get_max_seq_len()
Returns the maximum sequence length from the embedded LLM config.
-
Return type:
get_num_layers()
static get_num_layers(huggingface_config)
-
Parameters:
-
huggingface_config (AutoConfig)
-
Return type:
image_token_id
image_token_id: int
Token ID used for image placeholders in the input sequence.
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initializes a Qwen3VLConfig instance from pipeline configuration.
-
Parameters:
-
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- model_config (MAXModelConfig | None)
-
Returns:
-
A Qwen3VLConfig instance with fields initialized from config.
-
Return type:
initialize_from_config()
classmethod initialize_from_config(pipeline_config, huggingface_config)
Initializes a Qwen3VLConfig from pipeline and HuggingFace configs.
This method creates a config instance with all fields that can be determined from the pipeline and HuggingFace configurations, without needing the state_dict. Fields that depend on the state_dict should be set via the finalize() method.
-
Parameters:
-
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- huggingface_config (AutoConfig) – HuggingFace model configuration.
-
Returns:
-
A Qwen3VLConfig instance ready for finalization.
-
Return type:
llm_config
llm_config: Llama3Config
Language model configuration using Llama3 architecture.
mlp_only_layers
mlp_only_layers: list[int]
Indices of decoder layers that always use a dense MLP instead of a sparse MoE layer.
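Taken together, decoder_sparse_step and mlp_only_layers determine which decoder layers get a sparse MoE MLP. A minimal sketch of the selection rule — the helper name is hypothetical, and the rule is an assumption modeled on the Hugging Face Qwen-MoE convention, not necessarily the exact MAX implementation:

```python
def uses_moe(layer_idx: int, decoder_sparse_step: int, mlp_only_layers: list[int]) -> bool:
    """Hypothetical helper: True if the decoder layer at layer_idx is sparse.

    Layers listed in mlp_only_layers always use a dense MLP; otherwise a
    layer is sparse when its (1-based) position falls on the step grid.
    """
    if layer_idx in mlp_only_layers:
        return False
    return decoder_sparse_step > 0 and (layer_idx + 1) % decoder_sparse_step == 0

# With step 1 and no dense-only layers, every layer is sparse;
# with step 2, only odd 0-based indices (layers 1, 3, ...) are sparse.
schedule = [uses_moe(i, 2, []) for i in range(4)]
```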
moe_intermediate_size
moe_intermediate_size: int
Intermediate size in the MoE layer.
mrope_section
mrope_section: list[int]
Section sizes for multimodal rotary position embedding (M-RoPE), splitting the rotary channels into consecutive ranges assigned to different position axes.
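A small sketch of how mrope_section is typically interpreted in the Qwen-VL family: the section sizes carve the rotary channels into consecutive ranges, one per position axis (temporal, height, width). The axis names and the helper below are assumptions for illustration, based on the Qwen2-VL design rather than on MAX internals:

```python
def section_slices(mrope_section: list[int]) -> list[tuple[int, int]]:
    """Hypothetical helper: (start, end) rotary-channel ranges, one per axis."""
    slices, start = [], 0
    for size in mrope_section:
        slices.append((start, start + size))
        start += size
    return slices

# e.g. 64 rotary channels split across temporal/height/width axes
ranges = section_slices([16, 24, 24])  # [(0, 16), (16, 40), (40, 64)]
```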
norm_topk_prob
norm_topk_prob: bool
Whether to use top-k probability normalization in the MoE layer.
num_experts
num_experts: int
Number of experts in the MoE layer.
num_experts_per_tok
num_experts_per_tok: int
Number of experts per token in the MoE layer.
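The attributes above describe the router: each token's gate scores are softmaxed over num_experts experts, the top num_experts_per_tok are kept, and, when norm_topk_prob is set, the kept weights are renormalized to sum to 1. A self-contained sketch of that routing rule (illustrative only, not the MAX kernel):

```python
import math

def route(gate_logits: list[float], top_k: int, norm_topk_prob: bool) -> list[tuple[int, float]]:
    """Pick top_k experts for one token; return (expert_index, weight) pairs."""
    # Numerically stable softmax over the gate logits.
    m = max(gate_logits)
    exp = [math.exp(x - m) for x in gate_logits]
    total = sum(exp)
    probs = [e / total for e in exp]
    # Keep the top_k highest-probability experts.
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:top_k]
    weights = [probs[i] for i in top]
    if norm_topk_prob:
        s = sum(weights)
        weights = [w / s for w in weights]
    return list(zip(top, weights))

# 4 experts, 2 per token; with norm_topk_prob the kept weights sum to 1.
picks = route([0.1, 2.0, 0.3, 1.5], top_k=2, norm_topk_prob=True)
```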
spatial_merge_size
spatial_merge_size: int
Size parameter for spatial merging of vision features.
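spatial_merge_size controls how many raw vision patches collapse into one language-model placeholder token. A quick sanity check — the formula is an assumption based on the Qwen-VL family, where a merge size of 2 folds each 2x2 patch block into a single token:

```python
def num_placeholder_tokens(grid_h: int, grid_w: int, spatial_merge_size: int) -> int:
    """Hypothetical helper: patches merged spatially into LLM placeholder tokens."""
    assert grid_h % spatial_merge_size == 0 and grid_w % spatial_merge_size == 0
    return (grid_h * grid_w) // (spatial_merge_size ** 2)

# a 16x16 patch grid with merge size 2 -> 64 image tokens
tokens = num_placeholder_tokens(16, 16, 2)
```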
video_token_id
video_token_id: int
Token ID used for video placeholders in the input sequence.
vision_config
vision_config: VisionConfig
Vision encoder configuration.
vision_start_token_id
vision_start_token_id: int
Token ID that marks the start of vision content.
Qwen3VLModel
class max.pipelines.architectures.qwen3vl_moe.Qwen3VLModel(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN)
Bases: AlwaysSignalBuffersMixin, PipelineModelWithKVCache[Qwen3VLTextAndVisionContext]
A Qwen3VL pipeline model for multimodal text generation.
-
Parameters:
-
- pipeline_config (PipelineConfig)
- session (InferenceSession)
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
calculate_max_seq_len()
static calculate_max_seq_len(pipeline_config, huggingface_config)
Calculates the maximum sequence length for the Qwen3VL model.
-
Parameters:
-
- pipeline_config (PipelineConfig)
- huggingface_config (AutoConfig)
-
Return type:
estimate_activation_memory()
classmethod estimate_activation_memory(pipeline_config, huggingface_config)
Estimates the activation memory required for model execution.
This accounts for temporary memory buffers used during model execution, such as intermediate activations and working buffers.
The default implementation returns 0 for backward compatibility. Models with significant activation memory requirements should override this method to provide accurate estimates.
-
Parameters:
-
- pipeline_config (PipelineConfig) – Pipeline configuration
- huggingface_config (AutoConfig) – Hugging Face model configuration
-
Returns:
-
Estimated activation memory in bytes
-
Return type:
execute()
execute(model_inputs)
Executes the Qwen3VL model with the prepared inputs.
-
Parameters:
-
model_inputs (ModelInputs)
-
Return type:
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Gets the parameters required to configure the KV cache for Qwen3VL.
-
Parameters:
-
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
-
Return type:
language_model
language_model: Model
The compiled language model for text generation.
load_model()
load_model(session)
Loads the compiled Qwen3VL models into the MAX Engine session.
-
Returns:
-
A tuple of (vision_model, language_model).
-
Parameters:
-
session (InferenceSession)
-
Return type:
model_config
model_config: Qwen3VLConfig | None
The Qwen3VL model configuration.
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepares the initial inputs for the first execution pass of the Qwen3VL model.
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepares the inputs for subsequent execution steps in a multi-step generation.
-
Parameters:
-
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
-
Return type:
-
Qwen3VLInputs
vision_model
vision_model: Model
The compiled vision model for processing images.