Python module

max.pipelines.architectures.qwen3vl_moe

Qwen3-VL vision-language architecture for multimodal text generation.

Qwen3VLConfig

class max.pipelines.architectures.qwen3vl_moe.Qwen3VLConfig(*, devices, dtype, image_token_id, video_token_id, vision_start_token_id, spatial_merge_size, mrope_section, num_experts, num_experts_per_tok, moe_intermediate_size, mlp_only_layers, norm_topk_prob, decoder_sparse_step, vision_config, llm_config)

source

Bases: ArchConfigWithKVCache

Configuration for Qwen3VL models.

Parameters:

  • devices (list[DeviceRef])
  • dtype (DType)
  • image_token_id (int)
  • video_token_id (int)
  • vision_start_token_id (int)
  • spatial_merge_size (int)
  • mrope_section (list[int])
  • num_experts (int)
  • num_experts_per_tok (int)
  • moe_intermediate_size (int)
  • mlp_only_layers (list[int])
  • norm_topk_prob (bool)
  • decoder_sparse_step (int)
  • vision_config (VisionConfig)
  • llm_config (Llama3Config)

calculate_max_seq_len()

static calculate_max_seq_len(pipeline_config, huggingface_config)

source

Calculate maximum sequence length for Qwen3VL.

Parameters:

  • pipeline_config (PipelineConfig)
  • huggingface_config (AutoConfig)

Return type:

int

construct_kv_params()

static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

source

Parameters:

Return type:

KVCacheParams

decoder_sparse_step

decoder_sparse_step: int

source

Interval between decoder layers that use a sparse MoE block instead of a dense MLP.
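As a hedged illustration of how decoder_sparse_step, mlp_only_layers, and num_experts typically interact to decide which decoder layers get a sparse MoE block, the sketch below follows the common Hugging Face Qwen MoE convention; MAX's internal selection logic may differ.

```python
def uses_moe(layer_idx: int, num_experts: int,
             decoder_sparse_step: int, mlp_only_layers: list[int]) -> bool:
    """Return True if this decoder layer should use a sparse MoE block."""
    if layer_idx in mlp_only_layers:
        return False  # explicitly pinned to a dense MLP
    # MoE kicks in every `decoder_sparse_step` layers when experts exist.
    return num_experts > 0 and (layer_idx + 1) % decoder_sparse_step == 0

# With step=1 and no dense-only layers, every layer is MoE:
layers = [uses_moe(i, num_experts=128, decoder_sparse_step=1, mlp_only_layers=[])
          for i in range(4)]
```

With decoder_sparse_step=2, only every second layer would be sparse, and any index listed in mlp_only_layers overrides the step and stays dense.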

devices

devices: list[DeviceRef]

source

Devices that the Qwen3VL model is parallelized over.

dtype

dtype: DType

source

DType of the Qwen3VL model weights.

finalize()

finalize(huggingface_config, llm_state_dict, vision_state_dict, return_logits, norm_method='rms_norm')

source

Finalize the Qwen3VLConfig instance with state_dict-dependent fields.

Parameters:

  • huggingface_config (AutoConfig) – HuggingFace model configuration.
  • llm_state_dict (dict[str, WeightData]) – Language model weights dictionary.
  • vision_state_dict (dict[str, WeightData]) – Vision encoder weights dictionary.
  • return_logits (ReturnLogits) – Return logits configuration.
  • norm_method (Literal['rms_norm', 'layer_norm']) – Normalization method.

Return type:

None

get_kv_params()

get_kv_params()

source

Returns the KV cache parameters from the embedded LLM config.

Return type:

KVCacheParams

get_max_seq_len()

get_max_seq_len()

source

Returns the maximum sequence length from the embedded LLM config.

Return type:

int

get_num_layers()

static get_num_layers(huggingface_config)

source

Parameters:

huggingface_config (AutoConfig)

Return type:

int

image_token_id

image_token_id: int

source

Token ID used for image placeholders in the input sequence.

initialize()

classmethod initialize(pipeline_config, model_config=None)

source

Initializes a Qwen3VLConfig instance from pipeline configuration.

Parameters:

Returns:

A Qwen3VLConfig instance with fields initialized from config.

Return type:

Self

initialize_from_config()

classmethod initialize_from_config(pipeline_config, huggingface_config)

source

Initializes a Qwen3VLConfig from pipeline and HuggingFace configs.

This method creates a config instance with all fields that can be determined from the pipeline and HuggingFace configurations, without needing the state_dict. Fields that depend on the state_dict should be set via the finalize() method.

Parameters:

  • pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
  • huggingface_config (AutoConfig) – HuggingFace model configuration.

Returns:

A Qwen3VLConfig instance ready for finalization.

Return type:

Self
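The two-phase pattern above (initialize_from_config first, then finalize() once weights are available) can be sketched with a simplified stand-in class. This mirrors only the documented flow; Qwen3VLConfig's real fields and signatures are the ones shown on this page.

```python
from dataclasses import dataclass, field


@dataclass
class TwoPhaseConfig:
    num_layers: int                                     # known from static config
    weight_dtypes: dict = field(default_factory=dict)   # needs the state_dict
    finalized: bool = False

    @classmethod
    def initialize_from_config(cls, num_layers: int) -> "TwoPhaseConfig":
        """Phase 1: fill every field derivable without weights."""
        return cls(num_layers=num_layers)

    def finalize(self, state_dict: dict) -> None:
        """Phase 2: fill state_dict-dependent fields."""
        self.weight_dtypes = {k: type(v).__name__ for k, v in state_dict.items()}
        self.finalized = True


cfg = TwoPhaseConfig.initialize_from_config(num_layers=32)
cfg.finalize({"embed.weight": 0.0})
```

Splitting construction this way lets the pipeline validate and size the model before the (potentially large) weights are loaded.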

llm_config

llm_config: Llama3Config

source

Language model configuration using Llama3 architecture.

mlp_only_layers

mlp_only_layers: list[int]

source

Indices of decoder layers that use a dense MLP instead of a sparse MoE block.

moe_intermediate_size

moe_intermediate_size: int

source

Intermediate (hidden) dimension of each expert's MLP in the MoE layer.

mrope_section

mrope_section: list[int]

source

Section sizes that partition the rotary embedding dimensions among the position components of multimodal RoPE (M-RoPE).
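An illustrative sketch of how an mrope_section list is typically consumed: it partitions the rotary head dimensions among the temporal, height, and width position components of M-RoPE. The concrete section sizes below are assumptions for illustration, not MAX defaults.

```python
def mrope_slices(mrope_section: list[int]) -> list[slice]:
    """Slice boundaries for each M-RoPE section along the rotary dimension."""
    slices, start = [], 0
    for size in mrope_section:
        slices.append(slice(start, start + size))
        start += size
    return slices


# e.g. 16 temporal + 24 height + 24 width rotary dims:
sections = mrope_slices([16, 24, 24])
```

Each position component (time, height, width) then applies its rotary frequencies only within its own slice of the head dimension.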

norm_topk_prob

norm_topk_prob: bool

source

Whether to renormalize the selected top-k expert probabilities so they sum to one in the MoE layer.

num_experts

num_experts: int

source

Number of experts in the MoE layer.

num_experts_per_tok

num_experts_per_tok: int

source

Number of experts activated per token in the MoE layer.
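A hedged sketch of top-k expert routing, showing what num_experts_per_tok and norm_topk_prob control for a single token. This mirrors the common MoE routing recipe; it is illustrative, not MAX's implementation.

```python
import math


def route(logits: list[float], num_experts_per_tok: int,
          norm_topk_prob: bool) -> list[tuple[int, float]]:
    """Pick the top-k experts for one token; return (expert_id, weight) pairs."""
    # Softmax over all expert logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Keep the k most probable experts.
    topk = sorted(range(len(probs)), key=lambda i: probs[i],
                  reverse=True)[:num_experts_per_tok]
    weights = [probs[i] for i in topk]
    if norm_topk_prob:
        # Renormalize so the selected experts' weights sum to 1.
        s = sum(weights)
        weights = [w / s for w in weights]
    return list(zip(topk, weights))


picks = route([2.0, 0.5, 1.0, -1.0], num_experts_per_tok=2, norm_topk_prob=True)
```

Without renormalization, the two selected weights would sum to less than one because the discarded experts keep part of the softmax mass.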

spatial_merge_size

spatial_merge_size: int

source

Size parameter for spatial merging of vision features.
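As a back-of-envelope illustration: a spatial merge size of s typically fuses an s x s block of vision patches into one token, dividing the patch-grid token count by s**2. The patch size (14) and merge size (2) below are illustrative assumptions, not values read from this config.

```python
def vision_token_count(height: int, width: int,
                       patch_size: int, spatial_merge_size: int) -> int:
    """Merged vision tokens for an image whose dimensions divide evenly."""
    grid_h = height // patch_size
    grid_w = width // patch_size
    return (grid_h * grid_w) // (spatial_merge_size ** 2)


# A 448x448 image with 14-pixel patches and 2x2 merging:
n = vision_token_count(448, 448, patch_size=14, spatial_merge_size=2)
# 32 * 32 patches merged 4-to-1 -> 256 tokens
```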

video_token_id

video_token_id: int

source

Token ID used for video placeholders in the input sequence.

vision_config

vision_config: VisionConfig

source

Vision encoder configuration.

vision_start_token_id

vision_start_token_id: int

source

Token ID that marks the start of vision content.
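An illustrative sketch of how vision_start_token_id and image_token_id typically appear in an input sequence: a start marker followed by one image placeholder per merged vision token, which the model later swaps for vision embeddings. The concrete ID values below are made up for the example, not MAX constants.

```python
VISION_START_TOKEN_ID = 151652   # assumption, for illustration only
IMAGE_TOKEN_ID = 151655          # assumption, for illustration only


def insert_image_placeholders(prefix: list[int], n_vision_tokens: int,
                              suffix: list[int]) -> list[int]:
    """Splice a vision-start marker plus image placeholders between text spans."""
    return (prefix
            + [VISION_START_TOKEN_ID]
            + [IMAGE_TOKEN_ID] * n_vision_tokens
            + suffix)


seq = insert_image_placeholders([1, 2], n_vision_tokens=3, suffix=[3])
```

The same pattern applies to videos with video_token_id standing in for the per-frame vision tokens.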

Qwen3VLModel

class max.pipelines.architectures.qwen3vl_moe.Qwen3VLModel(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN)

source

Bases: AlwaysSignalBuffersMixin, PipelineModelWithKVCache[Qwen3VLTextAndVisionContext]

A Qwen3VL pipeline model for multimodal text generation.

Parameters:

calculate_max_seq_len()

static calculate_max_seq_len(pipeline_config, huggingface_config)

source

Calculates the maximum sequence length for the Qwen3VL model.

Parameters:

  • pipeline_config (PipelineConfig)
  • huggingface_config (AutoConfig)

Return type:

int

estimate_activation_memory()

classmethod estimate_activation_memory(pipeline_config, huggingface_config)

source

Estimates the activation memory required for model execution.

This accounts for temporary memory buffers used during model execution, such as intermediate activations and working buffers.

The default implementation returns 0 for backward compatibility. Models with significant activation memory requirements should override this method to provide accurate estimates.

Parameters:

  • pipeline_config (PipelineConfig) – Pipeline configuration
  • huggingface_config (AutoConfig) – Hugging Face model configuration

Returns:

Estimated activation memory in bytes

Return type:

int
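A rough sketch of the kind of arithmetic an estimate_activation_memory override might perform: bytes for the largest transient buffers, scaled by batch size and sequence length. All sizes and the buffer count here are illustrative assumptions, not MAX's actual accounting.

```python
def estimate_activation_memory(batch_size: int, seq_len: int,
                               hidden_size: int, dtype_bytes: int,
                               n_buffers: int = 4) -> int:
    """Estimated peak activation memory in bytes for working buffers."""
    per_buffer = batch_size * seq_len * hidden_size * dtype_bytes
    return n_buffers * per_buffer


# 1 batch x 8192 tokens, hidden 4096, bf16 (2 bytes), 4 working buffers:
mem = estimate_activation_memory(1, 8192, 4096, 2)
# 4 * 8192 * 4096 * 2 = 268_435_456 bytes (256 MiB)
```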

execute()

execute(model_inputs)

source

Executes the Qwen3VL model with the prepared inputs.

Parameters:

model_inputs (ModelInputs)

Return type:

ModelOutputs

get_kv_params()

classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

source

Gets the parameters required to configure the KV cache for Qwen3VL.

Parameters:

Return type:

KVCacheParams

language_model

language_model: Model

source

The compiled language model for text generation.

load_model()

load_model(session)

source

Loads the compiled Qwen3VL models into the MAX Engine session.

Returns:

A tuple of (vision_model, language_model).

Parameters:

session (InferenceSession)

Return type:

tuple[Model, Model]

model_config

model_config: Qwen3VLConfig | None

source

The Qwen3VL model configuration.

prepare_initial_token_inputs()

prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)

source

Prepares the initial inputs for the first execution pass of the Qwen3VL model.

Parameters:

Return type:

Qwen3VLInputs

prepare_next_token_inputs()

prepare_next_token_inputs(next_tokens, prev_model_inputs)

source

Prepares the inputs for subsequent execution steps in a multi-step generation.

Parameters:

Return type:

Qwen3VLInputs

vision_model

vision_model: Model

source

The compiled vision model for processing images.