Python module

max.pipelines.architectures.qwen3_5

Qwen3_5Config

class max.pipelines.architectures.qwen3_5.Qwen3_5Config(*, hidden_size, num_attention_heads, num_key_value_heads, num_hidden_layers, rope_theta, rope_scaling_params, max_seq_len, intermediate_size, interleaved_rope_weights, vocab_size, dtype, model_quantization_encoding, quantization_config, kv_params, return_logits=ReturnLogits.LAST_TOKEN, norm_method='rms_norm', norm_dtype=None, attention_bias=False, rms_norm_eps=None, tie_word_embeddings=False, stacked_mlp=False, stacked_qkv=False, attention_multiplier, embedding_multiplier, residual_multiplier, devices, clip_qkv, quant_config=None, lora_config=None, longrope_scaling_params=None, logits_scaling=1.0, return_hidden_states=ReturnHiddenStates.NONE, use_subgraphs=True, data_parallel_degree=1, layer_types=<factory>, full_attention_interval=4, linear_key_head_dim=128, linear_value_head_dim=128, linear_num_key_heads=16, linear_num_value_heads=48, linear_conv_kernel_dim=4, partial_rotary_factor=0.25, attn_output_gate=True, vision_config=None, image_token_id=None, video_token_id=None, vision_start_token_id=None, mrope_section=None)

Bases: Llama3Config

Configuration for Qwen3.5 hybrid attention models.

Qwen3.5 uses a hybrid architecture with both full (standard) attention and linear attention (Gated DeltaNet) layers. Every full_attention_interval-th layer uses full attention, and the rest use linear attention.
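
For example, with the default full_attention_interval of 4, a full attention layer appears at every fourth position. A minimal sketch of deriving such a schedule (the helper name is hypothetical, and placing the full attention layers at positions 3, 7, 11, … is an assumption; the checkpoint's own layer_types list is authoritative):

def make_layer_types(num_hidden_layers, full_attention_interval=4):
    # Hypothetical helper: label each layer for the hybrid schedule.
    # The exact offset of the full-attention layers is an assumption.
    return [
        "full_attention" if (i + 1) % full_attention_interval == 0 else "linear_attention"
        for i in range(num_hidden_layers)
    ]

# make_layer_types(8) ->
# ['linear_attention', 'linear_attention', 'linear_attention', 'full_attention',
#  'linear_attention', 'linear_attention', 'linear_attention', 'full_attention']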

attn_output_gate

attn_output_gate: bool = True

Whether full attention layers use a sigmoid output gate.

calculate_attention_multiplier()

static calculate_attention_multiplier(huggingface_config)

Compute attention scaling factor using explicit head_dim.
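
A minimal sketch of what this conventionally amounts to, assuming the standard 1/sqrt(head_dim) scaling and that head_dim is read directly off the Hugging Face config (both assumptions; this page does not show the method body):

import math

def attention_multiplier(huggingface_config):
    # Sketch: conventional softmax-attention scaling from an explicit
    # head_dim, rather than deriving it as hidden_size // num_heads.
    # Reading `head_dim` off the HF config is an assumption.
    return 1.0 / math.sqrt(huggingface_config.head_dim)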

Parameters:

huggingface_config (AutoConfig)

Return type:

float

construct_kv_params()

static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

Construct KV cache parameters for full attention layers only.

Only allocates KV cache entries for full-attention layers; linear attention layers use separate conv/recurrent state buffers instead. The forward pass maps each full-attention layer to a sequential KV cache index (0, 1, 2, …) independent of the absolute layer index.
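
A sketch of that sequential index mapping, driven by the layer_types field documented below (the helper name is hypothetical):

def kv_cache_index_map(layer_types):
    # Hypothetical helper: assign each full-attention layer the next
    # sequential KV cache index; linear attention layers get no entry.
    mapping = {}
    for layer_idx, kind in enumerate(layer_types):
        if kind == "full_attention":
            mapping[layer_idx] = len(mapping)
    return mapping

# With full attention every 4th layer in a 12-layer model, layers
# 3, 7, and 11 map to KV cache indices 0, 1, and 2.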

Return type:

KVCacheParams

full_attention_interval

full_attention_interval: int = 4

Every N-th layer uses full attention.

get_num_layers()

static get_num_layers(huggingface_config)

Parameters:

huggingface_config (AutoConfig)

Return type:

int

image_token_id

image_token_id: int | None = None

Token ID used for image placeholders in the input sequence.

infer_optimal_batch_size()

infer_optimal_batch_size(devices)

Return a memory-safe default max_batch_size for this architecture.

Qwen3.5 allocates GPU memory for GatedDeltaNet recurrent states with three distinct cost centres per active request:

  1. Persistent pool (max_batch x per_req): pre-allocated once at startup and lives for the full server lifetime.
  2. Input working buffers (batch x per_req): gathered from the pool into dense batch tensors by get_states() each step.
  3. Output working buffers (batch x per_req): produced by the model kernel and scattered back to the pool by update_states().

Worst-case simultaneous footprint is therefore 3 x max_batch x per_req (pool + both working copies). We budget 15% of current free GPU memory for this total, so:

max_batch = 0.15 x free_memory / (3 x per_req)

This is consistent with estimate_activation_memory() which reserves 3 x max_batch x per_req bytes before the KV-cache allocator runs.

Falls back to 32 (safe for the 27B model on H100/A100 80 GB) when the device query fails.
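
A hedged sketch of the calculation described above; free_memory_bytes and per_req_bytes stand in for the device query and the per-request recurrent-state size, neither of which this page spells out:

def infer_max_batch(free_memory_bytes, per_req_bytes, fallback=32):
    # Budget 15% of free memory for pool + input + output copies
    # (3 simultaneous allocations of per_req bytes per request).
    if free_memory_bytes <= 0 or per_req_bytes <= 0:
        return fallback  # documented fallback when the query fails
    return max(1, int(0.15 * free_memory_bytes / (3 * per_req_bytes)))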

Parameters:

devices (list[Any])

Return type:

int

initialize()

classmethod initialize(pipeline_config, model_config=None)

Initialize the config from a PipelineConfig.

Parameters:

  • pipeline_config (PipelineConfig) – The pipeline configuration.
  • model_config (MAXModelConfig | None) – The model configuration to read from. When None (the default), pipeline_config.model is used. Pass an explicit config (e.g. pipeline_config.draft_model) to initialize the arch config for a different model.

Return type:

Self

initialize_from_config()

classmethod initialize_from_config(pipeline_config, huggingface_config, model_config=None)

Initialize config from pipeline and HuggingFace configurations.

Handles both multimodal (Qwen3_5ForConditionalGeneration) and text-only (Qwen3_5ForCausalLM) configs by extracting the text config.
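
A common pattern for that extraction, given as a sketch (the exact attribute handling in this module is not shown on this page):

def extract_text_config(huggingface_config):
    # Multimodal Qwen3_5ForConditionalGeneration configs nest the LM
    # settings under `text_config`; text-only Qwen3_5ForCausalLM configs
    # are already the text config. The getattr fallback is an assumption.
    return getattr(huggingface_config, "text_config", None) or huggingface_config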

Return type:

Self

layer_types

layer_types: list[str]

Per-layer attention type: ‘full_attention’ or ‘linear_attention’.

linear_conv_kernel_dim

linear_conv_kernel_dim: int = 4

Causal conv1d kernel size for linear attention layers.

linear_key_head_dim

linear_key_head_dim: int = 128

Key head dimension for linear attention layers.

linear_num_key_heads

linear_num_key_heads: int = 16

Number of key heads for linear attention layers.

linear_num_value_heads

linear_num_value_heads: int = 48

Number of value heads for linear attention layers.

linear_value_head_dim

linear_value_head_dim: int = 128

Value head dimension for linear attention layers.

mrope_section

mrope_section: list[int] | None = None

MRoPE section lengths for multimodal rotary position encoding.

partial_rotary_factor

partial_rotary_factor: float = 0.25

Fraction of head_dim that gets rotary position embedding.

video_token_id

video_token_id: int | None = None

Token ID used for video placeholders in the input sequence.

vision_config

vision_config: VisionConfig | None = None

Vision encoder configuration; None for text-only models.

vision_start_token_id

vision_start_token_id: int | None = None

Token ID that marks the start of vision content.

Qwen3_5Inputs

class max.pipelines.architectures.qwen3_5.Qwen3_5Inputs(tokens, input_row_offsets, signal_buffers, return_n_logits, lora_grouped_offsets=None, num_active_loras=None, lora_end_idx=None, batch_seq_len=None, lora_ids_kv=None, lora_grouped_offsets_kv=None, data_parallel_splits=None, conv_states=None, recurrent_states=None, request_ids=None, image_token_indices=None, pixel_values=None, vision_position_ids=None, weights=None, indices=None, max_grid_size=None, grid_thw=None, cu_seqlens=None, max_seqlen=None, lm_image_embeddings=None, *, kv_cache_inputs=None, lora_ids=None, lora_ranks=None, hidden_states=None)

Bases: Llama3Inputs

Inputs for Qwen3.5 including linear attention states and optional vision inputs.

buffers

property buffers: tuple[Buffer, ...]

Returns positional Buffer inputs for model ABI calls.

conv_states

conv_states: list[Buffer] | None = None

Conv states for each linear attention layer.

cu_seqlens

cu_seqlens: Buffer | None = None

Cumulative sequence lengths for vision full attention.

grid_thw

grid_thw: Buffer | None = None

Grid dimensions (temporal, height, width) per image, shape (n_images, 3).

has_vision_inputs

property has_vision_inputs: bool

True when pixel values are available for vision encoding.

image_token_indices

image_token_indices: Buffer | None = None

Pre-computed scatter indices for image embeddings.

indices

indices: Buffer | None = None

Bilinear interpolation indices for vision position embeddings.

lm_image_embeddings

lm_image_embeddings: Buffer | None = None

Image embeddings for the LM graph (empty [0, H] buffer for decode/text-only steps, real embeddings for prefill steps with images). Must be non-None for multimodal models.
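
For instance, a decode or text-only step could pass a zero-row placeholder of shape [0, H], as in this numpy sketch (the actual Buffer construction API is not shown here, and hidden_size is an illustrative value):

import numpy as np

hidden_size = 4096  # illustrative value for H, not the model's actual size
# Zero-row placeholder for decode/text-only steps; prefill steps with
# images would pass real embeddings of shape [n_image_tokens, H] instead.
empty_embeddings = np.zeros((0, hidden_size), dtype=np.float32)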

max_grid_size

max_grid_size: Buffer | None = None

Maximum grid size (CPU scalar) for vision attention.

max_seqlen

max_seqlen: Buffer | None = None

Maximum sequence length (CPU scalar) for vision attention.

pixel_values

pixel_values: Buffer | None = None

Raw pixel values for vision encoding.

recurrent_states

recurrent_states: list[Buffer] | None = None

Recurrent states for each linear attention layer.

request_ids

request_ids: list[RequestID] | None = None

Request IDs for this batch, used to update per-request state cache.

vision_position_ids

vision_position_ids: Buffer | None = None

Rotary position IDs for the vision encoder.

weights

weights: Buffer | None = None

Bilinear interpolation weights for vision position embeddings.

Qwen3_5Model

class max.pipelines.architectures.qwen3_5.Qwen3_5Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN, return_hidden_states=ReturnHiddenStates.NONE)

Bases: AlwaysSignalBuffersMixin, LlamaModelBase

Qwen3.5 pipeline model implementation.

Supports the hybrid linear/full attention architecture with KV cache for full attention layers and conv/recurrent states for linear layers.

attention_bias

attention_bias: bool = False

Whether to use attention bias.

calculate_max_seq_len()

classmethod calculate_max_seq_len(pipeline_config, huggingface_config)

Calculates the optimal max sequence length for the model.

Models are expected to implement this method. The following example shows how to implement it for a Mistral model:

class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e

Parameters:

  • pipeline_config (PipelineConfig) – Configuration for the pipeline.
  • huggingface_config (AutoConfig) – Hugging Face model configuration.

Returns:

The maximum sequence length to use.

Return type:

int

estimate_activation_memory()

classmethod estimate_activation_memory(pipeline_config, huggingface_config)

Reserve GPU memory for GatedDeltaNet recurrent-state buffers.

GatedDeltaNetStateCache has three simultaneous GPU allocations at peak (during a model forward pass):

  1. Persistent pool (max_batch x per_req): pre-allocated once at startup.
  2. Input working buffers (batch x per_req): gathered from the pool into dense tensors by get_states() each step.
  3. Output working buffers (batch x per_req): produced by the model kernel and scattered back to the pool by update_states().

Worst-case simultaneous footprint: 3 x max_batch x per_req.

This method is called before infer_optimal_batch_size() sets max_batch_size on the pipeline config. To keep the reservation consistent with the batch size that will be inferred, we reproduce the same device-memory query used by infer_optimal_batch_size():

max_batch = 0.15 x free_memory / (3 x per_req)

so that 3 x max_batch x per_req = 0.15 x free_memory.

Falls back to 32 (safe for Qwen3.5-27B on H100/A100 80 GB) when the device query is unavailable or the user has not specified a batch size.
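
As a worked example under assumed sizes (60 GiB free, 50 MiB of recurrent state per request; both illustrative, not measured), the reservation matches the 15% budget that infer_optimal_batch_size() will later use:

GiB, MiB = 1024**3, 1024**2
free_memory = 60 * GiB   # assumed free device memory
per_req = 50 * MiB       # assumed per-request recurrent-state size

max_batch = int(0.15 * free_memory / (3 * per_req))
reservation = 3 * max_batch * per_req
print(max_batch, reservation / GiB)  # prints: 61 8.935546875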

Return type:

int

execute()

execute(model_inputs)

Executes the graph with the given inputs.

Parameters:

model_inputs (ModelInputs) – The model inputs to execute, containing tensors and any other required data for model execution.

Returns:

ModelOutputs containing the pipeline’s output tensors.

Return type:

ModelOutputs

This is an abstract method that must be implemented by concrete PipelineModels to define their specific execution logic.

get_kv_params()

classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

Returns the KV cache params for the pipeline model.

Return type:

KVCacheParams

load_model()

load_model(session)

Parameters:

session (InferenceSession)

Return type:

Model

model

model: Model

Compiled and initialized model ready for inference.

norm_method

norm_method: Literal['rms_norm'] | Literal['layer_norm'] = 'rms_norm'

Normalization layer.

prepare_initial_token_inputs()

prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)

Prepare the inputs for the first pass in multistep execution.

Return type:

Qwen3_5Inputs

prepare_next_token_inputs()

prepare_next_token_inputs(next_tokens, prev_model_inputs)

Prepare the inputs for the next token in multistep execution. This should avoid any device synchronization or copy operations.

Return type:

Qwen3_5Inputs

release()

release(request_id)

Release per-request state cache slot when a request completes.

Parameters:

request_id (RequestID)

Return type:

None

state_dict

state_dict: dict[str, Any]

Weights to load into the model.

vision_model

vision_model: Model | None = None
