Python module
max.pipelines.architectures.qwen3_5
Qwen3_5Config
class max.pipelines.architectures.qwen3_5.Qwen3_5Config(*, hidden_size, num_attention_heads, num_key_value_heads, num_hidden_layers, rope_theta, rope_scaling_params, max_seq_len, intermediate_size, interleaved_rope_weights, vocab_size, dtype, model_quantization_encoding, quantization_config, kv_params, return_logits=ReturnLogits.LAST_TOKEN, norm_method='rms_norm', norm_dtype=None, attention_bias=False, rms_norm_eps=None, tie_word_embeddings=False, stacked_mlp=False, stacked_qkv=False, attention_multiplier, embedding_multiplier, residual_multiplier, devices, clip_qkv, quant_config=None, lora_config=None, longrope_scaling_params=None, logits_scaling=1.0, return_hidden_states=ReturnHiddenStates.NONE, use_subgraphs=True, data_parallel_degree=1, layer_types=<factory>, full_attention_interval=4, linear_key_head_dim=128, linear_value_head_dim=128, linear_num_key_heads=16, linear_num_value_heads=48, linear_conv_kernel_dim=4, partial_rotary_factor=0.25, attn_output_gate=True, vision_config=None, image_token_id=None, video_token_id=None, vision_start_token_id=None, mrope_section=None)
Bases: Llama3Config
Configuration for Qwen3.5 hybrid attention models.
Qwen3.5 uses a hybrid architecture with both full (standard) attention and linear attention (Gated DeltaNet) layers. Every full_attention_interval-th layer uses full attention, and the rest use linear attention.
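A minimal sketch of how the per-layer types could be derived from full_attention_interval, assuming the interval is counted 1-based (so with the default of 4, 0-based layers 3, 7, 11, … use full attention); derive_layer_types is a hypothetical helper for illustration, not part of the MAX API:

def derive_layer_types(num_hidden_layers: int, full_attention_interval: int) -> list[str]:
    # Hypothetical illustration; the real config populates layer_types itself.
    return [
        "full_attention" if (i + 1) % full_attention_interval == 0 else "linear_attention"
        for i in range(num_hidden_layers)
    ]

assert derive_layer_types(8, 4) == [
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
]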
Parameters:
- hidden_size (int)
- num_attention_heads (int)
- num_key_value_heads (int)
- num_hidden_layers (int)
- rope_theta (float)
- rope_scaling_params (Llama3RopeScalingParams | None)
- max_seq_len (int)
- intermediate_size (int)
- interleaved_rope_weights (bool)
- vocab_size (int)
- dtype (DType)
- model_quantization_encoding (QuantizationEncoding | None)
- quantization_config (QuantizationConfig | None)
- kv_params (KVCacheParams)
- return_logits (ReturnLogits)
- norm_method (Literal['rms_norm', 'layer_norm'])
- norm_dtype (DType | None)
- attention_bias (bool)
- rms_norm_eps (float | None)
- tie_word_embeddings (bool)
- stacked_mlp (bool)
- stacked_qkv (bool)
- attention_multiplier (float)
- embedding_multiplier (float)
- residual_multiplier (float)
- devices (list[DeviceRef])
- clip_qkv (float | None)
- quant_config (QuantConfig | None)
- lora_config (LoRAConfig | None)
- longrope_scaling_params (LongRoPEScalingParams | None)
- logits_scaling (float)
- return_hidden_states (ReturnHiddenStates)
- use_subgraphs (bool)
- data_parallel_degree (int)
- layer_types (list[str])
- full_attention_interval (int)
- linear_key_head_dim (int)
- linear_value_head_dim (int)
- linear_num_key_heads (int)
- linear_num_value_heads (int)
- linear_conv_kernel_dim (int)
- partial_rotary_factor (float)
- attn_output_gate (bool)
- vision_config (VisionConfig | None)
- image_token_id (int | None)
- video_token_id (int | None)
- vision_start_token_id (int | None)
- mrope_section (list[int] | None)
attn_output_gate
attn_output_gate: bool = True
Whether full attention layers use a sigmoid output gate.
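A minimal sketch of the gating described above, assuming the common formulation in which a gate is projected from the layer input and applied via sigmoid; gated_attention_output and w_gate are illustrative names, not MAX APIs:

import numpy as np

def gated_attention_output(attn_out, hidden_in, w_gate):
    # Hypothetical sketch: sigmoid(hidden_in @ w_gate) gates the attention output elementwise.
    gate = 1.0 / (1.0 + np.exp(-(hidden_in @ w_gate)))
    return attn_out * gate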
calculate_attention_multiplier()
static calculate_attention_multiplier(huggingface_config)
Compute attention scaling factor using explicit head_dim.
Parameters:
huggingface_config (AutoConfig)
Return type:
float
construct_kv_params()
static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Construct KV cache parameters for full attention layers only.
Only allocates KV cache entries for full-attention layers; linear attention layers use separate conv/recurrent state buffers instead. The forward pass maps each full-attention layer to a sequential KV cache index (0, 1, 2, …) independent of the absolute layer index.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
KVCacheParams
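An illustrative sketch (not the MAX source) of the sequential remapping described above, using an assumed 8-layer pattern with full_attention_interval=4:

layer_types = [
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
    "linear_attention", "linear_attention", "linear_attention", "full_attention",
]
# Full-attention layers at absolute indices 3 and 7 get KV cache slots 0 and 1.
kv_cache_index = {
    layer_idx: kv_idx
    for kv_idx, layer_idx in enumerate(
        i for i, t in enumerate(layer_types) if t == "full_attention"
    )
}
assert kv_cache_index == {3: 0, 7: 1}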
full_attention_interval
full_attention_interval: int = 4
Every N-th layer uses full attention.
get_num_layers()
static get_num_layers(huggingface_config)
Parameters:
huggingface_config (AutoConfig)
Return type:
int
image_token_id
image_token_id: int | None = None
Token ID used for image placeholders in the input sequence.
infer_optimal_batch_size()
infer_optimal_batch_size(devices)
Return a memory-safe default max_batch_size for this architecture.
Qwen3.5 allocates GPU memory for GatedDeltaNet recurrent states with three distinct cost centres per active request:
- Persistent pool (max_batch x per_req): pre-allocated once at startup and lives for the full server lifetime.
- Input working buffers (batch x per_req): gathered from the pool into dense batch tensors by get_states() each step.
- Output working buffers (batch x per_req): produced by the model kernel and scattered back to the pool by update_states().
Worst-case simultaneous footprint is therefore 3 x max_batch x per_req (pool + both working copies). We budget 15% of current free GPU memory for this total, so:
max_batch = 0.15 x free_memory / (3 x per_req)
This is consistent with estimate_activation_memory(), which reserves 3 x max_batch x per_req bytes before the KV-cache allocator runs.
Falls back to 32—safe for the 27B model on H100/A100 (80 GB)—when the device query fails.
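A worked example of the budget above, with assumed numbers (the real per_req size depends on the GatedDeltaNet state shapes and dtypes):

free_memory = 80 * 1024**3   # assumed: 80 GiB of free device memory
per_req = 64 * 1024**2       # assumed: recurrent-state bytes per request
max_batch = int(0.15 * free_memory / (3 * per_req))
# 0.15 * 80 GiB / (3 * 64 MiB) = 64
assert max_batch == 64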
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initialize the config from a PipelineConfig.
Parameters:
- pipeline_config (PipelineConfig) – The pipeline configuration.
- model_config (MAXModelConfig | None) – The model configuration to read from. When None (the default), pipeline_config.model is used. Pass an explicit config (e.g. pipeline_config.draft_model) to initialize the arch config for a different model.
Return type:
initialize_from_config()
classmethod initialize_from_config(pipeline_config, huggingface_config, model_config=None)
Initialize config from pipeline and HuggingFace configurations.
Handles both multimodal (Qwen3_5ForConditionalGeneration) and text-only (Qwen3_5ForCausalLM) configs by extracting the text config.
Parameters:
- pipeline_config (PipelineConfig)
- huggingface_config (AutoConfig)
- model_config (MAXModelConfig | None)
Return type:
layer_types
layer_types: list[str]
Per-layer attention type: 'full_attention' or 'linear_attention'.
linear_conv_kernel_dim
linear_conv_kernel_dim: int = 4
Causal conv1d kernel size for linear attention layers.
linear_key_head_dim
linear_key_head_dim: int = 128
Key head dimension for linear attention layers.
linear_num_key_heads
linear_num_key_heads: int = 16
Number of key heads for linear attention layers.
linear_num_value_heads
linear_num_value_heads: int = 48
Number of value heads for linear attention layers.
linear_value_head_dim
linear_value_head_dim: int = 128
Value head dimension for linear attention layers.
mrope_section
mrope_section: list[int] | None = None
MRoPE section lengths for multimodal rotary position encoding.
partial_rotary_factor
partial_rotary_factor: float = 0.25
Fraction of head_dim that gets rotary position embedding.
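For illustration, with the default factor of 0.25 and an assumed head_dim of 256 (a hypothetical value, not taken from this config):

head_dim = 256                     # assumed for illustration
rotary_dim = int(head_dim * 0.25)  # 64 dims receive RoPE
# dims [0, rotary_dim) are rotated; dims [rotary_dim, head_dim) pass through unchanged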
video_token_id
video_token_id: int | None = None
Token ID used for video placeholders in the input sequence.
vision_config
vision_config: VisionConfig | None = None
Vision encoder configuration; None for text-only models.
vision_start_token_id
vision_start_token_id: int | None = None
Token ID that marks the start of vision content.
Qwen3_5Inputs
class max.pipelines.architectures.qwen3_5.Qwen3_5Inputs(tokens, input_row_offsets, signal_buffers, return_n_logits, lora_grouped_offsets=None, num_active_loras=None, lora_end_idx=None, batch_seq_len=None, lora_ids_kv=None, lora_grouped_offsets_kv=None, data_parallel_splits=None, conv_states=None, recurrent_states=None, request_ids=None, image_token_indices=None, pixel_values=None, vision_position_ids=None, weights=None, indices=None, max_grid_size=None, grid_thw=None, cu_seqlens=None, max_seqlen=None, lm_image_embeddings=None, *, kv_cache_inputs=None, lora_ids=None, lora_ranks=None, hidden_states=None)
Bases: Llama3Inputs
Inputs for Qwen3.5 including linear attention states and optional vision inputs.
Parameters:
- tokens (Buffer)
- input_row_offsets (Buffer)
- signal_buffers (list[Buffer])
- return_n_logits (Buffer)
- lora_grouped_offsets (Buffer | None)
- num_active_loras (Buffer | None)
- lora_end_idx (Buffer | None)
- batch_seq_len (Buffer | None)
- lora_ids_kv (Buffer | None)
- lora_grouped_offsets_kv (Buffer | None)
- data_parallel_splits (Buffer | Sequence[Sequence[int]] | None)
- conv_states (list[Buffer] | None)
- recurrent_states (list[Buffer] | None)
- request_ids (list[RequestID] | None)
- image_token_indices (Buffer | None)
- pixel_values (Buffer | None)
- vision_position_ids (Buffer | None)
- weights (Buffer | None)
- indices (Buffer | None)
- max_grid_size (Buffer | None)
- grid_thw (Buffer | None)
- cu_seqlens (Buffer | None)
- max_seqlen (Buffer | None)
- lm_image_embeddings (Buffer | None)
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- lora_ids (Buffer | None)
- lora_ranks (Buffer | None)
- hidden_states (Buffer | list[Buffer] | None)
buffers
Returns positional Buffer inputs for model ABI calls.
conv_states
Conv states for each linear attention layer.
cu_seqlens
Cumulative sequence lengths for vision full attention.
grid_thw
Grid dimensions (temporal, height, width) per image, shape (n_images, 3).
has_vision_inputs
property has_vision_inputs: bool
True when pixel values are available for vision encoding.
image_token_indices
Pre-computed scatter indices for image embeddings.
indices
Bilinear interpolation indices for vision position embeddings.
lm_image_embeddings
Image embeddings for the LM graph (empty [0, H] buffer for decode/text-only steps, real embeddings for prefill steps with images). Must be non-None for multimodal models.
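A sketch of the buffer convention described above, with an assumed hidden size H (numpy stands in for the runtime's Buffer type):

import numpy as np

H = 2048  # assumed hidden size for illustration
# Decode / text-only step: an empty [0, H] buffer keeps the input signature stable.
decode_embeddings = np.empty((0, H), dtype=np.float16)
# Prefill step with images: one row per image token position (sizes illustrative).
prefill_embeddings = np.zeros((64, H), dtype=np.float16)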
max_grid_size
Maximum grid size (CPU scalar) for vision attention.
max_seqlen
Maximum sequence length (CPU scalar) for vision attention.
pixel_values
Raw pixel values for vision encoding.
recurrent_states
Recurrent states for each linear attention layer.
request_ids
Request IDs for this batch, used to update per-request state cache.
vision_position_ids
Rotary position IDs for the vision encoder.
weights
Bilinear interpolation weights for vision position embeddings.
Qwen3_5Model
class max.pipelines.architectures.qwen3_5.Qwen3_5Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN, return_hidden_states=ReturnHiddenStates.NONE)
Bases: AlwaysSignalBuffersMixin, LlamaModelBase
Qwen3.5 pipeline model implementation.
Supports the hybrid linear/full attention architecture with KV cache for full attention layers and conv/recurrent states for linear layers.
Parameters:
- pipeline_config (PipelineConfig) – The configuration for this pipeline.
- session (InferenceSession) – The container for the runtime for this model.
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
- return_hidden_states (ReturnHiddenStates)
attention_bias
attention_bias: bool = False
Whether to use attention bias.
calculate_max_seq_len()
classmethod calculate_max_seq_len(pipeline_config, huggingface_config)
Calculates the optimal max sequence length for the model.
Models are expected to implement this method. The following example shows how to implement it for a Mistral model:
class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e
Parameters:
- pipeline_config (PipelineConfig) – Configuration for the pipeline.
- huggingface_config (AutoConfig) – Hugging Face model configuration.
Returns:
The maximum sequence length to use.
Return type:
int
estimate_activation_memory()
classmethod estimate_activation_memory(pipeline_config, huggingface_config)
Reserve GPU memory for GatedDeltaNet recurrent-state buffers.
GatedDeltaNetStateCache has three simultaneous GPU allocations at peak (during a model forward pass):
- Persistent pool (max_batch x per_req): pre-allocated once at startup.
- Input working buffers (batch x per_req): gathered from the pool into dense tensors by get_states() each step.
- Output working buffers (batch x per_req): produced by the model kernel and scattered back to the pool by update_states().
Worst-case simultaneous footprint: 3 x max_batch x per_req.
This method is called before infer_optimal_batch_size() sets max_batch_size on the pipeline config. To keep the reservation consistent with the batch size that will be inferred, we reproduce the same device-memory query used by infer_optimal_batch_size():
max_batch = 0.15 x free_memory / (3 x per_req)
so that 3 x max_batch x per_req = 0.15 x free_memory.
Falls back to 32 (safe for Qwen3.5-27B on H100/A100 80 GB) when the device query is unavailable or the user has not specified a batch size.
Parameters:
- pipeline_config (PipelineConfig)
- huggingface_config (AutoConfig)
Return type:
int
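A quick check of the consistency claimed above, reusing the assumed numbers from the infer_optimal_batch_size() example:

free_memory = 80 * 1024**3
per_req = 64 * 1024**2
max_batch = int(0.15 * free_memory / (3 * per_req))
reserved = 3 * max_batch * per_req
assert reserved <= 0.15 * free_memory  # equal up to integer truncation of max_batch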
execute()
execute(model_inputs)
Executes the graph with the given inputs.
Parameters:
model_inputs (ModelInputs) – The model inputs to execute, containing tensors and any other required data for model execution.
Returns:
ModelOutputs containing the pipeline’s output tensors.
Return type:
ModelOutputs
This is an abstract method that must be implemented by concrete PipelineModels to define their specific execution logic.
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Returns the KV cache params for the pipeline model.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
KVCacheParams
load_model()
load_model(session)
Parameters:
session (InferenceSession)
Return type:
Model
model
model: Model
Compiled and initialized model ready for inference.
norm_method
norm_method: Literal['rms_norm'] | Literal['layer_norm'] = 'rms_norm'
Normalization layer.
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepare the inputs for the first pass in multistep execution.
Parameters:
- replica_batches (Sequence[Sequence[TextContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)
Return type:
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepare the inputs for the next token in multistep execution. This should avoid any device synchronization or copy operations.
Parameters:
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
Return type:
release()
release(request_id)
Release per-request state cache slot when a request completes.
-
Parameters:
-
request_id (RequestID)
-
Return type:
None
state_dict
Weights to load into the model.
vision_model