Python module
max.pipelines.architectures.gemma3multimodal_modulev3
Gemma 3 vision-language architecture for multimodal text generation.
Gemma3ForConditionalGenerationConfig
class max.pipelines.architectures.gemma3multimodal_modulev3.Gemma3ForConditionalGenerationConfig(*, boi_token_index, eoi_token_index, devices, dtype, kv_params, image_token_index, initializer_range, interleaved_rope_weights, mm_tokens_per_image, return_logits, tie_word_embeddings, text_config, vision_config, attention_bias=False, quant_config=None, head_dim=256, num_key_value_heads=4)
Bases: ArchConfigWithKVCache
Base configuration for Gemma 3 models.
Contains parameters specific to the Gemma 3 architecture, typically extracted from a HuggingFace configuration object’s text config.
Parameters:
- boi_token_index (int)
- eoi_token_index (int)
- devices (list[DeviceRef])
- dtype (DType)
- kv_params (KVCacheParams)
- image_token_index (int)
- initializer_range (float)
- interleaved_rope_weights (bool)
- mm_tokens_per_image (int)
- return_logits (ReturnLogits)
- tie_word_embeddings (bool)
- text_config (Gemma3Config)
- vision_config (Gemma3VisionConfig)
- attention_bias (bool)
- quant_config (QuantConfig | None)
- head_dim (int)
- num_key_value_heads (int)
attention_bias
attention_bias: bool = False
Whether to use a bias in the query, key, value and output projection layers during self-attention.
boi_token_index
boi_token_index: int
The begin-of-image token index used to wrap the image prompt.
calculate_max_seq_len()
static calculate_max_seq_len(pipeline_config, huggingface_config)
Parameters:
- pipeline_config (PipelineConfig)
- huggingface_config (AutoConfig)
Return type:
construct_kv_params()
static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
devices
devices: list[DeviceRef]
Devices to run the model with.
dtype
dtype: DType
DType of the model weights and input.
eoi_token_index
eoi_token_index: int
The end-of-image token index used to wrap the image prompt.
finalize()
finalize(huggingface_config, state_dict, return_logits)
Finalize the Gemma3ForConditionalGenerationConfig instance with state_dict dependent fields.
Parameters:
- huggingface_config (AutoConfig) – HuggingFace model configuration.
- state_dict (dict[str, WeightData]) – Model weights dictionary.
- return_logits (ReturnLogits) – Return logits configuration.
Return type:
None
get_kv_params()
get_kv_params()
Returns the KV cache parameters.
Return type:
get_max_seq_len()
get_max_seq_len()
Returns the maximum sequence length from the embedded text config.
Return type:
get_num_layers()
static get_num_layers(huggingface_config)
Parameters:
- huggingface_config (AutoConfig)
Return type:
head_dim
head_dim: int = 256
The attention head dimension.
image_token_index
image_token_index: int
The image token index used to encode the image prompt.
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initializes a Gemma3ForConditionalGenerationConfig instance from pipeline configuration.
Parameters:
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- model_config (MAXModelConfig | None)
Returns:
A Gemma3ForConditionalGenerationConfig instance with fields initialized from config.
Return type:
initialize_from_config()
classmethod initialize_from_config(pipeline_config, huggingface_config)
Initializes a Gemma3ForConditionalGenerationConfig from pipeline and HuggingFace configs.
This method creates a config instance with all fields that can be determined from the pipeline and HuggingFace configurations, without needing the state_dict. Fields that depend on the state_dict should be set via the finalize() method.
Parameters:
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- huggingface_config (AutoConfig) – HuggingFace model configuration.
Returns:
A Gemma3ForConditionalGenerationConfig instance ready for finalization.
Return type:
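The two-phase construction described above (initialize from static configs, then fill weight-dependent fields via finalize()) can be illustrated with a toy class. This is a hypothetical sketch, not the MAX classes; the field names and the tied-head heuristic are assumptions for the example.

```python
from dataclasses import dataclass
from typing import Optional

# Toy illustration of the two-phase pattern: construct what you can from
# static configs first, then finalize() fills in fields that can only be
# determined once the checkpoint's state_dict is available.
@dataclass
class TwoPhaseConfig:
    vocab_size: int
    hidden_size: int
    tied_lm_head: Optional[bool] = None  # unknown until weights are inspected

    @classmethod
    def initialize_from_config(cls, hf_config: dict) -> "TwoPhaseConfig":
        # Phase 1: everything derivable without the weights.
        return cls(hf_config["vocab_size"], hf_config["hidden_size"])

    def finalize(self, state_dict: dict) -> None:
        # Phase 2: tie embeddings iff the checkpoint stores no separate
        # lm_head weight (an illustrative heuristic, not MAX's logic).
        self.tied_lm_head = "lm_head.weight" not in state_dict

config = TwoPhaseConfig.initialize_from_config({"vocab_size": 32, "hidden_size": 8})
config.finalize({"embed_tokens.weight": ...})
print(config.tied_lm_head)  # True
```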
initializer_range
initializer_range: float
Standard deviation for weight initialization.
interleaved_rope_weights
interleaved_rope_weights: bool
True if the rope weights are in interleaved complex format.
kv_params
kv_params: KVCacheParams
KV cache parameters.
mm_tokens_per_image
mm_tokens_per_image: int
The number of tokens per image embedding.
num_key_value_heads
num_key_value_heads: int = 4
The number of key/value heads used to implement Grouped Query Attention (GQA). If num_key_value_heads == num_attention_heads, the model uses Multi-Head Attention (MHA); if num_key_value_heads == 1, it uses Multi-Query Attention (MQA); otherwise GQA is used. When converting a multi-head checkpoint into a GQA checkpoint, each group's key and value head should be constructed by mean-pooling all the original heads within that group.
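The mean-pooling conversion mentioned above can be sketched with NumPy. This is illustrative, not part of the MAX API; the shapes and the weight layout (heads stacked along the projection's output dimension) are assumptions for the example.

```python
import numpy as np

# Sketch: convert a multi-head K projection to grouped-query (GQA) form by
# mean-pooling each group of heads into a single shared key head.
num_attention_heads = 8
num_key_value_heads = 4   # target number of K/V heads in the GQA config
head_dim = 256
hidden = 1024

# Multi-head K projection weight: (num_attention_heads * head_dim, hidden).
k_proj = np.random.randn(num_attention_heads * head_dim, hidden)

# Split into per-head weights, group them, and mean-pool within each group.
group_size = num_attention_heads // num_key_value_heads
k_heads = k_proj.reshape(num_attention_heads, head_dim, hidden)
k_gqa = k_heads.reshape(num_key_value_heads, group_size, head_dim, hidden).mean(axis=1)
k_proj_gqa = k_gqa.reshape(num_key_value_heads * head_dim, hidden)

print(k_proj_gqa.shape)  # (1024, 1024)
```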
quant_config
quant_config: QuantConfig | None = None
Scaled quantization configuration.
return_logits
return_logits: ReturnLogits
Whether to return the last token, all logits, or a variable number of logits.
text_config
text_config: Gemma3Config
The configuration object for the text backbone.
tie_word_embeddings
tie_word_embeddings: bool
Whether to tie weight embeddings. When true, the output linear layer uses the same weight as the embedding layer.
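What tying means mechanically can be shown with a small framework-agnostic sketch: when tied, the output layer stores no weight of its own and reuses the embedding matrix transposed. The shapes here are illustrative assumptions.

```python
import numpy as np

# Sketch of weight tying: the LM head reuses the token-embedding matrix.
vocab_size, hidden = 16, 8
embedding = np.random.randn(vocab_size, hidden)

def embed(token_ids):
    # Look up embeddings: (seq,) -> (seq, hidden).
    return embedding[token_ids]

def lm_head(hidden_states):
    # With tie_word_embeddings=True there is no separate output weight;
    # project back to the vocabulary with the embedding matrix transposed.
    return hidden_states @ embedding.T  # (seq, vocab_size)

logits = lm_head(embed(np.array([1, 2, 3])))
print(logits.shape)  # (3, 16)
```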
vision_config
vision_config: Gemma3VisionConfig
Custom vision configuration object or dict.
Gemma3MultiModalModelV3
class max.pipelines.architectures.gemma3multimodal_modulev3.Gemma3MultiModalModelV3(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN)
Bases: PipelineModelWithKVCache[TextAndVisionContext]
Gemma 3 multimodal pipeline model using the ModuleV3 API.
Parameters:
- pipeline_config (PipelineConfig)
- session (InferenceSession)
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
calculate_max_seq_len()
classmethod calculate_max_seq_len(pipeline_config, huggingface_config)
Calculates the optimal max sequence length for the model.
Models are expected to implement this method. The following example shows how to implement it for a Mistral model:
class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e

Parameters:
- pipeline_config (PipelineConfig) – Configuration for the pipeline.
- huggingface_config (AutoConfig) – Hugging Face model configuration.
Returns:
The maximum sequence length to use.
Return type:
estimate_activation_memory()
classmethod estimate_activation_memory(pipeline_config, huggingface_config)
Estimates the activation memory required for model execution.
This accounts for temporary memory buffers used during model execution, such as intermediate activations and working buffers.
The default implementation returns 0 for backward compatibility. Models with significant activation memory requirements should override this method to provide accurate estimates.
Parameters:
- pipeline_config (PipelineConfig) – Pipeline configuration
- huggingface_config (AutoConfig) – Hugging Face model configuration
Returns:
Estimated activation memory in bytes
Return type:
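A model that overrides the default zero estimate might look like the following. This is a hypothetical sketch: the class name, the attribute names read via getattr, and the sizing formula are illustrative assumptions, not MAX's actual accounting.

```python
# Hypothetical override of estimate_activation_memory. A rough upper bound:
# a few hidden-sized working buffers per token, in bfloat16.
class MyModel:
    @classmethod
    def estimate_activation_memory(cls, pipeline_config, huggingface_config) -> int:
        batch = getattr(pipeline_config, "max_batch_size", 1)
        seq = getattr(huggingface_config, "max_seq_len", 2048)
        hidden = getattr(huggingface_config, "hidden_size", 4096)
        bytes_per_elem = 2  # assume bfloat16 activations
        # 4 working buffers of shape (batch, seq, hidden).
        return 4 * batch * seq * hidden * bytes_per_elem

print(MyModel.estimate_activation_memory(object(), object()))  # 67108864
```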
execute()
execute(model_inputs)
Executes the graph with the given inputs.
Parameters:
model_inputs (ModelInputs) – The model inputs to execute, containing tensors and any other required data for model execution.
Returns:
ModelOutputs containing the pipeline’s output tensors.
Return type:
This is an abstract method that must be implemented by concrete PipelineModels to define their specific execution logic.
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Returns the KV cache params for the pipeline model.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
get_num_layers()
classmethod get_num_layers(huggingface_config)
Parameters:
- huggingface_config (AutoConfig)
Return type:
language_model
language_model: Callable[..., Any]
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepares the initial inputs to be passed to execute().
The inputs and functionality can vary per model. For example, model inputs could include encoded tensors, unique IDs per tensor when using a KV cache manager, and kv_cache_inputs (or None if the model does not use KV cache). This method typically batches encoded tensors, claims a KV cache slot if needed, and returns the inputs and caches.
Parameters:
- replica_batches (Sequence[Sequence[TextAndVisionContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)
Return type:
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepares the secondary inputs to be passed to execute().
While prepare_initial_token_inputs manages the initial inputs, this function updates the inputs for each step in a multi-step execution pattern.
Parameters:
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
Return type:
vision_model
vision_model: Callable[..., Any]