Python module
max.pipelines.architectures.idefics3_modulev3
Idefics3 vision-language architecture for multimodal text generation.
Idefics3Config
class max.pipelines.architectures.idefics3_modulev3.Idefics3Config(*, devices, scale_factor, image_token_id, vision_config, text_config)
Bases: ArchConfigWithKVCache
Configuration for Idefics3 models (ModuleV3).
Parameters:
- devices (list[DeviceRef])
- scale_factor (int)
- image_token_id (int)
- vision_config (Idefics3VisionConfig)
- text_config (Llama3Config)
calculate_max_seq_len()
static calculate_max_seq_len(pipeline_config, huggingface_config)
Calculate maximum sequence length for Idefics3.
Parameters:
- pipeline_config (PipelineConfig)
- huggingface_config (AutoConfig)
Return type:
construct_kv_params()
static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Get KV cache parameters for the language model.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
devices
Devices that the Idefics3 model is parallelized over.
finalize()
finalize(huggingface_config, llm_state_dict, return_logits, return_hidden_states=ReturnHiddenStates.NONE, norm_method='rms_norm')
Finalize the Idefics3Config with state_dict-dependent fields.
Parameters:
- huggingface_config (AutoConfig)
- llm_state_dict (dict[str, WeightData])
- return_logits (ReturnLogits)
- return_hidden_states (ReturnHiddenStates)
- norm_method (Literal['rms_norm', 'layer_norm'])
Return type:
None
get_kv_params()
get_kv_params()
Returns the KV cache parameters from the embedded text config.
Return type:
get_max_seq_len()
get_max_seq_len()
Returns the maximum sequence length from the embedded text config.
Return type:
get_num_layers()
static get_num_layers(huggingface_config)
Get number of layers in the language model.
Parameters:
- huggingface_config (AutoConfig)
Return type:
image_seq_len
property image_seq_len: int
Calculate the number of image tokens after connector processing.
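The relationship between the vision encoder geometry and this token count can be sketched as follows; a minimal illustration assuming square images, where the formula and the example values (364-pixel images, 14-pixel patches, scale factor 2, matching the stock Hugging Face Idefics3 configuration) are assumptions, not this property's literal implementation:

```python
def image_seq_len(image_size: int, patch_size: int, scale_factor: int) -> int:
    # The vision encoder splits the image into a square grid of patches;
    # the connector's pixel shuffle then merges each scale_factor x
    # scale_factor block of patches into a single token.
    patches_per_side = image_size // patch_size
    return (patches_per_side // scale_factor) ** 2

# With 364-pixel images, 14-pixel patches, and scale factor 2:
# 364 // 14 = 26 patches per side, 26 // 2 = 13, so 13 * 13 = 169 tokens.
image_seq_len(364, 14, 2)
```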
image_token_id
image_token_id: int
Token ID used to represent image tokens in the text sequence.
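To illustrate how this ID is used: positions holding image_token_id in the text sequence are replaced by the connector's image embeddings before the language model runs. A minimal NumPy sketch with a hypothetical merge helper (not this module's actual API):

```python
import numpy as np

def merge_image_embeddings(
    input_ids: np.ndarray,    # (seq_len,) token IDs
    text_embeds: np.ndarray,  # (seq_len, hidden) text embeddings
    image_embeds: np.ndarray, # (num_image_tokens, hidden) connector output
    image_token_id: int,
) -> np.ndarray:
    # Each placeholder position receives one vision embedding, in order.
    mask = input_ids == image_token_id
    assert mask.sum() == image_embeds.shape[0]
    merged = text_embeds.copy()
    merged[mask] = image_embeds
    return merged
```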
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initializes an Idefics3Config instance from pipeline configuration.
Parameters:
- pipeline_config (PipelineConfig)
- model_config (MAXModelConfig | None)
Return type:
scale_factor
scale_factor: int
Scale factor for pixel shuffle operation in the connector.
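The pixel shuffle trades sequence length for hidden width: a scale factor of s cuts the patch count by s**2 while growing each token's feature size by s**2. A NumPy sketch of the rearrangement (illustrative only; the real connector operates on MAX graph tensors):

```python
import numpy as np

def pixel_shuffle(x: np.ndarray, scale_factor: int) -> np.ndarray:
    """Fold each scale_factor x scale_factor block of vision patches into
    one token with scale_factor**2 * hidden features."""
    seq, hidden = x.shape
    side = int(seq ** 0.5)  # patches form a square grid
    x = x.reshape(side, side, hidden)
    x = x.reshape(side, side // scale_factor, hidden * scale_factor)
    x = x.transpose(1, 0, 2)
    x = x.reshape(side // scale_factor, side // scale_factor,
                  hidden * scale_factor ** 2)
    x = x.transpose(1, 0, 2)
    return x.reshape(-1, hidden * scale_factor ** 2)
```

For example, 16 patches with hidden size 4 and scale factor 2 become 4 tokens with hidden size 16.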
text_config
text_config: Llama3Config
Text model configuration (Llama3-based).
vision_config
vision_config: Idefics3VisionConfig
Vision encoder configuration (SigLIP-based).
Idefics3Model
class max.pipelines.architectures.idefics3_modulev3.Idefics3Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN)
Bases: PipelineModelWithKVCache[TextAndVisionContext]
An Idefics3 pipeline model using the ModuleV3 API.
Parameters:
- pipeline_config (PipelineConfig)
- session (InferenceSession)
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
calculate_max_seq_len()
static calculate_max_seq_len(pipeline_config, huggingface_config)
Calculates the optimal max sequence length for the model.
Models are expected to implement this method. The following example shows how to implement it for a Mistral model:
class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e

Parameters:
- pipeline_config (PipelineConfig) – Configuration for the pipeline.
- huggingface_config (AutoConfig) – Hugging Face model configuration.
Returns:
The maximum sequence length to use.
Return type:
execute()
execute(model_inputs)
Execute the Idefics3 model.
Parameters:
- model_inputs (ModelInputs)
Return type:
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Returns the KV cache params for the pipeline model.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
language_model
language_model: Callable[..., Any]
The compiled language model.
load_model()
load_model()
Compile vision and language models using the V3 API.
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepare the initial inputs for the first execution pass.
Parameters:
- replica_batches (Sequence[Sequence[TextAndVisionContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)
Return type:
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepares the secondary inputs to be passed to execute(). While prepare_initial_token_inputs manages the initial inputs, this function updates the inputs at each step of a multi-step execution pattern.
Parameters:
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
Return type:
Idefics3Inputs
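The call pattern these two methods implement can be shown with toy stand-ins; the class below exists only to show the loop structure (the real methods build Idefics3Inputs and device buffers, and the "predict last token + 1" rule is a placeholder, not model behavior):

```python
class ToyModel:
    """Stand-in for a pipeline model; shows the call order only."""

    def prepare_initial_token_inputs(self, replica_batches):
        # The real method builds Idefics3Inputs from contexts and KV cache.
        return {"tokens": list(replica_batches[0][0])}

    def execute(self, model_inputs):
        # Pretend the model always predicts last_token + 1.
        return model_inputs["tokens"][-1] + 1

    def prepare_next_token_inputs(self, next_tokens, prev_model_inputs):
        # Reuse the previous inputs, appending only the new token.
        prev_model_inputs["tokens"].append(next_tokens)
        return prev_model_inputs

model = ToyModel()
inputs = model.prepare_initial_token_inputs([[[1, 2, 3]]])
generated = []
for _ in range(3):
    next_token = model.execute(inputs)
    generated.append(next_token)
    inputs = model.prepare_next_token_inputs(next_token, inputs)
# generated == [4, 5, 6]
```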
vision_model
vision_model: Callable[..., Any]
The compiled vision model.