Python module

max.pipelines.architectures.pixtral_modulev3

Pixtral vision-language architecture for multimodal text generation.

PixtralConfig​

class max.pipelines.architectures.pixtral_modulev3.PixtralConfig(*, dtype, devices, image_token_index, hidden_size, num_attention_heads, rms_norm_eps, rope_theta, max_seq_len, num_hidden_layers, head_dim, num_key_value_heads, feed_forward_length, vocab_size, kv_params, attention_multiplier, patch_size, image_size, num_channels, vision_hidden_size, vision_num_attention_heads, vision_rope_theta, vision_num_hidden_layers, vision_intermediate_size, vision_head_dim, return_logits=ReturnLogits.LAST_TOKEN)

source

Bases: ArchConfigWithKVCache

Configuration for Pixtral models.

Parameters:

  • dtype (DType)
  • devices (list[DeviceRef])
  • image_token_index (int)
  • hidden_size (int)
  • num_attention_heads (int)
  • rms_norm_eps (float)
  • rope_theta (float)
  • max_seq_len (int)
  • num_hidden_layers (int)
  • head_dim (int)
  • num_key_value_heads (int)
  • feed_forward_length (int)
  • vocab_size (int)
  • kv_params (KVCacheParams)
  • attention_multiplier (float)
  • patch_size (int)
  • image_size (int)
  • num_channels (int)
  • vision_hidden_size (int)
  • vision_num_attention_heads (int)
  • vision_rope_theta (float)
  • vision_num_hidden_layers (int)
  • vision_intermediate_size (int)
  • vision_head_dim (int)
  • return_logits (ReturnLogits)
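The attention-shape fields above are related in the usual transformer way. A minimal pure-Python sketch of the common invariants, with hypothetical values loosely modeled on a Pixtral-style config (not taken from any real checkpoint; note that head_dim is an explicit field precisely because it need not equal hidden_size // num_attention_heads):

```python
# Hypothetical values illustrating the relationships between the
# attention-shape fields of a config like PixtralConfig.
hidden_size = 5120
num_attention_heads = 32
num_key_value_heads = 8   # grouped-query attention: fewer KV heads than query heads
head_dim = 128            # explicit: need not equal hidden_size // num_attention_heads

# GQA requires query heads to divide evenly among KV heads.
assert num_attention_heads % num_key_value_heads == 0
queries_per_kv_head = num_attention_heads // num_key_value_heads
print(queries_per_kv_head)  # 4
```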

attention_multiplier​

attention_multiplier: float

source

calculate_max_seq_len()​

static calculate_max_seq_len(pipeline_config, huggingface_config)

source

Calculates the maximum sequence length for the model.

Parameters:

Return type:

int

construct_kv_params()​

static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

source

Parameters:

Return type:

KVCacheParams

devices​

devices: list[DeviceRef]

source

dtype​

dtype: DType

source

feed_forward_length​

feed_forward_length: int

source

get_kv_params()​

get_kv_params()

source

KV cache parameters to use when running the model.

Return type:

KVCacheParams

get_max_seq_len()​

get_max_seq_len()

source

Returns the default maximum sequence length for the model.

Subclasses should determine whether this value can be overridden by setting the --max-length (pipeline_config.model.max_length) flag.

Return type:

int

get_num_layers()​

static get_num_layers(huggingface_config)

source

Parameters:

huggingface_config (AutoConfig)

Return type:

int

head_dim​

head_dim: int

source

hidden_size​

hidden_size: int

source

image_size​

image_size: int

source

image_token_index​

image_token_index: int

source

initialize()​

classmethod initialize(pipeline_config, model_config=None)

source

Initializes a PixtralConfig instance from pipeline configuration.

This method creates a config instance with all fields that can be determined from the pipeline configuration.

Parameters:

Returns:

An initialized PixtralConfig instance.

Return type:

Self

kv_params​

kv_params: KVCacheParams

source

max_seq_len​

max_seq_len: int

source

num_attention_heads​

num_attention_heads: int

source

num_channels​

num_channels: int

source

num_hidden_layers​

num_hidden_layers: int

source

num_key_value_heads​

num_key_value_heads: int

source

patch_size​

patch_size: int

source

return_logits​

return_logits: ReturnLogits = 'last_token'

source

Whether to return logits for only the last token, for all tokens, or for a variable number of tokens.


rms_norm_eps​

rms_norm_eps: float

source

rope_theta​

rope_theta: float

source

vision_head_dim​

vision_head_dim: int

source

vision_hidden_size​

vision_hidden_size: int

source

vision_intermediate_size​

vision_intermediate_size: int

source

vision_num_attention_heads​

vision_num_attention_heads: int

source

vision_num_hidden_layers​

vision_num_hidden_layers: int

source

vision_rope_theta​

vision_rope_theta: float

source

vocab_size​

vocab_size: int

source

PixtralInputs​

class max.pipelines.architectures.pixtral_modulev3.PixtralInputs(tokens, input_row_offsets, return_n_logits, pixel_patches=None, vision_attention_mask=None, vision_position_ids=None, image_token_indices=None, *, kv_cache_inputs=None, lora_ids=None, lora_ranks=None, hidden_states=None)

source

Bases: ModelInputs

Holds inputs for the Pixtral model.

Parameters:

has_vision_inputs​

property has_vision_inputs: bool

source

Returns True if and only if these inputs include vision model inputs.
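The property can be understood with a small stand-in dataclass (hypothetical; it mirrors only the vision-related fields and uses bytes in place of real Buffers):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakePixtralInputs:
    """Stand-in for PixtralInputs with only the vision-related fields."""
    pixel_patches: Optional[bytes] = None          # stands in for a Buffer
    vision_attention_mask: Optional[bytes] = None  # stands in for a Buffer

    @property
    def has_vision_inputs(self) -> bool:
        # A plausible implementation: vision inputs are present exactly
        # when pixel patches were supplied with the batch.
        return self.pixel_patches is not None

assert not FakePixtralInputs().has_vision_inputs
assert FakePixtralInputs(pixel_patches=b"\x00").has_vision_inputs
```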

image_token_indices​

image_token_indices: Buffer | None = None

source

input_row_offsets​

input_row_offsets: Buffer

source

pixel_patches​

pixel_patches: Buffer | None = None

source

return_n_logits​

return_n_logits: Buffer

source

tokens​

tokens: Buffer

source

vision_attention_mask​

vision_attention_mask: Buffer | None = None

source

vision_position_ids​

vision_position_ids: Buffer | None = None

source

PixtralModel​

class max.pipelines.architectures.pixtral_modulev3.PixtralModel(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN)

source

Bases: PipelineModelWithKVCache[TextAndVisionContext]

The overall interface to the Pixtral model.

Parameters:

calculate_max_seq_len()​

classmethod calculate_max_seq_len(pipeline_config, huggingface_config)

source

Calculates the optimal max sequence length for the model.

Models are expected to implement this method. The following example shows how to implement it for a Mistral model:

class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e

Parameters:

  • pipeline_config (PipelineConfig) – Configuration for the pipeline.
  • huggingface_config (AutoConfig) – Hugging Face model configuration.

Returns:

The maximum sequence length to use.

Return type:

int

execute()​

execute(model_inputs)

source

Executes the graph with the given inputs.

Parameters:

model_inputs (ModelInputs) – The model inputs to execute, containing tensors and any other required data for model execution.

Returns:

ModelOutputs containing the pipeline’s output tensors.

Return type:

ModelOutputs

This is an abstract method that must be implemented by concrete PipelineModels to define their specific execution logic.

get_kv_params()​

classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

source

Returns the KV cache params for the pipeline model.

Parameters:

Return type:

KVCacheParams

language_model​

language_model: Callable[..., Any]

source

Compiled language model with multimodal embedding merge.

prepare_initial_token_inputs()​

prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)

source

Prepares the initial inputs to be passed to execute().

The inputs and functionality can vary per model. For example, model inputs could include encoded tensors, unique IDs per tensor when using a KV cache manager, and kv_cache_inputs (or None if the model does not use KV cache). This method typically batches encoded tensors, claims a KV cache slot if needed, and returns the inputs and caches.

Parameters:

Return type:

PixtralInputs

prepare_next_token_inputs()​

prepare_next_token_inputs(next_tokens, prev_model_inputs)

source

Prepares the secondary inputs to be passed to execute().

While prepare_initial_token_inputs is responsible for managing the initial inputs, this function is responsible for updating the inputs for each step in a multi-step execution pattern.

Parameters:

Return type:

PixtralInputs
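Together, the two prepare_* methods support a multi-step decode loop. A minimal sketch with stand-in stubs (every name and body here is hypothetical; the real pipeline builds PixtralInputs, manages KV cache state, and runs a compiled graph):

```python
def prepare_initial_token_inputs(batch):
    # Stand-in: the real method batches tokens, ragged row offsets, and
    # optional vision inputs into a PixtralInputs.
    return {"tokens": list(batch), "step": 0}

def execute(inputs):
    # Stand-in: the real method runs the compiled graph and returns logits;
    # here we just fabricate a "next token".
    return max(inputs["tokens"]) + 1

def prepare_next_token_inputs(next_token, prev_inputs):
    # Stand-in: fold the newly sampled token back in and advance one step.
    return {"tokens": prev_inputs["tokens"] + [next_token],
            "step": prev_inputs["step"] + 1}

inputs = prepare_initial_token_inputs([1, 2, 3])
generated = []
for _ in range(3):  # three decode steps
    next_token = execute(inputs)
    generated.append(next_token)
    inputs = prepare_next_token_inputs(next_token, inputs)

print(generated)  # [4, 5, 6]
```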

vision_model​

vision_model: Callable[..., Any]

source

Compiled vision model (encoder + projector) for a ragged batch of images.