Python module
max.pipelines.architectures.pixtral
Pixtral vision-language architecture for multimodal text generation.
PixtralConfig
class max.pipelines.architectures.pixtral.PixtralConfig(*, dtype, devices, image_token_index, hidden_size, num_attention_heads, rms_norm_eps, rope_theta, max_seq_len, num_hidden_layers, head_dim, num_key_value_heads, feed_forward_length, vocab_size, kv_params, attention_multiplier, patch_size, image_size, num_channels, vision_hidden_size, vision_num_attention_heads, vision_rope_theta, vision_num_hidden_layers, vision_intermediate_size, vision_head_dim, return_logits=ReturnLogits.LAST_TOKEN)
Bases: ArchConfigWithKVCache
Configuration for Pixtral models.
Parameters:
- dtype (DType)
- devices (list[DeviceRef])
- image_token_index (int)
- hidden_size (int)
- num_attention_heads (int)
- rms_norm_eps (float)
- rope_theta (float)
- max_seq_len (int)
- num_hidden_layers (int)
- head_dim (int)
- num_key_value_heads (int)
- feed_forward_length (int)
- vocab_size (int)
- kv_params (KVCacheParams)
- attention_multiplier (float)
- patch_size (int)
- image_size (int)
- num_channels (int)
- vision_hidden_size (int)
- vision_num_attention_heads (int)
- vision_rope_theta (float)
- vision_num_hidden_layers (int)
- vision_intermediate_size (int)
- vision_head_dim (int)
- return_logits (ReturnLogits)
attention_multiplier
attention_multiplier: float
calculate_max_seq_len()
static calculate_max_seq_len(pipeline_config, huggingface_config)
Calculates the maximum sequence length for the model.
Parameters:
- pipeline_config (PipelineConfig)
- huggingface_config (AutoConfig)
Return type:
int
construct_kv_params()
static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
KVCacheParams
devices
dtype
dtype: DType
feed_forward_length
feed_forward_length: int
get_kv_params()
get_kv_params()
KV cache parameters to use when running the model.
Return type:
KVCacheParams
get_max_seq_len()
get_max_seq_len()
Returns the default maximum sequence length for the model.
Subclasses should determine whether this value can be overridden by
setting the --max-length (pipeline_config.model.max_length) flag.
Return type:
int
get_num_layers()
static get_num_layers(huggingface_config)
Parameters:
huggingface_config (AutoConfig)
Return type:
int
head_dim
head_dim: int
hidden_size
hidden_size: int
image_size
image_size: int
image_token_index
image_token_index: int
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initializes a PixtralConfig instance from pipeline configuration.
This method creates a config instance with all fields that can be determined from the pipeline configuration.
Parameters:
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- model_config (MAXModelConfig | None)
Returns:
An initialized PixtralConfig instance.
Return type:
PixtralConfig
kv_params
kv_params: KVCacheParams
max_seq_len
max_seq_len: int
num_attention_heads
num_attention_heads: int
num_channels
num_channels: int
num_hidden_layers
num_hidden_layers: int
num_key_value_heads
num_key_value_heads: int
patch_size
patch_size: int
return_logits
return_logits: ReturnLogits = 'last_token'
Whether to return logits for the last token only, for all tokens, or for a variable number of tokens.
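The three modes can be illustrated with plain lists standing in for logit tensors; the function and mode strings below are illustrative stand-ins, not the MAX ReturnLogits API:

```python
# Hedged sketch of the three logit-return modes described above.
def select_logits(all_step_logits: list[list[float]],
                  mode: str, n: int = 1) -> list[list[float]]:
    if mode == "last_token":   # only the final position's logits
        return all_step_logits[-1:]
    if mode == "all":          # logits for every position
        return all_step_logits
    if mode == "variable":     # logits for the last n positions
        return all_step_logits[-n:]
    raise ValueError(f"unknown mode: {mode}")

logits = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.6]]  # 3 positions, vocab of 2
print(select_logits(logits, "last_token"))  # [[0.4, 0.6]]
print(len(select_logits(logits, "variable", n=2)))  # 2
```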
rms_norm_eps
rms_norm_eps: float
rope_theta
rope_theta: float
vision_head_dim
vision_head_dim: int
vision_hidden_size
vision_hidden_size: int
vision_intermediate_size
vision_intermediate_size: int
vision_num_attention_heads
vision_num_attention_heads: int
vision_num_hidden_layers
vision_num_hidden_layers: int
vision_rope_theta
vision_rope_theta: float
vocab_size
vocab_size: int
PixtralModel
class max.pipelines.architectures.pixtral.PixtralModel(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN)
Bases: PipelineModelWithKVCache[TextAndVisionContext]
Pixtral pipeline model with separate vision and language graphs.
Parameters:
- pipeline_config (PipelineConfig)
- session (InferenceSession)
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
calculate_max_seq_len()
classmethod calculate_max_seq_len(pipeline_config, huggingface_config)
Calculates the optimal max sequence length for the model.
Models are expected to implement this method. The following example shows how to implement it for a Mistral model:
class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e
Parameters:
- pipeline_config (PipelineConfig) – Configuration for the pipeline.
- huggingface_config (AutoConfig) – Hugging Face model configuration.
Returns:
The maximum sequence length to use.
Return type:
int
execute()
execute(model_inputs)
Executes the graph with the given inputs.
Parameters:
model_inputs (ModelInputs) – The model inputs to execute, containing tensors and any other required data for model execution.
Returns:
ModelOutputs containing the pipeline’s output tensors.
Return type:
ModelOutputs
This is an abstract method that must be implemented by concrete PipelineModels to define their specific execution logic.
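The model's split into separate vision and language graphs can be sketched with plain Python stand-ins: the vision graph turns image patches into embeddings, which replace the image placeholder tokens (marked by image_token_index) before the language graph runs. All names and values here are illustrative, not the compiled MAX Model objects:

```python
# Hedged sketch of a two-graph vision-language flow.
IMAGE_TOKEN = -1  # stand-in for the configured image_token_index

def run_vision(num_patches: int) -> list[str]:
    # Pretend each image patch becomes one embedding.
    return [f"img_emb_{i}" for i in range(num_patches)]

def merge(tokens: list[int], image_embs: list[str]) -> list[object]:
    # Replace each image placeholder with the next vision embedding.
    it = iter(image_embs)
    return [next(it) if t == IMAGE_TOKEN else t for t in tokens]

# A prompt with two image-placeholder positions:
tokens = [5, IMAGE_TOKEN, IMAGE_TOKEN, 7]
seq = merge(tokens, run_vision(2))
print(seq)  # [5, 'img_emb_0', 'img_emb_1', 7]
# `seq` would then be fed to the language graph.
```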
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Returns the KV cache params for the pipeline model.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
KVCacheParams
language_model
language_model: Model
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepares the initial inputs to be passed to execute().
The inputs and functionality can vary per model. For example, model
inputs could include encoded tensors, unique IDs per tensor when using
a KV cache manager, and kv_cache_inputs (or None if the model does
not use KV cache). This method typically batches encoded tensors,
claims a KV cache slot if needed, and returns the inputs and caches.
Parameters:
- replica_batches (Sequence[Sequence[TextAndVisionContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)
Return type:
PixtralInputs
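The initial/next-token split described above can be sketched with plain Python: the first call batches the whole prompt and claims cache positions, while later steps feed only the newly sampled token. All class and field names here are illustrative stand-ins, not the actual MAX types:

```python
# Hedged sketch of the two-phase input preparation pattern.
from dataclasses import dataclass

@dataclass
class SketchInputs:
    tokens: list[int]            # token IDs fed at this step
    cache_positions: list[int]   # where each token lands in the KV cache

def prepare_initial_token_inputs(prompt: list[int]) -> SketchInputs:
    # First step: the whole prompt is batched and cache slots claimed.
    return SketchInputs(tokens=list(prompt),
                        cache_positions=list(range(len(prompt))))

def prepare_next_token_inputs(next_token: int,
                              prev: SketchInputs) -> SketchInputs:
    # Subsequent steps: only the newly sampled token is fed, appended
    # after the previously cached positions.
    next_pos = prev.cache_positions[-1] + 1
    return SketchInputs(tokens=[next_token], cache_positions=[next_pos])

inputs = prepare_initial_token_inputs([101, 7592, 2088])
inputs = prepare_next_token_inputs(999, inputs)
print(inputs.tokens, inputs.cache_positions)  # [999] [3]
```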
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepares the secondary inputs to be passed to execute().
While prepare_initial_token_inputs manages the initial inputs, this function updates those inputs for each step of a multi-step execution pattern.
Parameters:
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
Return type:
PixtralInputs
vision_model
vision_model: Model