Python module

max.pipelines.architectures.mistral3

Mistral 3 vision-language architecture for multimodal text generation.

Mistral3Config

class max.pipelines.architectures.mistral3.Mistral3Config(*, hidden_size, num_attention_heads, num_key_value_heads, num_hidden_layers, head_dim, vocab_size, rope_theta, max_seq_len, rms_norm_eps, feed_forward_length, dtype, kv_params, attention_multiplier, devices, return_logits=ReturnLogits.LAST_TOKEN)

Bases: MistralConfig

Configuration for Mistral3 models.

Parameters:

initialize()

classmethod initialize(pipeline_config, model_config=None)

Initializes a Mistral3Config instance from pipeline configuration.

This method creates a config instance with all fields that can be determined from the pipeline configuration.

Parameters:

  • pipeline_config (PipelineConfig) – Configuration for the pipeline.
  • model_config – Optional existing model configuration; defaults to None.

Returns:

An initialized Mistral3Config instance.

Return type:

Self
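The shape of this classmethod can be sketched with minimal stand-ins. `PipelineLikeConfig`, `ConfigSketch`, and every field name below are illustrative assumptions, not the actual MAX types; the point is only the pattern of deriving config fields from a pipeline configuration:

```python
from dataclasses import dataclass


@dataclass
class PipelineLikeConfig:
    """Illustrative stand-in for the real PipelineConfig (field names assumed)."""
    hidden_size: int
    num_attention_heads: int
    max_length: int


@dataclass
class ConfigSketch:
    """Illustrative stand-in for a model config built via initialize()."""
    hidden_size: int
    num_attention_heads: int
    max_seq_len: int

    @classmethod
    def initialize(cls, pipeline_config: PipelineLikeConfig) -> "ConfigSketch":
        # Fill every field that the pipeline configuration determines.
        return cls(
            hidden_size=pipeline_config.hidden_size,
            num_attention_heads=pipeline_config.num_attention_heads,
            max_seq_len=pipeline_config.max_length,
        )


cfg = ConfigSketch.initialize(PipelineLikeConfig(4096, 32, 8192))
print(cfg.max_seq_len)  # 8192
```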

Mistral3Model

class max.pipelines.architectures.mistral3.Mistral3Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN)

Bases: MistralModel

Text-only Mistral3 pipeline model implementation.

Parameters:

calculate_max_seq_len()

classmethod calculate_max_seq_len(pipeline_config, huggingface_config)

Calculates the optimal max sequence length for the model.

Models are expected to implement this method. The following example shows how to implement it for a Mistral model:

class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e

Parameters:

  • pipeline_config (PipelineConfig) – Configuration for the pipeline.
  • huggingface_config (AutoConfig) – Hugging Face model configuration.

Returns:

The maximum sequence length to use.

Return type:

int

get_kv_params()

classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)

Returns the KV cache params for the pipeline model.

Parameters:

Return type:

KVCacheParams
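`KVCacheParams` itself is not documented on this page. As a rough sketch of what a `get_kv_params` classmethod typically assembles from a Hugging Face-style config, with all field names here being assumptions rather than the real MAX API:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class KVParamsSketch:
    """Illustrative stand-in for KVCacheParams (field names assumed)."""
    n_kv_heads: int
    head_dim: int
    num_layers: int
    cache_dtype: str


def get_kv_params_sketch(hf_config: dict, cache_dtype: str) -> KVParamsSketch:
    # Pull the fields that shape the KV cache out of the model config:
    # how many KV heads per layer, their dimension, and the layer count.
    return KVParamsSketch(
        n_kv_heads=hf_config["num_key_value_heads"],
        head_dim=hf_config["head_dim"],
        num_layers=hf_config["num_hidden_layers"],
        cache_dtype=cache_dtype,
    )


params = get_kv_params_sketch(
    {"num_key_value_heads": 8, "head_dim": 128, "num_hidden_layers": 32},
    cache_dtype="bfloat16",
)
print(params.n_kv_heads)  # 8
```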