Python class

PipelineModel

class max.pipelines.PipelineModel(pipeline_config, session, devices, kv_cache_config, weights, adapter, return_logits, return_hidden_states=ReturnHiddenStates.NONE)

Bases: ABC, Generic[BaseContextType]

A pipeline model with setup, input preparation, and execution methods.

Parameters:

  • pipeline_config (PipelineConfig) – Configuration for the pipeline.
  • session (InferenceSession) – Inference session to load and execute the model within.
  • devices – Devices to run the model on.
  • kv_cache_config – KV cache configuration.
  • weights – Model weights.
  • adapter – Optional weights adapter.
  • return_logits – Which logits the model should return.
  • return_hidden_states – Whether to return hidden states. Defaults to ReturnHiddenStates.NONE.

calculate_max_seq_len()

abstract classmethod calculate_max_seq_len(pipeline_config, huggingface_config)

Calculates the optimal max sequence length for the model.

Models are expected to implement this method. The following example shows how to implement it for a Mistral model:

class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(
        cls, pipeline_config: PipelineConfig, huggingface_config: AutoConfig
    ) -> int:
        try:
            # Use the configured max_length, capped at the model's
            # architectural limit (max_seq_len).
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e

Parameters:

  • pipeline_config (PipelineConfig) – Configuration for the pipeline.
  • huggingface_config (AutoConfig) – Hugging Face model configuration.

Returns:

The maximum sequence length to use.

Return type:

int

compute_log_probabilities()

compute_log_probabilities(session, model_inputs, model_outputs, next_tokens, batch_top_n, batch_echo)

Optional method that can be overridden to compute log probabilities.

Parameters:

  • session (InferenceSession) – Inference session to compute log probabilities within.
  • model_inputs (ModelInputs) – Inputs to the model returned by prepare_*_token_inputs().
  • model_outputs (ModelOutputs) – Outputs returned by execute().
  • next_tokens (Buffer) – Sampled tokens, with shape [batch_size].
  • batch_top_n (list[int]) – Number of top log probabilities to return per input in the batch. For any element where top_n == 0, log probabilities are skipped and None is returned for that input.
  • batch_echo (list[bool]) – Whether to include input tokens in the returned log probabilities.

Returns:

List of log probabilities.

Return type:

list[LogProbabilities | None]
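
The sketch below shows how a runner might invoke this hook after sampling. The session, inputs, outputs, and sampled names are hypothetical stand-ins for objects produced earlier in the pipeline:

log_probs = model.compute_log_probabilities(
    session=session,
    model_inputs=inputs,        # inputs passed to execute()
    model_outputs=outputs,      # outputs returned by execute()
    next_tokens=sampled,        # shape [batch_size]
    batch_top_n=[5, 0],         # top-5 for the first request; skip the second
    batch_echo=[False, False],  # do not include input tokens
)
# log_probs[1] is None because batch_top_n[1] == 0.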

dtype

property dtype: DType

Returns the model data type from pipeline config.

estimate_activation_memory()

classmethod estimate_activation_memory(pipeline_config, huggingface_config)

Estimates the activation memory required for model execution.

This accounts for temporary memory buffers used during model execution, such as intermediate activations and working buffers.

The default implementation returns 0 for backward compatibility. Models with significant activation memory requirements should override this method to provide accurate estimates.

Parameters:

  • pipeline_config (PipelineConfig) – Pipeline configuration
  • huggingface_config (AutoConfig) – Hugging Face model configuration

Returns:

Estimated activation memory in bytes

Return type:

int
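
A minimal sketch of an override, assuming a transformer whose peak temporaries scale with batch size, sequence length, and hidden size. The max_batch, bytes_per_elem, and working_buffers values are illustrative assumptions; hidden_size is a standard Hugging Face config field:

class MyTransformerModel(PipelineModel):
    @classmethod
    def estimate_activation_memory(cls, pipeline_config, huggingface_config) -> int:
        seq_len = cls.calculate_max_seq_len(pipeline_config, huggingface_config)
        hidden = huggingface_config.hidden_size
        max_batch = 32       # assumed upper bound on concurrent requests
        bytes_per_elem = 2   # assuming bfloat16 activations
        working_buffers = 4  # assumed number of hidden-sized temporaries
        return max_batch * seq_len * hidden * bytes_per_elem * working_buffers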

estimate_weights_size()

classmethod estimate_weights_size(pipeline_config)

Calculates the estimated memory consumption of the model's weights.

Parameters:

pipeline_config (PipelineConfig)

Return type:

int

execute()

abstract execute(model_inputs)

Executes the graph with the given inputs.

Parameters:

model_inputs (ModelInputs) – The model inputs to execute, containing tensors and any other required data for model execution.

Returns:

ModelOutputs containing the pipeline’s output tensors.

Return type:

ModelOutputs

This is an abstract method that must be implemented by concrete PipelineModels to define their specific execution logic.
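
A hedged sketch of a concrete implementation. It assumes the subclass compiled its graph into a self.model callable during setup and defines its own ModelInputs subclass; MyModelInputs, its tokens field, and the logits output field are all illustrative assumptions, not part of the documented API:

class MyModelInputs(ModelInputs):
    """Illustrative inputs: a batched token tensor plus optional KV cache inputs."""

    def __init__(self, tokens, kv_cache_inputs=None):
        self.tokens = tokens
        self.kv_cache_inputs = kv_cache_inputs

class MyModel(PipelineModel):
    def execute(self, model_inputs):
        assert isinstance(model_inputs, MyModelInputs)
        # Run the compiled graph (created during setup) on the prepared tensors.
        (logits,) = self.model.execute(model_inputs.tokens)
        return ModelOutputs(logits=logits)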

huggingface_config

property huggingface_config: AutoConfig

Returns the HuggingFace config from pipeline config.

For multimodal models (e.g., Pixtral, Gemma3 multimodal), this returns the top-level config which contains both text_config and vision_config. Models should explicitly access .text_config or .vision_config as needed.

Returns:

The HuggingFace AutoConfig for this model.

Raises:

ValueError – If HuggingFace config could not be loaded.
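
For example, a multimodal pipeline model would read its sub-configs explicitly (model here stands for any PipelineModel instance):

config = model.huggingface_config        # top-level AutoConfig
text_config = config.text_config         # language-model settings
vision_config = config.vision_config     # vision-tower settings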

lora_manager

property lora_manager: LoRAManager | None

Returns the LoRA manager if LoRA is enabled, otherwise None.

prepare_initial_token_inputs()

abstract prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)

Prepares the initial inputs to be passed to execute().

The inputs and functionality can vary per model. For example, model inputs could include encoded tensors, unique IDs per tensor when using a KV cache manager, and kv_cache_inputs (or None if the model does not use KV cache). This method typically batches encoded tensors, claims a KV cache slot if needed, and returns the inputs and caches.

Parameters:

  • replica_batches – The batched input contexts, grouped per replica.
  • kv_cache_inputs – KV cache inputs, or None if the model does not use KV cache. Defaults to None.
  • return_n_logits – Number of logits to return. Defaults to 1.

Return type:

ModelInputs
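
Continuing the illustrative MyModel from execute() above, a hedged implementation might flatten the replica batches and concatenate each context's pending tokens into one batch; the next_tokens field on contexts is an assumption for illustration:

import numpy as np

class MyModel(PipelineModel):
    def prepare_initial_token_inputs(
        self, replica_batches, kv_cache_inputs=None, return_n_logits=1
    ):
        # Flatten the per-replica batches into a single list of contexts.
        contexts = [ctx for batch in replica_batches for ctx in batch]
        # Concatenate each context's new tokens into one batched tensor.
        tokens = np.concatenate([ctx.next_tokens for ctx in contexts])
        return MyModelInputs(tokens=tokens, kv_cache_inputs=kv_cache_inputs)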

prepare_next_token_inputs()

abstract prepare_next_token_inputs(next_tokens, prev_model_inputs)

Prepares the secondary inputs to be passed to execute().

While prepare_initial_token_inputs is responsible for managing the initial inputs, this function is responsible for updating the inputs for each step in a multi-step execution pattern.

Parameters:

  • next_tokens – The tokens sampled at the previous step.
  • prev_model_inputs – The model inputs from the previous step, as returned by prepare_initial_token_inputs() or a prior call to this method.

Return type:

ModelInputs
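
Continuing the same illustrative MyModel, a multi-step update can feed back the newly sampled tokens while carrying the KV cache inputs forward:

class MyModel(PipelineModel):
    def prepare_next_token_inputs(self, next_tokens, prev_model_inputs):
        assert isinstance(prev_model_inputs, MyModelInputs)
        # Only the freshly sampled tokens are needed; the KV cache already
        # holds the context from earlier steps.
        return MyModelInputs(
            tokens=next_tokens,
            kv_cache_inputs=prev_model_inputs.kv_cache_inputs,
        )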

signal_buffers

property signal_buffers: list[Buffer]

Lazily initializes signal buffers for multi-GPU communication collectives.

Signal buffers are only needed during model execution, not during compilation. By deferring their allocation, we avoid memory allocation in compile-only mode.

Returns:

List of signal buffer tensors, one per device for multi-device setups, or an empty list for single-device setups or compile-only mode.
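
A generic sketch of the deferred-allocation pattern this property describes; the _signal_buffers attribute and allocate_signal_buffer helper are illustrative, not the actual implementation:

@property
def signal_buffers(self) -> list[Buffer]:
    if self._signal_buffers is None:
        if len(self.devices) > 1:
            # First access during execution: allocate one buffer per device.
            self._signal_buffers = [
                allocate_signal_buffer(device)  # hypothetical helper
                for device in self.devices
            ]
        else:
            # Single-device or compile-only mode: nothing to allocate.
            self._signal_buffers = []
    return self._signal_buffers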