Python module
max.pipelines.architectures.bert
BERT sentence transformer architecture for embedding generation.
BertInputs
class max.pipelines.architectures.bert.BertInputs(next_tokens_batch: 'Buffer', attention_mask: 'Buffer', *, kv_cache_inputs: 'KVCacheInputs[Buffer, Buffer] | None' = None, lora_ids: 'Buffer | None' = None, lora_ranks: 'Buffer | None' = None, hidden_states: 'Buffer | list[Buffer] | None' = None)
Bases: ModelInputs
-
Parameters:
attention_mask
attention_mask: Buffer
next_tokens_batch
next_tokens_batch: Buffer
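next_tokens_batch and attention_mask together describe a padded token batch. The pure-Python sketch below shows how such a pair is commonly built; it assumes the usual convention of 1 for real tokens and 0 for padding, and does not show the actual Buffer construction in MAX:

```python
def pad_batch(sequences, pad_id=0):
    """Pad variable-length token sequences to a rectangular batch and
    build the matching attention mask (1 = real token, 0 = padding)."""
    max_len = max(len(seq) for seq in sequences)
    next_tokens_batch = [
        seq + [pad_id] * (max_len - len(seq)) for seq in sequences
    ]
    attention_mask = [
        [1] * len(seq) + [0] * (max_len - len(seq)) for seq in sequences
    ]
    return next_tokens_batch, attention_mask

# Two sequences of lengths 3 and 2 become a 2x3 batch plus mask.
tokens, mask = pad_batch([[101, 7592, 102], [101, 102]])
```

The mask lets downstream attention and pooling ignore the padding positions.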
BertModelConfig
class max.pipelines.architectures.bert.BertModelConfig(*, dtype, device, pool_embeddings, huggingface_config, pipeline_config)
Bases: ArchConfig
Configuration for BERT models.
-
Parameters:
-
- dtype (DType)
- device (DeviceRef)
- pool_embeddings (bool)
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
device
device: DeviceRef
dtype
dtype: DType
get_max_seq_len()
get_max_seq_len()
Returns the default maximum sequence length for the model.
Subclasses should determine whether this value can be overridden by
setting the --max-length (pipeline_config.model.max_length) flag.
-
Return type:
-
int
huggingface_config
huggingface_config: AutoConfig
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initializes a BertModelConfig instance from pipeline configuration.
-
Parameters:
-
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- model_config (MAXModelConfig | None)
-
Returns:
-
An initialized BertModelConfig instance.
-
Return type:
-
BertModelConfig
pipeline_config
pipeline_config: PipelineConfig
pool_embeddings
pool_embeddings: bool
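When pool_embeddings is enabled, per-token hidden states are reduced to a single sentence embedding. The exact pooling used by this architecture is not specified in this reference; the sketch below shows the common mean-pooling approach (average over unmasked positions only) as an illustration:

```python
def mean_pool(hidden_states, attention_mask):
    """Average each sequence's token embeddings over real (unmasked)
    positions; padded positions (mask == 0) are excluded entirely."""
    pooled = []
    for seq, mask in zip(hidden_states, attention_mask):
        kept = [vec for vec, m in zip(seq, mask) if m]
        n = len(kept)
        # Element-wise mean across the kept token vectors.
        pooled.append([sum(dims) / n for dims in zip(*kept)])
    return pooled

# One sequence of three 2-d token vectors, last position padded out.
sentence_embeddings = mean_pool(
    [[[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]], [[1, 1, 0]]
)
```

Excluding masked positions matters: averaging over padding would bias embeddings toward the pad vector for short sequences.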
BertPipelineModel
class max.pipelines.architectures.bert.BertPipelineModel(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.ALL)
Bases: PipelineModel[TextContext]
-
Parameters:
-
- pipeline_config (PipelineConfig)
- session (InferenceSession)
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
calculate_max_seq_len()
classmethod calculate_max_seq_len(pipeline_config, huggingface_config)
Calculates the optimal max sequence length for the model.
Models are expected to implement this method. The following example shows how to implement it for a Mistral model:
class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e
Parameters:
-
- pipeline_config (PipelineConfig) – Configuration for the pipeline.
- huggingface_config (AutoConfig) – Hugging Face model configuration.
-
Returns:
-
The maximum sequence length to use.
-
Return type:
-
int
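The Mistral example above calls an upper_bounded_default helper whose body is not shown in this reference. A plausible pure-Python sketch of the pattern it implements (use the configured default when set and within the model's bound, fall back to the bound otherwise, and reject a default that exceeds it) might look like:

```python
def upper_bounded_default(upper_bound, default=None):
    """Resolve a configured default against a model-imposed upper bound.

    Returns `upper_bound` when no default is configured, `default` when
    it fits within the bound, and raises ValueError when it exceeds it.
    (Sketch only; the real helper's signature may differ.)
    """
    if default is None:
        return upper_bound
    if default > upper_bound:
        raise ValueError(
            f"default ({default}) exceeds upper bound ({upper_bound})"
        )
    return default
```

This is why the example wraps the call in try/except: a user-supplied --max-length larger than the model's max_seq_len surfaces as a ValueError with a model-specific message.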
execute()
execute(model_inputs)
Executes the graph with the given inputs.
-
Parameters:
-
model_inputs (ModelInputs) – The model inputs to execute, containing tensors and any other required data for model execution.
-
Returns:
-
ModelOutputs containing the pipeline's output tensors.
-
Return type:
-
ModelOutputs
This is an abstract method that must be implemented by concrete PipelineModels to define their specific execution logic.
load_model()
load_model(session)
-
Parameters:
-
session (InferenceSession)
-
Return type:
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepares the initial inputs to be passed to execute().
The inputs and functionality can vary per model. For example, model
inputs could include encoded tensors, unique IDs per tensor when using
a KV cache manager, and kv_cache_inputs (or None if the model does
not use KV cache). This method typically batches encoded tensors,
claims a KV cache slot if needed, and returns the inputs and caches.
-
Parameters:
-
- replica_batches (Sequence[Sequence[TextContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)
-
Return type:
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepares the secondary inputs to be passed to execute().
While prepare_initial_token_inputs is responsible for managing the initial inputs, this function updates those inputs for each step in a multi-step execution pattern.
-
Parameters:
-
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
-
Return type:
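Together, prepare_initial_token_inputs and prepare_next_token_inputs form a two-phase pattern: build the full inputs once for the first call to execute(), then perform a cheap incremental update on every subsequent step. The toy class below illustrates that division of labor only; it is not the MAX API, and all names and shapes are illustrative:

```python
class ToyDecoder:
    """Illustrates the two-phase input-preparation pattern: full batch
    setup once, then incremental per-step updates."""

    def prepare_initial_token_inputs(self, batch):
        # Phase 1: full preparation - carry the whole prompt into step 0.
        return {"tokens": [list(seq) for seq in batch], "step": 0}

    def prepare_next_token_inputs(self, next_tokens, prev_inputs):
        # Phase 2: incremental update - only the newly sampled token
        # per sequence is needed for the next execute() call.
        return {
            "tokens": [[tok] for tok in next_tokens],
            "step": prev_inputs["step"] + 1,
        }

dec = ToyDecoder()
inputs = dec.prepare_initial_token_inputs([[1, 2, 3], [4, 5]])
inputs = dec.prepare_next_token_inputs([6, 7], inputs)
```

The incremental phase is what makes multi-step generation cheap when a KV cache holds the state for all previously processed tokens.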