Python module
max.pipelines.architectures.lfm2
ConvStateCache
class max.pipelines.architectures.lfm2.ConvStateCache(num_conv_layers, hidden_size, conv_kernel, dtype, max_slots, device)
Bases: object
Parameters:
- num_conv_layers
- hidden_size
- conv_kernel
- dtype
- max_slots
- device
claim()
claim(request_id)
Parameters:
- request_id (RequestID)
Return type:
None
get_states()
get_states(request_ids)
Return one [N, hidden, kernel] buffer per conv layer.
For N == 1 this is zero-copy (returns the slot's buffer directly). For N > 1, per-slot buffers are concatenated along the leading batch dim via a numpy round-trip; the conv state is small (hidden * kernel per slot), so this is acceptable.
release()
release(request_id)
Parameters:
- request_id (RequestID)
Return type:
None
update_states()
update_states(request_ids, new_states)
Store updated per-layer states back into their request slots.
For N == 1 the buffer reference is stored directly. For
N > 1 the leading batch dim is split and each slice is
copied into the matching slot.
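A minimal usage sketch of the per-request lifecycle, assuming cache is an already constructed ConvStateCache, request_id is the RequestID of an active request, and run_decode_step is a hypothetical helper that runs the model and returns the updated per-layer conv states:

cache.claim(request_id)                       # reserve a slot for this request

# One [N, hidden, kernel] buffer per conv layer; N == 1 here, so zero-copy.
conv_states = cache.get_states([request_id])

# Run one decode step (hypothetical helper) and write the new states back.
new_states = run_decode_step(conv_states)
cache.update_states([request_id], new_states)

cache.release(request_id)                     # free the slot when the request ends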
LFM2Config
class max.pipelines.architectures.lfm2.LFM2Config(*, hidden_size, num_attention_heads, num_key_value_heads, num_hidden_layers, rope_theta, rope_scaling_params, max_seq_len, intermediate_size, interleaved_rope_weights, vocab_size, dtype, model_quantization_encoding, quantization_config, kv_params, return_logits=ReturnLogits.LAST_TOKEN, norm_method='rms_norm', norm_dtype=None, attention_bias=False, rms_norm_eps=None, tie_word_embeddings=False, stacked_mlp=False, stacked_qkv=False, attention_multiplier, embedding_multiplier, residual_multiplier, devices, clip_qkv, quant_config=None, lora_config=None, longrope_scaling_params=None, logits_scaling=1.0, return_hidden_states=ReturnHiddenStates.NONE, use_subgraphs=True, data_parallel_degree=1, layer_types=<factory>, conv_L_cache=3, conv_bias=False, norm_eps=1e-05)
Bases: Llama3Config
Model configuration for LFM2 graph construction/execution.
Parameters:
- hidden_size (int)
- num_attention_heads (int)
- num_key_value_heads (int)
- num_hidden_layers (int)
- rope_theta (float)
- rope_scaling_params (Llama3RopeScalingParams | None)
- max_seq_len (int)
- intermediate_size (int)
- interleaved_rope_weights (bool)
- vocab_size (int)
- dtype (DType)
- model_quantization_encoding (QuantizationEncoding | None)
- quantization_config (QuantizationConfig | None)
- kv_params (KVCacheParams)
- return_logits (ReturnLogits)
- norm_method (Literal['rms_norm', 'layer_norm'])
- norm_dtype (DType | None)
- attention_bias (bool)
- rms_norm_eps (float | None)
- tie_word_embeddings (bool)
- stacked_mlp (bool)
- stacked_qkv (bool)
- attention_multiplier (float)
- embedding_multiplier (float)
- residual_multiplier (float)
- devices (list[DeviceRef])
- clip_qkv (float | None)
- quant_config (QuantConfig | None)
- lora_config (LoRAConfig | None)
- longrope_scaling_params (LongRoPEScalingParams | None)
- logits_scaling (float)
- return_hidden_states (ReturnHiddenStates)
- use_subgraphs (bool)
- data_parallel_degree (int)
- layer_types (list[str])
- conv_L_cache (int)
- conv_bias (bool)
- norm_eps (float)
conv_L_cache
conv_L_cache: int = 3
conv_bias
conv_bias: bool = False
finalize()
finalize(huggingface_config, state_dict, return_logits, return_hidden_states=ReturnHiddenStates.NONE, norm_method='rms_norm', attention_bias=False)
Define parameters that can't be determined just from the pipeline config.
Parameters:
- huggingface_config (AutoConfig)
- state_dict (dict[str, WeightData])
- return_logits (ReturnLogits)
- return_hidden_states (ReturnHiddenStates)
- norm_method (Literal['rms_norm', 'layer_norm'])
- attention_bias (bool)
Return type:
None
initialize_from_config()
classmethod initialize_from_config(pipeline_config, huggingface_config, model_config=None)
Parameters:
- pipeline_config (PipelineConfig)
- huggingface_config (AutoConfig)
- model_config
Return type:
LFM2Config
layer_types
layer_types: list[str]
norm_eps
norm_eps: float = 1e-05
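A hedged sketch of building an LFM2Config from existing configs via the classmethod above; pipeline_config and hf_config are assumed to be an existing PipelineConfig and transformers AutoConfig for an LFM2 checkpoint:

config = LFM2Config.initialize_from_config(
    pipeline_config=pipeline_config,   # PipelineConfig for this pipeline
    huggingface_config=hf_config,      # AutoConfig loaded for the checkpoint
)
# The conv-specific fields documented above sit alongside the inherited
# Llama3Config fields.
print(config.conv_L_cache, config.conv_bias, config.norm_eps, config.layer_types)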
LFM2Inputs
class max.pipelines.architectures.lfm2.LFM2Inputs(tokens: 'Buffer', input_row_offsets: 'Buffer', signal_buffers: 'list[Buffer]', return_n_logits: 'Buffer', lora_grouped_offsets: 'Buffer | None' = None, num_active_loras: 'Buffer | None' = None, lora_end_idx: 'Buffer | None' = None, batch_seq_len: 'Buffer | None' = None, lora_ids_kv: 'Buffer | None' = None, lora_grouped_offsets_kv: 'Buffer | None' = None, data_parallel_splits: 'Buffer | Sequence[Sequence[int]] | None' = None, conv_states: 'list[Buffer]' = <factory>, request_ids: 'list[RequestID]' = <factory>, *, kv_cache_inputs: 'KVCacheInputs[Buffer, Buffer] | None' = None, lora_ids: 'Buffer | None' = None, lora_ranks: 'Buffer | None' = None, hidden_states: 'Buffer | list[Buffer] | None' = None)
Bases: Llama3Inputs
Parameters:
- tokens (Buffer)
- input_row_offsets (Buffer)
- signal_buffers (list[Buffer])
- return_n_logits (Buffer)
- lora_grouped_offsets (Buffer | None)
- num_active_loras (Buffer | None)
- lora_end_idx (Buffer | None)
- batch_seq_len (Buffer | None)
- lora_ids_kv (Buffer | None)
- lora_grouped_offsets_kv (Buffer | None)
- data_parallel_splits (Buffer | Sequence[Sequence[int]] | None)
- conv_states (list[Buffer])
- request_ids (list[RequestID])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- lora_ids (Buffer | None)
- lora_ranks (Buffer | None)
- hidden_states (Buffer | list[Buffer] | None)
buffers
Returns positional Buffer inputs for model ABI calls.
conv_states
conv_states: list[Buffer]
request_ids
request_ids: list[RequestID]
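A sketch of assembling LFM2Inputs; every buffer variable here (tokens, row_offsets, signal_buffers, return_n_logits, conv_states, kv_inputs) is assumed to already be allocated by the surrounding pipeline rather than constructed in this snippet:

inputs = LFM2Inputs(
    tokens=tokens,
    input_row_offsets=row_offsets,
    signal_buffers=signal_buffers,
    return_n_logits=return_n_logits,
    conv_states=conv_states,        # one Buffer per conv layer
    request_ids=request_ids,        # RequestIDs matching the batch order
    kv_cache_inputs=kv_inputs,
)
# As documented above, buffers yields the positional Buffer inputs for the
# model ABI call.
positional_inputs = inputs.buffers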
LFM2Model
class max.pipelines.architectures.lfm2.LFM2Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN, return_hidden_states=ReturnHiddenStates.NONE)
Bases: LlamaModelBase
LFM2 hybrid (full-attention + conv) pipeline model.
Parameters:
- pipeline_config (PipelineConfig) – The configuration for this pipeline.
- session (InferenceSession) – The container for the runtime for this model.
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
- return_hidden_states (ReturnHiddenStates)
attention_bias
attention_bias: bool = False
Whether to use attention bias.
calculate_max_seq_len()
classmethod calculate_max_seq_len(pipeline_config, huggingface_config)
Calculates the optimal max sequence length for the model.
Models are expected to implement this method. The following example shows how to implement it for a Mistral model:
class MistralModel(PipelineModel):
    @classmethod
    def calculate_max_seq_len(cls, pipeline_config, huggingface_config) -> int:
        try:
            return upper_bounded_default(
                upper_bound=huggingface_config.max_seq_len,
                default=pipeline_config.model.max_length,
            )
        except ValueError as e:
            raise ValueError(
                "Unable to infer max_length for Mistral, the provided "
                f"max_length ({pipeline_config.model.max_length}) exceeds the "
                f"model's max_seq_len ({huggingface_config.max_seq_len})."
            ) from e
Parameters:
- pipeline_config (PipelineConfig) – Configuration for the pipeline.
- huggingface_config (AutoConfig) – Hugging Face model configuration.
Returns:
The maximum sequence length to use.
Return type:
int
execute()
execute(model_inputs)
Executes the graph with the given inputs.
This is an abstract method that must be implemented by concrete PipelineModels to define their specific execution logic.
Parameters:
- model_inputs (ModelInputs) – The model inputs to execute, containing tensors and any other required data for model execution.
Returns:
ModelOutputs containing the pipeline's output tensors.
Return type:
ModelOutputs
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Returns the KV cache params for the pipeline model.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
KVCacheParams
norm_method
norm_method: Literal['rms_norm'] | Literal['layer_norm'] = 'rms_norm'
Normalization layer.
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepare the inputs for the first pass in multistep execution.
Parameters:
- replica_batches (Sequence[Sequence[TextContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)
Return type:
LFM2Inputs
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepare the inputs for the next token in multistep execution. This should avoid any device synchronization or copy operations.
Parameters:
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
Return type:
LFM2Inputs
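A hypothetical multistep decode loop tying these methods together; model, replica_batches, kv_inputs, num_steps, and sample are assumed to come from the surrounding pipeline and are not part of this module:

inputs = model.prepare_initial_token_inputs(
    replica_batches=replica_batches,
    kv_cache_inputs=kv_inputs,
    return_n_logits=1,
)
for _ in range(num_steps):
    outputs = model.execute(inputs)              # ModelOutputs for this step
    next_tokens = sample(outputs)                # hypothetical sampling helper
    # Reuse the previous step's buffers; no device sync or copies here.
    inputs = model.prepare_next_token_inputs(next_tokens, inputs)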
release()
release(request_id)
Parameters:
- request_id (RequestID)
Return type:
None