Python module
max.pipelines.architectures.minimax_m2
MiniMaxM2Config
class max.pipelines.architectures.minimax_m2.MiniMaxM2Config(*, hidden_size, num_attention_heads, num_key_value_heads, num_hidden_layers, rope_theta, rope_scaling_params, max_seq_len, intermediate_size, interleaved_rope_weights, vocab_size, dtype, model_quantization_encoding, quantization_config, kv_params, return_logits=ReturnLogits.LAST_TOKEN, norm_method='rms_norm', norm_dtype=None, attention_bias=False, rms_norm_eps=None, tie_word_embeddings=False, stacked_mlp=False, stacked_qkv=False, attention_multiplier, embedding_multiplier, residual_multiplier, devices, clip_qkv, quant_config=None, lora_config=None, longrope_scaling_params=None, logits_scaling=1.0, return_hidden_states=ReturnHiddenStates.NONE, use_subgraphs=True, data_parallel_degree=1, num_local_experts=256, num_experts_per_tok=8, norm_topk_prob=True, correction_bias_dtype=None, gate_dtype=None, attn_dtype=None, ep_config=None, partial_rotary_factor=1.0)
Bases: Llama3Config
Configuration for MiniMax-M2 MoE models.
Extends Llama3Config with MoE-specific parameters including sigmoid routing with expert score correction bias.
Parameters:
- hidden_size (int)
- num_attention_heads (int)
- num_key_value_heads (int)
- num_hidden_layers (int)
- rope_theta (float)
- rope_scaling_params (Llama3RopeScalingParams | None)
- max_seq_len (int)
- intermediate_size (int)
- interleaved_rope_weights (bool)
- vocab_size (int)
- dtype (DType)
- model_quantization_encoding (QuantizationEncoding | None)
- quantization_config (QuantizationConfig | None)
- kv_params (KVCacheParams)
- return_logits (ReturnLogits)
- norm_method (Literal['rms_norm', 'layer_norm'])
- norm_dtype (DType | None)
- attention_bias (bool)
- rms_norm_eps (float | None)
- tie_word_embeddings (bool)
- stacked_mlp (bool)
- stacked_qkv (bool)
- attention_multiplier (float)
- embedding_multiplier (float)
- residual_multiplier (float)
- devices (list[DeviceRef])
- clip_qkv (float | None)
- quant_config (QuantConfig | None)
- lora_config (LoRAConfig | None)
- longrope_scaling_params (LongRoPEScalingParams | None)
- logits_scaling (float)
- return_hidden_states (ReturnHiddenStates)
- use_subgraphs (bool)
- data_parallel_degree (int)
- num_local_experts (int)
- num_experts_per_tok (int)
- norm_topk_prob (bool)
- correction_bias_dtype (DType | None)
- gate_dtype (DType | None)
- attn_dtype (DType | None)
- ep_config (EPConfig | None)
- partial_rotary_factor (float)
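The class description mentions sigmoid routing with expert score correction bias. As a minimal illustrative sketch (not the MAX implementation; the function and variable names here are invented for illustration), the routing computes sigmoid scores per expert, uses the correction bias only to pick the top-k experts, and combines them with the uncorrected scores:

```python
import math

def route_token(router_logits, correction_bias, top_k=8, norm_topk_prob=True):
    """Sigmoid routing with expert score correction bias (illustrative sketch).

    The bias influences which experts are selected, but the combination
    weights come from the uncorrected sigmoid scores.
    """
    scores = [1.0 / (1.0 + math.exp(-x)) for x in router_logits]
    corrected = [s + b for s, b in zip(scores, correction_bias)]
    # Select top_k experts by corrected score.
    top = sorted(range(len(scores)), key=lambda i: corrected[i], reverse=True)[:top_k]
    weights = [scores[i] for i in top]
    if norm_topk_prob:  # mirrors the norm_topk_prob config field
        total = sum(weights)
        weights = [w / total for w in weights]
    return top, weights
```

With `norm_topk_prob=True` (the default in this config), the selected experts' weights are renormalized to sum to 1.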
attn_dtype
Data type for attention weights. Detected from state dict during finalize().
calculate_attention_multiplier()
static calculate_attention_multiplier(huggingface_config)
The attention multiplier for MiniMax-M2 models.
Uses the explicit head_dim from the config.
Parameters:
huggingface_config (AutoConfig) – The HuggingFace configuration object.
Returns:
The attention multiplier value.
Return type:
float
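The reference says only that the multiplier is computed from the explicit head_dim in the config. Assuming the conventional scaled-dot-product scaling of 1/sqrt(head_dim) (an assumption; the exact formula is not stated here), the computation looks like:

```python
import math

def attention_multiplier(head_dim: int) -> float:
    # Conventional scaled-dot-product attention scaling; shown as an
    # assumption, since the reference only notes that the explicit
    # head_dim from the config is used.
    return 1.0 / math.sqrt(head_dim)
```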
construct_kv_params()
static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Constructs KV cache parameters using explicit head_dim from config.
Parameters:
- huggingface_config (AutoConfig) – The HuggingFace configuration object.
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- devices (list[DeviceRef]) – Devices to use for the KV cache.
- kv_cache_config (KVCacheConfig) – Configuration for KV cache.
- cache_dtype (DType) – Data type for the cache.
Returns:
KVCacheParams object with the correct head_dim from config.
Return type:
KVCacheParams
correction_bias_dtype
Data type of the e_score_correction_bias weight. Detected from state dict during finalize().
ep_config
ep_config: EPConfig | None = None
Expert parallelism configuration. None means no EP (single-GPU).
gate_dtype
Data type for the gate linear layer. Detected from state dict during finalize().
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initializes a MiniMaxM2Config from pipeline configuration.
Parameters:
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- model_config (MAXModelConfig | None)
Returns:
An initialized MiniMaxM2Config instance.
Return type:
MiniMaxM2Config
initialize_from_config()
classmethod initialize_from_config(pipeline_config, huggingface_config, model_config=None)
Initializes a MiniMaxM2Config from pipeline and HuggingFace configs.
Parameters:
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- huggingface_config (AutoConfig) – The HuggingFace model configuration.
- model_config (MAXModelConfig | None) – The MAX Engine model configuration.
Returns:
An initialized MiniMaxM2Config instance.
Return type:
MiniMaxM2Config
norm_topk_prob
norm_topk_prob: bool = True
Whether to normalize top-k expert probabilities to sum to 1.
num_experts_per_tok
num_experts_per_tok: int = 8
Number of experts selected per token.
num_local_experts
num_local_experts: int = 256
Number of local experts in each MoE layer.
partial_rotary_factor
partial_rotary_factor: float = 1.0
Fraction of head_dim used for rotary embeddings. For MiniMax-M2: rotary_dim/head_dim = 64/128 = 0.5.
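A partial rotary factor of 0.5 means only the first rotary_dim = 64 of the 128 head dimensions receive rotary position embeddings, and the rest pass through unchanged. A minimal sketch of this (illustrative only; the pairing convention and function name are assumptions, not MAX's implementation):

```python
import math

def apply_partial_rope(x, position, partial_rotary_factor=0.5, theta=10000.0):
    """Apply rotary embedding to the leading fraction of head dims (sketch).

    x is one attention head as a flat list of length head_dim. Only the
    first rotary_dim = head_dim * partial_rotary_factor entries are rotated
    in (even, odd) pairs; the remaining entries pass through unchanged.
    """
    head_dim = len(x)
    rotary_dim = int(head_dim * partial_rotary_factor)
    out = list(x)
    for i in range(0, rotary_dim, 2):
        freq = theta ** (-i / rotary_dim)
        angle = position * freq
        c, s = math.cos(angle), math.sin(angle)
        out[i] = x[i] * c - x[i + 1] * s
        out[i + 1] = x[i] * s + x[i + 1] * c
    return out
```

At position 0 the rotation is the identity; at any position, the trailing (1 - partial_rotary_factor) fraction of the head is untouched.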
MiniMaxM2Inputs
class max.pipelines.architectures.minimax_m2.MiniMaxM2Inputs(tokens, input_row_offsets, signal_buffers, return_n_logits, lora_grouped_offsets=None, num_active_loras=None, lora_end_idx=None, batch_seq_len=None, lora_ids_kv=None, lora_grouped_offsets_kv=None, data_parallel_splits=None, *, kv_cache_inputs=None, lora_ids=None, lora_ranks=None, hidden_states=None, ep_inputs=(), host_input_row_offsets=None)
Bases: Llama3Inputs
Inputs for MiniMax-M2 with EP and DP support.
Parameters:
- tokens (Buffer)
- input_row_offsets (Buffer)
- signal_buffers (list[Buffer])
- return_n_logits (Buffer)
- lora_grouped_offsets (Buffer | None)
- num_active_loras (Buffer | None)
- lora_end_idx (Buffer | None)
- batch_seq_len (Buffer | None)
- lora_ids_kv (Buffer | None)
- lora_grouped_offsets_kv (Buffer | None)
- data_parallel_splits (Buffer | Sequence[Sequence[int]] | None)
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- lora_ids (Buffer | None)
- lora_ranks (Buffer | None)
- hidden_states (Buffer | list[Buffer] | None)
- ep_inputs (tuple[Buffer, ...])
- host_input_row_offsets (Buffer | None)
buffers
Returns positional Buffer inputs for model ABI calls.
ep_inputs
host_input_row_offsets
MiniMaxM2Model
class max.pipelines.architectures.minimax_m2.MiniMaxM2Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN, return_hidden_states=ReturnHiddenStates.NONE)
Bases: AlwaysSignalBuffersMixin, LlamaModelBase
MiniMax-M2 pipeline model for text generation.
Uses AlwaysSignalBuffersMixin since VocabParallelEmbedding and ColumnParallelLinear always require signal buffers for allreduce.
Parameters:
- pipeline_config (PipelineConfig) β The configuration for this pipeline.
- session (InferenceSession) β The container for the runtime for this model.
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
- return_hidden_states (ReturnHiddenStates)
attention_bias
attention_bias: bool = False
Whether to use attention bias.
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Returns the KV cache params for the pipeline model.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
KVCacheParams
load_model()
load_model(session)
Parameters:
session (InferenceSession)
Return type:
model
model: Model
Compiled and initialized model ready for inference.
norm_method
norm_method: Literal['rms_norm'] | Literal['layer_norm'] = 'rms_norm'
Normalization layer.
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1)
Prepare the inputs for the first pass in multistep execution.
Parameters:
- replica_batches (Sequence[Sequence[TextContext]])
- kv_cache_inputs (Any)
- return_n_logits (int)
Return type:
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepare the inputs for the next token in multistep execution. This should avoid any device synchronization or copy operations.
Parameters:
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
Return type:
state_dict
Weights to load into the model.
MiniMaxM2ReasoningParser
class max.pipelines.architectures.minimax_m2.MiniMaxM2ReasoningParser(think_start_token_id, think_end_token_id, tool_call_start_token_id=None)
Bases: ReasoningParser
MiniMax-M2 reasoning parser for think-token delimited reasoning spans.
Reasoning may end implicitly when a tool call begins (minimax:tool_call).
Reasoning may begin implicitly, without an explicit reasoning start token.
Parameters:
- think_start_token_id (int) – Token id that opens a reasoning span.
- think_end_token_id (int) – Token id that closes a reasoning span.
- tool_call_start_token_id (int | None) – Token id that begins a tool call, if any.
from_tokenizer()
async classmethod from_tokenizer(tokenizer)
Construct a reasoning parser from a tokenizer.
Parameters:
tokenizer (PipelineTokenizer[Any, Any, Any])
Return type:
MiniMaxM2ReasoningParser
stream()
stream(delta_token_ids)
Identify a reasoning span within a streaming delta chunk.
Parameters:
delta_token_ids
Return type:
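The implicit-start and implicit-end behaviors described for this parser can be sketched with a small token-id scanner (an illustrative sketch, not the ReasoningParser API; the function name and return shape are invented):

```python
def find_reasoning_span(delta_token_ids, think_start_id, think_end_id,
                        tool_call_start_id=None, in_reasoning=True):
    """Illustrative sketch of streaming reasoning-span detection.

    Mirrors the documented behaviors: reasoning may begin implicitly
    (in_reasoning defaults to True even without an explicit start token)
    and may end implicitly when a tool-call start token appears.
    Returns the reasoning token ids in this chunk and the updated state.
    """
    span = []
    for tok in delta_token_ids:
        if tok == think_start_id:
            in_reasoning = True
            continue
        if tok == think_end_id or (tool_call_start_id is not None
                                   and tok == tool_call_start_id):
            in_reasoning = False
            continue
        if in_reasoning:
            span.append(tok)
    return span, in_reasoning
```

The carried `in_reasoning` state is what makes the parser streamable: each delta chunk is processed independently, with the state threaded between calls.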