Python module
max.pipelines.architectures.step3p5
Step3p5Config
class max.pipelines.architectures.step3p5.Step3p5Config(*, hidden_size, num_attention_heads, num_key_value_heads, num_hidden_layers, rope_theta, rope_scaling_params, max_seq_len, intermediate_size, interleaved_rope_weights, vocab_size, dtype, model_quantization_encoding, quantization_config, kv_params, return_logits=ReturnLogits.LAST_TOKEN, norm_method='rms_norm', norm_dtype=None, attention_bias=False, rms_norm_eps=None, tie_word_embeddings=False, stacked_mlp=False, stacked_qkv=False, attention_multiplier, embedding_multiplier, residual_multiplier, devices, clip_qkv, quant_config=None, lora_config=None, longrope_scaling_params=None, logits_scaling=1.0, return_hidden_states=ReturnHiddenStates.NONE, use_subgraphs=True, data_parallel_degree=1, num_attention_groups=8, head_dim=128, sliding_window=512, layer_types=<factory>, sliding_num_attention_heads=96, sliding_num_attention_groups=8, per_layer_rope_theta=<factory>, partial_rotary_factors=<factory>, yarn_only_types=<factory>, use_head_wise_attn_gate=True, moe_num_experts=288, moe_top_k=8, moe_intermediate_size=1280, share_expert_dim=1280, moe_layers=<factory>, moe_router_scaling_factor=3.0, norm_expert_weight=True, swiglu_limits=<factory>, swiglu_limits_shared=<factory>)
Bases: Llama3Config
Model configuration for Step-3.5-Flash.
Parameters:
- hidden_size (int)
- num_attention_heads (int)
- num_key_value_heads (int)
- num_hidden_layers (int)
- rope_theta (float)
- rope_scaling_params (Llama3RopeScalingParams | None)
- max_seq_len (int)
- intermediate_size (int)
- interleaved_rope_weights (bool)
- vocab_size (int)
- dtype (DType)
- model_quantization_encoding (QuantizationEncoding | None)
- quantization_config (QuantizationConfig | None)
- kv_params (KVCacheParams)
- return_logits (ReturnLogits)
- norm_method (Literal['rms_norm', 'layer_norm'])
- norm_dtype (DType | None)
- attention_bias (bool)
- rms_norm_eps (float | None)
- tie_word_embeddings (bool)
- stacked_mlp (bool)
- stacked_qkv (bool)
- attention_multiplier (float)
- embedding_multiplier (float)
- residual_multiplier (float)
- devices (list[DeviceRef])
- clip_qkv (float | None)
- quant_config (QuantConfig | None)
- lora_config (LoRAConfig | None)
- longrope_scaling_params (LongRoPEScalingParams | None)
- logits_scaling (float)
- return_hidden_states (ReturnHiddenStates)
- use_subgraphs (bool)
- data_parallel_degree (int)
- num_attention_groups (int)
- head_dim (int)
- sliding_window (int)
- layer_types (list[str])
- sliding_num_attention_heads (int)
- sliding_num_attention_groups (int)
- per_layer_rope_theta (list[float])
- partial_rotary_factors (list[float])
- yarn_only_types (list[str])
- use_head_wise_attn_gate (bool)
- moe_num_experts (int)
- moe_top_k (int)
- moe_intermediate_size (int)
- share_expert_dim (int)
- moe_layers (set[int])
- moe_router_scaling_factor (float)
- norm_expert_weight (bool)
- swiglu_limits (list[float])
- swiglu_limits_shared (list[float])
calculate_attention_multiplier()
static calculate_attention_multiplier(huggingface_config)
Compute the attention scale for Step-3.5.
Parameters:
huggingface_config (AutoConfig) – The HuggingFace configuration object.
Returns:
The attention multiplier value.
Return type:
float
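A minimal usage sketch, assuming hf_config is an AutoConfig already loaded for a Step-3.5 checkpoint:

```python
# Hypothetical sketch; assumes hf_config was loaded via
# AutoConfig.from_pretrained(...) for a Step-3.5 checkpoint.
from max.pipelines.architectures.step3p5 import Step3p5Config

scale = Step3p5Config.calculate_attention_multiplier(hf_config)
```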
construct_kv_params()
static construct_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Construct KV cache parameters for Step-3.5.
Uses the maximum number of KV heads across all layer types, since the KV cache is allocated per-layer and sliding layers may have more KV heads than full attention layers.
Parameters:
- huggingface_config (AutoConfig) – The HuggingFace configuration object.
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- devices (list[DeviceRef]) – Devices to use for the KV cache.
- kv_cache_config (KVCacheConfig) – Configuration for the KV cache.
- cache_dtype (DType) – Data type for the cache.
Returns:
The constructed KVCacheParams object.
Return type:
KVCacheParams
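A minimal sketch of the head-count rule described above, assuming HuggingFace-style attribute names; it mirrors the documented behavior rather than the actual MAX implementation:

```python
# Illustrative only: the KV cache is allocated per layer, so size it for
# the largest KV-head count over the layer types present in the model.
def max_kv_heads(hf_config) -> int:
    kv_heads_by_type = {
        "full_attention": hf_config.num_attention_groups,
        "sliding_attention": hf_config.sliding_num_attention_groups,
    }
    return max(kv_heads_by_type[t] for t in set(hf_config.layer_types))
```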
head_dim
head_dim: int = 128
Dimension of each attention head.
initialize()
classmethod initialize(pipeline_config, model_config=None)
Initializes a Step3p5Config instance from pipeline configuration.
Parameters:
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- model_config (MAXModelConfig | None) – Optional MAX model configuration override.
Returns:
An initialized Step3p5Config instance.
Return type:
Step3p5Config
initialize_from_config()
classmethod initialize_from_config(pipeline_config, huggingface_config, model_config=None)
Initializes a Step3p5Config instance from pipeline and HuggingFace configs.
Parameters:
- pipeline_config (PipelineConfig) – The MAX Engine pipeline configuration.
- huggingface_config (AutoConfig) – The HuggingFace model configuration.
- model_config (MAXModelConfig | None) – Optional MAX model configuration override.
Returns:
An initialized Step3p5Config instance.
Return type:
Step3p5Config
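A minimal end-to-end sketch; pipeline_config is assumed to be an existing PipelineConfig, and the checkpoint repo id is hypothetical:

```python
# Hypothetical sketch; the repo id below is illustrative, not confirmed.
from transformers import AutoConfig

from max.pipelines.architectures.step3p5 import Step3p5Config

hf_config = AutoConfig.from_pretrained(
    "stepfun-ai/Step-3.5-Flash",  # hypothetical repo id
    trust_remote_code=True,
)
config = Step3p5Config.initialize_from_config(pipeline_config, hf_config)
```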
layer_types
Per-layer attention type: ‘full_attention’ or ‘sliding_attention’.
moe_intermediate_size
moe_intermediate_size: int = 1280
Intermediate dimension of each MoE expert MLP.
moe_layers
Set of layer indices that use MoE (vs dense MLP).
moe_num_experts
moe_num_experts: int = 288
Number of routed experts in MoE layers.
moe_router_scaling_factor
moe_router_scaling_factor: float = 3.0
Scaling factor applied to routed expert weights.
moe_top_k
moe_top_k: int = 8
Number of experts activated per token.
norm_expert_weight
norm_expert_weight: bool = True
Whether to normalize top-k expert weights to sum to 1.
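The MoE fields above fit together as follows; this is one common formulation of top-k routing, sketched for illustration, and the exact order of operations in MAX may differ:

```python
import numpy as np

def route_token(router_logits, top_k=8, scaling_factor=3.0, normalize=True):
    # Softmax over all routed experts (one logit per expert, moe_num_experts total).
    probs = np.exp(router_logits - router_logits.max())
    probs /= probs.sum()
    # Keep the moe_top_k highest-probability experts for this token.
    topk_idx = np.argsort(probs)[-top_k:]
    topk_w = probs[topk_idx]
    # norm_expert_weight=True: renormalize the surviving weights to sum to 1.
    if normalize:
        topk_w = topk_w / topk_w.sum()
    # moe_router_scaling_factor rescales the combined expert output.
    return topk_idx, scaling_factor * topk_w
```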
num_attention_groups
num_attention_groups: int = 8
Number of KV head groups; for full attention layers this equals num_key_value_heads.
partial_rotary_factors
Per-layer partial rotary factors (0.5 for full attention layers, 1.0 for sliding attention layers).
per_layer_rope_theta
Per-layer RoPE theta values. If empty, uses a single rope_theta.
share_expert_dim
share_expert_dim: int = 1280
Intermediate dimension of the shared expert MLP.
sliding_num_attention_groups
sliding_num_attention_groups: int = 8
Number of KV head groups for sliding attention layers.
sliding_num_attention_heads
sliding_num_attention_heads: int = 96
Number of attention heads for sliding attention layers.
sliding_window
sliding_window: int = 512
Sliding window size for local attention layers.
swiglu_limits
Per-layer SwiGLU activation clipping thresholds for routed experts. 0.0 means no clipping. Non-zero values clamp intermediate activations.
swiglu_limits_shared
Per-layer SwiGLU activation clipping thresholds for shared experts.
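A sketch of what clipped SwiGLU can look like; the exact clamping convention (which branches are clipped, and how) is an assumption here, not taken from the MAX source:

```python
import numpy as np

def clipped_swiglu(gate, up, limit=0.0):
    if limit > 0.0:
        # Assumed convention: clamp the gate branch from above and the
        # linear branch symmetrically. limit == 0.0 disables clipping.
        gate = np.minimum(gate, limit)
        up = np.clip(up, -limit, limit)
    silu = gate / (1.0 + np.exp(-gate))  # SiLU(x) = x * sigmoid(x)
    return silu * up
```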
use_head_wise_attn_gate
use_head_wise_attn_gate: bool = True
Whether to use per-head sigmoid attention gating (g_proj).
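A shape-level sketch of per-head gating, with gate_logits standing in for the output of the g_proj projection; illustrative only:

```python
import numpy as np

def gate_attention_heads(attn_out, gate_logits):
    # attn_out: [seq_len, num_heads, head_dim]
    # gate_logits: [seq_len, num_heads], e.g. produced by a g_proj linear layer
    g = 1.0 / (1.0 + np.exp(-gate_logits))  # per-head sigmoid gate in (0, 1)
    return attn_out * g[..., None]          # broadcast the gate over head_dim
```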
yarn_only_types
Layer types that use rope_scaling (e.g. [‘full_attention’]).
Step3p5Model
class max.pipelines.architectures.step3p5.Step3p5Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.LAST_TOKEN, return_hidden_states=ReturnHiddenStates.NONE)
Bases: AlwaysSignalBuffersMixin, LlamaModelBase
Step-3.5-Flash pipeline model implementation.
Parameters:
- pipeline_config (PipelineConfig) – The configuration for this pipeline.
- session (InferenceSession) – The container for the runtime for this model.
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
- return_hidden_states (ReturnHiddenStates)
attention_bias
attention_bias: bool = False
Whether to use attention bias.
get_kv_params()
classmethod get_kv_params(huggingface_config, pipeline_config, devices, kv_cache_config, cache_dtype)
Returns the KV cache params for the pipeline model.
Parameters:
- huggingface_config (AutoConfig)
- pipeline_config (PipelineConfig)
- devices (list[DeviceRef])
- kv_cache_config (KVCacheConfig)
- cache_dtype (DType)
Return type:
KVCacheParams
model
model: Model
Compiled and initialized model ready for inference.
norm_method
norm_method: Literal['rms_norm'] | Literal['layer_norm'] = 'rms_norm'
Normalization method (‘rms_norm’ or ‘layer_norm’).
state_dict
Weights to load into the model.
Step3p5PretrainedConfig
class max.pipelines.architectures.step3p5.Step3p5PretrainedConfig(**kwargs)
Bases: PreTrainedConfig
Custom PretrainedConfig for Step-3.5 so AutoConfig.from_pretrained() works.
This is the primary location for mapping Step-3.5 field names to the standard HuggingFace fields that Llama3Config expects. A subset of these aliases is also applied in Step3p5Config._ensure_hf_config_aliases() as a fallback when trust_remote_code=True loads the repo’s own config class instead of this one.
Parameters:
kwargs (object)
model_type
model_type: str = 'step3p5'
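A minimal sketch of the registration that lets AutoConfig.from_pretrained() resolve model_type ‘step3p5’ to this class; the repo id is hypothetical, and MAX may perform the registration itself:

```python
# Hypothetical sketch; AutoConfig.register is a standard transformers API.
from transformers import AutoConfig

from max.pipelines.architectures.step3p5 import Step3p5PretrainedConfig

AutoConfig.register("step3p5", Step3p5PretrainedConfig)
hf_config = AutoConfig.from_pretrained("stepfun-ai/Step-3.5-Flash")  # hypothetical repo id
```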