Python module
max.pipelines.architectures.eagle3_deepseekV3
DeepseekV3 + Eagle3 speculator pipeline.
Eagle3DeepseekV3
class max.pipelines.architectures.eagle3_deepseekV3.Eagle3DeepseekV3(config)
Bases: Module
Eagle3 draft model paired with a DeepseekV3 target.
Parameters:
- config (DeepseekV3Config)
input_types()
input_types(kv_params)
Parameters:
- kv_params (KVCacheParamInterface)

Return type:
tuple[TensorType | BufferType, ...]
Eagle3DeepseekV3Inputs
class max.pipelines.architectures.eagle3_deepseekV3.Eagle3DeepseekV3Inputs(tokens, input_row_offsets, signal_buffers, host_input_row_offsets, batch_context_lengths, draft_tokens=None, draft_kv_blocks=None, seed=None, temperature=None, top_k=None, max_k=None, top_p=None, min_top_p=None, *, kv_cache_inputs=None, lora_ids=None, lora_ranks=None, hidden_states=None, return_n_logits, data_parallel_splits, ep_inputs=())
Bases: DeepseekV3Inputs
Inputs for the Eagle3 + DeepseekV3 unified model.
Parameters:
- tokens (Buffer)
- input_row_offsets (Buffer)
- signal_buffers (list[Buffer])
- host_input_row_offsets (Buffer)
- batch_context_lengths (list[Buffer])
- draft_tokens (Buffer | None)
- draft_kv_blocks (list[Buffer] | None)
- seed (Buffer | None)
- temperature (Buffer | None)
- top_k (Buffer | None)
- max_k (Buffer | None)
- top_p (Buffer | None)
- min_top_p (Buffer | None)
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- lora_ids (Buffer | None)
- lora_ranks (Buffer | None)
- hidden_states (Buffer | list[Buffer] | None)
- return_n_logits (Buffer)
- data_parallel_splits (Buffer)
- ep_inputs (tuple[Buffer, ...])
buffers
Returns positional Buffer inputs for model ABI calls.
draft_kv_blocks
draft_tokens
max_k
min_top_p
Per-batch sampling parameters consumed by the stochastic acceptance
sampler. max_k and min_top_p are 0-d CPU scalars; the rest are
[batch_size] tensors on the primary device.
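The shape contract described above can be sketched with NumPy arrays standing in for the device buffers. The field names come from this class; the batch size, the concrete values, and the rationale in the comments are illustrative assumptions:

```python
import numpy as np

batch_size = 4

# Per-sequence sampling parameters: one value per batch row,
# i.e. [batch_size] tensors on the primary device.
temperature = np.full(batch_size, 0.7, dtype=np.float32)
top_k = np.full(batch_size, 40, dtype=np.int64)
top_p = np.full(batch_size, 0.95, dtype=np.float32)

# Batch-wide reductions: 0-d CPU scalars. Plausibly, max_k bounds the
# top-k workspace and min_top_p lets a kernel skip the top-p pass when
# no row needs it (this rationale is an assumption, not from the docs).
max_k = np.asarray(top_k.max())          # shape ()
min_top_p = np.asarray(top_p.min())      # shape ()

assert max_k.ndim == 0 and min_top_p.ndim == 0
assert temperature.shape == (batch_size,)
```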
seed
Per-execute int64 scalar seed consumed by the stochastic acceptance sampler (and, when enabled, the synthetic benchmarking sampler).
temperature
top_k
top_p
Eagle3DeepseekV3Model
class max.pipelines.architectures.eagle3_deepseekV3.Eagle3DeepseekV3Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.VARIABLE, return_hidden_states=ReturnHiddenStates.EAGLE3)
Bases: DeepseekV3Model
Eagle3 + DeepseekV3: target + draft in one compiled graph.
Loads target weights from a DeepseekV3-shaped main checkpoint and draft
weights from a separate Eagle3 checkpoint
(pipeline_config.draft_model).
Parameters:
- pipeline_config (PipelineConfig)
- session (InferenceSession)
- devices (list[Device])
- kv_cache_config (KVCacheConfig)
- weights (Weights)
- adapter (WeightsAdapter | None)
- return_logits (ReturnLogits)
- return_hidden_states (ReturnHiddenStates)
execute()
execute(model_inputs)
Execute and return all graph outputs for speculative decoding.
Parameters:
- model_inputs (ModelInputs)

Return type:
UnifiedEagleOutputs
load_model()
load_model(session)
Load the model with the given weights.
Parameters:
- session (InferenceSession)

Return type:
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1, draft_tokens=None, draft_kv_cache_buffers=None, **kwargs)
Prepares the initial inputs to be passed to execute().
The inputs and functionality can vary per model. For example, model
inputs could include encoded tensors, unique IDs per tensor when using
a KV cache manager, and kv_cache_inputs (or None if the model does
not use KV cache). This method typically batches encoded tensors,
claims a KV cache slot if needed, and returns the inputs and caches.
Parameters:
- replica_batches (Sequence[Sequence[TextContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)
- draft_tokens (Buffer | None)
- draft_kv_cache_buffers (list[Buffer] | None)

Return type:
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepares the secondary inputs to be passed to execute().
While prepare_initial_token_inputs manages the initial inputs, this function updates them for each step in a multi-step execution pattern.
Parameters:
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)

Return type:
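Together, the two prepare methods support a multi-step decode loop. A minimal sketch of that call pattern with stand-in stubs (none of these stub classes, bodies, or values come from the library; they only mirror the method names):

```python
# Stand-in stubs illustrating the prepare/execute call pattern; the real
# classes live in the MAX pipelines package and have richer signatures.
class StubModel:
    def prepare_initial_token_inputs(self, batch, kv_cache_inputs=None):
        # Batch the encoded inputs once, up front.
        return {"tokens": list(batch), "step": 0}

    def execute(self, model_inputs):
        # Pretend "inference": next token is last token + 1.
        return model_inputs["tokens"][-1] + 1

    def prepare_next_token_inputs(self, next_token, prev_model_inputs):
        # Only the per-step deltas are re-prepared between executions.
        return {"tokens": prev_model_inputs["tokens"] + [next_token],
                "step": prev_model_inputs["step"] + 1}

model = StubModel()
inputs = model.prepare_initial_token_inputs([1, 2, 3])
for _ in range(3):  # multi-step execution pattern
    next_token = model.execute(inputs)
    inputs = model.prepare_next_token_inputs(next_token, inputs)

assert inputs["tokens"] == [1, 2, 3, 4, 5, 6]
```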
Eagle3DeepseekV3Unified
class max.pipelines.architectures.eagle3_deepseekV3.Eagle3DeepseekV3Unified(config, draft_config=None, speculative_config=None)
Bases: Module
Fused nn.Module: merge + target forward + greedy rejection + shift.
The target model returns concatenated hidden states from 3 intermediate
layers (first, middle, last). The draft model fuses these via fc and
generates the next speculative token.
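The fusion step described above can be sketched in NumPy. The choice of three layers (first, middle, last) and the role of fc follow the text; the shapes and the random projection weights are illustrative assumptions:

```python
import numpy as np

seq_len, hidden = 8, 16
rng = np.random.default_rng(0)

# Hidden states captured from three intermediate target layers
# (first, middle, last), each [seq_len, hidden].
h_first = rng.standard_normal((seq_len, hidden))
h_middle = rng.standard_normal((seq_len, hidden))
h_last = rng.standard_normal((seq_len, hidden))

# Concatenate along the feature axis: [seq_len, 3 * hidden].
concat = np.concatenate([h_first, h_middle, h_last], axis=-1)

# A stand-in for fc: project the concatenation back down to the draft
# hidden size before the draft transformer layer consumes it.
w_fc = rng.standard_normal((3 * hidden, hidden))
fused = concat @ w_fc

assert concat.shape == (seq_len, 3 * hidden)
assert fused.shape == (seq_len, hidden)
```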
Parameters:
- config (DeepseekV3Config)
- draft_config (DeepseekV3Config | None)
- speculative_config (SpeculativeConfig | None)
input_types()
input_types(kv_params, draft_kv_params=None)
Input types for the Eagle3 unified graph.
Order: tokens, device_offsets, host_offsets, return_n_logits, data_parallel_splits, signal_buffers, target_kv_cache, batch_context_lengths, target_ep_inputs, draft_tokens, draft_kv_blocks_per_device, seed, temperature, top_k, max_k, top_p, min_top_p.
Parameters:
- kv_params (KVCacheParamInterface)
- draft_kv_params (KVCacheParams | None)

Return type:
tuple[TensorType | BufferType, ...]
convert_eagle3_draft_state_dict()
max.pipelines.architectures.eagle3_deepseekV3.convert_eagle3_draft_state_dict(state_dict, huggingface_config=None, pipeline_config=None)
Convert an Eagle3 draft checkpoint to Eagle3DeepseekV3 module keys.
The Eagle3 checkpoint (nvidia/Kimi-K2.5-Thinking-Eagle3) has keys:
fc.*, layers.0.*, norm.*, lm_head.*.
All weights are loaded; norm and lm_head are kept independent
from the target model. Only embed_tokens is shared from the target.
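A minimal sketch of the kind of key remapping such a converter performs. Only the source key families (fc.*, layers.0.*, norm.*, lm_head.*) are taken from the description above; the `draft_model.` target prefix and the helper itself are hypothetical:

```python
def remap_draft_keys(state_dict):
    """Prefix raw Eagle3 draft keys under a hypothetical 'draft_model.'
    namespace; the real converter maps onto the actual Eagle3DeepseekV3
    module hierarchy."""
    out = {}
    for key, value in state_dict.items():
        # fc.*, layers.0.*, norm.*, lm_head.* are the key families
        # the Eagle3 checkpoint ships with.
        out[f"draft_model.{key}"] = value
    return out

raw = {"fc.weight": 1, "layers.0.self_attn.q_proj.weight": 2,
       "norm.weight": 3, "lm_head.weight": 4}
converted = remap_draft_keys(raw)
assert "draft_model.fc.weight" in converted
assert len(converted) == len(raw)
```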
Parameters:
- state_dict (dict[str, Weights]) – Raw Eagle3 checkpoint.
- huggingface_config (PreTrainedConfig | None)
- pipeline_config (PipelineConfig | None)

Returns:
State dict with keys matching the Eagle3DeepseekV3 module hierarchy.

Return type: