
Python module

max.pipelines.architectures.eagle3_deepseekV3

DeepseekV3 + Eagle3 speculator pipeline.

Eagle3DeepseekV3

class max.pipelines.architectures.eagle3_deepseekV3.Eagle3DeepseekV3(config)

source

Bases: Module

Eagle3 draft model paired with a DeepseekV3 target.

Parameters:

config (DeepseekV3Config)

input_types()

input_types(kv_params)

source

Parameters:

kv_params (KVCacheParamInterface)

Return type:

tuple[TensorType | BufferType, ...]

Eagle3DeepseekV3Inputs

class max.pipelines.architectures.eagle3_deepseekV3.Eagle3DeepseekV3Inputs(tokens, input_row_offsets, signal_buffers, host_input_row_offsets, batch_context_lengths, draft_tokens=None, draft_kv_blocks=None, seed=None, temperature=None, top_k=None, max_k=None, top_p=None, min_top_p=None, *, kv_cache_inputs=None, lora_ids=None, lora_ranks=None, hidden_states=None, return_n_logits, data_parallel_splits, ep_inputs=())

source

Bases: DeepseekV3Inputs

Inputs for the Eagle3 + DeepseekV3 unified model.

Parameters:

buffers

property buffers: tuple[Buffer, ...]

source

Returns positional Buffer inputs for model ABI calls.

draft_kv_blocks

draft_kv_blocks: list[Buffer] | None = None

source

draft_tokens

draft_tokens: Buffer | None = None

source

max_k

max_k: Buffer | None = None

source

min_top_p

min_top_p: Buffer | None = None

source

Per-batch sampling parameters consumed by the stochastic acceptance sampler. max_k and min_top_p are 0-d CPU scalars; the rest are [batch_size] tensors on the primary device.
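The shape distinction above can be sketched with NumPy arrays standing in for device Buffers. This is illustrative only; the dtypes and default values below are assumptions, not MAX's actual choices.

```python
import numpy as np

batch_size = 4

# Per-request sampling tensors: one entry per sequence, on the primary device.
temperature = np.full((batch_size,), 0.7, dtype=np.float32)
top_k = np.full((batch_size,), 40, dtype=np.int64)
top_p = np.full((batch_size,), 0.95, dtype=np.float32)

# Batch-wide reductions used to size the sampling kernel: 0-d CPU scalars.
max_k = np.asarray(top_k.max())        # shape ()
min_top_p = np.asarray(top_p.min())    # shape ()
```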

seed

seed: Buffer | None = None

source

Per-execute int64 scalar seed consumed by the stochastic acceptance sampler (and, when enabled, the synthetic benchmarking sampler).

temperature

temperature: Buffer | None = None

source

top_k

top_k: Buffer | None = None

source

top_p

top_p: Buffer | None = None

source

Eagle3DeepseekV3Model

class max.pipelines.architectures.eagle3_deepseekV3.Eagle3DeepseekV3Model(pipeline_config, session, devices, kv_cache_config, weights, adapter=None, return_logits=ReturnLogits.VARIABLE, return_hidden_states=ReturnHiddenStates.EAGLE3)

source

Bases: DeepseekV3Model

Eagle3 + DeepseekV3: target + draft in one compiled graph.

Loads target weights from a DeepseekV3-shaped main checkpoint and draft weights from a separate Eagle3 checkpoint (pipeline_config.draft_model).

Parameters:

execute()

execute(model_inputs)

source

Execute and return all graph outputs for speculative decoding.

Parameters:

model_inputs (ModelInputs)

Return type:

UnifiedEagleOutputs
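The class docstrings in this module mention a greedy rejection step. The following is a minimal pure-Python sketch of one plausible such scheme (draft tokens verified against the target's argmax), offered as an assumption for intuition rather than the compiled-graph implementation:

```python
# Sketch: emit target tokens while the draft agrees; stop at the first
# mismatch. The target's token is always emitted, so even a mismatch yields
# one corrected token and generation advances every step.
def greedy_accept(draft_tokens: list[int], target_argmax: list[int]) -> list[int]:
    accepted: list[int] = []
    for draft, target in zip(draft_tokens, target_argmax):
        accepted.append(target)
        if draft != target:
            break
    return accepted
```

For example, `greedy_accept([5, 7, 9], [5, 7, 8])` yields `[5, 7, 8]`: two draft tokens verified, one corrected.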

load_model()

load_model(session)

source

Load the model with the given weights.

Parameters:

session (InferenceSession)

Return type:

Model

prepare_initial_token_inputs()

prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1, draft_tokens=None, draft_kv_cache_buffers=None, **kwargs)

source

Prepares the initial inputs to be passed to execute().

The inputs and functionality can vary per model. For example, model inputs may include encoded tensors, unique IDs per tensor when a KV cache manager is used, and kv_cache_inputs (or None if the model does not use a KV cache). This method typically batches the encoded tensors, claims a KV cache slot if needed, and returns the inputs along with the caches.
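A hypothetical illustration of the ragged layout implied by the `tokens` and `input_row_offsets` fields: variable-length sequences are flattened into one 1-D buffer, with offsets marking where each row starts and ends. This is an assumption about the layout, not MAX's batching code.

```python
def flatten_batch(sequences: list[list[int]]) -> tuple[list[int], list[int]]:
    # Flatten all sequences into one token buffer; offsets[i]:offsets[i+1]
    # then recovers row i.
    tokens: list[int] = []
    offsets: list[int] = [0]
    for seq in sequences:
        tokens.extend(seq)
        offsets.append(len(tokens))
    return tokens, offsets
```

For instance, `flatten_batch([[1, 2, 3], [4, 5]])` yields `([1, 2, 3, 4, 5], [0, 3, 5])`.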

Parameters:

Return type:

Eagle3DeepseekV3Inputs

prepare_next_token_inputs()

prepare_next_token_inputs(next_tokens, prev_model_inputs)

source

Prepares the secondary inputs to be passed to execute().

While prepare_initial_token_inputs is responsible for managing the initial inputs, this method updates those inputs for each step of a multi-step execution pattern.

Parameters:

Return type:

Eagle3DeepseekV3Inputs

Eagle3DeepseekV3Unified

class max.pipelines.architectures.eagle3_deepseekV3.Eagle3DeepseekV3Unified(config, draft_config=None, speculative_config=None)

source

Bases: Module

Fused nn.Module: merge + target forward + greedy rejection + shift.

The target model returns concatenated hidden states from three intermediate layers (first, middle, last). The draft model fuses these via its fc layer and generates the next speculative token.
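The fusion step above can be sketched in pure Python, without MAX types: per-token hidden states from the three target layers are concatenated and projected back to the model width by the draft's fc layer. The `[H][3*H]` weight shape is an assumption for illustration.

```python
def fuse_hidden_states(h_first: list[float],
                       h_mid: list[float],
                       h_last: list[float],
                       fc_weight: list[list[float]]) -> list[float]:
    # Concatenate the three H-sized vectors into one 3*H-sized vector,
    # then apply the fc projection (fc_weight: H rows of 3*H columns).
    concat = h_first + h_mid + h_last
    return [sum(w * x for w, x in zip(row, concat)) for row in fc_weight]
```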

Parameters:

input_types()

input_types(kv_params, draft_kv_params=None)

source

Input types for the Eagle3 unified graph.

Order: tokens, device_offsets, host_offsets, return_n_logits, data_parallel_splits, signal_buffers, target_kv_cache, batch_context_lengths, target_ep_inputs, draft_tokens, draft_kv_blocks_per_device, seed, temperature, top_k, max_k, top_p, min_top_p.

Parameters:

Return type:

tuple[TensorType | BufferType, ...]

convert_eagle3_draft_state_dict()

max.pipelines.architectures.eagle3_deepseekV3.convert_eagle3_draft_state_dict(state_dict, huggingface_config=None, pipeline_config=None)

source

Convert an Eagle3 draft checkpoint to Eagle3DeepseekV3 module keys.

The Eagle3 checkpoint (nvidia/Kimi-K2.5-Thinking-Eagle3) has keys: fc.*, layers.0.*, norm.*, lm_head.*.

All weights are loaded; norm and lm_head are kept independent of the target model. Only embed_tokens is shared with the target.
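The renaming this converter performs can be sketched as a simple prefix rewrite: draft checkpoint keys (fc.*, layers.0.*, norm.*, lm_head.*) are re-rooted under the unified module's draft submodule. The `draft_model.` prefix below is an assumed placeholder, not the actual MAX key hierarchy.

```python
def remap_draft_keys(state_dict: dict[str, object]) -> dict[str, object]:
    # Re-root every draft checkpoint key under the (hypothetical)
    # "draft_model." submodule prefix, leaving the values untouched.
    return {f"draft_model.{key}": value for key, value in state_dict.items()}
```

For example, `remap_draft_keys({"fc.weight": w})` yields `{"draft_model.fc.weight": w}`.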

Parameters:

  • state_dict (dict[str, Weights]) – Raw Eagle3 checkpoint.
  • huggingface_config (PreTrainedConfig | None)
  • pipeline_config (PipelineConfig | None)

Returns:

State dict with keys matching Eagle3DeepseekV3 module hierarchy.

Return type:

dict[str, WeightData]