Python module
max.pipelines.architectures.unified_mtp_deepseekV3
DeepSeek-V3 multi-token prediction draft model for speculative decoding with unified graph compilation.
UnifiedMTPDeepseekV3Inputs
class max.pipelines.architectures.unified_mtp_deepseekV3.UnifiedMTPDeepseekV3Inputs(tokens, input_row_offsets, signal_buffers, host_input_row_offsets, batch_context_lengths, draft_tokens=None, draft_kv_blocks=None, seed=None, temperature=None, top_k=None, max_k=None, top_p=None, min_top_p=None, in_thinking_phase=None, *, kv_cache_inputs=None, lora_ids=None, lora_ranks=None, hidden_states=None, return_n_logits, data_parallel_splits, ep_inputs=())
Bases: DeepseekV3Inputs
Inputs for the UnifiedMTPDeepseekV3 model.
Parameters:
- tokens (Buffer)
- input_row_offsets (Buffer)
- signal_buffers (list[Buffer])
- host_input_row_offsets (Buffer)
- batch_context_lengths (list[Buffer])
- draft_tokens (Buffer | None)
- draft_kv_blocks (list[Buffer] | None)
- seed (Buffer | None)
- temperature (Buffer | None)
- top_k (Buffer | None)
- max_k (Buffer | None)
- top_p (Buffer | None)
- min_top_p (Buffer | None)
- in_thinking_phase (Buffer | None)
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- lora_ids (Buffer | None)
- lora_ranks (Buffer | None)
- hidden_states (Buffer | list[Buffer] | None)
- return_n_logits (Buffer)
- data_parallel_splits (Buffer)
- ep_inputs (tuple[Buffer, ...])
buffers
Returns positional Buffer inputs for model ABI calls.
draft_kv_blocks
draft_tokens
in_thinking_phase
Per-batch boolean flag marking rows currently inside a
<think>...</think> block; consumed by relaxed acceptance.
max_k
min_top_p
Per-batch sampling parameters consumed by the stochastic acceptance
sampler. max_k and min_top_p are 0-d CPU scalars; the rest are
[batch_size] tensors on the primary device.
seed
Per-execute int64 scalar seed consumed by the stochastic acceptance sampler (and, when enabled, the synthetic benchmarking sampler).
temperature
top_k
top_p
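The sampling fields above feed the stochastic acceptance step of speculative decoding. As an illustration only (this is the standard accept/resample rule, not the MAX implementation), the decision for a single draft token can be sketched as:

```python
import numpy as np

def stochastic_accept(draft_token, p_target, p_draft, rng):
    """Accept draft_token with probability min(1, p_target/p_draft);
    on rejection, resample from the normalized residual
    max(p_target - p_draft, 0). Illustrative sketch only."""
    pt, pd = p_target[draft_token], p_draft[draft_token]
    if rng.random() < min(1.0, pt / pd):
        return draft_token, True
    residual = np.maximum(p_target - p_draft, 0.0)
    residual /= residual.sum()
    return int(rng.choice(len(p_target), p=residual)), False
```

When the target and draft distributions agree on the drafted token, acceptance is certain; the residual resampling on rejection is what keeps the overall output distribution equal to the target's.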
UnifiedMTPDeepseekV3Model
class max.pipelines.architectures.unified_mtp_deepseekV3.UnifiedMTPDeepseekV3Model(*args, **kwargs)
Bases: DeepseekV3Model
DeepseekV3 with MTP: merge + target + rejection + shift in one graph.
execute()
execute(model_inputs)
Execute and return all 3 graph outputs for speculative decoding.
Parameters:
- model_inputs (ModelInputs)
Return type:
UnifiedEagleOutputs
load_model()
load_model(session)
Load the model with the given weights.
Parameters:
- session (InferenceSession)
Return type:
prepare_initial_token_inputs()
prepare_initial_token_inputs(replica_batches, kv_cache_inputs=None, return_n_logits=1, draft_tokens=None, draft_kv_cache_buffers=None, **kwargs)
Prepares the initial inputs to be passed to execute().
The inputs and functionality can vary per model. For example, model
inputs could include encoded tensors, unique IDs per tensor when using
a KV cache manager, and kv_cache_inputs (or None if the model does
not use KV cache). This method typically batches encoded tensors,
claims a KV cache slot if needed, and returns the inputs and caches.
Parameters:
- replica_batches (Sequence[Sequence[TextContext]])
- kv_cache_inputs (KVCacheInputs[Buffer, Buffer] | None)
- return_n_logits (int)
- draft_tokens (Buffer | None)
- draft_kv_cache_buffers (list[Buffer] | None)
Return type:
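The tokens and input_row_offsets fields describe a ragged batch: every sequence flattened into one 1-D token array, with an exclusive prefix sum of lengths marking row boundaries. A minimal NumPy sketch of that layout (illustrative only; the actual batching happens inside prepare_initial_token_inputs):

```python
import numpy as np

def ragged_row_offsets(token_lists):
    """Flatten variable-length sequences into a single tokens array plus
    row offsets, so that row i spans tokens[offsets[i]:offsets[i + 1]].
    Illustrative sketch of a ragged-batch layout."""
    lengths = [len(t) for t in token_lists]
    offsets = np.zeros(len(lengths) + 1, dtype=np.int64)
    offsets[1:] = np.cumsum(lengths)
    tokens = np.concatenate(
        [np.asarray(t, dtype=np.int64) for t in token_lists]
    )
    return tokens, offsets
```

For three rows of lengths 3, 1, and 2, the offsets array is [0, 3, 4, 6], so no padding is needed to pack uneven rows into one buffer.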
prepare_next_token_inputs()
prepare_next_token_inputs(next_tokens, prev_model_inputs)
Prepares the secondary inputs to be passed to execute().
While prepare_initial_token_inputs manages the initial inputs, this function
updates the inputs for each step in a multi-step execution pattern.
Parameters:
- next_tokens (Buffer)
- prev_model_inputs (ModelInputs)
Return type:
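Together, the two prepare methods typically drive a loop of the following shape (a sketch against a hypothetical model interface, not actual MAX pipeline code; real code would sample from the execute outputs rather than use them directly):

```python
def multi_step_decode(model, batch, num_steps):
    """Illustrative driver for the initial/next input split: build the
    inputs once, then update them on every subsequent step."""
    inputs = model.prepare_initial_token_inputs(batch)
    generated = []
    for _ in range(num_steps):
        outputs = model.execute(inputs)
        next_tokens = outputs  # real code selects/samples tokens here
        generated.append(next_tokens)
        inputs = model.prepare_next_token_inputs(next_tokens, inputs)
    return generated
```

The design keeps per-step work cheap: only the fields that change between steps (next tokens, cache state) are rebuilt, while batch-level setup is paid once.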