Python module
log_probabilities
compute_log_probabilities_ragged()
max.pipelines.lib.log_probabilities.compute_log_probabilities_ragged(*, input_row_offsets: ndarray, logits: ndarray | None, next_token_logits: ndarray, tokens: ndarray, sampled_tokens: ndarray, batch_top_n: Sequence[int], batch_echo: Sequence[bool]) → list[max.pipelines.core.interfaces.response.LogProbabilities | None]
Computes the log probabilities for ragged model outputs.
Parameters:
- input_row_offsets – Token offsets into token-indexed buffers, by batch index. Should have 1 more element than there are batches (batch n is token indices [input_row_offsets[n], input_row_offsets[n+1])).
- logits – (tokens, vocab_dim) tensor of logits over the vocabulary for each input token. The token dimension is mapped to batches using input_row_offsets.
- next_token_logits – (batch, vocab_dim) tensor of logits for the next token of each batch element.
- tokens – (tokens,) tensor of input token IDs, mapped to batches using input_row_offsets.
- sampled_tokens – (batch_dim,) tensor of the sampled token for each batch element.
- batch_top_n – Number of top log probabilities to return per input in the batch. For any element where top_n == 0, the LogProbabilities entry is skipped.
- batch_echo – Whether to include input tokens in the returned log probabilities.
Returns:
Computed log probabilities for each item in the batch.
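A minimal usage sketch follows, assuming NumPy arrays and the import path shown in the signature above. The shapes, dtypes, and offset values are illustrative only; the dtype choices are an assumption rather than a documented requirement.

```python
import numpy as np

from max.pipelines.lib.log_probabilities import compute_log_probabilities_ragged

vocab_dim = 8

# Two requests packed into one ragged buffer: 3 tokens and 2 tokens.
# Half-open convention: batch n covers token indices [offsets[n], offsets[n+1]).
input_row_offsets = np.array([0, 3, 5], dtype=np.uint32)
tokens = np.array([11, 4, 7, 2, 9], dtype=np.int64)                   # (tokens,)
logits = np.random.rand(5, vocab_dim).astype(np.float32)              # (tokens, vocab_dim)
next_token_logits = np.random.rand(2, vocab_dim).astype(np.float32)   # (batch, vocab_dim)
sampled_tokens = np.array([3, 6], dtype=np.int64)                      # (batch_dim,)

results = compute_log_probabilities_ragged(
    input_row_offsets=input_row_offsets,
    logits=logits,
    next_token_logits=next_token_logits,
    tokens=tokens,
    sampled_tokens=sampled_tokens,
    batch_top_n=[5, 0],        # the second request asks for no log probabilities
    batch_echo=[False, False],
)

# results[0] holds log probabilities for the first request;
# results[1] is skipped (None) because its top_n == 0.
for item in results:
    print(item)
```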
log_softmax()
max.pipelines.lib.log_probabilities.log_softmax(x: ndarray, axis: int = -1) → ndarray
Compute the logarithm of the softmax function.
This implementation uses the identity log(softmax(x)) = x - log(sum(exp(x))) with numerical stability improvements to prevent overflow/underflow.
Parameters:
- x – Input array.
- axis – Axis along which to compute values.
Returns:
Array with same shape as x, representing log(softmax(x))
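As a point of reference, here is a NumPy sketch of the numerically stable identity described above (shifting by the maximum before exponentiating). It illustrates the documented behavior and is not necessarily the library's exact implementation.

```python
import numpy as np

def log_softmax_reference(x: np.ndarray, axis: int = -1) -> np.ndarray:
    """log(softmax(x)) = x - log(sum(exp(x))), stabilized by subtracting the max."""
    x_max = np.max(x, axis=axis, keepdims=True)
    shifted = x - x_max  # largest value becomes 0, so exp() cannot overflow
    log_sum_exp = np.log(np.sum(np.exp(shifted), axis=axis, keepdims=True))
    return shifted - log_sum_exp

# Each row of exp(log_softmax(x)) sums to 1, even for inputs that would
# overflow a naive exp(x).
x = np.array([[1000.0, 1001.0, 1002.0]])
print(np.exp(log_softmax_reference(x)).sum(axis=-1))  # -> [1.]
```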