Python module
registry
Model registry for tracking various model variants.
PipelineRegistry
class max.pipelines.registry.PipelineRegistry(architectures: list[max.pipelines.registry.SupportedArchitecture])
register()
register(architecture: SupportedArchitecture)
Add a new architecture to the registry.
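A minimal usage sketch of the registry itself. The architecture object here is hypothetical; the SupportedArchitecture and SupportedVersion sketches later on this page show how one is built.

```python
from max.pipelines.registry import PipelineRegistry

# Start with an empty registry; architectures can also be passed in at
# construction time via the `architectures` list.
registry = PipelineRegistry(architectures=[])

# `llama_arch` is a hypothetical SupportedArchitecture (see the sketch
# under SupportedArchitecture below).
# registry.register(llama_arch)

# reset() clears all registered architectures.
registry.reset()
```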
reset()
reset() → None
Reset the registry, clearing all registered architectures.
retrieve()
retrieve(pipeline_config: PipelineConfig) → tuple[max.pipelines.interfaces.PipelineTokenizer, max.pipelines.interfaces.TokenGenerator]
Retrieve the tokenizer and token generator for the given pipeline config.
retrieve_factory()
retrieve_factory(pipeline_config: PipelineConfig) → tuple[max.pipelines.interfaces.PipelineTokenizer, Callable[[], max.pipelines.interfaces.TokenGenerator]]
Retrieve the tokenizer and a factory function that constructs the token generator for the given pipeline config.
validate_pipeline_config()
validate_pipeline_config(pipeline_config: PipelineConfig) → PipelineConfig
Update the pipeline config with appropriate values where they are not provided. If an invalid config is provided, raise an error with a detailed reason.
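A sketch of the retrieval flow, assuming a PipelineConfig importable from max.pipelines.config and constructed elsewhere (its constructor arguments are not covered on this page):

```python
from max.pipelines.config import PipelineConfig
from max.pipelines.registry import PipelineRegistry

def build_pipeline(registry: PipelineRegistry, config: PipelineConfig):
    # Fill in defaults and fail fast on an invalid configuration.
    config = registry.validate_pipeline_config(config)

    # Eager construction: tokenizer plus a ready token generator.
    tokenizer, token_generator = registry.retrieve(config)

    # Deferred construction: tokenizer plus a zero-argument factory that
    # builds the token generator on demand (e.g. inside a worker process).
    tokenizer, factory = registry.retrieve_factory(config)
    token_generator = factory()

    return tokenizer, token_generator
```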
SupportedArchitecture
class max.pipelines.registry.SupportedArchitecture(name: str, versions: list[max.pipelines.registry.SupportedVersion], default_version: str, pipeline_model: Type[PipelineModel], tokenizer: Type[TextTokenizer | TextAndVisionTokenizer], default_weights_format: WeightsFormat, weight_converters: dict[max.pipelines.config.WeightsFormat, Type[max.graph.weights.weights.WeightsConverter]] | None = None)
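A hedged sketch of defining an architecture against this signature. MyPipelineModel and MyTextTokenizer are hypothetical classes, llama_3_version is the SupportedVersion built in the sketch just below, and the WeightsFormat member name is an assumption:

```python
from max.pipelines.registry import SupportedArchitecture
from max.pipelines.config import WeightsFormat

llama_arch = SupportedArchitecture(
    name="LlamaForCausalLM",        # hypothetical architecture name
    versions=[llama_3_version],     # built in the SupportedVersion sketch below
    default_version="3.1",          # must match the name of one entry in `versions`
    pipeline_model=MyPipelineModel,  # hypothetical PipelineModel subclass
    tokenizer=MyTextTokenizer,       # hypothetical TextTokenizer subclass
    default_weights_format=WeightsFormat.safetensors,  # member name assumed
)
```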
SupportedVersion
class max.pipelines.registry.SupportedVersion(name: str, encodings: dict[max.pipelines.config.SupportedEncoding, tuple[list[max.pipelines.hf_utils.HuggingFaceFile], list[max.pipelines.kv_cache.cache_params.KVCacheStrategy]]], default_encoding: SupportedEncoding)
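A hedged sketch of describing one model version. The encodings dict maps each supported encoding to the Hugging Face files to fetch and the KV cache strategies it supports; the enum member names and HuggingFaceFile arguments below are assumptions for illustration:

```python
from max.pipelines.config import SupportedEncoding
from max.pipelines.hf_utils import HuggingFaceFile
from max.pipelines.kv_cache.cache_params import KVCacheStrategy
from max.pipelines.registry import SupportedVersion

llama_3_version = SupportedVersion(
    name="3.1",
    encodings={
        SupportedEncoding.bfloat16: (
            # Weight file(s) to fetch from Hugging Face for this encoding
            # (HuggingFaceFile arguments are assumed for illustration).
            [HuggingFaceFile("meta-llama/Llama-3.1-8B-Instruct", "model.safetensors")],
            # KV cache strategies this encoding supports.
            [KVCacheStrategy.CONTINUOUS, KVCacheStrategy.NAIVE],
        ),
    },
    default_encoding=SupportedEncoding.bfloat16,
)
```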
default_cache_strategy()
default_cache_strategy(encoding: SupportedEncoding) → KVCacheStrategy
Get the default cache strategy for an encoding.
huggingface_files()
huggingface_files(encoding: SupportedEncoding) → list[max.pipelines.hf_utils.HuggingFaceFile]
Get the Hugging Face files required for an encoding.
is_supported_cache_strategy()
is_supported_cache_strategy(encoding: SupportedEncoding, cache_strategy: KVCacheStrategy) → bool
Check whether an encoding supports a specific cache_strategy.
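Continuing the SupportedVersion sketch above, a version can be queried before committing to a cache strategy (enum member names remain assumptions):

```python
encoding = SupportedEncoding.bfloat16

# Default KV cache strategy for this encoding.
strategy = llama_3_version.default_cache_strategy(encoding)

# Check support before forcing a specific strategy, then fetch the
# file list for the encoding.
if llama_3_version.is_supported_cache_strategy(encoding, KVCacheStrategy.NAIVE):
    files = llama_3_version.huggingface_files(encoding)
```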