Python module

lora_config

MAX LoRA configuration.

LoRAConfig

class max.pipelines.lib.lora_config.LoRAConfig(enable_lora: 'bool' = False, lora_paths: 'list[str]' = <factory>, max_lora_rank: 'int' = 16, max_num_loras: 'int' = 1, _config_file_section_name: 'str' = 'lora_config')

Parameters:

  • enable_lora (bool)
  • lora_paths (list[str])
  • max_lora_rank (int)
  • max_num_loras (int)
  • _config_file_section_name (str)
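
To illustrate how these fields fit together, the sketch below re-creates the documented shape as a plain dataclass. This is a hypothetical stand-in for illustration only; in a real deployment you would import `LoRAConfig` from `max.pipelines.lib.lora_config`. Field names and defaults mirror the signature above.

```python
from dataclasses import dataclass, field

# Hypothetical stand-in mirroring the documented LoRAConfig
# fields and their defaults (not the real MAX class).
@dataclass
class LoRAConfig:
    enable_lora: bool = False
    lora_paths: list[str] = field(default_factory=list)
    max_lora_rank: int = 16
    max_num_loras: int = 1
    _config_file_section_name: str = "lora_config"

# Enable LoRA with two statically registered adapter paths
# (the paths are placeholders, not real adapters).
config = LoRAConfig(
    enable_lora=True,
    lora_paths=["adapters/summarize", "adapters/translate"],
    max_num_loras=2,
)
```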

enable_lora

enable_lora: bool = False

Enables LoRA on the server.

help()

static help()

Documentation for this config class. Returns a dictionary mapping config option names to their descriptions.

Return type:

dict[str, str]
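
As a sketch of the documented contract, the hypothetical class below implements a `help()` static method returning a `dict[str, str]` of option names to descriptions. The descriptions are taken from this page; the real MAX implementation may generate them differently.

```python
# Hypothetical sketch of a help() method with the documented
# return type dict[str, str]; not the real MAX implementation.
class LoRAConfigDocs:
    _descriptions: dict[str, str] = {
        "enable_lora": "Enables LoRA on the server.",
        "lora_paths": "List of statically defined LoRA paths.",
        "max_lora_rank": "Maximum rank of all possible LoRAs.",
        "max_num_loras": "The maximum number of active LoRAs in a batch.",
    }

    @staticmethod
    def help() -> dict[str, str]:
        # Return a copy so callers cannot mutate the class-level mapping.
        return dict(LoRAConfigDocs._descriptions)

options = LoRAConfigDocs.help()
```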

lora_paths

lora_paths: list[str]

List of statically defined LoRA paths.

max_lora_rank

max_lora_rank: int = 16

The maximum rank supported for any LoRA adapter.

max_num_loras

max_num_loras: int = 1

The maximum number of active LoRAs in a batch.

This controls how many LoRA adapters can be active simultaneously during inference. Lower values reduce memory usage but limit concurrent adapter usage.
