intelligence.zenith_tune.tuners.preset
PresetTuner: Preset-based hyperparameter optimization.
This module provides PresetTuner, which uses Preset + Strategy + Evaluator to automatically optimize hyperparameters without requiring users to write objective functions.
TrialExhausted Objects
class TrialExhausted(Exception)
Raised by eval_fn when the trial budget has been exceeded.
ConsecutiveDuplicateLimitExceeded Objects
class ConsecutiveDuplicateLimitExceeded(Exception)
Raised when consecutive duplicate cache hits exceed the configured limit.
ConsecutivePruneLimitExceeded Objects
class ConsecutivePruneLimitExceeded(Exception)
Raised when consecutive pruned trials exceed the configured limit.
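Both stop conditions amount to a streak counter that resets whenever a trial actually executes. A minimal self-contained sketch of that logic (using a single stand-in exception rather than the two classes above, and string markers in place of real trial outcomes):

```python
class ConsecutiveLimitExceeded(Exception):
    """Illustrative stand-in for the duplicate/prune limit exceptions above."""


def guarded_loop(outcomes: list[str], max_consecutive: int) -> int:
    """Count executed trials, raising if too many consecutive skips occur.

    Each element of `outcomes` is "run" (trial executed) or "skip"
    (duplicate cache hit or pruned trial).
    """
    consecutive = 0
    completed = 0
    for outcome in outcomes:
        if outcome == "skip":
            consecutive += 1  # duplicate cache hit or pruned trial
            if consecutive > max_consecutive:
                raise ConsecutiveLimitExceeded(
                    f"{consecutive} consecutive skipped trials "
                    f"(limit {max_consecutive})"
                )
        else:
            consecutive = 0  # any executed trial resets the streak
            completed += 1
    return completed
```

This is why an exhausted or heavily pruned search space terminates with an exception instead of spinning forever.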
PresetTuner Objects
class PresetTuner()
Preset-based hyperparameter tuner.
PresetTuner orchestrates the optimization loop using three components:
- Preset: Defines what parameters to optimize and how to apply them
- Strategy: Determines how to explore the search space
- Evaluator: Evaluates objective values from execution results
Unlike Manual-based tuners (GeneralTuner, etc.), PresetTuner does not use Optuna's trial.suggest_*() API. Instead, Optuna is used only for history management, persistence, and visualization.
Example:
preset = MyFrameworkPreset()
tuner = PresetTuner(
    command="python train.py",
    preset=preset,
    study_name="my_study",
)
best_value, best_params = tuner.optimize(n_trials=100)
__init__
def __init__(command: str,
preset: TuningPreset | str,
strategy: TuningStrategy | str | None = None,
evaluator: TuningEvaluator | str | None = None,
output_dir: str | None = None,
study_name: str | None = None,
db_path: str | None = None,
dump_env: bool = False,
tune_args: dict[str, str] | None = None)
Initialize the PresetTuner.
Arguments:
- command - The base command string to tune.
- preset - The tuning preset instance or name. If a string, it is looked up in the preset registry.
- strategy - The tuning strategy instance or name. If a string, it is looked up in the strategy registry with tune_args. If None, uses the preset's default strategy.
- evaluator - The tuning evaluator instance or name. If a string, it is looked up in the evaluator registry with tune_args. If None, uses the preset's recommended evaluator.
- output_dir - Directory to store study results.
- study_name - Name of the Optuna study.
- db_path - Path to an existing database file for resuming studies.
- dump_env - If True, dump all environment variables in trial logs.
- tune_args - Keyword arguments passed to the preset, strategy, and evaluator constructors when resolved by name.
optimize
def optimize(
n_trials: int,
skip_default: bool = False,
skip_duplicate: bool = True,
max_consecutive_duplicates: int = DEFAULT_MAX_CONSECUTIVE_DUPLICATES,
max_consecutive_pruned: int = 1000,
timeout: float | None = None,
dynamic_timeout: float | None = None
) -> tuple[float | None, dict[str, Any]]
Run the optimization loop.
Each trial applies parameters to the command via the preset, executes it with subprocess.run, and evaluates the output with the evaluator.
When the preset provides default parameters, a baseline trial using
those defaults is run first (before the optimization trials). Use
skip_default=True to skip this baseline.
Arguments:
- n_trials - Number of unique command executions. Duplicate parameter sets (when skip_duplicate=True) do not count toward this budget.
- skip_default - If True, skip the baseline trial with default params.
- skip_duplicate - If True, skip command execution for duplicate parameter sets and return the cached objective value.
- max_consecutive_duplicates - Maximum number of consecutive duplicate cache hits before stopping. Prevents infinite loops when the search space is exhausted.
- max_consecutive_pruned - Maximum number of consecutive pruned trials before raising ConsecutivePruneLimitExceeded. Prevents infinite loops when all parameters are pruned.
- timeout - Static timeout per trial in seconds. None means no static timeout.
- dynamic_timeout - Multiplier for the dynamic timeout. When set, the timeout is computed as best_duration * dynamic_timeout. None means no dynamic timeout. When both timeout and dynamic_timeout are set, the effective timeout is min(timeout, best_duration * dynamic_timeout).
- Note - When resuming via --db-path, the dynamic timeout only considers trials from the current session; durations from previous sessions stored in the database are not used.
Returns:
Tuple of (best_value, best_params). Returns (None, empty dict) if all trials failed.
apply_from_db
@classmethod
def apply_from_db(cls,
db_path: str,
preset: TuningPreset | str,
command: str,
tune_args: dict[str, str] | None = None) -> int
Load an existing study and execute the command with the best parameters.
Arguments:
- db_path - Path to the database file.
- preset - The tuning preset instance or name.
- command - The base command string to apply parameters to.
- tune_args - Keyword arguments passed to the preset constructor.
Returns:
The subprocess return code.
Raises:
- FileNotFoundError - If the database file does not exist.
- ValueError - If the study has no completed trials.
analyze_from_db
@classmethod
def analyze_from_db(cls, db_path: str) -> None
Load an existing study from a database and run analysis.
Arguments:
db_path- Path to the database file.
analyze
def analyze() -> None
Analyze the optimization results and generate visualizations.