# Customization
This page explains how to customize the behavior of no-code tuning. By creating custom presets, strategies, and evaluators, you can support any framework or metric.
## Creating a Custom Preset
To create a preset for your own framework, inherit from `TuningPreset` and register the class with `PresetRegistry`.
```python
from aibooster.intelligence.zenith_tune.presets.base import (
    IntParameter,
    FloatParameter,
    CategoricalParameter,
    BoolParameter,
    TuningPreset,
)
from aibooster.intelligence.zenith_tune.presets.registry import PresetRegistry
from aibooster.intelligence.zenith_tune.command import CommandBuilder
from aibooster.intelligence.zenith_tune.strategies.random import RandomStrategy
from aibooster.intelligence.zenith_tune.evaluators.regex import RegexEvaluator


@PresetRegistry.register("my_framework")
class MyFrameworkPreset(TuningPreset):
    def get_search_space(self):
        return {
            "batch_size": IntParameter(low=8, high=256, step=8, default=32),
            "learning_rate": FloatParameter(low=1e-5, high=1e-2, log=True),
            "use_amp": BoolParameter(default=True),
            "optimizer": CategoricalParameter(choices=["adam", "sgd", "adamw"]),
        }

    def apply_parameters(self, command, params):
        cmd = CommandBuilder(command)
        cmd.update(f"--batch-size {params['batch_size']}")
        cmd.update(f"--lr {params['learning_rate']}")
        cmd.update(f"--optimizer {params['optimizer']}")
        if params["use_amp"]:
            cmd.append("--fp16")
        else:
            cmd.remove("--fp16")
        return cmd.get_command()

    def get_recommended_strategy(self):
        return RandomStrategy()

    def get_recommended_evaluator(self):
        return RegexEvaluator(
            pattern=r"loss:\s*([\d.]+)",
            selector="last",
        )
```
### TuningPreset Methods
The following methods must be implemented:
| Method | Description |
|---|---|
| `get_search_space()` | Define the parameters to search and their ranges |
| `apply_parameters(command, params)` | Apply explored parameters to the command and return the modified command string |
| `get_recommended_strategy()` | Return the default strategy |
| `get_recommended_evaluator()` | Return the default evaluator |
The following method can optionally be overridden:
| Method | Description |
|---|---|
| `prune(params)` | Skip invalid parameter combinations before execution. Return `True` to skip |
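For illustration, a `prune` override for the search space above might reject combinations known to be wasteful. The constraint here is hypothetical, and the logic is shown as a standalone function so it runs outside the preset class:

```python
# Hypothetical pruning rule for the search space shown earlier: plain SGD is
# assumed (for illustration only) to diverge at high learning rates, so those
# combinations are skipped before the trial is launched. In a real preset this
# would be the body of prune(self, params) on the TuningPreset subclass.
def prune(params):
    # Return True to skip this parameter combination.
    return params["optimizer"] == "sgd" and params["learning_rate"] > 1e-3
```

Returning `True` skips the trial entirely, so no compute is spent on a combination you already know is invalid.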
### Mapping `--args` to the Constructor
Values passed via `--args key1=val1,key2=val2` are mapped to the preset's `__init__` keyword arguments. By declaring keyword-only arguments without defaults in the constructor, you can require users to supply those values via `--args` on the CLI.
```python
@PresetRegistry.register("my_framework")
class MyFrameworkPreset(TuningPreset):
    def __init__(self, *, n_gpus: int, lr_min: float = 1e-5, **kwargs):
        super().__init__(**kwargs)  # forward any remaining options to the base class
        self._n_gpus = int(n_gpus)
        self._lr_min = float(lr_min)
```
```bash
zenithtune optimize --preset my_framework --args n_gpus=8,lr_min=1e-4 -- bash train.sh
```
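A minimal sketch of how such an `--args` string could be split into keyword arguments (the real CLI parsing may differ in detail):

```python
# Minimal sketch of --args parsing: split on commas, then on the first "=".
# Values arrive as strings, which is why the constructor above coerces them
# with int() and float(). This is not the actual zenithtune parser.
def parse_args_option(spec: str) -> dict:
    kwargs = {}
    for pair in spec.split(","):
        key, _, value = pair.partition("=")
        kwargs[key] = value
    return kwargs
```

Here `parse_args_option("n_gpus=8,lr_min=1e-4")` yields `{"n_gpus": "8", "lr_min": "1e-4"}`, which would then be applied as `MyFrameworkPreset(**kwargs)`.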
### Parameter Types
| Type | Description | Key Fields |
|---|---|---|
| `IntParameter` | Integer | `low`, `high`, `step`, `default` |
| `FloatParameter` | Floating-point | `low`, `high`, `log` (log scale), `default` |
| `CategoricalParameter` | Categorical | `choices`, `default` |
| `BoolParameter` | Boolean | `default` |
All fields except `low`, `high`, and `choices` are optional.
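The `log` field deserves a brief illustration: for a parameter like learning rate that spans several orders of magnitude, sampling uniformly in log space gives each decade equal probability. A self-contained sketch, not the library's actual sampler:

```python
import math
import random

def sample_log_uniform(low: float, high: float, rng: random.Random) -> float:
    # Sample uniformly in log space, which is what FloatParameter(log=True)
    # asks the strategy to do for parameters spanning orders of magnitude.
    return math.exp(rng.uniform(math.log(low), math.log(high)))

rng = random.Random(0)
samples = [sample_log_uniform(1e-5, 1e-2, rng) for _ in range(1000)]
# Roughly two thirds of the samples fall below 1e-3, since [1e-5, 1e-3]
# covers two of the three decades in the range.
```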
### CommandBuilder
A utility for applying parameters to commands in `apply_parameters`.
| Method | Behavior | Example |
|---|---|---|
| `update(option)` | Overwrite the existing option, or append it if not present | `cmd.update("--lr 0.01")` |
| `append(option)` | Append the option even if it already exists (allows duplicates) | `cmd.append("--fp16")` |
| `remove(option)` | Remove an option | `cmd.remove("--fp16")` |
| `get_command()` | Return the modified command string | `cmd.get_command()` |
Initializing with `normalize=True` treats hyphens and underscores as equivalent when matching options (e.g., `--batch-size` matches `--batch_size`).
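As a rough illustration of the behaviors in the table, here is a toy stand-in, not the actual `CommandBuilder`; it only handles whitespace-separated commands whose options take at most one value:

```python
# Toy stand-in for CommandBuilder (illustration only). The real class is more
# careful about quoting and multi-value options.
class ToyCommandBuilder:
    def __init__(self, command: str, normalize: bool = False):
        self._tokens = command.split()
        self._normalize = normalize

    def _key(self, token: str) -> str:
        # With normalize=True, --batch_size and --batch-size compare equal.
        return token.replace("_", "-") if self._normalize else token

    def update(self, option: str) -> None:
        flag, *value = option.split()
        for i, tok in enumerate(self._tokens):
            if self._key(tok) == self._key(flag):
                # Overwrite in place, keeping the original flag spelling.
                self._tokens[i : i + 1 + len(value)] = [tok, *value]
                return
        self._tokens += [flag, *value]  # Not present yet: append instead.

    def append(self, option: str) -> None:
        # Always appends, so duplicates are allowed.
        self._tokens += option.split()

    def remove(self, option: str) -> None:
        flag = self._key(option.split()[0])
        out, skip_value = [], False
        for tok in self._tokens:
            if skip_value and not tok.startswith("-"):
                skip_value = False  # Drop the removed flag's value token.
                continue
            skip_value = False
            if self._key(tok) == flag:
                skip_value = True
                continue
            out.append(tok)
        self._tokens = out

    def get_command(self) -> str:
        return " ".join(self._tokens)
```

For example, with `normalize=True`, `update("--batch-size 64")` overwrites a `--batch_size 32` already present in the command.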
## Creating a Custom Strategy
To create a custom search strategy, inherit from `TuningStrategy` and register it with `StrategyRegistry`.
```python
from aibooster.intelligence.zenith_tune.strategies.base import TuningStrategy
from aibooster.intelligence.zenith_tune.strategies.registry import StrategyRegistry


@StrategyRegistry.register("my_strategy")
class MyStrategy(TuningStrategy):
    def optimize(self, eval_fn, search_space, direction):
        while True:
            params = ...  # Select parameters from search_space
            eval_fn(params)  # Run a trial (raises TrialExhausted when the budget is exceeded)
```
The search progresses by repeatedly calling `eval_fn(params)` inside the `optimize` method. When the trial budget (`--n-trials`) is exceeded, `eval_fn` raises a `TrialExhausted` exception to terminate the search.
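That contract can be sketched end to end with stand-ins (the real `TrialExhausted` and strategy base class live in zenith_tune):

```python
import random

class TrialExhausted(Exception):
    """Stand-in for the exception raised when the --n-trials budget is spent."""

def make_eval_fn(n_trials, objective):
    # The framework, not the strategy, owns the trial budget: eval_fn raises
    # TrialExhausted once n_trials evaluations have been consumed.
    state = {"remaining": n_trials, "history": []}
    def eval_fn(params):
        if state["remaining"] == 0:
            raise TrialExhausted
        state["remaining"] -= 1
        value = objective(params)
        state["history"].append((params, value))
        return value
    return eval_fn, state["history"]

def optimize(eval_fn, search_space):
    # A strategy just keeps proposing parameters; it never counts trials.
    rng = random.Random(0)
    try:
        while True:
            low, high = search_space["x"]
            eval_fn({"x": rng.uniform(low, high)})
    except TrialExhausted:
        pass

eval_fn, history = make_eval_fn(5, objective=lambda p: p["x"] ** 2)
optimize(eval_fn, {"x": (-1.0, 1.0)})
# history now holds exactly 5 (params, value) pairs.
```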
## Creating a Staged Blackbox Strategy
To create a strategy that explores parameters in two stages, inherit from `StagedBlackboxStrategy`.
It provides a framework for two-stage search: Stage 1 performs a coarse search over structural parameters, then Stage 2 fixes the best Stage 1 configuration and optimizes the remaining parameters. Each stage uses Optuna TPE.
`megatron-staged-blackbox` is a concrete implementation of this class.
```python
from aibooster.intelligence.zenith_tune.strategies.staged_blackbox import (
    StagedBlackboxStrategy,
)
from aibooster.intelligence.zenith_tune.strategies.registry import StrategyRegistry


@StrategyRegistry.register("my_staged")
class MyStagedStrategy(StagedBlackboxStrategy):
    def __init__(self, *, stage1_trials: int = 30, **kwargs):
        super().__init__(stage1_trials=stage1_trials, **kwargs)

    def _validate_search_space(self, search_space):
        # Validate that search_space contains the required parameters
        ...

    def _enumerate_candidates(self, search_space):
        # Return a list of encoded strings representing Stage 1 candidates
        return ["candidate_1", "candidate_2", ...]

    def _build_stage1_params(self, candidate):
        # Decode the candidate string and return the Stage 1 parameter dict;
        # Stage 2 parameters stay fixed at their baseline values
        return {"param_a": ..., "param_b": ..., "param_c": default, "param_d": default}

    def _build_stage2_params(self, trial, search_space, stage1_best):
        # Fix the Stage 1 best params and suggest Stage 2 params from the Optuna trial
        params = dict(stage1_best)
        params["param_c"] = trial.suggest_categorical("param_c", ...)
        params["param_d"] = trial.suggest_int("param_d", ...)
        return params

    def _log_stage1_start(self, n_candidates):
        ...

    def _log_stage1_best(self, best_params, value):
        ...

    def _log_stage2_start(self, stage1_best):
        ...
```
Methods that must be implemented in `StagedBlackboxStrategy` subclasses:
| Method | Description |
|---|---|
| `_validate_search_space(search_space)` | Validate that the search space meets the requirements of both stages |
| `_enumerate_candidates(search_space)` | Return a list of encoded strings representing Stage 1 candidates |
| `_build_stage1_params(candidate)` | Decode a candidate string and return the Stage 1 parameter dict |
| `_build_stage2_params(trial, search_space, stage1_best)` | Build the Stage 2 parameter dict |
| `_log_stage1_start(n_candidates)` | Log Stage 1 start info |
| `_log_stage1_best(best_params, value)` | Log Stage 1 best result |
| `_log_stage2_start(stage1_best)` | Log Stage 2 start info |
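The overall flow these hooks implement can be mimicked with a self-contained toy, substituting random search for Optuna TPE in Stage 2 and using made-up parameter names (`layout`, `lr`):

```python
import math
import random

# Toy version of the two-stage flow (hypothetical cost surface; the real
# StagedBlackboxStrategy drives Optuna rather than random search).
def objective(params):
    # Made-up cost: layout "b" is structurally best, lr sweet spot near 1e-4.
    layout_cost = {"a": 2.0, "b": 0.0, "c": 1.0}[params["layout"]]
    return layout_cost + abs(math.log10(params["lr"]) + 4)

def staged_search(candidates, stage2_trials, rng):
    # Stage 1: coarse sweep over structural candidates, lr fixed to a baseline.
    stage1 = [{"layout": c, "lr": 1e-3} for c in candidates]
    best = min(stage1, key=objective)
    best_value = objective(best)
    # Stage 2: fix the Stage 1 winner and optimize the remaining parameter.
    for _ in range(stage2_trials):
        params = dict(best, lr=10 ** rng.uniform(-5, -2))
        value = objective(params)
        if value < best_value:
            best, best_value = params, value
    return best

best = staged_search(["a", "b", "c"], stage2_trials=50, rng=random.Random(0))
```

Stage 1 picks the best structure at baseline settings; Stage 2 only ever varies the non-structural parameter, which keeps the expensive combinatorial part of the search small.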
To use a custom strategy as a preset-specific strategy, return it from the preset's `get_recommended_strategy`. Parameters passed via `--args` are also mapped to the strategy's constructor via `_coerce_kwargs`.
## Creating a Custom Evaluator
### Using RegexEvaluator
To extract metric values from stdout using regular expressions, use `RegexEvaluator` and return it from your preset's `get_recommended_evaluator`.
```python
from aibooster.intelligence.zenith_tune.evaluators.regex import RegexEvaluator
from aibooster.intelligence.zenith_tune.evaluators.base import Direction

# Minimize loss (the default direction)
RegexEvaluator(pattern=r"loss:\s*([\d.]+)", selector="last")

# Maximize throughput
RegexEvaluator(
    pattern=r"throughput:\s*([\d.]+)",
    selector="mean",
    direction=Direction.MAXIMIZE,
)
```
`selector`: aggregation method used when the regex matches multiple times
| `selector` | Description |
|---|---|
| `first` | First match |
| `last` | Last match (default) |
| `min` / `max` | Minimum / maximum value |
| `mean` / `median` | Mean / median value |
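For intuition, the selectors behave like the following self-contained sketch, which mirrors the semantics in the table but is not the `RegexEvaluator` implementation:

```python
import re
from statistics import mean, median

# Sketch of selector aggregation: collect every capture-group match from
# stdout, then reduce the list with the chosen selector.
SELECTORS = {
    "first": lambda values: values[0],
    "last": lambda values: values[-1],
    "min": min,
    "max": max,
    "mean": mean,
    "median": median,
}

def extract(stdout: str, pattern: str, selector: str = "last") -> float:
    values = [float(v) for v in re.findall(pattern, stdout)]
    return SELECTORS[selector](values)

log = "loss: 0.90\nloss: 0.50\nloss: 0.10\n"
```

With the `log` string above, `selector="last"` yields the final loss, while `"min"` would pick the best loss seen anywhere in the run.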