# No-Code Tuning Examples
This page provides a step-by-step walkthrough of no-code tuning using the demo preset.
## Prerequisites

This walkthrough uses `intelligence/zenith_tune/nocode/fake_train.py` from the aibooster-examples repository. It is a dummy training script whose execution time varies with `--batch-size`, `--num-workers`, and the `OMP_NUM_THREADS` environment variable.
```shell
# Verify it works
$ python fake_train.py --epochs 3
Starting training: epochs=3, batch_size=32, num_workers=4, OMP_NUM_THREADS=(not set)
Epoch 1/3 completed in 0.330s
Epoch 2/3 completed in 0.323s
Epoch 3/3 completed in 0.337s
Total training time: 0.99s
```
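For intuition, a dummy script with this behavior can be sketched in a few lines. The cost model below is made up for illustration; it is not the actual `fake_train.py` implementation:

```python
# Illustrative sketch of a fake_train.py-style dummy script (cost model is invented).
import argparse
import os
import time


def epoch_duration(batch_size: int, num_workers: int) -> float:
    """Toy cost model: larger batches and more workers shorten an epoch."""
    return 1.0 / batch_size ** 0.5 + 0.1 / num_workers


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--epochs", type=int, default=3)
    parser.add_argument("--batch-size", type=int, default=32)
    parser.add_argument("--num-workers", type=int, default=4)
    args = parser.parse_args()

    omp = os.environ.get("OMP_NUM_THREADS", "(not set)")
    print(f"Starting training: epochs={args.epochs}, batch_size={args.batch_size}, "
          f"num_workers={args.num_workers}, OMP_NUM_THREADS={omp}")

    total = 0.0
    for epoch in range(1, args.epochs + 1):
        duration = epoch_duration(args.batch_size, args.num_workers)
        time.sleep(duration)
        total += duration
        print(f"Epoch {epoch}/{args.epochs} completed in {duration:.3f}s")
    print(f"Total training time: {total:.2f}s")

# When saved as a script, call main() under an `if __name__ == "__main__":` guard.
```

Because every tunable knob feeds into the per-epoch duration, the tuner has a clear signal to minimize.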
## 1. Check Available Presets

```shell
$ zenithtune optimize --list --dev
Presets:
  dev:demo
Strategies:
  grid
  random
Evaluators:
  duration
```
## 2. Run Tuning

Run 5 trials with the dev:demo preset.

```shell
zenithtune optimize --preset dev:demo --dev --n-trials 5 -- python fake_train.py --epochs 3
```
A total of 5 trials, including the baseline (the original command run as-is), are executed, and the study is saved under `outputs/`.
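Conceptually, each trial appends the sampled values to the baseline command as CLI flags and environment variables before launching it. A simplified sketch of that mechanism (the `run_trial` helper and its structure are ours for illustration, not ZenithTune's API):

```python
import os
import subprocess


def run_trial(base_cmd: list[str], cli_params: dict, env_params: dict):
    """Launch one trial: sampled values become extra CLI flags and env vars."""
    cmd = list(base_cmd)
    for flag, value in cli_params.items():
        cmd += [flag, str(value)]
    # Inherit the current environment, overriding only the tuned variables.
    env = {**os.environ, **{k: str(v) for k, v in env_params.items()}}
    return subprocess.run(cmd, capture_output=True, text=True, env=env)
```

The captured stdout is then handed to the evaluator to score the trial.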
## 3. Analyze Tuning Results

```shell
zenithtune analyze outputs/study_YYYYMMDD_HHMMSS/study.db
```

Analysis graphs (`history.png`, `timeline.png`, `importances.png`) are generated in the study directory.
## 4. Run Command with Optimal Parameters

```shell
zenithtune apply --db-path outputs/study_YYYYMMDD_HHMMSS/study.db --preset dev:demo --dev -- python fake_train.py --epochs 10
```

Applies the best parameters found during tuning and runs the command. Options that differ from those used during tuning, such as `--epochs 10`, can be passed for production runs.
## Inside the Demo Preset

The dev:demo preset optimizes the following three parameters.
| Parameter | Applied to | Search Range |
|---|---|---|
| `batch_size` | `--batch-size` | 8–256 (step 8) |
| `num_workers` | `--num-workers` | 1–16 |
| `omp_num_threads` | Environment variable `OMP_NUM_THREADS` | 1–number of CPU cores |
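For intuition, one candidate configuration from this search space could be drawn roughly as follows. This is an illustrative reconstruction of the ranges in the table, not the preset's actual code:

```python
import os
import random


def sample_demo_params(rng: random.Random) -> dict:
    """Draw one candidate configuration from the demo preset's ranges."""
    return {
        "batch_size": rng.randrange(8, 257, 8),                   # 8-256, step 8
        "num_workers": rng.randint(1, 16),                        # 1-16
        "omp_num_threads": rng.randint(1, os.cpu_count() or 1),   # 1-CPU cores
    }
```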
`RegexEvaluator` is used to extract `Total training time: X.XXs` from stdout and minimize the extracted value.
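The extraction step amounts to a regex search over the captured stdout. A minimal sketch (the exact pattern the preset uses may differ):

```python
import re

# Illustrative pattern; the preset's actual regex may differ.
_PATTERN = re.compile(r"Total training time: ([\d.]+)s")


def extract_duration(stdout: str) -> float:
    """Pull the objective value out of the training log; lower is better."""
    match = _PATTERN.search(stdout)
    if match is None:
        raise ValueError("'Total training time' not found in output")
    return float(match.group(1))
```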