Version: v2511

Converting and Evaluating using ConversionWorkflow

What is ConversionWorkflow?

  • ConversionWorkflow is a framework for performing AcuiRT conversion and evaluation in a consistent manner. By providing a dataset, a model, and an evaluation algorithm to ConversionWorkflow, you can convert the model with AcuiRT, evaluate the converted model, and obtain profiling results.

How to use ConversionWorkflow

1. Preparation of Model and Dataset

  • Prepare the model and dataset by referring to Basic Usage.

2. Using ConversionWorkflow

  • Declare a ConversionWorkflow instance and configure the following settings:
    • model: The model to be converted.
    • evaluator: A class that evaluates the model's output.
      • The evaluator must comply with the EvaluateProtocol interface.
    • eval_dataset: The dataset used for evaluation.
    • eval_dataset_post_process: Post-processing function for the dataset.
    • conversion_dataset: The dataset referenced during conversion (Optional).
      • Specify this if the dataset used during conversion differs from the evaluation dataset, or if you are performing calibration/conversion using a train dataset and evaluating with a test dataset.
    • model_post_process: Post-processing function for inference results (Optional).
    • model_post_process_rt: Post-processing function for the converted model's inference results (Optional).
      • Specify this if the output data format changes due to optimization.
    • eval_non_converted_model: Whether to evaluate the model before conversion (Optional).
    • settings_torch_profiler: Settings for the torch profiler (Optional).
      • By default, the following settings are applied, recording all CPU and CUDA activities. Please change these settings as necessary, as they affect the profiler's performance. Note that if schedule is not specified, profiling will be executed for all data, which may significantly increase processing time.
        • activity: [torch.profiler.ProfilerActivity.CPU, torch.profiler.ProfilerActivity.CUDA]
        • profile_memory: False
    • profile_max_depth: The maximum depth of child module layers to track with the torch profiler (Optional).
      • By default, all function calls during inference are tracked and automatically recorded in the profiler.
    • exporters: Exporters to output reports (Optional).
      • By default, no Exporter is specified. As needed, import JsonExporter or LoggingExporter from aibooster.intelligence.acuirt.utils and provide instances in a list format. It is also possible to use custom Exporters by complying with ExporterProtocol.
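The evaluator and exporter settings above only require protocol compliance, not inheritance. Since the exact signature of EvaluateProtocol is not shown on this page, the following is a minimal sketch of how such structural compliance works in Python using typing.Protocol; the `evaluate` method name and its signature here are assumptions for illustration, not the library's actual API.

```python
from typing import Protocol, runtime_checkable


# Hypothetical stand-in for EvaluateProtocol. The real interface is defined
# in aibooster and may differ; the point is that protocol compliance is
# structural: any class with a matching method satisfies the protocol
# without inheriting from it.
@runtime_checkable
class EvaluateProtocol(Protocol):
    def evaluate(self, outputs: list, targets: list) -> float: ...


class TopOneAccuracyEvaluator:
    """Toy evaluator: fraction of predictions equal to their targets."""

    def evaluate(self, outputs: list, targets: list) -> float:
        correct = sum(1 for o, t in zip(outputs, targets) if o == t)
        return correct / len(targets)


evaluator = TopOneAccuracyEvaluator()
print(isinstance(evaluator, EvaluateProtocol))  # structural check -> True
print(evaluator.evaluate([1, 2, 3, 4], [1, 2, 0, 4]))  # -> 0.75
```

A custom Exporter would satisfy ExporterProtocol the same way: implement the methods the protocol declares, and the workflow accepts the instance.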

3. Execute Conversion

  • Call the run method to execute the conversion. It returns the converted model and a report.

    model_converted, report = workflow.run(
        config,
        export_path,
    )
  • For config, provide either a dict describing the conversion settings or an instance of AcuiRTBaseConversionConfig (or one of its derived dataclasses).

    # Example using dict
    config = dict(
        rt_mode="onnx",
        auto=True,
    )

    # Example using AcuiRTBaseConversionConfig
    from aibooster.intelligence.acuirt.dataclasses import AcuiRTBaseConversionConfig

    config_dataclass = AcuiRTBaseConversionConfig(
        rt_mode="onnx",
        auto=True,
        children=None,
        input_shapes=None,
        input_args=None,
    )

4. Check Conversion Results

  • The report obtained in Step 3 includes information on inference speed and accuracy after conversion, as well as details of any errors that occurred during conversion.

Remarks

  • A sample program using ConversionWorkflow is available in intelligence/acuirt/image_classification_resnet50.py within aibooster-examples.