intelligence.acuirt.observe.evaluate
NanoTimer Objects
class NanoTimer()
Context manager for measuring elapsed time in nanoseconds. Usage:

    with NanoTimer() as timer:
        ...  # code block to measure
    elapsed_time = timer.elapsed_timedelta
Attributes:
elapsed_timedelta (np.timedelta64) - The elapsed time as a numpy timedelta64 object.
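For reference, a minimal sketch of how such a context manager could be implemented; the use of time.perf_counter_ns here is an assumption about the underlying clock, not the confirmed implementation:

    import time
    import numpy as np

    class NanoTimer:
        """Context manager measuring elapsed wall-clock time in nanoseconds."""

        def __enter__(self):
            # Monotonic nanosecond counter; unaffected by system clock changes.
            self._start = time.perf_counter_ns()
            return self

        def __exit__(self, exc_type, exc_value, traceback):
            # Expose the elapsed time as a numpy timedelta64 in nanoseconds.
            elapsed_ns = time.perf_counter_ns() - self._start
            self.elapsed_timedelta = np.timedelta64(elapsed_ns, "ns")
            return False  # do not suppress exceptions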
AcuiRTEvalProtocol Objects
class AcuiRTEvalProtocol(Protocol)
Protocol for evaluating model performance. Designed to evaluate model outputs online.
update
def update(result: Any) -> None
Update the evaluator with new results.
aggregate
def aggregate() -> Dict[str, Real]
Aggregate and return the evaluation metrics.
reset
def reset() -> None
Reset the evaluator state.
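Because this is a structural Protocol, any class providing these three methods satisfies it. For illustration, a hypothetical top-1 accuracy evaluator; the class name and the (prediction, label) result format are assumptions, not part of the library:

    from numbers import Real
    from typing import Any, Dict

    class AccuracyEvaluator:
        """Hypothetical evaluator implementing AcuiRTEvalProtocol for top-1 accuracy."""

        def __init__(self) -> None:
            self.reset()

        def update(self, result: Any) -> None:
            # Assumes each result is a (prediction, label) pair.
            prediction, label = result
            self._correct += int(prediction == label)
            self._total += 1

        def aggregate(self) -> Dict[str, Real]:
            # Guard against division by zero when no results were recorded.
            return {"accuracy": self._correct / max(self._total, 1)}

        def reset(self) -> None:
            self._correct = 0
            self._total = 0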
AcuiRTModelEvaluator Objects
class AcuiRTModelEvaluator()
AcuiRTModelEvaluator evaluates a model's performance.
This class runs inference on a given model using a provided data loader, aggregates evaluation metrics via the evaluator protocol, and measures latency.
Attributes:
evaluator (AcuiRTEvalProtocol) - Protocol instance to update and aggregate results.
data_loader (Iterable[Union[Tuple[Tuple, Dict], Dict]]) - Data loader yielding inputs.
data_loader_post_process (Optional[Callable]) - Optional post-processing for each batch.
__init__
def __init__(evaluator: AcuiRTEvalProtocol,
             data_loader: Iterable[Union[Tuple[Tuple, Dict], Dict]],
             data_loader_post_process: Optional[Callable] = None)
Initialize the evaluator.
Arguments:
evaluator (AcuiRTEvalProtocol) - Protocol to handle evaluation updates.
data_loader (Iterable[Union[Tuple[Tuple, Dict], Dict]]) - Iterable providing batches.
data_loader_post_process (Optional[Callable], optional) - Function to post-process each batch before inference.
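The data_loader annotation admits two batch shapes: a dict of keyword arguments, or a ((positional args), {keyword args}) pair. A construction sketch using placeholder tensors and the hypothetical AccuracyEvaluator from above:

    import torch

    # Option 1: each batch is a dict of keyword arguments for the model.
    dict_batches = [{"x": torch.randn(1, 4)} for _ in range(8)]

    # Option 2: each batch is a (positional args, keyword args) pair.
    tuple_batches = [((torch.randn(1, 4),), {}) for _ in range(8)]

    model_evaluator = AcuiRTModelEvaluator(
        evaluator=AccuracyEvaluator(),  # any AcuiRTEvalProtocol implementation
        data_loader=tuple_batches,
        data_loader_post_process=None,  # optional per-batch hook
    )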
__call__
def __call__(model: torch.nn.Module,
             post_process: Optional[Callable[[Any, Any], Any]] = None)
Run evaluation with the given model.
Arguments:
model (torch.nn.Module) - The model to evaluate.
post_process (Optional[Callable[[Any, Any], Any]], optional) - Function to process model outputs before evaluation.
Returns:
AcuiRTPerformanceReport - Report containing accuracy and latency metrics.
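Putting it together, a hedged end-to-end sketch; the two-argument post_process (taking the raw outputs and the current batch) follows the signature above, but the exact call semantics and the fields of AcuiRTPerformanceReport are assumptions:

    import torch

    model = torch.nn.Linear(4, 2)

    def to_prediction(outputs, batch):
        # Hypothetical: reduce raw logits to a predicted class index.
        return outputs.argmax(dim=-1)

    report = model_evaluator(model, post_process=to_prediction)
    # report is an AcuiRTPerformanceReport carrying accuracy and latency metrics.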