Setup
AcuiRT is included in Docker Images for DRIVE Orin environments or in the aibooster Python package, making it easy to use.
When using the Python package, you can install it with pip:
pip install aibooster
The recommended version of the aibooster package as of AIBooster v2509 is aibooster~=0.1.0 (0.1.0 or higher but less than 0.2.0).
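For example, to stay within the recommended range, you can pin the constraint directly in the install command:
pip install "aibooster~=0.1.0"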
The supported Python versions for aibooster are as follows:
| Architecture | Python Version |
|---|---|
| x86_64 | 3.8, 3.9, 3.10, 3.11, 3.12 |
| aarch64 | 3.8 |
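If you are unsure which row applies to your machine, the CPU architecture and Python version of the current interpreter can be printed with standard-library calls only:
python3 -c "import platform, sys; print(platform.machine(), sys.version.split()[0])"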
Environment Setup
How to set up the environment on DRIVE Orin
Prerequisites
- AcuiRT works only when NVIDIA DRIVE Orin is running DRIVE OS 6.0.10.
Setting up the environment on DRIVE Orin using Docker
- As with setting up the runtime environment for DRIVE OS 6.0.10, the DRIVE Orin unit and a development PC (host PC) are required.
- Please complete the setup of DRIVE OS 6.0.10 by following the official NVIDIA guide.
- In step 3, “Flash Using the DRIVE OS Docker Container”, when pulling the Docker image, explicitly specify the 6.0.10.0-0009 tag rather than the latest tag to avoid pulling a different version.
- Launch the AcuiRT Docker image on DRIVE Orin. The Docker image contains the AcuiRT runtime environment and required libraries.
sudo docker run -it --rm --privileged --runtime nvidia --gpus all --network host public.ecr.aws/z0a7o9s7/aibooster/intelligence/acuirt:0.1.0 /bin/bash
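Once inside the container, a minimal smoke test is to confirm that the GPU is visible to the bundled PyTorch stack. This assumes the image ships python3 with torch preinstalled; adjust if your image differs:
python3 -c "import torch; print(torch.cuda.is_available())"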
Environment setup on DRIVE Orin without using Docker
- Set up DRIVE OS in the same way as described in “Setting up the environment on DRIVE Orin using Docker” above.
- Download HPC-X and extract it to a directory of your choice (an example extraction command follows the selection list below).
- In the Download Center, select the options shown below.
- ARCHIVE VERSIONS
- Version Archive: 2.9.0
- DOCA-OFED/MLNX_OFED/OFED: inbox
- DOCA-OFED/MLNX_OFED/OFED Ver: inbox
- OS Distro: Ubuntu
- OS Distro Ver: 20.04
- Arch: aarch64
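As an example, assuming the downloaded archive is a .tbz tarball (the exact file name depends on the selections above and is a placeholder below), it can be extracted like this:
# "hpcx-archive.tbz" is a placeholder; use the file name you actually downloaded
mkdir -p /path/to/hpcx
tar -xf hpcx-archive.tbz -C /path/to/hpcx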
- Add the HPC-X ompi/lib directory to LD_LIBRARY_PATH.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/ompi/lib
- Create a Python virtual environment using venv. If venv cannot be run, add the python3-venv package via apt. Also, because the virtual environment needs to use the globally installed tensorrt package, be sure to include the --system-site-packages option (a quick check follows the commands below).
python -m venv .venv --system-site-packages
source .venv/bin/activate
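To confirm that the globally installed tensorrt package is visible from inside the virtual environment (i.e. that --system-site-packages took effect), run:
python -c "import tensorrt; print(tensorrt.__version__)"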
- Update pip.
pip install -U pip
- Install torch, torchvision, and torch2trt from the provided wheel files.
wget https://assets.aibooster.fixstars.com/intelligence/acuirt/torch-1.13.0a0%2Bgitunknown-cp38-cp38-linux_aarch64.whl
wget https://assets.aibooster.fixstars.com/intelligence/acuirt/torchvision-0.14.0a0%2B5ce4506-cp38-cp38-linux_aarch64.whl
wget https://assets.aibooster.fixstars.com/intelligence/acuirt/torch2trt-0.5.0-py3-none-any.whl
pip install torch-1.13.0a0+gitunknown-cp38-cp38-linux_aarch64.whl
pip install torchvision-0.14.0a0+5ce4506-cp38-cp38-linux_aarch64.whl
pip install torch2trt-0.5.0-py3-none-any.whl
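As a quick sanity check after installing the wheels, the following one-liner confirms that the packages import correctly and that CUDA is visible to PyTorch:
python -c "import torch, torchvision, torch2trt; print(torch.__version__, torchvision.__version__, torch.cuda.is_available())"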
- Install AcuiRT.
cd /path/to/faib/intelligence/components/acuirt
pip install .
How to set up an environment other than the recommended one
- If you are using AcuiRT in an environment other than DRIVE Orin, please set up the environment following the steps below.
Operating Environment
- Python >= 3.8
- pip >= 21.3
- CUDA
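To check whether your environment meets these requirements, the versions can be printed as follows (nvcc is only present if the CUDA toolkit is installed; nvidia-smi reports the driver's CUDA version):
python --version
pip --version
nvcc --version
nvidia-smi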
Installation Steps
- Install PyTorch and torchvision. Skip if already installed.
pip install torch torchvision
- Install TensorRT. Check the version of CUDA you are using and install the corresponding TensorRT. You can verify the CUDA version from the CUDA Version: x.x shown in the output of nvidia-smi (a small detection sketch follows these options).
  - If the CUDA version is 12.x:
  pip install tensorrt-cu12
  - If the CUDA version is 13.x:
  pip install tensorrt-cu13
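If you want to automate this choice, here is a minimal sketch that parses the CUDA major version from the nvidia-smi output and installs the matching package (it assumes nvidia-smi is on the PATH and prints a CUDA Version: field):
# Parse the CUDA major version, e.g. "CUDA Version: 12.4" -> 12
CUDA_MAJOR=$(nvidia-smi | sed -n 's/.*CUDA Version: \([0-9]\+\)\..*/\1/p')
pip install "tensorrt-cu${CUDA_MAJOR}"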
- Clone torch2trt from GitHub and install it by running setup.py.
git clone https://github.com/NVIDIA-AI-IOT/torch2trt
cd torch2trt && python setup.py install
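A quick way to confirm that the torch2trt build succeeded is to import it (this also pulls in torch and tensorrt):
python -c "import torch2trt; print('torch2trt OK')"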
- Install AcuiRT. Missing dependency packages will be installed automatically.
cd intelligence/components/acuirt
pip install .
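Finally, as a smoke test, try importing the package. The module name acuirt below is an assumption; check with pip or the package documentation if the import name differs:
# Assumption: the installed package exposes a module named "acuirt"
python -c "import acuirt; print('AcuiRT import OK')"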