

Environment Setup

How to set up the environment on DRIVE Orin

Prerequisites

  • AcuiRT works only when the NVIDIA DRIVE Orin is running DRIVE OS 6.0.10.

Setting up the environment on DRIVE Orin using Docker

  • As with the runtime environment setup for DRIVE OS 6.0.10, you need both the DRIVE Orin unit and a development PC (host PC).
  1. Please complete the setup of DRIVE OS 6.0.10 by following the official NVIDIA guide.

    • In step 3, “Flash Using the DRIVE OS Docker Container,” if you pull the Docker image with the latest tag, explicitly specify the 6.0.10.0-0009 tag to avoid pulling a different version.
  2. Launch the AcuiRT Docker image on DRIVE Orin. The Docker image contains the AcuiRT runtime environment and required libraries.

    sudo docker run -it --rm --privileged --runtime nvidia --gpus all --network host public.ecr.aws/z0a7o9s7/aibooster/intelligence/acuirt:0.1.0 /bin/bash
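    • As a quick sanity check inside the container, you can verify that the runtime libraries are importable (a minimal check, assuming the image ships Python with the torch and tensorrt packages preinstalled, as in the non-Docker setup below):

      python3 -c "import torch, tensorrt; print(torch.__version__, tensorrt.__version__)"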

Setting up the environment on DRIVE Orin without using Docker

  1. Set up DRIVE OS in the same way as described in Setting up the environment on DRIVE Orin using Docker above.

  2. Download HPC-X and extract it to a directory of your choice (an example extraction command is shown after the list below).

    • In the Download Center, select the following options.
      • ARCHIVE VERSIONS
      • Version Archive: 2.9.0
      • DOCA-OFED/MLNX_OFED/OFED: inbox
      • DOCA-OFED/MLNX_OFED/OFED Ver: inbox
      • OS Distro: Ubuntu
      • OS Distro Ver: 20.04
      • Arch: aarch64
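    • After downloading, extract the archive to your chosen directory, for example as follows (the archive file name is illustrative; use the name of the file you actually downloaded):

      mkdir -p /path/to/hpcx
      tar -xf hpcx-v2.9.0-gcc-inbox-ubuntu20.04-aarch64.tbz -C /path/to/hpcx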
  3. Add the HPC-X ompi/lib directory to LD_LIBRARY_PATH.

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/ompi/lib
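    • To make this setting persist across shell sessions, you can append the same line to your shell profile, for example:

      echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/ompi/lib' >> ~/.bashrc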
  4. Create a Python virtual environment using venv. If venv cannot be run, install the python3-venv package with apt. Also, because the virtual environment needs to use the globally installed tensorrt package, be sure to include the --system-site-packages option.

    python -m venv .venv --system-site-packages
    source .venv/bin/activate
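    • To confirm that the globally installed tensorrt package is visible from the virtual environment, run:

      python -c "import tensorrt; print(tensorrt.__version__)"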
  5. Update pip

    pip install -U pip
  6. Install torch, torchvision, and torch2trt from the prebuilt wheels.

    wget https://assets.aibooster.fixstars.com/intelligence/acuirt/torch-1.13.0a0%2Bgitunknown-cp38-cp38-linux_aarch64.whl
    wget https://assets.aibooster.fixstars.com/intelligence/acuirt/torchvision-0.14.0a0%2B5ce4506-cp38-cp38-linux_aarch64.whl
    wget https://assets.aibooster.fixstars.com/intelligence/acuirt/torch2trt-0.5.0-py3-none-any.whl

    pip install torch-1.13.0a0+gitunknown-cp38-cp38-linux_aarch64.whl
    pip install torchvision-0.14.0a0+5ce4506-cp38-cp38-linux_aarch64.whl
    pip install torch2trt-0.5.0-py3-none-any.whl
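    • To confirm the installation, check that torch was installed and can see the GPU:

      python -c "import torch; print(torch.__version__, torch.cuda.is_available())"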
  7. Install AcuiRT

    cd /path/to/faib/intelligence/components/acuirt
    pip install .
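    • To confirm the installation, you can list the installed package metadata (assuming the distribution is named acuirt, matching the directory name; adjust if the package name differs):

      pip show acuirt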
Setting up the environment on platforms other than DRIVE Orin

  • If you are using AcuiRT in an environment other than DRIVE Orin, set up the environment by following the steps below.

Operating Environment

  • Python >= 3.8
  • pip >= 21.3
  • CUDA

Installation Steps

  1. Install PyTorch and torchvision. Skip if already installed.

    pip install torch torchvision
  2. Install TensorRT. Check the version of CUDA you are using and install the matching TensorRT package. You can check the CUDA version from the CUDA Version: x.x field in the output of nvidia-smi.

    • If the CUDA version is 12.x

      pip install tensorrt-cu12
    • If the CUDA version is 13.x

      pip install tensorrt-cu13
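    • To confirm the installation, check that the tensorrt package imports and reports its version:

      python -c "import tensorrt; print(tensorrt.__version__)"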
  3. Clone torch2trt from GitHub and install it by running setup.py.

    git clone https://github.com/NVIDIA-AI-IOT/torch2trt
    cd torch2trt && python setup.py install
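    • To confirm the installation, check that torch2trt is importable:

      python -c "from torch2trt import torch2trt; print('torch2trt OK')"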
  4. Install AcuiRT. Missing dependency packages will be installed automatically.

    cd intelligence/components/acuirt
    pip install .