Version: v2512

Environment Setup

AcuiRT Environment Setup

  1. Install AIBooster and related packages by following How to set up the environment outside the recommended environment.

DETR Environment Setup

  1. Clone the aibooster-examples repository and change into the DETR baseline directory.

    git clone -b 0.4.0 https://github.com/fixstars/aibooster-examples && cd aibooster-examples/intelligence/acuirt/detr/baseline
  2. Install the packages required by DETR.

    pip install -r requirements.txt
  3. Dataset Preparation

    • Download the evaluation data of the COCO dataset. Replace /path/to/dataset with an arbitrary path of your choice.

      wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
      wget http://images.cocodataset.org/zips/val2017.zip
      unzip annotations_trainval2017.zip -d /path/to/dataset
      unzip val2017.zip -d /path/to/dataset
    • Please confirm that the dataset structure is as follows.

      coco
      ├── annotations
      │ ├── captions_train2017.json
      │ ├── captions_val2017.json
      │ ├── instances_train2017.json
      │ ├── instances_val2017.json
      │ ├── instances_val2017_subset.json
      │ ├── person_keypoints_train2017.json
      │ └── person_keypoints_val2017.json
      └── val2017
    • val2017 contains 5000 images, so inference and evaluation take a long time. For simplicity, we will create a subset containing 50 randomly selected images.

    python create_subset.py --val_json_path /path/to/dataset/coco/annotations/instances_val2017.json --output_json_path /path/to/dataset/coco/annotations/instances_val2017_subset.json
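
    The subset script ships with the repository; the core idea can be sketched as follows. This is a minimal illustration using the standard COCO annotation fields (`images`, `annotations`, `image_id`) — the `subset_coco` helper is our own name, and the actual `create_subset.py` implementation may differ.

    ```python
    import json
    import random

    def subset_coco(coco, n_images, seed=0):
        """Keep `n_images` randomly chosen images and drop annotations for the rest."""
        rng = random.Random(seed)  # fixed seed so the subset is reproducible
        images = rng.sample(coco["images"], n_images)
        keep = {img["id"] for img in images}
        return {
            **coco,  # categories, info, licenses carried over unchanged
            "images": images,
            "annotations": [a for a in coco["annotations"] if a["image_id"] in keep],
        }

    # Usage (paths as in the command above):
    #   with open(".../instances_val2017.json") as f:
    #       coco = json.load(f)
    #   with open(".../instances_val2017_subset.json", "w") as f:
    #       json.dump(subset_coco(coco, 50), f)
    ```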
  4. Download the pre-trained weights.

    wget https://dl.fbaipublicfiles.com/detr/detr-r101-2c7b67e5.pth
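
    Optionally, verify the download. The 8-hex suffix in the filename (2c7b67e5) appears to follow the torch.hub convention of embedding a SHA-256 prefix in the filename; assuming that convention, an integrity check looks like this (a mismatch most likely means a truncated download):

    ```python
    import hashlib
    import re
    from pathlib import Path

    def verify_hub_checksum(path):
        """Compare the file's SHA-256 against the hex prefix in its filename."""
        path = Path(path)
        m = re.search(r"-([0-9a-f]{8,})\.pth$", path.name)
        if m is None:
            raise ValueError(f"no checksum prefix in {path.name}")
        sha = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
                sha.update(chunk)
        return sha.hexdigest().startswith(m.group(1))

    # verify_hub_checksum("detr-r101-2c7b67e5.pth")  -> True for an intact download
    ```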
  5. Run inference.

    python main.py --batch_size 1 --no_aux_loss --eval --backbone resnet101 --resume ./detr-r101-2c7b67e5.pth --coco_path /path/to/dataset/coco

    The run is successful if a recognition-accuracy log like the following is output.

    IoU metric: bbox
    Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.531
    Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.727
    Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.560
    Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.300
    Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.553
    Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.720
    Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.428
    Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.625
    Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.648
    Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.394
    Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.655
    Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.814
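
If you automate repeated runs, the headline metric can be pulled out of this summary with a small parser. The `parse_main_ap` helper below is our own sketch, assuming the line format shown above:

    ```python
    def parse_main_ap(log):
        """Return AP @[ IoU=0.50:0.95 | area=all | maxDets=100 ] from a COCO eval log."""
        for line in log.splitlines():
            flat = line.replace(" ", "")  # tolerate the aligned-column spacing
            if ("(AP)" in flat and "IoU=0.50:0.95" in flat
                    and "area=all" in flat and "maxDets=100" in flat):
                return float(line.rsplit("=", 1)[1])
        return None

    # Two lines from the summary above, for illustration:
    sample = """IoU metric: bbox
    Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.531
    Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.727"""
    print(parse_main_ap(sample))  # -> 0.531
    ```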