

Preventing Errors in Person Detection: A Part-Based Self-Monitoring Framework

This repository contains the code for reproducing the experiments of the paper "Preventing Errors in Person Detection: A Part-Based Self-Monitoring Framework" to be presented at IEEE IV 2023.

Training and testing were done with mmdetection. All models (both person detectors and body-part detectors) were trained on a reduced COCO2014 training set in which DensePose annotations (body parts) are available for all visible persons. For evaluation, the COCO2014 validation split and the PascalVOC2010 trainval split were used.

Abstract

The ability to detect learned objects regardless of their appearance is crucial for autonomous systems in real-world applications. Especially for detecting humans, which is often a fundamental task in safety-critical applications, it is vital to prevent errors. To address this challenge, we propose a self-monitoring framework that allows for the perception system to perform plausibility checks at runtime. We show that by incorporating an additional component for detecting human body parts, we are able to significantly reduce the number of missed human detections by factors of up to 9 when compared to a baseline setup, which was trained only on holistic person objects. Additionally, we found that training a model jointly on humans and their body parts leads to a substantial reduction in false positive detections by up to 50% compared to training on humans alone. We performed comprehensive experiments on the publicly available datasets DensePose and Pascal VOC in order to demonstrate the effectiveness of our framework.

Installation

The code has been tested with Python 3.8, PyTorch 1.9.0, and CUDA 11.1.

Prepare Submodule

git submodule init
git submodule update --recursive --remote

Create and activate virtual environment

python3 -m venv venv
source venv/bin/activate

Install dependencies

./install.sh

Add the project path to environment variables

Open ~/.bashrc and add the following line at the end:

export PYTHONPATH=<path_of_project>:$PYTHONPATH

Workflow

Download Datasets

Download COCO2014, DensePose, and PascalVOC2010.

Create Annotations

  1. Convert VOC Annotations to COCO Annotations

Copy voc2coco.py from here into your working directory and run:

python3 voc2coco.py --ann_dir <path-to-voc>/Annotations --ann_ids <path-to-voc>/ImageSets/Main/trainval.txt --labels <path-to-voc>/labels.txt --output <path-to-anns-dir>/voc2010_trainval_cocoformat.json --ext xml
  2. Prepare Annotations for Training and Evaluating

This generates annotations for the person detector (1 class: person), the body-parts detector (8 classes), and the joint person-body-parts detector (9 classes). Replace the dataset annotation paths in the script with your own; a short sanity check for the generated files is sketched below.

./scripts/create_annotations.sh
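
As a quick, optional sanity check (not part of the repository), you can load the generated annotation files and confirm the class counts: 1 class for the person detector, 8 for the body-parts detector, and 9 for the joint detector. The file names below are placeholders; use the paths produced by your own run.

import json

# Placeholder file names -- replace with the paths written by create_annotations.sh.
expected_classes = {
    "person_train.json": 1,            # person detector (hypothetical name)
    "bodyparts_train.json": 8,         # body-parts detector (hypothetical name)
    "person_bodyparts_train.json": 9,  # joint person-body-parts detector (hypothetical name)
}

for fname, n_classes in expected_classes.items():
    with open(fname) as f:
        coco = json.load(f)
    cats = [c["name"] for c in coco["categories"]]
    print(f"{fname}: {len(coco['images'])} images, {len(cats)} classes: {cats}")
    assert len(cats) == n_classes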

Training

You can train the models with mmdetection. Replace the paths in the "TO MODIFY" section of the config file with your own and run:

python mmdetection/tools/train.py <path-to-config-python-file>
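
For orientation, the "TO MODIFY" section of an mmdetection config is typically a handful of path variables. The names and values below are illustrative assumptions, not the repository's actual config:

# Illustrative only -- variable names and paths are assumptions; adjust them to the
# actual "TO MODIFY" section of the config file you are training with.
data_root = "/data/coco2014/"                           # image root
ann_file_train = "/data/annotations/person_train.json"  # hypothetical annotation file
ann_file_val = "/data/annotations/person_val.json"      # hypothetical annotation file
work_dir = "/experiments/person_detector/"              # where checkpoints and logs are written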

Store detections for further evaluations and experiments

Replace paths with your own and run:

./scripts/store_detections.sh
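
For reference, the same step can be sketched with mmdetection's Python API; the config, checkpoint, and output paths below are placeholders, and the repository's script may store detections in a different format.

import json
import mmcv
from mmdet.apis import init_detector, inference_detector

config_file = "configs/person_detector.py"                # placeholder
checkpoint_file = "work_dirs/person_detector/latest.pth"  # placeholder
img_dir = "data/coco2014/val2014"                         # placeholder

model = init_detector(config_file, checkpoint_file, device="cuda:0")

detections = {}
for img_name in mmcv.scandir(img_dir, suffix=".jpg"):
    # For box-only detectors, inference_detector returns one array per class
    # with rows [x1, y1, x2, y2, score].
    per_class = inference_detector(model, f"{img_dir}/{img_name}")
    detections[img_name] = [boxes.tolist() for boxes in per_class]

with open("detections_val2014.json", "w") as f:
    json.dump(detections, f)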

Run Analysis to calculate confidence thresholds

The following script computes and visualizes the confidence thresholds with the best precision-recall trade-off. These thresholds are needed later for the runtime monitoring experiments.

./scripts/run_analysis.sh
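
The underlying idea is to pick, per detector, the confidence threshold with the best precision-recall trade-off. The snippet below is an illustrative stand-in (it maximizes the F1 score on toy data with scikit-learn), not the repository's analysis code:

import numpy as np
from sklearn.metrics import precision_recall_curve

# Toy data: detection confidences and whether each detection matched a ground-truth person.
scores = np.array([0.95, 0.90, 0.80, 0.60, 0.40, 0.30])
labels = np.array([1, 1, 0, 1, 0, 0])

precision, recall, thresholds = precision_recall_curve(labels, scores)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-12, None)
best = np.argmax(f1[:-1])  # the last precision/recall pair has no threshold
print("best threshold:", thresholds[best])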

Run Experiments

The following script reproduces the per-image and per-object experiments; the final results are printed as a LaTeX-formatted table.

./scripts/run_experiments.sh
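
At its core, the per-image check compares the two detectors against each other: body-part detections without a person detection indicate a possibly missed person, while a person detection without supporting body parts indicates a possible false positive. The function below is a simplified illustration of this idea, not the repository's implementation; the thresholds come from the analysis step above.

def per_image_check(person_scores, part_scores, t_person, t_part):
    """Simplified per-image plausibility check (illustrative only)."""
    person_found = any(s >= t_person for s in person_scores)
    parts_found = any(s >= t_part for s in part_scores)
    if parts_found and not person_found:
        return "warning: possible missed person"
    if person_found and not parts_found:
        return "warning: possible false positive"
    return "ok"

# Example: strong body-part detections but no confident person detection.
print(per_image_check([0.2], [0.9, 0.8], t_person=0.5, t_part=0.5))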

Demo

Run the following script to visualize the output of the monitor:

./scripts/run_monitor_on_single_image.sh

Purpose of this project

This software was solely developed for and published as part of the publication cited above. It will neither be maintained nor monitored in any way.

Citation

If you find our work useful in your research, please consider citing:

@inproceedings{schwaiger2023smf,
    title={Preventing Errors in Person Detection: A Part-Based Self-Monitoring Framework},
    author={Schwaiger, Franziska and Matic, Andrea and Roscher, Karsten and Günnemann, Stephan},
    booktitle={IEEE Intelligent Vehicles Symposium (IV)},
    year={2023}
}
