FRE Object Detection

This package contains the code used by Kamaro Engineering e.V. for Task 3 (object detection) at the virtual Field Robot Event 2021 (June 8-10).

This package assumes you are working with ROS Noetic. Older versions of ROS are incompatible, since this package uses Python 3.

Working principle

We perform semantic segmentation on camera images: each pixel is assigned one of four classes (background/maize/weed/litter). If a sufficient number of pixels in the current image belong to the "weed" or "litter" class, this counts as a detection.
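
As a rough sketch, the detection step can be thought of as a pixel-count threshold over the segmentation mask. The class indices and threshold below are illustrative assumptions, not the exact values used by our node:

import numpy as np

# Hypothetical class indices; the real mapping is fixed by the trained network.
BACKGROUND, MAIZE, WEED, LITTER = 0, 1, 2, 3
MIN_PIXELS = 2000  # assumed detection threshold

def detect(seg_mask):
    """seg_mask: (480, 640) array of per-pixel class indices."""
    detections = []
    if np.count_nonzero(seg_mask == WEED) >= MIN_PIXELS:
        detections.append("weed")
    if np.count_nonzero(seg_mask == LITTER) >= MIN_PIXELS:
        detections.append("litter")
    return detections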

How to use

Our fully trained network used in Task 3 is available in the ONNX format on our NextCloud.

If you want to use it right away, place it into the resources/ folder in this repository with the filename net.onnx.
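
To sanity-check the downloaded model outside of ROS, a minimal onnxruntime session along the following lines should work. The input name, NCHW layout, and 0-1 normalization are assumptions; inspect the model (e.g. via sess.get_inputs()) to confirm them:

import cv2
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("resources/net.onnx")
input_name = sess.get_inputs()[0].name

img = cv2.imread("example.png")             # 640x480 camera image (BGR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
x = (img.astype(np.float32) / 255.0).transpose(2, 0, 1)[None]  # assumed (1, 3, 480, 640)

(out,) = sess.run(None, {input_name: x})    # assumed single output of logits, (1, 4, 480, 640)
seg_mask = out[0].argmax(axis=0)            # per-pixel class indices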

To start the object detection node, use:

rosrun fre_object_detection fre21_object_detection_node.py

The following topics are used:

Node [/fre21_object_detection_node]
Publications: 
 * /detector_debug [sensor_msgs/Image]
 * /fre_detections [std_msgs/String]
 * /rosout [rosgraph_msgs/Log]

Subscriptions: 
 * /front/image_raw [sensor_msgs/Image]
 * /odometry/filtered [nav_msgs/Odometry]
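
For reference, a minimal rospy listener for the detection output could look like this (the exact payload of /fre_detections is whatever string the node publishes):

#!/usr/bin/env python3
import rospy
from std_msgs.msg import String

def on_detection(msg):
    # msg.data carries the detection string as published by the node
    rospy.loginfo("detection: %s", msg.data)

rospy.init_node("detection_listener")
rospy.Subscriber("/fre_detections", String, on_detection)
rospy.spin()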

Dataset

Our dataset of 192 hand-labeled images is available on our NextCloud. You will need to download it, or create your own dataset, in order to train the network. Place the downloaded files into the gazebo_data/ directory in the root of this repository, so that it contains the images and labels subdirectories.

Dataset format

There are two folders:

  • images - contains the input images (640x480 pixels, RGB, PNG)
  • labels - files have the same names as in images, with the relevant areas painted in
    • red (#ff0000) to mark weeds
    • green (#00ff00) to mark maize plants
    • blue (#0000ff) to mark litter

The data loader can be found in gazebo_screenshot_dataset.py. Some tolerance is applied to the color values; colors outside this tolerance are interpreted as the background class.
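
A minimal sketch of such a color-to-class mapping with tolerance might look as follows; the tolerance value and class indices are our assumptions here, see gazebo_screenshot_dataset.py for the actual implementation:

import cv2
import numpy as np

# Assumed class indices: 0 = background, 1 = maize, 2 = weed, 3 = litter
COLORS = {
    1: (0, 255, 0),   # maize, #00ff00
    2: (255, 0, 0),   # weed, #ff0000
    3: (0, 0, 255),   # litter, #0000ff
}
TOLERANCE = 30  # assumed per-channel tolerance

def label_image_to_mask(path):
    img = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2RGB)
    mask = np.zeros(img.shape[:2], dtype=np.int64)  # everything else is background
    for cls, color in COLORS.items():
        close = np.abs(img.astype(np.int32) - color).max(axis=-1) <= TOLERANCE
        mask[close] = cls
    return mask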

How to train your own network

For our entry in the FRE 2021 competition, we used a ResNet50 with a 75%/25% training/validation split. This repository provides three Jupyter notebooks illustrating the training process, both to document our approach and to make training on new datasets easier.

The code in the training notebook is heavily based on a tutorial from PyTorch's official documentation. As you can see in the notebook, we used a pretrained ResNet50 network from the torchvision package as a starting point. You can use any other semantic segmentation network that is capable of segmenting a 640x480 RGB image into four classes and produces output at the same resolution. Once you export it to ONNX, it should work with our ROS node as a drop-in replacement.
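
As an illustration of the export step, a torchvision segmentation model with a ResNet50 backbone could be adapted and exported like this. We use fcn_resnet50 as an assumed stand-in here; see the notebooks for the model we actually trained:

import torch
from torchvision.models.segmentation import fcn_resnet50

class SegWrapper(torch.nn.Module):
    """Unwrap the dict output so ONNX export sees a plain tensor."""
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        return self.model(x)["out"]

model = fcn_resnet50(pretrained=True)
# Replace the final 1x1 convolution so the head predicts our four classes.
model.classifier[4] = torch.nn.Conv2d(512, 4, kernel_size=1)
wrapped = SegWrapper(model).eval()

dummy = torch.randn(1, 3, 480, 640)  # NCHW, matching the 640x480 camera images
torch.onnx.export(wrapped, dummy, "resources/net.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)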

Different models have different performance and memory requirements and produce outputs of varying quality. We found ResNet50 to produce high-quality results while remaining trainable on older graphics cards (e.g. a GTX 1080) and fast enough for CPU inference during the competition.

Install dependencies

(ROS Noetic only)

rosdep install fre_object_detection
pip3 install onnxruntime

Running rosdep should install all missing dependencies. Here is the full list of required packages:

ros-noetic-rospy
ros-noetic-std-msgs
ros-noetic-sensor-msgs
ros-noetic-nav-msgs
ros-noetic-geometry-msgs
ros-noetic-cv-bridge
python3-rospkg
python3-numpy
python3-opencv
onnxruntime # pip3

For training your own networks, you additionally need PyTorch and torchvision, which should be installed according to the official instructions for your platform. For running the notebooks, you will also need:

pip3 install jupyter ipywidgets matplotlib

License

The code in this repository is (c) by Kamaro Engineering e.V., subject to the file LICENSE.kamaro.

The neural network training code has been derived from PyTorch's official tutorial written by Nathan Inkawhich. That code is (c) by the PyTorch contributors and made available under the terms of LICENSE.pytorch.
