
YOLOv5 Runtime Stack



What it is. Yet another implementation of Ultralytics's YOLOv5, with its modules refactored to adapt to different deployment scenarios such as LibTorch, ONNX Runtime, TVM and so on.

About the code. It follows the design principle of detr:

object detection should not be more difficult than classification, and should not require complex libraries for training and inference.

yolort is very simple to implement and experiment with. Do you like the implementation of torchvision's faster-rcnn, retinanet or detr? Do you like yolov5? You'll love yolort!

YOLO inference demo

🆕 What's New

  • Support exporting to TorchScript model. Oct. 8, 2020.
  • Support inferring with LibTorch C++ interface. Oct. 10, 2020.
  • Add TorchScript C++ inference example. Nov. 4, 2020.
  • Refactor YOLO modules and support dynamic shape/batch inference. Nov. 16, 2020.
  • Support exporting to ONNX, and inferring with ONNXRuntime interface. Nov. 17, 2020.
  • Add graph visualization tools. Nov. 21, 2020.
  • Add TVM compile and inference notebooks. Feb. 5, 2021.

🛠️ Usage

There are no extra compiled components in yolort and package dependencies are minimal, so the code is very simple to use.

Installation and Inference Examples

  • First, follow the official instructions to install PyTorch 1.7.0+ and torchvision 0.8.1+

  • Installation via Pip

    Simple installation from PyPI

    pip install -U yolort

    Or from Source

    # clone yolort repository locally
    git clone https://github.com/zhiqwang/yolov5-rt-stack.git
    cd yolov5-rt-stack
    # install in editable mode
    pip install -e .
  • Install pycocotools (for evaluation on COCO):

    pip install -U 'git+https://github.com/ppwwyyxx/cocoapi.git#subdirectory=PythonAPI'
  • To read an image source (or a list of sources) and detect its objects 🔥 (the expected output format is sketched below)

    from yolort.models import yolov5s
    
    # Load model
    model = yolov5s(pretrained=True, score_thresh=0.45)
    model.eval()
    
    # Perform inference on an image file
    predictions = model.predict('bus.jpg')
    # Perform inference on a list of image files
    predictions = model.predict(['bus.jpg', 'zidane.jpg'])
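
    The returned predictions are assumed to follow torchvision's detection format, i.e. one dict of tensors per image; a minimal sketch of reading them (the key names are an assumption based on yolort's torchvision-style design):

    # Inspect the first image's detections (torchvision-style keys assumed)
    boxes = predictions[0]['boxes']    # (N, 4) boxes in xyxy format
    scores = predictions[0]['scores']  # (N,) confidence scores
    labels = predictions[0]['labels']  # (N,) class indices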

Loading via torch.hub

The models are also available via torch hub; to load yolov5s with pretrained weights, simply do:

import torch

model = torch.hub.load('zhiqwang/yolov5-rt-stack', 'yolov5s', pretrained=True)

Updating checkpoint from ultralytics/yolov5

The module state of yolort differs in some respects from ultralytics/yolov5, but ultralytics's trained checkpoints can be loaded with minor changes, and we have converted ultralytics's releases v3.1 and v4.0. For example, if you want to convert a yolov5s (release v4.0) model, you can just run the following script. You can also see our how-to-align-with-ultralytics-yolov5 notebook for more details.

import torch

from yolort.utils import update_module_state_from_ultralytics

# Update module state from ultralytics
model = update_module_state_from_ultralytics(arch='yolov5s', version='v4.0')
# Save the updated module
torch.save(model.state_dict(), 'yolov5s_updated.pt')
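
The converted state dict can then be loaded back into a yolort model for inference. A minimal sketch, assuming the file saved by the snippet above:

import torch

from yolort.models import yolov5s

# Re-create the yolort model and load the converted ultralytics weights
model = yolov5s(score_thresh=0.45)
model.load_state_dict(torch.load('yolov5s_updated.pt'))
model.eval()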

Inference on LibTorch backend 🚀

We provide a notebook to demonstrate how the model is exported to TorchScript, and a C++ example of how to run inference with the exported TorchScript model. For details, see the GitHub Actions.
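
A minimal export sketch, assuming torch.jit.script can script the yolort model as demonstrated in the notebook (the output file name is illustrative); the saved file can then be loaded on the C++ side with torch::jit::load:

import torch

from yolort.models import yolov5s

# Load a pretrained model and switch to inference mode
model = yolov5s(pretrained=True, score_thresh=0.45)
model.eval()

# Script the model and save it for the LibTorch C++ runtime
scripted_model = torch.jit.script(model)
scripted_model.save('yolov5s.torchscript.pt')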

🎨 Model Graph Visualization

yolort can now draw the model graph directly; check out our visualize-jit-models notebook to see how to use it to visualize the model graph.

YOLO model visualization

🎓 Acknowledgement

  • The implementation of yolov5 borrows code from ultralytics.
  • This repo borrows the architecture design and part of the code from torchvision.

🤗 Contributing

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion. If you plan to contribute new features, utility functions or extensions, please first open an issue and discuss the feature with us. BTW, if you like the project, leave a 🌟; it means a lot to us and is the easiest way to support us :)
