
SSD-MobileNet inference

Description

This document has instructions for running SSD-MobileNet inference using Intel® Extension for TensorFlow with Intel® Data Center GPU Flex Series.

Hardware Requirements:

  • Intel® Data Center GPU Flex Series

Software Requirements:

  • Ubuntu 20.04 (64-bit)

  • Intel GPU Drivers: Intel® Data Center GPU Flex Series 419.40

    Release: v1.0.0
    OS: Ubuntu 20.04
    Intel GPU: Intel® Data Center GPU Flex Series
    Install Intel GPU Driver: Refer to the Installation Guides for the latest driver installation. To install the verified Intel® Data Center GPU Flex Series driver 419.40, append the specific version to each component, for example: apt-get install intel-opencl-icd=22.28.23726.1+i419~u20.04
  • Intel® oneAPI Base Toolkit 2022.3: the following components of Intel® oneAPI Base Toolkit need to be installed:

    • Intel® oneAPI DPC++ Compiler
    • Intel® oneAPI Math Kernel Library (oneMKL)
    • Download and install the verified DPC++ compiler and oneMKL on Ubuntu 20.04.

      $ wget https://registrationcenter-download.intel.com/akdlm/irc_nas/18852/l_BaseKit_p_2022.3.0.8767_offline.sh
      # Four components are required: DPC++/C++ Compiler, DPC++ Library, Threading Building Blocks, and oneMKL
      $ sh ./l_BaseKit_p_2022.3.0.8767_offline.sh

      For more details, follow the procedure at https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit.html.

    • Set environment variables. The default installation location {ONEAPI_ROOT} is /opt/intel/oneapi for the root account and ${HOME}/intel/oneapi for other accounts:

      source {ONEAPI_ROOT}/setvars.sh

      A quick sanity check for the driver and toolchain is shown after this list.
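
Once the driver and the oneAPI components are installed, a quick sanity check can confirm that the toolchain and the GPU are visible. This is an optional sketch; icpx and sycl-ls ship with the oneAPI DPC++ compiler, and the exact device strings printed will vary with the driver version:

# Load the oneAPI environment (adjust {ONEAPI_ROOT} to your install location)
source {ONEAPI_ROOT}/setvars.sh

# The DPC++ compiler should now be on PATH
icpx --version

# List SYCL devices; the Flex Series GPU should appear as a Level Zero GPU device
sycl-ls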

Datasets

Download and preprocess the COCO dataset using the instructions here. After running the conversion script you should have a directory with the COCO dataset in the TF records format.

Set the DATASET_DIR to point to the TF records directory when running SSD-MobileNet.
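
As a quick sanity check that DATASET_DIR points at a usable TF records directory, the records can be counted with TensorFlow once it is installed (see the next section). This is a minimal sketch; the coco_val.record* file pattern is an assumption, so adjust it to match whatever the conversion script produced:

python - <<'EOF'
# Hedged sketch: assumes DATASET_DIR is set and the conversion script
# produced files matching coco_val.record* (adjust the pattern if needed)
import glob
import os

import tensorflow as tf

dataset_dir = os.environ["DATASET_DIR"]
files = glob.glob(os.path.join(dataset_dir, "coco_val.record*"))
print("TF record files found:", files)

# Iterate the serialized examples to count them (COCO 2017 val has 5,000 images)
count = sum(1 for _ in tf.data.TFRecordDataset(files))
print("Total records:", count)
EOF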

Quick Start Scripts

Script name          Description
online_inference.sh  Runs online inference for int8 precision
batch_inference.sh   Runs batch inference for int8 precision
accuracy.sh          Measures the model accuracy for int8 precision

Run the model

Install the following pre-requisites:

  • Python version 3.9

  • Create and activate a virtual environment.

    virtualenv -p python <virtualenv_name>
    source <virtualenv_name>/bin/activate
  • Install TensorFlow and Intel® Extension for TensorFlow (ITEX):

    Intel® Extension for TensorFlow requires stock TensorFlow v2.10.0 to be installed

    pip install tensorflow==2.10.0
    pip install --upgrade intel-extension-for-tensorflow[gpu]

    To verify that TensorFlow and ITEX are correctly installed:

    python -c "import intel_extension_for_tensorflow as itex; print(itex.__version__)"
    
  • Download the frozen graph model file, and set the FROZEN_GRAPH environment variable to point to where it was saved:

    wget https://storage.googleapis.com/intel-optimized-tensorflow/models/gpu/ssd_mobilenet_v1_int8_itex.pb
    export FROZEN_GRAPH=$(pwd)/ssd_mobilenet_v1_int8_itex.pb
  • Install model-specific dependencies:

    pip install pycocotools
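
With the prerequisites in place, it is worth confirming that ITEX can actually see the GPU. A minimal check, assuming the [gpu] build of ITEX registers Intel GPUs with TensorFlow as XPU devices:

# Should print a non-empty list such as [PhysicalDevice(name='/physical_device:XPU:0', ...)]
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('XPU'))"

An empty list usually means the GPU driver or the oneAPI environment (setvars.sh) is not set up in the current shell.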

See the datasets section of this document for instructions on downloading and preprocessing the COCO dataset. The path to the COCO TF records files will need to be set as the DATASET_DIR environment variable prior to running a quickstart script.

Run the model on bare metal

This snippet shows how to run a quickstart script:

export DATASET_DIR=<path to the preprocessed COCO TF dataset>
export OUTPUT_DIR=<path to where output log files will be written>
export PRECISION=int8
export FROZEN_GRAPH=<path to pretrained model file (*.pb)>

Run quickstart script:
./quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/<script name>.sh
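
For example, to run online inference (the paths below are placeholders for illustration):

export DATASET_DIR=/data/coco_tf_records
export OUTPUT_DIR=/tmp/ssd_mobilenet_logs
export PRECISION=int8
export FROZEN_GRAPH=$(pwd)/ssd_mobilenet_v1_int8_itex.pb

./quickstart/object_detection/tensorflow/ssd-mobilenet/inference/gpu/online_inference.sh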

License

LICENSE