TensorFlow MLPerf 3D U-Net inference

Description

This document has instructions for running MLPerf 3D U-Net inference using Intel-optimized TensorFlow.

Datasets

Download the BraTS 2019 dataset separately and unzip it.

Set the DATASET_DIR environment variable to point to the directory that contains the dataset files when running the MLPerf 3D U-Net accuracy script.
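
For example, a minimal sketch (the archive name shown is illustrative and may differ from your download):

# unzip the BraTS 2019 archive and point DATASET_DIR at it
unzip MICCAI_BraTS_2019_Data_Training.zip -d brats2019
export DATASET_DIR=$(pwd)/brats2019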

Quick Start Scripts

| Script name | Description |
| --- | --- |
| inference.sh | Runs realtime inference using a default batch_size=1 for the specified precision (int8, fp32 or bfloat16). To run inference for throughput, set the BATCH_SIZE environment variable (see the example after this table). |
| inference_realtime_multi_instance.sh | Runs multi-instance realtime inference using 4 cores per instance for the specified precision (int8, fp32 or bfloat16) with 100 steps and 50 warmup steps. Dummy data is used for performance evaluation. Waits for all instances to complete, then prints a summarized throughput value. |
| inference_throughput_multi_instance.sh | Runs multi-instance batch inference using 1 instance per socket for the specified precision (int8, fp32 or bfloat16) with 100 steps and 50 warmup steps. Dummy data is used for performance evaluation. Waits for all instances to complete, then prints a summarized throughput value. |
| accuracy.sh | Measures the inference accuracy for the specified precision (int8, fp32 or bfloat16). Requires the DATASET_DIR environment variable. |
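
For example, to use inference.sh for a throughput run instead of the default batch_size=1, export a batch size before launching it (the value 16 here is illustrative; the other environment variables described under "Run the model" below must also be set):

export BATCH_SIZE=16
./quickstart/image_segmentation/tensorflow/3d_unet_mlperf/inference/cpu/inference.sh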

Run the model

Set up your environment using the instructions below, depending on whether you are using AI Kit on Linux or Windows systems.


To run using AI Kit on Linux you will need:

  • The tensorflow conda environment, activated with:
    conda activate tensorflow

To run without AI Kit you will need:

  • Python 3
  • intel-tensorflow>=2.5.0 (a quick version check is shown after this list)
  • git
  • A clone of the Model Zoo repo
    git clone https://github.com/IntelAI/models.git
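
To confirm the TensorFlow requirement above is met, you can print the installed version (a quick sanity check, not part of the official setup):

python -c "import tensorflow as tf; print(tf.__version__)"   # expect >= 2.5.0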

Download the pre-trained model based on the precision you plan to run.

In this example, we are using the model trained on the fold 1 BraTS 2019 data. The validation files have been copied from here.

Set the PRETRAINED_MODEL environment variable to point to where the pre-trained model file was downloaded.
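
For example (the path and file name below are illustrative; use the file you actually downloaded for your precision):

export PRETRAINED_MODEL=$HOME/downloads/3dunet_dynamic_ndhwc.pb   # illustrative path and file name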

Run on Linux

Install dependencies:

# install numactl (a system utility, not a pip package; shown here for Debian/Ubuntu)
sudo apt-get install -y numactl

# install the model dependencies in requirements.txt if you plan to run accuracy.sh
pip install -r models/benchmarks/image_segmentation/tensorflow/3d_unet_mlperf/requirements.txt

Set the environment variables and run one of the quickstart scripts. Currently, dummy data is used for performance evaluation. Set DATASET_DIR if you run accuracy.sh to calculate the model accuracy.

# navigate to your Model Zoo directory
cd models

export DATASET_DIR=<path to the dataset directory>
export PRECISION=<set the precision "fp32", "int8" or "bfloat16">
export OUTPUT_DIR=<path to the directory where log files will be written>
export PRETRAINED_MODEL=<path to the pretrained model file based on the chosen precision>
# For a custom batch size, set env var `BATCH_SIZE` or it will run with a default value.
export BATCH_SIZE=<customized batch size value>

# run a quickstart script (for example, inference_realtime_multi_instance.sh)
./quickstart/image_segmentation/tensorflow/3d_unet_mlperf/inference/cpu/inference_realtime_multi_instance.sh
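
To measure accuracy instead, set DATASET_DIR as well and run accuracy.sh following the same pattern:

# the accuracy run requires the BraTS 2019 dataset
export DATASET_DIR=<path to the dataset directory>
./quickstart/image_segmentation/tensorflow/3d_unet_mlperf/inference/cpu/accuracy.sh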

Run on Windows

If not already set up, please follow the instructions for environment setup on Windows.

Set the environment variables and run the inference.sh quickstart script. Currently, dummy data is used for performance evaluation.

# navigate to your Model Zoo directory
cd models

set PRECISION=<set the precision "fp32" or "bfloat16">
set OUTPUT_DIR=<path to the directory where log files will be written>
set PRETRAINED_MODEL=<path to the pretrained model file based on the chosen precision>
# Set BATCH_SIZE, or the script will use the default value BATCH_SIZE="1".
set BATCH_SIZE=<customized batch size value>

# run a script for inference
bash quickstart\image_segmentation\tensorflow\3d_unet_mlperf\inference\cpu\inference.sh

Note: You may use cygpath to convert Windows paths to Unix paths before setting the environment variables. For example, if the output folder location is D:\user\output, convert the Windows path to Unix as shown:

cygpath D:\user\output
/d/user/output

Then, set the OUTPUT_DIR environment variable: set OUTPUT_DIR=/d/user/output.

License

Licenses can be found in the model package, in the licenses directory.