This repository contains the code for reproducing the experiments of the paper "Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection", presented at the AISafety workshop (IJCAI 2023).
*Figures: framework overview | ID & OOD distributions*
The list of images used during training as the OOD dataset is provided in the file `openimagesv4.txt`.
As the use of machine learning continues to expand, the importance of ensuring its safety cannot be overstated. A key concern in this regard is the ability to identify whether a given sample is from the training distribution or is an "Out-Of-Distribution" (OOD) sample. In addition, adversaries can manipulate OOD samples in ways that lead a classifier to make a confident prediction. In this study, we present a novel approach for certifying the robustness of OOD detection.
This code has been tested with Python 3.8, torch 1.12.0+cu113, and torchvision 0.13.0+cu113.
Let's create a new environment:

```
conda create -n distro python=3.8
```

and activate it:

```
conda activate distro
```

Then we can install the dependencies:

```
pip install -r requirements.txt
```

Additionally, we need to install AutoAttack:

```
pip install git+https://github.com/fra31/auto-attack
```
Please download the pre-trained models from ProoD and place them under

```
/your/path/to/the/models/ProoD/*
```

where `*` is `CIFAR10` or `CIFAR100`.
Additionally, download the VOS, Logit Norm and Diffusion models and place them in

```
/your/path/to/the/models/our/*
```

where `*` indicates:

- `CIFAR10/vos.pt` for VOS trained on CIFAR10
- `CIFAR10/logitnorm.pt` for LogitNorm trained on CIFAR10
- `CIFAR10/denoiser.pt` for the diffusion model trained on CIFAR10
- `CIFAR100/denoiser.pt` for the diffusion model trained on CIFAR100
Please create a new `.env` file here and then specify your environment variables:

```
MODELS_PATH='/your/path/to/the/models/'
DATASETS_PATH='/your/path/to/the/datasets/'
OUTPUT_PATH='/your/path/to/the/results/'
```
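The repository presumably reads these variables at startup. As a minimal sketch, a `.env` file in the format above can be parsed with the Python standard library alone; the `load_dotenv` helper below is our own illustration, not part of this repo (the python-dotenv package provides an equivalent, more complete function):

```python
import os

def load_dotenv(path=".env"):
    """Parse simple KEY='value' lines from a .env file into os.environ."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            # Skip blank lines, comments, and anything without a '='.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Strip optional single or double quotes around the value.
            os.environ[key.strip()] = value.strip().strip("'\"")

# Usage (hypothetical): after load_dotenv(), the paths are available as
#   os.environ["MODELS_PATH"], os.environ["DATASETS_PATH"], ...
```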
Activate the environment and run the experiments inside this folder:

```
conda activate distro
python . --experiment $EXPERIMENT --clean True
```
Substitute `$EXPERIMENT` with one of the following:

- `plain` - Plain ResNet
- `oe` - Outlier Exposure ResNet
- `vos` - VOS WideResNet
- `logit` - Logit Norm WideResNet
- `acet` - ACET DenseNet
- `atom` - ATOM DenseNet
- `good` - GOOD CNN size XL
- `prood` - ProoD ResNet + CNN size S
- `distro` - Our work: Diffusion + ResNet + CNN size S
To compute the adversarial accuracy under the l-inf norm with epsilon equal to 2/255 or 8/255:

```
python . --experiment $EXPERIMENT --adv_robustness True
```
To compute the certified robustness under the l-2 norm with sigma equal to 0.12 or 0.25:

```
python . --experiment $EXPERIMENT --certify_robustness True
```
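For intuition, certification of this kind follows the randomized-smoothing recipe: sample Gaussian perturbations of the input, classify each (in our setting, after denoising), and convert the empirical top-class probability into an l-2 radius via sigma * Phi^{-1}(p). The toy sketch below is illustrative only; `classify`, `n`, and the plain plug-in probability estimate are placeholder assumptions, not the repo's actual implementation (which would wrap the diffusion denoiser and use a proper confidence bound on p):

```python
import random
from statistics import NormalDist

def certify_radius(classify, x, sigma=0.12, n=1000, seed=0):
    """Monte Carlo estimate of the smoothed prediction and its l-2 radius.

    classify: placeholder function mapping a (noisy) input list to a label;
    in diffusion denoised smoothing it would denoise before classifying.
    Returns (top_class, radius) with radius = sigma * Phi^{-1}(p_top).
    """
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        label = classify(noisy)
        counts[label] = counts.get(label, 0) + 1
    top, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Clamp below 1.0 so the inverse CDF stays finite.
    p_top = min(top_count / n, 1.0 - 1e-6)
    if p_top <= 0.5:
        return top, 0.0  # abstain: no certified radius
    return top, sigma * NormalDist().inv_cdf(p_top)
```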
To compute the clean AUC, AUPR and FPR:

```
python . --experiment $EXPERIMENT --clean True
```
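For reference, the AUC here measures how well the detector's score separates ID from OOD samples: it equals the probability that a randomly chosen ID sample scores higher than a randomly chosen OOD sample. A minimal pure-Python sketch of that rank statistic (illustrative only, not the repo's evaluation code, which would use a vectorized implementation):

```python
def roc_auc(id_scores, ood_scores):
    """ROC-AUC as P(score_ID > score_OOD), with ties counted as 0.5."""
    wins = 0.0
    for s_id in id_scores:
        for s_ood in ood_scores:
            if s_id > s_ood:
                wins += 1.0
            elif s_id == s_ood:
                wins += 0.5
    return wins / (len(id_scores) * len(ood_scores))
```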
To compute the guaranteed l-inf norm GAUC, GAUPR and GFPR with epsilon equal to 0.01:

```
python . --experiment $EXPERIMENT --guar True
```
To compute the guaranteed l-2 norm GAUC, GAUPR and GFPR with sigma equal to 0.12:

```
python . --experiment $EXPERIMENT --certify True
```
To compute the adversarial AAUC, AAUPR and AFPR with epsilon:

```
python . --experiment $EXPERIMENT --adv True
```
Additional flags:

- `--dataset $DATASET` can be `cifar10` or `cifar100`
- `--batch_size $BATCH_SIZE`
- `--score $SCORE_FUNCTION` can be `softmax` or `energy`
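The two score functions differ in how they collapse a classifier's logits into one confidence value: `softmax` uses the maximum softmax probability, while `energy` uses the log-sum-exp of the logits. A hedged sketch of both from raw logits (illustrative only; the repo's exact sign convention and any temperature parameter may differ):

```python
import math

def softmax_score(logits):
    """Maximum softmax probability (MSP): higher -> more likely ID."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return max(exps) / sum(exps)

def energy_score(logits):
    """Negative free energy, logsumexp(logits): higher -> more likely ID."""
    m = max(logits)
    return m + math.log(sum(math.exp(z - m) for z in logits))
```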
This software was solely developed for and published as part of the publication cited above. It will neither be maintained nor monitored in any way.
If you find our work useful in your research, please consider citing:
```
@article{franco2023diffusion,
  title={Diffusion Denoised Smoothing for Certified and Adversarial Robust Out-Of-Distribution Detection},
  author={Franco, Nicola and Korth, Daniel and Lorenz, Jeanette Miriam and Roscher, Karsten and Guennemann, Stephan},
  journal={arXiv preprint arXiv:2303.14961},
  year={2023}
}
```