    Repositories list

    • The official implementation of ECCV'24 paper "To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now". This work introduces a fast and effective attack method to evaluate the harmful-content generation ability of safety-driven unlearned diffusion models.
      Python
      MIT License
      Updated Oct 18, 2024
    • SCSS
      Updated Oct 11, 2024
    • "Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning" by Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu
      Python
      MIT License
      Updated Oct 10, 2024
    • [ECCV24] "Challenging Forgets: Unveiling the Worst-Case Forget Sets in Machine Unlearning" by Chongyu Fan*, Jiancheng Liu*, Alfred Hero, Sijia Liu
      Python
      MIT License
      Updated Oct 9, 2024
    • DeepZero
      [ICLR'24] "DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training" by Aochuan Chen*, Yimeng Zhang*, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu
      Python
      MIT License
      Updated Oct 9, 2024
    • SOUL
      Official repo for EMNLP'24 paper "SOUL: Unlocking the Power of Second-Order Optimization for LLM Unlearning"
      Python
      MIT License
      Updated Oct 1, 2024
    • Official implementation of NeurIPS'24 paper "Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models". This work adversarially unlearns the text encoder to enhance the robustness of unlearned DMs against adversarial prompt attacks, achieving a better balance between unlearning performance and image generation.
      Jupyter Notebook
      Creative Commons Attribution 4.0 International
      Updated Sep 28, 2024
    • QF-Attack
      [CVPR23W] "A Pilot Study of Query-Free Adversarial Attack against Stable Diffusion" by Haomin Zhuang, Yihua Zhang and Sijia Liu
      Python
      Updated Aug 27, 2024
    • [NeurIPS 2024 D&B Track] UnlearnCanvas: A Stylized Image Dataset to Benchmark Machine Unlearning for Diffusion Models by Yihua Zhang, Chongyu Fan, Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jiancheng Liu, Gaoyuan Zhang, Gaowen Liu, Ramana Kompella, Xiaoming Liu, Sijia Liu
      Python
      Updated Aug 24, 2024
    • [ICLR24 (Spotlight)] "SalUn: Empowering Machine Unlearning via Gradient-based Weight Saliency in Both Image Classification and Generation" by Chongyu Fan*, Jiancheng Liu*, Yihua Zhang, Eric Wong, Dennis Wei, Sijia Liu
      Python
      MIT License
      Updated Aug 9, 2024
    • BiBadDiff
      "From Trojan Horses to Castle Walls: Unveiling Bilateral Backdoor Effects in Diffusion Models" by Zhuoshi Pan*, Yuguang Yao*, Gaowen Liu, Bingquan Shen, H. Vicky Zhao, Ramana Rao Kompella, Sijia Liu
      Python
      Updated Mar 25, 2024
    • [ICLR2024] "Backdoor Secrets Unveiled: Identifying Backdoor Data with Optimized Scaled Prediction Consistency" by Soumyadeep Pal, Yuguang Yao, Ren Wang, Bingquan Shen, Sijia Liu
      Python
      Updated Mar 14, 2024
    • [NeurIPS23 (Spotlight)] "Model Sparsity Can Simplify Machine Unlearning" by Jinghan Jia*, Jiancheng Liu*, Parikshit Ram, Yuguang Yao, Gaowen Liu, Yang Liu, Pranay Sharma, Sijia Liu
      Python
      MIT License
      Updated Mar 12, 2024
    • .github
      Updated Feb 11, 2024
    • DP4TL
      [NeurIPS2023] "Selectivity Drives Productivity: Efficient Dataset Pruning for Enhanced Transfer Learning" by Yihua Zhang*, Yimeng Zhang*, Aochuan Chen*, Jinghan Jia, Jiancheng Liu, Gaowen Liu, Mingyi Hong, Shiyu Chang, Sijia Liu
      Python
      Updated Oct 12, 2023
    • RED-adv
      "Can Adversarial Examples Be Parsed to Reveal Victim Model Information?" by Yuguang Yao*, Jiancheng Liu*, Yifan Gong*, Xiaoming Liu, Yanzhi Wang, Xue Lin, Sijia Liu
      Python
      Updated Oct 5, 2023
    • CLAW-SAT
      [SANER 2023] CLAWSAT: Towards Both Robust and Accurate Code Models.
      Python
      MIT License
      Updated Oct 5, 2023
    • ILM-VP
      [CVPR23] "Understanding and Improving Visual Prompting: A Label-Mapping Perspective" by Aochuan Chen, Yuguang Yao, Pin-Yu Chen, Yihua Zhang, and Sijia Liu
      Python
      MIT License
      Updated Sep 17, 2023
    • [ICCV23] Robust Mixture-of-Expert Training for Convolutional Neural Networks by Yihua Zhang, Ruisi Cai, Tianlong Chen, Guanhua Zhang, Huan Zhang, Pin-Yu Chen, Shiyu Chang, Zhangyang (Atlas) Wang, Sijia Liu
      Python
      Updated Aug 27, 2023
    • BLOC-IRM
      Updated May 1, 2023
    • BiP
      [NeurIPS22] "Advancing Model Pruning via Bi-level Optimization" by Yihua Zhang*, Yuguang Yao*, Parikshit Ram, Pu Zhao, Tianlong Chen, Mingyi Hong, Yanzhi Wang, and Sijia Liu
      Python
      Updated Apr 12, 2023
    • [NeurIPS 22] "Fairness Reprogramming" by Guanhua Zhang*, Yihua Zhang*, Yang Zhang, Wenqi Fan, Qing Li, Sijia Liu, Shiyu Chang
      Python
      Updated Dec 7, 2022
    • Fast-BAT
      [ICML22] "Revisiting and Advancing Fast Adversarial Training through the Lens of Bi-level Optimization" by Yihua Zhang*, Guanhua Zhang*, Prashant Khanduri, Mingyi Hong, Shiyu Chang, and Sijia Liu
      Shell
      Updated Oct 25, 2022
    • [ICLR22] "Reverse Engineering of Imperceptible Adversarial Image Perturbations" by Yifan Gong*, Yuguang Yao*, Yize Li, Yimeng Zhang, Xiaoming Liu, Xue Lin, Sijia Liu
      Python
      MIT License
      Updated Mar 26, 2022
    • [ICLR22] "How to Robustify Black-Box ML Models? A Zeroth-Order Optimization Perspective" by Yimeng Zhang, Yuguang Yao, Jinghan Jia, Jinfeng Yi, Mingyi Hong, Shiyu Chang, Sijia Liu
      Python
      MIT License
      Updated Mar 4, 2022