[ACM MM 2024] WeakSAM: Segment Anything Meets Weakly-supervised Instance-level Recognition


WeakSAM

Segment Anything Meets Weakly-supervised Instance-level Recognition

Lianghui Zhu1*, Junwei Zhou1*, Yan Liu2, Xin Hao2, Wenyu Liu1, Xinggang Wang1 📧

1 School of EIC, Huazhong University of Science and Technology, 2 Alipay Tian Qian Security Lab

(*) equal contribution, (📧) corresponding author.

arXiv Preprint (arXiv:2402.14812), Project Page (WeakSAM project page)


News

  • Feb. 22nd, 2024: We released our paper on arXiv. Further details can be found in the code and our updated arXiv paper.

Abstract

Weakly supervised visual recognition using inexact supervision is a critical yet challenging learning problem. It significantly reduces human labeling costs and traditionally relies on multi-instance learning and pseudo-labeling. This paper introduces WeakSAM and solves weakly-supervised object detection (WSOD) and segmentation by utilizing the pre-learned world knowledge contained in a vision foundation model, i.e., the Segment Anything Model (SAM). WeakSAM addresses two critical limitations in traditional WSOD retraining, i.e., pseudo ground truth (PGT) incompleteness and noisy PGT instances, through adaptive PGT generation and Region of Interest (RoI) drop regularization. It also addresses SAM's problems of requiring prompts and category unawareness for automatic object detection and segmentation. Our results indicate that WeakSAM significantly surpasses previous state-of-the-art methods on WSOD and WSIS benchmarks by large margins, i.e., average improvements of 7.4% and 8.5%, respectively.
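The adaptive PGT generation mentioned above can be illustrated with a minimal sketch: instead of keeping only the single top-scoring detection per image, keep every detection whose confidence clears an image-adaptive threshold, which mitigates PGT incompleteness when an image contains several instances. The function name `adaptive_pgt` and the `alpha` hyperparameter below are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def adaptive_pgt(boxes, scores, alpha=0.5):
    """Keep all detections scoring at least alpha * (top score) in the
    image, so multi-instance images yield more than one PGT box.
    alpha is a hypothetical hyperparameter for this sketch."""
    if len(scores) == 0:
        return boxes
    thr = alpha * scores.max()   # image-adaptive threshold
    return boxes[scores >= thr]

# toy example: three candidate boxes (x1, y1, x2, y2) with confidences
boxes = np.array([[10, 10, 50, 50], [12, 8, 55, 48], [100, 100, 140, 150]])
scores = np.array([0.9, 0.6, 0.2])
pgt = adaptive_pgt(boxes, scores, alpha=0.5)
print(len(pgt))  # 2 boxes clear the 0.45 threshold
```

A fixed top-1 rule would keep only the 0.9-scoring box here; the adaptive threshold also recovers the second instance.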

Performance highlights

Overview

We first introduce classification clues and spatial points as automatic SAM prompts, which address the problem of SAM requiring interactive prompts. Next, we use the WeakSAM-proposals in the WSOD pipeline, in which the weakly-supervised detector performs class-aware perception to annotate pseudo ground truth (PGT). Then, we analyze the incompleteness and noise problems existing in PGT and propose adaptive PGT generation and RoI drop regularization to address them, respectively. Finally, we use WeakSAM-PGT to prompt SAM for WSIS extension. (The snowflake mark means the model is frozen.)
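To feed SAM outputs into a WSOD pipeline as class-agnostic proposals, each binary mask must be converted to a bounding box. A minimal sketch of that conversion (the helper name `mask_to_box` is illustrative, not the repository's API):

```python
import numpy as np

def mask_to_box(mask):
    """Tightest axis-aligned box (x1, y1, x2, y2) around a binary mask."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# toy binary mask with a single rectangular blob
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:7] = True  # rows 2-4, columns 3-6
print(mask_to_box(mask))  # (3, 2, 6, 4)
```

Running this over every mask SAM produces for an image yields a box-proposal set (the "WeakSAM-proposals") that a weakly-supervised detector can score.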

WeakSAM pipeline
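One plausible reading of RoI drop regularization is to exclude, within each training batch, the fraction of RoIs with the highest per-RoI loss, on the assumption that these correspond to noisy PGT instances. The sketch below illustrates that reading; the function name, the `drop_frac` hyperparameter, and the exact selection rule are assumptions for illustration, not the paper's definitive formulation.

```python
import numpy as np

def roi_drop(roi_losses, drop_frac=0.2):
    """Return indices of RoIs to keep, dropping the drop_frac fraction
    with the highest loss (treated here as likely-noisy PGT)."""
    n_drop = int(len(roi_losses) * drop_frac)
    order = np.argsort(roi_losses)               # ascending loss
    return np.sort(order[:len(order) - n_drop])  # kept indices, in order

losses = np.array([0.1, 2.5, 0.3, 0.2, 5.0])
kept = roi_drop(losses, drop_frac=0.4)  # drop the 2 highest-loss RoIs
print(kept)  # [0 2 3]
```

The gradient for the dropped RoIs is simply never computed, so a single outlier pseudo-box cannot dominate a retraining step.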

Main results

For WSOD task:

| Dataset  | WSOD method   | WSOD performance | Retrain method | Retrain performance |
|----------|---------------|------------------|----------------|---------------------|
| VOC2007  | WeakSAM(OICR) | 58.9 AP50        | Faster R-CNN   | 65.7 AP50           |
|          |               |                  | DINO           | 66.1 AP50           |
| VOC2007  | WeakSAM(MIST) | 67.4 AP50        | Faster R-CNN   | 71.8 AP50           |
|          |               |                  | DINO           | 73.4 AP50           |
| COCO2014 | WeakSAM(OICR) | 19.9 mAP         | Faster R-CNN   | 22.3 mAP            |
|          |               |                  | DINO           | 24.9 mAP            |
| COCO2014 | WeakSAM(MIST) | 22.9 mAP         | Faster R-CNN   | 23.8 mAP            |
|          |               |                  | DINO           | 26.6 mAP            |

For WSIS task:

| Dataset | Retrain method | AP25 | AP50 | AP70 | AP75 |
|---------|----------------|------|------|------|------|
| VOC2012 | Mask R-CNN     | 70.3 | 59.6 | 43.1 | 36.2 |
| VOC2012 | Mask2Former    | 73.4 | 64.4 | 49.7 | 45.3 |

| Dataset       | Retrain method | AP[50:95] | AP50 | AP75 |
|---------------|----------------|-----------|------|------|
| COCO val2017  | Mask R-CNN     | 20.6      | 33.9 | 22.0 |
| COCO val2017  | Mask2Former    | 25.2      | 38.4 | 27.0 |
| COCO test-dev | Mask R-CNN     | 21.0      | 34.5 | 22.2 |
| COCO test-dev | Mask2Former    | 25.9      | 39.9 | 27.9 |

Data & Preliminaries

Generation & Training pipelines

Citation

If you find this repository or our work helpful in your research, please consider citing the paper and giving it a ⭐.

@article{zhu2024weaksam,
  title={WeakSAM: Segment Anything Meets Weakly-supervised Instance-level Recognition},
  author={Zhu, Lianghui and Zhou, Junwei and Liu, Yan and Hao, Xin and Liu, Wenyu and Wang, Xinggang},
  journal={Proceedings of the 32nd ACM International Conference on Multimedia},
  year={2024}
}

Acknowledgement

Thanks to these wonderful works and their codebases! ❤️ MIST, WSOD2, Segment-anything, WeakTr, SoS-WSOD
