
GUI for interactive segmentation with Segment Anything Model

A simple GUI with minimal dependencies that lets a user interactively annotate instance segmentation data with the Segment Anything Model (SAM), automatically turning user clicks (point prompts) into instance masks. This annotation tool assumes that you are creating a class-agnostic detection dataset, meaning every instance is stored with the same class ID.
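Under the hood, each click becomes a point prompt to SAM's predictor. Below is a minimal sketch of that mechanism using the segment_anything package; it is illustrative, not this tool's actual code, and the image and checkpoint paths are placeholders.

import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (see step 2 below) and wrap it in a predictor
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SamPredictor expects an RGB image; OpenCV loads BGR
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # one-time image encoding (the slow step)

# One foreground click (label 1) and one background click (label 0), in (x, y) pixels
masks, scores, _ = predictor.predict(
    point_coords=np.array([[250, 180], [40, 40]]),
    point_labels=np.array([1, 0]),
    multimask_output=False,
)
mask = masks[0]  # boolean H x W array: the predicted instance mask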

This tool was developed to quickly annotate instance segmentation datasets used for developing a mouse detection model (DAMM).

(1) Set Up the Codebase

$ conda create -n sam_annotator python=3.9 
$ conda activate sam_annotator
$ git clone https://github.com/backprop64/sam_annotator
$ pip install -r sam_annotator/requirements.txt

Important: install PyTorch with CUDA support to use GPU acceleration.

$ conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia # example installation with CUDA 11.8 support
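To confirm that the CUDA build is actually being used, a quick check with standard PyTorch calls:

import torch

# True only if a CUDA-enabled PyTorch build can see a usable GPU
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))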

(2) Get Segment Anything Model Weights (from the SAM repo)

  • Download one of the models and pass its path via the --sam_weights_path argument when running the sam_data_annotator.py script (see below)
  • Smaller models are faster (larger models are more accurate); from smallest to largest, the model types are 'vit_b', 'vit_l', and 'vit_h'

Download the checkpoint for the corresponding model type from the SAM repository:

  • vit_b: https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth
  • vit_l: https://dl.fbaipublicfiles.com/segment_anything/sam_vit_l_0b3195.pth
  • vit_h: https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth
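For example, to fetch the smallest checkpoint from the terminal (assuming wget is available; curl -O works as well):

$ wget https://dl.fbaipublicfiles.com/segment_anything/sam_vit_b_01ec64.pth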

(3) Start Annotating

Arguments:

Run sam_data_annotator.py with the following arguments:

  • --images_path: Path to folder containing images for annotation
  • --metadata_path: Path to metadata file (not required for first-time use; see the note below)
  • --sam_weights_path: Path to SAM model weights

Note: If you're starting annotations for the first time, the metadata file will be created automatically in the image folder.
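For example (all paths are placeholders; on a resumed session, point --metadata_path at the file the tool created in your image folder):

$ python sam_data_annotator.py --images_path path/to/images --sam_weights_path path/to/sam_vit_b_01ec64.pth # first session: metadata file is created automatically
$ python sam_data_annotator.py --images_path path/to/images --metadata_path path/to/images/metadata_file --sam_weights_path path/to/sam_vit_b_01ec64.pth # resumed session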

Controls:

Once an image is displayed, use the following controls:

Clicking objects to create masks (giving SAM point prompts; a sketch of this wiring follows the controls below)

  • Left click: Add a foreground point (pixel belonging to the object)
  • Right click: Add a background point (pixel not belonging to the object)

Navigation

  • space: Start annotating the next instance within the same image
    • Clears current point prompt
    • Adds mask/box to annotation
  • esc: Save current annotation and go to the next image
    • Triggers SAM to encode the next image (may take a few seconds depending on hardware)
  • q: Quit GUI
    • Current image annotation won't be saved
    • All previous annotations will be saved
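The control scheme above maps naturally onto an OpenCV mouse callback and key loop. Below is a minimal sketch of that pattern; it is illustrative only, not the tool's internals:

import cv2
import numpy as np

fg_points, bg_points = [], []  # point prompts for the current instance

def on_mouse(event, x, y, flags, param):
    # Left click -> foreground point, right click -> background point
    if event == cv2.EVENT_LBUTTONDOWN:
        fg_points.append((x, y))
    elif event == cv2.EVENT_RBUTTONDOWN:
        bg_points.append((x, y))

cv2.namedWindow("annotator")
cv2.setMouseCallback("annotator", on_mouse)

image = np.zeros((480, 640, 3), np.uint8)  # stand-in for a real frame
while True:
    cv2.imshow("annotator", image)
    key = cv2.waitKey(20) & 0xFF
    if key == ord(" "):           # space: next instance, clear current prompts
        fg_points.clear(); bg_points.clear()
    elif key in (27, ord("q")):   # esc or q: leave the loop
        break
cv2.destroyAllWindows()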

(Optional) Demo our tool on a small set of cat images:

Download the ViT-B SAM model (sam_vit_b_01ec64.pth, linked above) and run the following command in your terminal (while inside the sam_annotator folder):

python sam_data_annotator.py --images_path demo_images --sam_weights_path path/to/sam_vit_b_01ec64.pth 

Citing our annotation tool

If this tool was useful for your project, please cite us!

@article{kaul2024damm,
      author    = {Gaurav Kaul and Jonathan McDevitt and Justin Johnson and Ada Eban-Rothschild},
      title     = {DAMM for the detection and tracking of multiple animals within complex social and environmental settings},
      journal   = {bioRxiv},
      year      = {2024}
}
