# Segment Anything 2.1 (SAM 2.1) ONNX Models

ONNX exports of Meta's SAM 2.1 backbones (the maintenance release of SAM 2), packaged for direct use with onnxruntime and AnyLabeling.
## Why this repo exists
SAM 2.1 ships incremental quality improvements over SAM 2 with the same architecture and runtime cost. ONNX gives you a portable, dependency-light runtime that works in Python, C++, JavaScript, and most embedded targets. These exports are the ones AnyLabeling consumes for its smart-labeling features.
## Variants

Each `.zip` bundles the encoder and decoder ONNX files for that backbone. The `_20260221` suffix is the export date.
| File | Backbone | Size |
|---|---|---|
| `sam2.1_hiera_tiny_20260221.zip` | Hiera-T | 111 MB |
| `sam2.1_hiera_small_20260221.zip` | Hiera-S | 136 MB |
| `sam2.1_hiera_base_plus_20260221.zip` | Hiera-B+ | 259 MB |
| `sam2.1_hiera_large_20260221.zip` | Hiera-L | 768 MB |
## Quick start
```bash
pip install huggingface_hub onnxruntime
```

```python
import zipfile

import onnxruntime as ort
from huggingface_hub import hf_hub_download

# Download and unpack one variant (tiny shown here)
zip_path = hf_hub_download(
    repo_id="vietanhdev/segment-anything-2.1-onnx-models",
    filename="sam2.1_hiera_tiny_20260221.zip",
)
with zipfile.ZipFile(zip_path) as z:
    z.extractall("./sam21_tiny")

enc = ort.InferenceSession("./sam21_tiny/encoder.onnx", providers=["CPUExecutionProvider"])
print([(i.name, i.shape, i.type) for i in enc.get_inputs()])
```
For the full image → mask pipeline (encoder + decoder + prompt handling), see how AnyLabeling wires it: https://github.com/vietanhdev/anylabeling
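The encoder side of that pipeline starts with image preprocessing. As a minimal sketch (assuming the standard SAM 2 convention of a 1024×1024 input with ImageNet mean/std normalization; confirm the exact input name and shape against `enc.get_inputs()` for your export), the resize here is nearest-neighbor for brevity where a real pipeline would use bilinear:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 1024) -> np.ndarray:
    """Resize an HWC uint8 image and normalize it to a 1x3xHxW float32 tensor."""
    h, w = image.shape[:2]
    # Nearest-neighbor resize via index sampling (use bilinear in production)
    ys = (np.arange(size) * h // size).clip(0, h - 1)
    xs = (np.arange(size) * w // size).clip(0, w - 1)
    resized = image[ys][:, xs].astype(np.float32) / 255.0
    # ImageNet-style normalization, as assumed for SAM 2's encoder
    mean = np.array([0.485, 0.456, 0.406], dtype=np.float32)
    std = np.array([0.229, 0.224, 0.225], dtype=np.float32)
    normalized = (resized - mean) / std
    # HWC -> 1CHW, matching the ONNX encoder's expected layout
    return normalized.transpose(2, 0, 1)[None]

tensor = preprocess(np.zeros((480, 640, 3), dtype=np.uint8))
print(tensor.shape)  # (1, 3, 1024, 1024)
```

The resulting tensor is what you would feed to `enc.run(None, {input_name: tensor})`; the decoder then consumes the encoder's embeddings together with your point or box prompts.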
## Use with AnyLabeling
These models drop into AnyLabeling's auto-labeling backend without conversion. See the AnyLabeling docs for the model-config wiring.
## Source weights
Original SAM 2.1 weights and license: https://github.com/facebookresearch/sam2
This repo redistributes the same weights in ONNX format. License unchanged from upstream (Apache 2.0).
## Citation
```bibtex
@misc{nguyen2026sam21_onnx,
  author = {Nguyen, Viet-Anh and {Neural Research Lab}},
  title  = {SAM 2.1 ONNX Models},
  year   = {2026},
  url    = {https://huggingface.co/vietanhdev/segment-anything-2.1-onnx-models}
}
```
For the underlying model, cite Meta's SAM 2 paper (SAM 2.1 is a maintenance release of the same model family):
```bibtex
@article{ravi2024sam2,
  title   = {SAM 2: Segment Anything in Images and Videos},
  author  = {Ravi, Nikhila and others},
  journal = {arXiv preprint arXiv:2408.00714},
  year    = {2024}
}
```
## Acknowledgments
Thanks to Meta AI Research for the SAM 2.x line. This repo simply repackages their work for portable, edge-friendly inference.