aimagelab/ReT-M2KR
Official implementation of ReT-2: Recurrence Meets Transformers for Universal Multimodal Retrieval.

This model features visual and textual backbones based on google/siglip2-large-patch16-256. The backbones have been fine-tuned on the M2KR dataset.

How to use aimagelab/ReT2-M2KR-SigLIP2-ViT-L with Transformers:

```python
# Load the model directly from the Hugging Face Hub
from transformers import AutoModel

model = AutoModel.from_pretrained("aimagelab/ReT2-M2KR-SigLIP2-ViT-L", dtype="auto")
```
```bibtex
@article{caffagni2025recurrencemeetstransformers,
  title={{Recurrence Meets Transformers for Universal Multimodal Retrieval}},
  author={Davide Caffagni and Sara Sarto and Marcella Cornia and Lorenzo Baraldi and Rita Cucchiara},
  journal={arXiv preprint arXiv:2509.08897},
  year={2025}
}
```