# Model Card for TWIN-InternVL3_5-1B

This repository contains the InternVL3.5-1B model post-trained on the TWIN dataset introduced in the paper [Same or Not? Enhancing Visual Perception in Vision-Language Models](https://arxiv.org/abs/2512.23592).

TWIN is a large-scale dataset of 561,000 image-pair queries designed to strengthen the perceptual abilities of Vision-Language Models (VLMs). The task is to determine whether two visually similar images depict the same object, which encourages attention to nuanced visual cues. Fine-tuning on TWIN yields significant gains in fine-grained recognition across domains such as art, animals, plants, and landmarks.
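Below is a minimal usage sketch, assuming the model keeps the standard InternVL remote-code interface (`AutoModel.from_pretrained(..., trust_remote_code=True)` and the `model.chat(...)` method, including its `num_patches_list` argument for multi-image prompts) and the usual 448x448 ImageNet-normalized tiles. The image paths and the exact prompt wording are illustrative placeholders, not the prompts used in the paper.

```python
# Hedged sketch: load TWIN-InternVL3_5-1B and ask a same-or-not question
# about an image pair, assuming the standard InternVL chat interface.
import torch
import torchvision.transforms as T
from PIL import Image
from transformers import AutoModel, AutoTokenizer

MODEL_ID = "glab-caltech/TWIN-InternVL3_5-1B"

model = AutoModel.from_pretrained(
    MODEL_ID, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)

# InternVL models expect 448x448 tiles normalized with ImageNet statistics.
transform = T.Compose([
    T.Resize((448, 448)),
    T.ToTensor(),
    T.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

def load_image(path: str) -> torch.Tensor:
    """Load one image as a single (1, 3, 448, 448) tile."""
    return transform(Image.open(path).convert("RGB")).unsqueeze(0)

# Stack both images; num_patches_list tells the model how many tiles
# belong to each image (one tile per image in this sketch).
pixel_values = torch.cat(
    [load_image("image_a.jpg"), load_image("image_b.jpg")]  # placeholder paths
).to(torch.bfloat16).cuda()

question = (
    "Image-1: <image>\nImage-2: <image>\n"
    "Do these two images show the same object? Answer yes or no."
)
response = model.chat(
    tokenizer, pixel_values, question,
    generation_config=dict(max_new_tokens=64, do_sample=False),
    num_patches_list=[1, 1],
)
print(response)
```

The official InternVL model cards use dynamic tiling (up to 12 tiles per image) for higher-resolution inputs; a single tile per image keeps this sketch short.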

## Resources

- Paper: [Same or Not? Enhancing Visual Perception in Vision-Language Models](https://arxiv.org/abs/2512.23592)

## Citation

If you use TWIN in your research, please consider citing the work:

```bibtex
@misc{marsili2025notenhancingvisualperception,
      title={Same or Not? Enhancing Visual Perception in Vision-Language Models},
      author={Damiano Marsili and Aditya Mehta and Ryan Y. Lin and Georgia Gkioxari},
      year={2025},
      eprint={2512.23592},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2512.23592},
}
```
## Model Details

- Model size: 1B parameters
- Tensor type: BF16 (safetensors)