---
base_model:
  - internlm/internlm2-chat-1_8b
language:
  - multilingual
library_name: transformers
license: mit
pipeline_tag: image-text-to-text
tags:
  - internvl
  - vision
  - ocr
  - custom_code
  - moe
base_model_relation: merge
---

# Mono-InternVL-2B-S1-3

This repository contains the Mono-InternVL-2B model after S1.1 concept learning, S1.2 semantic learning, and S1.3 alignment learning.

Please refer to our paper, project page, and GitHub repository for an introduction to the model and detailed usage instructions.
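Because the model ships its own modeling code for `transformers` (see the `custom_code` tag), it has to be loaded with `trust_remote_code=True`. The snippet below is only a minimal loading sketch under assumptions, not the official recipe: the repository id shown is hypothetical, and the actual inference interface (image preprocessing, chat/generation calls) is defined by this repo's remote code, so please follow the GitHub repository for the supported pipeline.

```python
# Minimal loading sketch. Assumptions (not confirmed by this card):
#   - the repo id below is a placeholder; replace it with this repository's actual id
#   - inference afterwards goes through the custom interface shipped in the repo's remote code
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "OpenGVLab/Mono-InternVL-2B-S1-3"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True, use_fast=False)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,  # required: the modeling code lives inside this repository
).eval().cuda()              # move to GPU if available; use .eval() for inference
```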

## Citation

If you find this project useful in your research, please consider citing:

@article{mono_internvl_v1,
  title={Mono-InternVL: Pushing the Boundaries of Monolithic Multimodal Large Language Models with Endogenous Visual Pre-training},
  author={Luo, Gen and Yang, Xue and Dou, Wenhan and Wang, Zhaokai and Liu, Jiawen and Dai, Jifeng and Qiao, Yu and Zhu, Xizhou},
  journal={arXiv preprint arXiv:2410.08202},
  year={2024}
}

@article{mono_internvl_v1.5,
  title={Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models},
  author={Luo, Gen and Dou, Wenhan and Li, Wenhao and Wang, Zhaokai and Yang, Xue and Tian, Changyao and Li, Hao and Wang, Weiyun and Wang, Wenhai and Zhu, Xizhou and Qiao, Yu and Dai, Jifeng},
  journal={arXiv preprint arXiv:2507.12566},
  year={2025}
}