<div align="center">
<h1>Demystifying Reinforcement Learning in Agentic Reasoning</h1>
<p align="center"> <a href="https://arxiv.org/abs/2510.11701"> <img src="https://img.shields.io/badge/Paper-Arxiv-red?logo=arxiv&logoColor=red" alt="Paper on arXiv"/> </a> <a href="https://github.com/Gen-Verse/Open-AgentRL"> <img src="https://img.shields.io/badge/Open--AgentRL-GitHub-black?logo=github&logoColor=white" alt="Open-AgentRL on GitHub"/> </a> <a href="https://huggingface.co/datasets/Gen-Verse/Open-AgentRL-30K"> <img src="https://img.shields.io/badge/30K_RL_Dataset-Hugging%20Face-orange?logo=huggingface&logoColor=yellow" alt="30K RL Dataset"/> </a> <a href="https://huggingface.co/Gen-Verse/DemyAgent-4B"> <img src="https://img.shields.io/badge/DemyAgent--4B-Hugging%20Face-FFCC00?logo=huggingface&logoColor=yellow" alt="DemyAgent-4B Model"/> </a> </p>
</div>
## 🎯 About This Repository
This repository contains the model weights for **Qwen2.5-7B-RA-SFT**, a 7B agentic reasoning model fine-tuned from [Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on our 3K Agentic SFT dataset.
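Below is a minimal usage sketch with 🤗 Transformers. It assumes the checkpoint follows the standard Qwen2.5 chat interface; the prompt and generation settings are illustrative only.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Gen-Verse/Qwen2.5-7B-RA-SFT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Illustrative prompt; the model is trained for agentic reasoning tasks.
messages = [{"role": "user", "content": "What is the integral of x^2 from 0 to 1?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```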
## 📖 Introduction
In our work, we systematically investigate three dimensions of agentic RL: **data, algorithms, and reasoning modes**. Our findings reveal:
- 🎯 **Data Quality Matters**: Real end-to-end trajectories and high-diversity datasets significantly outperform synthetic alternatives
- ⚡ **Training Efficiency**: Exploration-friendly techniques such as reward clipping and entropy maintenance boost training efficiency (a minimal sketch follows this list)
- 🧠 **Reasoning Strategy**: Deliberative reasoning with selective tool calls surpasses frequent tool invocation or verbose self-reasoning
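To make the exploration-friendly techniques above concrete, here is a minimal sketch of reward clipping combined with an entropy bonus in a policy-gradient loss. The function name, the REINFORCE-style objective, and the hyperparameter values (`reward_clip`, `entropy_coef`) are illustrative assumptions, not the exact recipe from our paper.

```python
import torch

def pg_loss_with_exploration(log_probs: torch.Tensor,
                             rewards: torch.Tensor,
                             entropy: torch.Tensor,
                             reward_clip: float = 1.0,
                             entropy_coef: float = 0.01) -> torch.Tensor:
    """Illustrative policy-gradient loss; hyperparameter values are placeholders."""
    clipped = rewards.clamp(-reward_clip, reward_clip)  # reward clipping: cap reward magnitude
    pg = -(clipped * log_probs).mean()                  # REINFORCE-style policy-gradient term
    return pg - entropy_coef * entropy.mean()           # entropy bonus keeps exploration alive
```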
We contribute high-quality SFT and RL datasets, demonstrating that **simple recipes enable even 4B models to outperform 32B models** on challenging agentic reasoning benchmarks.
## 📦 Resources
| **Type** | **Name** | **Link** |
| --------- | ------------------- | ------------------------------------------------------------ |
| 📊 Dataset | 3K Agentic SFT Data | [🤗 HuggingFace](https://huggingface.co/datasets/Gen-Verse/Open-AgentRL-SFT-3K) |
| 📊 Dataset | 30K Agentic RL Data | [🤗 HuggingFace](https://huggingface.co/datasets/Gen-Verse/Open-AgentRL-30K) |
| 🤖 Model | **Qwen2.5-7B-RA-SFT** | [🤗 HuggingFace](https://huggingface.co/Gen-Verse/Qwen2.5-7B-RA-SFT) |
| 🤖 Model | Qwen3-4B-RA-SFT | [🤗 HuggingFace](https://huggingface.co/Gen-Verse/Qwen3-4B-RA-SFT) |
| 🤖 Model | DemyAgent-4B | [🤗 HuggingFace](https://huggingface.co/Gen-Verse/DemyAgent-4B) |
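A minimal sketch of loading the data with the 🤗 `datasets` library; the `"train"` split name is an assumption, so check the dataset cards for the actual splits and schema.

```python
from datasets import load_dataset

# Split name "train" is assumed; see the dataset cards for actual splits/fields.
sft_data = load_dataset("Gen-Verse/Open-AgentRL-SFT-3K", split="train")
rl_data = load_dataset("Gen-Verse/Open-AgentRL-30K", split="train")
print(rl_data[0])  # inspect one example
```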
## 📝 Citation
```bibtex
@article{yu2025demystify,
  title={Demystifying Reinforcement Learning in Agentic Reasoning},
  author={Yu, Zhaochen and Yang, Ling and Zou, Jiaru and Yan, Shuicheng and Wang, Mengdi},
  journal={arXiv preprint arXiv:2510.11701},
  year={2025}
}
```