---
license: mit
---

# Datacard

This is the official fine-tuning dataset provided by VLABench, with 500 episodes per task. The current version includes 10 primitive tasks.

## Source

- Project Page: [https://vlabench.github.io/](https://vlabench.github.io/)
- arXiv Paper: [https://arxiv.org/abs/2412.18194](https://arxiv.org/abs/2412.18194)
- Code: [https://github.com/OpenMOSS/VLABench](https://github.com/OpenMOSS/VLABench)

## Uses

Download all archive files and extract them with:

```sh
cat rdt_data.tar.gz.* | tar -xzvf -
```

The resulting `VLABench_release` folder contains two sub-folders: `primitive` (released now) and `composite` (in preparation). The `primitive` folder holds ten task sub-folders, each containing its episode HDF5 files:

```
VLABench_release
└── primitive
    ├── add_condiment
    │   └── episode_0.hdf5 ...
    ├── insert_flower
    │   └── episode_0.hdf5 ...
    ├── select_book
    │   └── ...
    ├── select_chemistry_tube
    │   └── ...
    ├── select_drink
    │   └── ...
    ├── select_fruit
    │   └── ...
    ├── select_mahjong
    │   └── ...
    ├── select_painting
    │   └── ...
    ├── select_poker
    │   └── ...
    └── select_toy
        └── ...
```

The structure of a single episode file looks like this:

```
data
    instruction            (1,)                    |S38      ['Please put the striped_10 in any hole.']
    meta_info
        entities           (6,)                    |S15      ['billiards_table', 'striped_10', 'striped_14', 'striped_11', 'striped_12', 'solid_1']
        episode_config     ()                      |S1879
        target_entity      (1,)                    |S10      ['striped_10']
    observation
        depth              (212, 4, 480, 480)      float32
        ee_state           (212, 8)                float32
        point_cloud_colors (212, 11905, 3)         float32
        point_cloud_points (212, 11905, 3)         float32
        q_acceleration     (212, 7, 1)             float32
        q_state            (212, 7, 1)             float32
        q_velocity         (212, 7, 1)             float32
        rgb                (212, 4, 480, 480, 3)   uint8
        robot_mask         (212, 4, 480, 480)      float32
    trajectory             (212, 8)                float32
```

## Citation

If you find our work helpful, please cite us:

```
@misc{zhang2024vlabench,
      title={VLABench: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks},
      author={Shiduo Zhang and Zhe Xu and Peiju Liu and Xiaopeng Yu and Yuan Li and Qinghui Gao and Zhaoye Fei and Zhangyue Yin and Zuxuan Wu and Yu-Gang Jiang and Xipeng Qiu},
      year={2024},
      eprint={2412.18194},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2412.18194},
}
```
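## Loading an episode

The episode files can be inspected with standard HDF5 tooling. Below is a minimal sketch, not part of the official VLABench codebase: it assumes the fields sit under a top-level `data` group as in the listing above, and the example path simply points at one of the released task folders. Adjust the keys if your files are laid out differently.

```python
# Minimal sketch for inspecting a single VLABench episode with h5py.
# Assumes the key layout shown in the listing above (instruction, meta_info/*,
# observation/*, trajectory); the file path is just an example.
import h5py

episode_path = "VLABench_release/primitive/add_condiment/episode_0.hdf5"

with h5py.File(episode_path, "r") as f:
    # Print every dataset with its shape and dtype, mirroring the listing above.
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name:40s} {str(obj.shape):25s} {obj.dtype}")
    f.visititems(show)

    # The listing above nests fields under a top-level "data" group;
    # fall back to the file root if that group is absent (assumption).
    root = f["data"] if "data" in f else f

    instruction = root["instruction"][0].decode("utf-8")          # fixed-length byte string (|S38)
    entities = [e.decode("utf-8") for e in root["meta_info/entities"][:]]
    rgb = root["observation/rgb"][:]                              # (T, 4, 480, 480, 3) uint8
    trajectory = root["trajectory"][:]                            # (T, 8) float32

print("Instruction:", instruction)
print("Entities:", entities)
print("RGB frames:", rgb.shape, "| Trajectory:", trajectory.shape)
```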