jesbu1 and nielsr (HF Staff) committed
Commit f88dfd4 · verified · 1 Parent(s): cc2b165

Improve dataset card: Add metadata, links, description, and citation (#1)


- Improve dataset card: Add metadata, links, description, and citation (da8eb673a2e4d2c43a6ee8c80610a8fdab4825ad)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1): README.md (+47 -3)
README.md CHANGED

---
license: apache-2.0
task_categories:
- robotics
- keypoint-detection
tags:
- robot-manipulation
- vision-language-models
- zero-shot-generalization
- bridge-v2
---

# PEEK VLM Path/Mask Labels for BRIDGE_v2

This dataset contains the PEEK VLM (Policy-agnostic Extraction of Essential Keypoints) generated path and mask labels for the BRIDGE_v2 dataset. These labels are an integral part of the research presented in the paper [PEEK: Guiding and Minimal Image Representations for Zero-Shot Generalization of Robot Manipulation Policies](https://huggingface.co/papers/2509.18282).

PEEK fine-tunes Vision-Language Models (VLMs) to predict a unified point-based intermediate representation for robot manipulation. This representation consists of:
1. **End-effector paths:** specifying what actions to take.
2. **Task-relevant masks:** indicating where to focus.

These annotations are overlaid directly onto robot observations, making the representation policy-agnostic and transferable across architectures. This dataset provides the automatically generated labels for BRIDGE_v2, so researchers can readily use them for policy training to boost zero-shot generalization.
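
As a rough illustration of what "overlaying" a path and mask onto an observation could look like, here is a minimal PIL sketch. The label schema (pixel-coordinate waypoints, mask bounding boxes) and the function below are hypothetical, not the official PEEK format; consult the PEEK repository for the real label layout and overlay code.

```python
# Hypothetical illustration (not the official PEEK pipeline) of overlaying
# a predicted end-effector path and a task-relevant mask on an observation.
# The label schema (pixel-coordinate waypoints, mask boxes) is assumed.
from PIL import Image, ImageDraw

def overlay_labels(obs, path_xy, mask_boxes):
    """Dim everything outside the mask boxes, then draw the 2D path."""
    img = obs.convert("RGB")
    dimmed_bg = img.point(lambda p: p // 3)        # darkened copy of the frame
    mask = Image.new("L", img.size, 0)             # 0 = use darkened copy
    mdraw = ImageDraw.Draw(mask)
    for box in mask_boxes:
        mdraw.rectangle(box, fill=255)             # 255 = keep original pixels
    out = Image.composite(img, dimmed_bg, mask)
    draw = ImageDraw.Draw(out)
    draw.line(path_xy, fill=(255, 0, 0), width=3)  # end-effector path polyline
    for x, y in path_xy:                           # waypoint markers
        draw.ellipse([x - 4, y - 4, x + 4, y + 4], fill=(255, 0, 0))
    return out

obs = Image.new("RGB", (256, 256), (120, 120, 120))   # stand-in observation
out = overlay_labels(obs,
                     path_xy=[(60, 200), (120, 140), (180, 90)],
                     mask_boxes=[(40, 60, 220, 230)])
out.save("peek_overlay_example.png")
```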

## Paper
[PEEK: Guiding and Minimal Image Representations for Zero-Shot Generalization of Robot Manipulation Policies](https://huggingface.co/papers/2509.18282)

## Project Page
[https://peek-robot.github.io](https://peek-robot.github.io/)

## Code / GitHub Repository
The main PEEK framework and associated code can be found in the GitHub repository:
[https://github.com/peek-robot/peek](https://github.com/peek-robot/peek)

## Sample Usage
This dataset provides the pre-computed PEEK VLM path and mask labels for the BRIDGE_v2 dataset. The labels are meant to be paired with the existing BRIDGE_v2 data to guide robot manipulation policies during training and inference, as described in the PEEK paper: download the labels and integrate them into your policy-learning pipeline to equip manipulation policies with minimal visual cues for improved zero-shot generalization. For detailed instructions on incorporating the labels into policy training, or for examples of VLM data labeling, see the [PEEK GitHub repository](https://github.com/peek-robot/peek), particularly the `peek_vlm` folder.
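
As a minimal sketch, the label files can be fetched with the standard `huggingface_hub` client; the `repo_id` below is a placeholder, not this dataset's actual identifier.

```python
# Sketch of downloading the label files with huggingface_hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="ORG/peek-vlm-bridge-v2-labels",  # placeholder, not the real id
    repo_type="dataset",
)
print("Labels downloaded to:", local_dir)
```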
35
+
36
+ ## Citation
37
+
38
+ If you find this dataset useful for your research, please cite the original paper:
39
+
40
+ ```bibtex
41
+ @inproceedings{zhang2025peek,
42
+ title={PEEK: Guiding and Minimal Image Representations for Zero-Shot Generalization of Robot Manipulation Policies},
43
+ author={Jesse Zhang and Marius Memmel and Kevin Kim and Dieter Fox and Jesse Thomason and Fabio Ramos and Erdem Bıyık and Abhishek Gupta and Anqi Li},
44
+ booktitle={arXiv:2509.18282},
45
+ year={2025},
46
+ }
47
+ ```