---
dataset_info:
  features:
  - name: uid
    dtype: string
  - name: video_id
    dtype: string
  - name: start_second
    dtype: float32
  - name: end_second
    dtype: float32
  - name: caption
    dtype: string
  - name: fx
    dtype: float32
  - name: fy
    dtype: float32
  - name: cx
    dtype: float32
  - name: cy
    dtype: float32
  - name: vid_w
    dtype: int32
  - name: vid_h
    dtype: int32
  - name: annotation
    list:
    - name: mano_params
      struct:
      - name: global_orient
        list: float32
      - name: hand_pose
        list: float32
      - name: betas
        list: float32
    - name: is_right
      dtype: bool
    - name: keypoints_3d
      list: float32
    - name: keypoints_2d
      list: float32
    - name: vertices
      list: float32
    - name: box_center
      list: float32
    - name: box_size
      dtype: float32
    - name: camera_t
      list: float32
    - name: focal_length
      list: float32
  splits:
  - name: train
    num_examples: 241912
  - name: test
    num_examples: 5108
configs:
- config_name: default
  data_files:
  - split: train
    path: EgoHaFL_train.csv
  - split: test
    path: EgoHaFL_test.csv
license: mit
language:
- en
pretty_name: 'EgoHaFL: Egocentric 3D Hand Forecasting Dataset with Language Instruction'
size_categories:
- 200K
---

## 🧾 **Annotation Structure**

Each sample's `annotation` field is a list of 16 per-frame hand entries with the following structure:

```
annotation
└── list (length = 16)
    β”œβ”€β”€ [0]
    β”‚   β”œβ”€β”€ mano_params
    β”‚   β”‚   β”œβ”€β”€ global_orient
    β”‚   β”‚   β”œβ”€β”€ hand_pose
    β”‚   β”‚   └── betas
    β”‚   β”œβ”€β”€ is_right
    β”‚   β”œβ”€β”€ keypoints_3d
    β”‚   β”œβ”€β”€ keypoints_2d
    β”‚   β”œβ”€β”€ vertices
    β”‚   β”œβ”€β”€ box_center
    β”‚   β”œβ”€β”€ box_size
    β”‚   β”œβ”€β”€ camera_t
    β”‚   └── focal_length
    β”œβ”€β”€ [1]
    β”‚   └── ...
    β”œβ”€β”€ [2]
    β”‚   └── ...
    └── ...
```

---

## πŸŽ₯ **Source of Video Data**

The video clips used in **EgoHaFL** originate from the **Ego4D V1** dataset. For our experiments, we use the **original-length videos compressed to 224p resolution** to keep storage and training efficient.

Official Ego4D website: πŸ”— **[https://ego4d-data.org/](https://ego4d-data.org/)**

---

## 🧩 **Example of Use**

For details on how to load and use the EgoHaFL dataset, please refer to the **dataloader implementation** in our GitHub repository. A minimal loading sketch is also included at the end of this card.

πŸ”— **[https://github.com/ut-vision/SFHand](https://github.com/ut-vision/SFHand)**

---

## 🧠 **Supported Research Tasks**

* Egocentric 3D hand forecasting
* Hand motion prediction and trajectory modeling
* 3D hand pose estimation
* Hand–object interaction understanding
* Video–language multimodal modeling
* Temporal reasoning with 3D human hands

---

## πŸ“š **Citation**

If you use this dataset or find it helpful in your research, please cite:

```bibtex
@article{liu2025sfhand,
  title={SFHand: A Streaming Framework for Language-guided 3D Hand Forecasting and Embodied Manipulation},
  author={Liu, Ruicong and Huang, Yifei and Ouyang, Liangyang and Kang, Caixin and Sato, Yoichi},
  journal={arXiv preprint arXiv:2511.18127},
  year={2025}
}
```
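
---

## πŸ§ͺ **Minimal Loading Sketch**

The default config above points the `train` and `test` splits at CSV files, so the flat metadata columns can typically be pulled with the πŸ€— `datasets` library. The snippet below is only a minimal sketch under that assumption; `REPO_ID` is a hypothetical placeholder, and the official dataloader in the SFHand repository remains the reference implementation.

```python
from datasets import load_dataset

# Minimal sketch, assuming the default config of this card (train/test CSV splits).
# REPO_ID is a hypothetical placeholder: substitute this dataset's Hugging Face repo id.
REPO_ID = "your-namespace/EgoHaFL"

ds = load_dataset(REPO_ID)           # -> DatasetDict with "train" and "test" splits
sample = ds["train"][0]

# Flat per-clip metadata columns, named as in the schema above.
print("uid / video :", sample["uid"], sample["video_id"])
print("caption     :", sample["caption"])
print("clip span(s):", sample["start_second"], "->", sample["end_second"])
print("intrinsics  :", sample["fx"], sample["fy"], sample["cx"], sample["cy"])
print("frame size  :", sample["vid_w"], "x", sample["vid_h"])

# `annotation` holds 16 per-frame hand entries (MANO parameters, 2D/3D keypoints,
# vertices, crop box, camera translation, focal length). Depending on how the CSVs
# serialize this nested field, decoding it may require the SFHand dataloader.
```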
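
If the per-frame entries are available as Python dicts of flattened lists, they can be reshaped into arrays. The helper below is a hypothetical sketch: the shapes follow standard MANO conventions (21 keypoints, 778 mesh vertices, 10 shape parameters) and are not confirmed by this card, so treat the SFHand dataloader as the definitive layout.

```python
import numpy as np

def decode_frame(frame: dict) -> dict:
    """Hypothetical decoder for one entry of `annotation`.

    Shape assumptions (standard MANO conventions, not confirmed by this card):
    keypoints_3d: 21x3, keypoints_2d: 21x2, vertices: 778x3, betas: 10.
    """
    mano = frame["mano_params"]
    return {
        "is_right":      bool(frame["is_right"]),
        "betas":         np.asarray(mano["betas"], dtype=np.float32),
        "global_orient": np.asarray(mano["global_orient"], dtype=np.float32),
        "hand_pose":     np.asarray(mano["hand_pose"], dtype=np.float32),
        "keypoints_3d":  np.asarray(frame["keypoints_3d"], dtype=np.float32).reshape(21, 3),
        "keypoints_2d":  np.asarray(frame["keypoints_2d"], dtype=np.float32).reshape(21, 2),
        "vertices":      np.asarray(frame["vertices"], dtype=np.float32).reshape(778, 3),
        "box_center":    np.asarray(frame["box_center"], dtype=np.float32),
        "box_size":      float(frame["box_size"]),
        "camera_t":      np.asarray(frame["camera_t"], dtype=np.float32),
        "focal_length":  np.asarray(frame["focal_length"], dtype=np.float32),
    }

# Example (hypothetical): decode the first annotated frame of a training sample.
# frame_arrays = decode_frame(sample["annotation"][0])
```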