<p align="left">
  <a href="https://github.com/fudan-zvg/spar.git">
    <img alt="GitHub Code" src="https://img.shields.io/badge/Code-spar-black?&logo=github&logoColor=white" />
  </a>
  <a href="https://arxiv.org/abs/xxx">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-spar-red?logo=arxiv" />
  </a>
  <a href="https://fudan-zvg.github.io/spar">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-spar-blue" />
  </a>
</p>

# πŸ“¦ Spatial Perception And Reasoning Dataset – RGBD (SPAR-7M-RGBD)

> A large-scale multimodal dataset for **3D-aware spatial perception and reasoning** in vision-language models.

**SPAR-7M-RGBD** extends the original [SPAR-7M](https://huggingface.co/datasets/jasonzhango/SPAR-7M) with additional **depth maps**, **camera intrinsics**, and **camera poses**. It contains over **7 million QA pairs** across **33 spatial tasks**, built from **4,500+ richly annotated indoor 3D scenes**.

This version supports **single-view**, **multi-view**, and **video-based** inputs.

## πŸ“₯ Download

We provide **two versions** of the dataset:

| Version        | Description                                                                              |
|----------------|------------------------------------------------------------------------------------------|
| `SPAR-7M`      | RGB-only images + QA annotations                                                         |
| `SPAR-7M-RGBD` | Adds **depth maps**, **camera intrinsics**, and **pose matrices** for 3D-aware training  |

You can download both versions from **Hugging Face**:

```bash
# Download SPAR-7M (default)
huggingface-cli download jasonzhango/SPAR-7M --repo-type dataset

# Download SPAR-7M-RGBD (with depth and camera parameters)
huggingface-cli download jasonzhango/SPAR-7M-RGBD --repo-type dataset
```

These datasets are split into multiple `.tar.gz` parts due to Hugging Face file-size limits. After downloading all parts, run the following to extract:

```bash
# For SPAR-7M
cat spar-*.tar.gz | tar -xvzf -

# For SPAR-7M-RGBD
cat spar-rgbd-*.tar.gz | tar -xvzf -
```

Alternatively, if Hugging Face is not accessible, you can download through the [hf-mirror](https://hf-mirror.com/) endpoint using its `hfd.sh` script:

```bash
wget https://hf-mirror.com/hfd/hfd.sh
chmod a+x hfd.sh

export HF_ENDPOINT=https://hf-mirror.com

./hfd.sh jasonzhango/SPAR-7M --dataset
./hfd.sh jasonzhango/SPAR-7M-RGBD --dataset
```

The dataset directory structure is:
```
spar/
β”œβ”€β”€ rxr/
β”œβ”€β”€ scannet/
β”‚   β”œβ”€β”€ images/
β”‚   β”‚   └── scene0000_00/
β”‚   β”‚       β”œβ”€β”€ image_color/
β”‚   β”‚       β”œβ”€β”€ video_color/
β”‚   β”‚       β”œβ”€β”€ image_depth/   # only in SPAR-7M-RGBD
β”‚   β”‚       β”œβ”€β”€ video_depth/   # only in SPAR-7M-RGBD
β”‚   β”‚       β”œβ”€β”€ pose/          # only in SPAR-7M-RGBD
β”‚   β”‚       β”œβ”€β”€ video_pose/    # only in SPAR-7M-RGBD
β”‚   β”‚       β”œβ”€β”€ intrinsic/     # only in SPAR-7M-RGBD
β”‚   β”‚       └── video_idx.txt
β”‚   └── qa_jsonl/
β”‚       β”œβ”€β”€ train/
β”‚       β”‚   β”œβ”€β”€ depth_prediction_oo/
β”‚       β”‚   β”‚   β”œβ”€β”€ fill/
β”‚       β”‚   β”‚   β”‚   └── fill_76837.jsonl
β”‚       β”‚   β”‚   β”œβ”€β”€ select/
β”‚       β”‚   β”‚   └── sentence/
β”‚       β”‚   β”œβ”€β”€ obj_spatial_relation_oc/
β”‚       β”‚   └── spatial_imagination_oo_mv/
β”‚       └── val/
β”œβ”€β”€ scannetpp/
└── structured3d/
```
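
For the RGBD version, each frame's depth, intrinsics, and pose can be combined to lift pixels into a world-frame point cloud. The sketch below is illustrative only: it assumes depth is stored as 16-bit PNG in millimeters, `pose/` holds 4Γ—4 camera-to-world matrices, and `intrinsic/` holds the intrinsic matrix as plain text. The filenames (`0.png`, `0.txt`, `intrinsic_depth.txt`) are placeholders, so check the extracted files before using it.

```python
# Minimal sketch: back-project one SPAR-7M-RGBD frame into a world-frame
# point cloud. File formats and names below are ASSUMPTIONS (16-bit PNG
# depth in millimeters, plain-text intrinsic and camera-to-world pose);
# verify against the actual extracted files.
import numpy as np
from PIL import Image

scene = "spar/scannet/images/scene0000_00"
depth = np.asarray(Image.open(f"{scene}/image_depth/0.png"), dtype=np.float32) / 1000.0  # meters
K = np.loadtxt(f"{scene}/intrinsic/intrinsic_depth.txt")[:3, :3]  # hypothetical filename
pose = np.loadtxt(f"{scene}/pose/0.txt")                          # assumed 4x4 camera-to-world

# Pixel grid -> camera-frame 3D points (pinhole model)
h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))
z = depth.ravel()
valid = z > 0
x = (u.ravel() - K[0, 2]) * z / K[0, 0]
y = (v.ravel() - K[1, 2]) * z / K[1, 1]
pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=0)[:, valid]  # 4 x N homogeneous

# Camera frame -> world frame
pts_world = (pose @ pts_cam)[:3].T                                # N x 3
print(pts_world.shape)
```

If the stored poses turn out to be world-to-camera rather than camera-to-world, invert them with `np.linalg.inv(pose)` before the final transform.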

Each QA task (e.g., `depth_prediction_oc`, `spatial_relation_oo_mv`, etc.) is organized by **task type**, with subfolders for different **answer formats** (a minimal loading sketch follows the list):

- `fill/` – numerical or descriptive answers
- `select/` – multiple choice
- `sentence/` – natural language answers
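
Since each `.jsonl` file holds one QA record per line, a few lines of Python are enough to inspect a split. The record schema isn't specified above, so this sketch just prints the keys of the first record instead of assuming field names:

```python
# Minimal sketch: peek at one task/answer-format split. The path reuses the
# example file from the tree above; the per-record fields are unknown here,
# so we print the keys to discover the actual schema first.
import json

path = "spar/scannet/qa_jsonl/train/depth_prediction_oo/fill/fill_76837.jsonl"
with open(path) as f:
    for line in f:
        qa = json.loads(line)      # one QA pair per line
        print(sorted(qa.keys()))
        break
```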

## πŸ“š Bibtex

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Zhou, Yanpeng and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:xx},
}
```

<!-- ## πŸ“„ License

This dataset is licensed under the **Creative Commons Attribution 4.0 International (CC BY 4.0)**.

You may use, share, modify, and redistribute this dataset **for any purpose**, including commercial use, as long as proper attribution is given.

[Learn more](https://creativecommons.org/licenses/by/4.0/) -->