<p align="left">
  <a href="https://github.com/fudan-zvg/spar.git">
    <img alt="GitHub Code" src="https://img.shields.io/badge/Code-spar-black?&logo=github&logoColor=white" />
  </a>
  <a href="https://arxiv.org/abs/xxx">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-spar-red?logo=arxiv" />
  </a>
  <a href="https://fudan-zvg.github.io/spar">
    <img alt="Website" src="https://img.shields.io/badge/🌐_Website-spar-blue" />
  </a>
</p>

# 📦 Spatial Perception And Reasoning Dataset – RGBD (SPAR-7M-RGBD)
> A large-scale multimodal dataset for **3D-aware spatial perception and reasoning** in vision-language models.

**SPAR-7M-RGBD** extends the original [SPAR-7M](https://huggingface.co/datasets/jasonzhango/SPAR-7M) with additional **depth maps**, **camera intrinsics**, and **pose information**. It contains over **7 million QA pairs** across **33 spatial tasks**, built from **4,500+ richly annotated indoor 3D scenes**.

This version supports **single-view**, **multi-view**, and **video-based** inputs.

## 📥 Download
We provide **two versions** of the dataset:

| Version | Description |
|------------------|---------------------------------------------------------------------|
| `SPAR-7M` | RGB-only images + QA annotations |
| `SPAR-7M-RGBD` | Includes **depth maps**, **camera intrinsics**, and **pose matrices** for 3D-aware training |

You can download both versions from **Hugging Face**:

```bash
# Download SPAR-7M (default)
huggingface-cli download jasonzhango/SPAR-7M --repo-type dataset

# Download SPAR-7M-RGBD (with depth and camera parameters)
huggingface-cli download jasonzhango/SPAR-7M-RGBD --repo-type dataset
```
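
If you prefer to script the download, the same repositories can also be fetched from Python with the `huggingface_hub` library. This is only a minimal sketch; the `local_dir` targets are illustrative and can point anywhere you keep datasets:

```python
# Sketch: the same downloads from Python via huggingface_hub (pip install huggingface_hub).
# The local_dir values below are illustrative; change them to your own storage path.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="jasonzhango/SPAR-7M", repo_type="dataset", local_dir="SPAR-7M")
snapshot_download(repo_id="jasonzhango/SPAR-7M-RGBD", repo_type="dataset", local_dir="SPAR-7M-RGBD")
```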

These datasets are split into multiple `.tar.gz` parts due to Hugging Face file size limits. After downloading all parts, run the following to extract:
```bash
# For SPAR-7M
cat spar-*.tar.gz | tar -xvzf -

# For SPAR-7M-RGBD
cat spar-rgbd-*.tar.gz | tar -xvzf -
```

Alternatively, if Hugging Face is not accessible, you can download through the [hf-mirror](https://hf-mirror.com/) endpoint using its `hfd.sh` helper script:
```bash
wget https://hf-mirror.com/hfd/hfd.sh
chmod a+x hfd.sh
export HF_ENDPOINT=https://hf-mirror.com

./hfd.sh jasonzhango/SPAR-7M --dataset
./hfd.sh jasonzhango/SPAR-7M-RGBD --dataset
```

The dataset directory structure is:
```
spar/
├── rxr/
├── scannet/
│   ├── images/
│   │   └── scene0000_00/
│   │       ├── image_color/
│   │       ├── video_color/
│   │       ├── image_depth/    # only in SPAR-7M-RGBD
│   │       ├── video_depth/    # only in SPAR-7M-RGBD
│   │       ├── pose/           # only in SPAR-7M-RGBD
│   │       ├── video_pose/     # only in SPAR-7M-RGBD
│   │       ├── intrinsic/      # only in SPAR-7M-RGBD
│   │       └── video_idx.txt
│   └── qa_jsonl/
│       ├── train/
│       │   ├── depth_prediction_oo/
│       │   │   ├── fill/
│       │   │   │   └── fill_76837.jsonl
│       │   │   ├── select/
│       │   │   └── sentence/
│       │   ├── obj_spatial_relation_oc/
│       │   └── spatial_imagination_oo_mv/
│       └── val/
├── scannetpp/
└── structured3d/
```
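
The folders marked `# only in SPAR-7M-RGBD` are what enable 3D-aware training: a depth frame can be back-projected through the camera intrinsics and carried into world coordinates with the pose. The sketch below shows only that math; the file names, the millimeter depth scale, and the plain-text 4x4 pose / 3x3 intrinsic layout are assumptions for illustration and should be checked against the released files.

```python
# Sketch: back-project one depth frame into a world-space point cloud.
# Assumptions (verify against the released files): depth is a 16-bit PNG in
# millimeters, intrinsic/ holds a 3x3 (or 4x4) matrix as plain text, and
# pose/ holds a camera-to-world 4x4 matrix as plain text. Names are illustrative.
import numpy as np
from PIL import Image

depth = np.asarray(Image.open("image_depth/000000.png"), dtype=np.float32) / 1000.0  # meters
K = np.loadtxt("intrinsic/intrinsic_depth.txt")[:3, :3]   # upper-left 3x3 intrinsics
T_cw = np.loadtxt("pose/000000.txt").reshape(4, 4)        # camera-to-world transform

h, w = depth.shape
u, v = np.meshgrid(np.arange(w), np.arange(h))            # pixel coordinates
z = depth
x = (u - K[0, 2]) * z / K[0, 0]                           # pinhole back-projection
y = (v - K[1, 2]) * z / K[1, 1]

pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
valid = pts_cam[:, 2] > 0                                  # drop pixels with missing depth
pts_world = (pts_cam[valid] @ T_cw.T)[:, :3]               # N x 3 world-space points
print(pts_world.shape)
```
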
Each QA task (e.g., `depth_prediction_oc`, `spatial_relation_oo_mv`, etc.) is organized by **task type**, with subfolders for different **answer formats** (a minimal loading sketch follows the list):
- `fill/` – numerical or descriptive answers
- `select/` – multiple choice
- `sentence/` – natural language answers
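
Each `.jsonl` file holds one QA record per line. Below is a minimal loading sketch under that standard JSON Lines assumption; it prints each record's keys rather than guessing at field names, so the files themselves remain the source of truth.

```python
# Sketch: walk the qa_jsonl tree and peek at a few QA records.
# Paths follow the directory layout above; field names are whatever the
# .jsonl files actually contain, so we only print the keys here.
import json
from pathlib import Path

root = Path("spar/scannet/qa_jsonl/train")
for jsonl_path in sorted(root.rglob("*.jsonl"))[:1]:      # first file only, as a demo
    with jsonl_path.open() as f:
        for i, line in enumerate(f):
            record = json.loads(line)
            print(jsonl_path.relative_to(root), sorted(record.keys()))
            if i >= 2:                                     # look at three records
                break
```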

## 📄 Bibtex

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Zhou, Yanpeng and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:xx},
}
```

<!-- ## 📜 License

This dataset is licensed under the **Creative Commons Attribution 4.0 International (CC BY 4.0)**.

You may use, share, modify, and redistribute this dataset **for any purpose**, including commercial use, as long as proper attribution is given.

[Learn more](https://creativecommons.org/licenses/by/4.0/) -->