DenisKochetov committed on
Commit c71cc9f · verified · 1 Parent(s): 06646f7

add readme

Files changed (1)
  1. README.md +41 -3
README.md CHANGED
@@ -1,3 +1,41 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ ---
+ # [APTv2 Dataset](https://github.com/ViTAE-Transformer/APTv2)
+
+ **APTv2** is a large-scale benchmark for **animal pose estimation and tracking** across 30 species.
+ It provides high-quality **keypoint** and **tracking annotations** for 84,611 animal instances spanning **2,749 video clips** (41,235 frames total).
+
+ ### 📦 Dataset Overview
+
+ * **Total videos:** 2,749
+ * **Frames per clip:** 15
+ * **Total frames:** 41,235
+ * **Annotated instances:** 84,611
+ * **Species:** 30
+ * **Benchmark tracks:**
+
+   1. Single-frame pose estimation
+   2. Low-data generalization
+   3. Pose tracking
+
+ ### 🧠 Citation
+
+ If you use this dataset, please cite:
+
+ ```bibtex
+ @misc{yang2023aptv2,
+   title={APTv2: Benchmarking Animal Pose Estimation and Tracking with a Large-scale Dataset and Beyond},
+   author={Yuxiang Yang and Yingqi Deng and Yufei Xu and Jing Zhang},
+   year={2023},
+   eprint={2312.15612},
+   archivePrefix={arXiv},
+   primaryClass={cs.CV}
+ }
+ ```
+
+ ### 📚 Reference
+
+ Original paper: [APTv2 on arXiv](https://arxiv.org/abs/2312.15612)
+
+ Code: [GitHub](https://github.com/ViTAE-Transformer/APTv2)
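
For a quick sanity check of the statistics listed in the Dataset Overview, a minimal sketch of inspecting one annotation file is shown below. It assumes the annotations are released as COCO-style JSON with top-level `images`, `annotations`, and `categories` lists; that layout and the file name `annotations/train_annotations.json` are illustrative assumptions, not a documented API, so consult the original repository for the authoritative format.

```python
import json
from collections import Counter
from pathlib import Path

# Illustrative path -- point this at wherever the APTv2 annotation
# JSON files end up after downloading and extracting the dataset.
ann_path = Path("annotations/train_annotations.json")  # hypothetical file name

with ann_path.open() as f:
    # Assumed COCO-style dict with "images", "annotations", "categories".
    coco = json.load(f)

print(f"images:      {len(coco['images'])}")
print(f"annotations: {len(coco['annotations'])}")
print(f"species:     {len(coco['categories'])}")

# Count annotated instances per species (category) and show the top five.
id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
per_species = Counter(id_to_name[a["category_id"]] for a in coco["annotations"])
for name, count in per_species.most_common(5):
    print(f"{name:20s} {count}")
```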