nielsr HF Staff committed on
Commit 1ba4c4b · verified · 1 Parent(s): aeedd0a

Improve dataset card: Add project overview, sample usage, and refine metadata


This PR enhances the dataset card for `SlimPajama-Meta-rater-Cleanliness-30B` by:
- Adding `license: apache-2.0` and additional tags (`data-selection`, `data-quality`) to the YAML metadata for better discoverability and accuracy.
- Including a direct link to the project's ACL Anthology page.
- Integrating the "Overview" and "PRRC Framework" sections from the original GitHub repository to provide richer context about the Meta-rater project and its methodology.
- Providing a clear sample code snippet for loading the dataset using the Hugging Face `datasets` library.
- Updating the citation to the official ACL Anthology BibTeX entry.

These updates give users a more comprehensive understanding of the dataset and make it easier to use.

Files changed (1): README.md (+76 -11)
README.md CHANGED
@@ -1,12 +1,15 @@
 ---
- task_categories:
- - text-generation
 language:
 - en
- tags:
- - pretrain
 size_categories:
 - 10B<n<100B
 ---

 # Top 30B token SlimPajama Subset selected by the Cleanliness rater
@@ -14,6 +17,30 @@ size_categories:
 This repository contains the dataset described in the paper [Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models](https://huggingface.co/papers/2504.14194).

 Code: https://github.com/opendatalab/Meta-rater

 ## Dataset Description

@@ -24,6 +51,23 @@ This dataset contains the top 30B tokens from the SlimPajama-627B corpus, select
 - **Quality metric**: Cleanliness (0–5 scale, see below)
 - **Annotation coverage**: 100% of selected subset

 ## Dataset Statistics

 - **Total tokens**: 30B (subset of SlimPajama-627B)
@@ -49,14 +93,35 @@ Scores are assigned by a ModernBERT model fine-tuned on Llama-3.3-70B-Instruct a
 ## Citation

- If you use this dataset, please cite:

 ```bibtex
- @article{zhuang2025meta,
- title={Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models},
- author={Zhuang, Xinlin and Peng, Jiahui and Ma, Ren and Wang, Yinfan and Bai, Tianyi and Wei, Xingjian and Qiu, Jiantao and Zhang, Chi and Qian, Ying and He, Conghui},
- journal={arXiv preprint arXiv:2504.14194},
- year={2025}
 }
 ```
@@ -72,4 +137,4 @@ This dataset is released under the same license as the original SlimPajama datas
 ---

- **Made with ❤️ by the OpenDataLab team**
 
---
language:
- en
size_categories:
- 10B<n<100B
+ task_categories:
+ - text-generation
+ tags:
+ - pretrain
+ - data-quality
+ - data-selection
+ license: apache-2.0
---

# Top 30B token SlimPajama Subset selected by the Cleanliness rater

This repository contains the dataset described in the paper [Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models](https://huggingface.co/papers/2504.14194).

Code: https://github.com/opendatalab/Meta-rater
+ Project page: https://aclanthology.org/2025.acl-long.533/
+
+ ## 🎯 Overview
+
+ The composition of pre-training datasets for large language models (LLMs) remains largely undisclosed, hindering transparency and efforts to optimize data quality, a critical driver of model performance. **Meta-rater** introduces a multi-dimensional data selection framework that **doubles convergence speed** and improves downstream task performance by **3.23%** compared to random selection.
+
+ ### 🏆 Key Achievements
+
+ - **📈 2x Faster Convergence**: Meta-rater achieves equivalent performance using only 15B tokens, versus 30B tokens with random selection
+ - **🎯 3.23% Performance Gain**: Significant improvement over random sampling on downstream tasks
+ - **🔍 Multi-dimensional Quality Assessment**: Novel PRRC framework (Professionalism, Readability, Reasoning, Cleanliness)
+ - **📊 Scalable Framework**: Benefits persist and grow from 1.3B to 7.2B parameter models
+ - **🏗️ Comprehensive Dataset**: First fully annotated 627B-token SlimPajama with 25 quality metrics
+
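The learned-weighting idea behind these results (proxy-model runs feed a regression that predicts validation loss from quality scores) can be sketched with synthetic numbers. Everything below is invented for illustration, not the paper's actual pipeline or data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins: 1000 candidate data mixtures, each summarized by its
# mean PRRC scores (Professionalism, Readability, Reasoning, Cleanliness; 0-5).
n_mixtures = 1000
scores = rng.uniform(0, 5, size=(n_mixtures, 4))

# Pretend each mixture trained a small proxy model and we measured validation
# loss; here the "true" relationship is a made-up linear one plus noise.
true_weights = np.array([0.2, 0.1, 0.3, 0.4])
val_loss = 3.0 - scores @ true_weights + rng.normal(0.0, 0.05, n_mixtures)

# Fit a regression predicting validation loss from the quality scores; the
# negated coefficients then act as learned weights over the four dimensions.
X = np.column_stack([np.ones(n_mixtures), scores])
coef, *_ = np.linalg.lstsq(X, val_loss, rcond=None)
learned_weights = -coef[1:]

print(learned_weights.round(2))
```

With enough proxy runs the recovered weights track the made-up ones closely, which is the mechanism that lets a Meta-rater-style selector rank score combinations by predicted loss.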
+ ## 🧠 PRRC Framework
+
+ We introduce four novel evaluation dimensions to comprehensively assess data quality:
+
+ | Dimension | Description | F1 Score |
+ |-----------|-------------|----------|
+ | **🎓 Professionalism** | Degree of expertise and technical knowledge required | 91.57% |
+ | **📖 Readability** | Ease of understanding and text clarity | 87.47% |
+ | **🧮 Reasoning** | Complexity of logical thinking and analysis | 89.59% |
+ | **✨ Cleanliness** | Format quality and noise-free content | 87.88% |

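For intuition, selecting a "top N tokens" subset with a single rater such as Cleanliness amounts to keeping the highest-scored documents until a token budget is exhausted. A toy sketch follows; the documents, scores, token counts, and budget are all invented (the real pipeline runs over SlimPajama-627B with a 30B-token budget):

```python
# Hypothetical scored documents; field names mimic, but are not, the real schema.
docs = [
    {"text": "clean, well-formatted article ...", "cleanliness": 4.8, "tokens": 1200},
    {"text": "boilerplate-heavy page ...", "cleanliness": 1.2, "tokens": 900},
    {"text": "readable blog post ...", "cleanliness": 3.9, "tokens": 700},
    {"text": "noisy scrape ...", "cleanliness": 0.5, "tokens": 1500},
]

TOKEN_BUDGET = 2000  # stand-in for the real 30B-token budget

# Greedily keep the highest-scoring documents that still fit the budget.
selected, used = [], 0
for doc in sorted(docs, key=lambda d: d["cleanliness"], reverse=True):
    if used + doc["tokens"] <= TOKEN_BUDGET:
        selected.append(doc)
        used += doc["tokens"]

print([d["cleanliness"] for d in selected], used)
```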
## Dataset Description

- **Quality metric**: Cleanliness (0–5 scale, see below)
- **Annotation coverage**: 100% of selected subset

+ ## Sample Usage
+
+ You can load this dataset using the Hugging Face `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset
+ dataset = load_dataset("opendatalab/SlimPajama-Meta-rater-Cleanliness-30B")
+
+ # Print dataset information
+ print(dataset)
+
+ # Access a sample from the training split
+ # print(dataset["train"][0])
+ ```
+
## Dataset Statistics

- **Total tokens**: 30B (subset of SlimPajama-627B)

## Citation

+ If you use Meta-rater in your research, please cite our paper:

```bibtex
+ @inproceedings{zhuang-etal-2025-meta,
+     title = "Meta-rater: A Multi-dimensional Data Selection Method for Pre-training Language Models",
+     author = "Zhuang, Xinlin and
+       Peng, Jiahui and
+       Ma, Ren and
+       Wang, Yinfan and
+       Bai, Tianyi and
+       Wei, Xingjian and
+       Qiu, Jiantao and
+       Zhang, Chi and
+       Qian, Ying and
+       He, Conghui",
+     editor = "Che, Wanxiang and
+       Nabende, Joyce and
+       Shutova, Ekaterina and
+       Pilehvar, Mohammad Taher",
+     booktitle = "Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
+     month = jul,
+     year = "2025",
+     address = "Vienna, Austria",
+     publisher = "Association for Computational Linguistics",
+     url = "https://aclanthology.org/2025.acl-long.533/",
+     doi = "10.18653/v1/2025.acl-long.533",
+     pages = "10856--10896",
+     ISBN = "979-8-89176-251-0",
+     abstract = "The composition of pre-training datasets for large language models (LLMs) remains largely undisclosed, hindering transparency and efforts to optimize data quality{---}a critical driver of model performance. Current data selection methods, such as natural language quality assessments, diversity-based filters, and classifier-based approaches, are limited by single-dimensional evaluation or redundancy-focused strategies. To address these gaps, we propose four dimensions to evaluate data quality: professionalism, readability, reasoning, and cleanliness. We further introduce \textbf{Meta-rater}, a multi-dimensional data selection method that integrates these dimensions with existing quality metrics through learned optimal weightings. Meta-rater employs proxy models to train a regression model that predicts validation loss, enabling the identification of optimal combinations of quality scores. Experiments demonstrate that Meta-rater \textbf{doubles convergence speed} for 1.3B parameter models and improves downstream task performance by \textbf{3.23{\%}}, with advantages that scale to models as large as 7.2B parameters. Our work establishes that holistic, multi-dimensional quality integration significantly outperforms conventional single-dimension approaches, offering a scalable paradigm for enhancing pre-training efficiency and model capability. To advance future research, we release scripts, data, and models at \url{https://github.com/opendatalab/Meta-rater}."
}
```
 
---

+ **Made with ❤️ by the OpenDataLab team**