---
license: odc-by
task_categories:
  - text-generation
language:
  - en
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/common_crawl-art_and_design-0016/*
---

⚠️ NOTE: Full dataset coming very soon! ⚠️



# Dolma 3 Mix (6T)

Dolma 3 Mix (6T) is the collection of data used during the pretraining stage of the Olmo-3-1025-7B model. It comprises ~6 trillion tokens drawn from a diverse mix of web content, academic publications, code, and more. The majority of the tokens come from Common Crawl.

For more information on Dolma, please see our original release here.

## Dataset Sources

### Source Sizes

This dataset contains the full mix of documents used to train Olmo 3 7B.

| Source | Doc Type | Tokens | Bytes (uncompressed) | Documents | License |
|---|---|---|---|---|---|
| common_crawl | web pages | 4.51T | 18.0TB | 3.15B | ODC-BY |
| olmocr_science_pdfs | academic papers | 805B | 3.22TB | 83.8M | ODC-BY |
| stack_edu | code | 409B | 1.64TB | 525.8M | ODC-BY |
| finemath-3plus | mathematics | 151B | 607GB | 95.5M | ODC-BY |
| rpj-proofpile-arxiv | research papers | 50.9B | 203GB | 9.10M | ODC-BY |
| dolma1_7-wiki-en | encyclopedic | 2.51B | 10.0GB | 4.24M | ODC-BY |
| **Total** | | 5.93T | 23.7TB | 3.87B | ODC-BY |

### Mix Compositions

| Source | Source % (6T) | Mix % (6T) |
|---|---|---|
| common_crawl | 76.07% | 76.07% |
| olmocr_science_pdfs | 13.57% | 13.57% |
| stack_edu | 6.89% | 6.89% |
| finemath-3plus | 2.56% | 2.56% |
| rpj-proofpile-arxiv | 0.86% | 0.86% |
| dolma1_7-wiki-en | 0.04% | 0.04% |
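As a sanity check, the mix percentages above can be recomputed from the token counts in the Source Sizes table. The snippet below is a minimal sketch using the published per-source token counts (expressed in billions of tokens); the recomputed shares agree with the table to within rounding of the reported counts.

```python
# Per-source token counts from the Source Sizes table, in billions of tokens
# (e.g. 4.51T = 4510B). Values are as published in this card.
token_counts = {
    "common_crawl": 4510,
    "olmocr_science_pdfs": 805,
    "stack_edu": 409,
    "finemath-3plus": 151,
    "rpj-proofpile-arxiv": 50.9,
    "dolma1_7-wiki-en": 2.51,
}

# Total is ~5928B, i.e. the ~5.93T reported in the table.
total = sum(token_counts.values())

# Each source's share of the mix, as a percentage of total tokens.
mix = {src: 100 * count / total for src, count in token_counts.items()}

for src, pct in mix.items():
    print(f"{src}: {pct:.2f}%")
```

Small discrepancies (on the order of 0.01%) against the Mix Compositions table come from the token counts themselves being rounded to three significant figures.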

## Licensing Information

Dolma 3 Mix is licensed under the Open Data Commons Attribution License v1.0 (ODC-By). It is intended for research and educational use. For more information, please see our Responsible Use Guidelines.

## Citation

A technical manuscript is forthcoming! In the meantime, see: https://allenai.org/papers/olmo3