---
annotations_creators:
- Duygu Altinok
language:
- tr
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
source_datasets:
- original
pretty_name: BellaTurca
config_names:
- AkademikDerlem
- OzenliDerlem
- temiz-mC4
- temiz-OSCAR
dataset_info:
- config_name: AkademikDerlem
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3720092
num_examples: 668109
- config_name: OzenliDerlem
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 4619020
num_examples: 1388533
- config_name: temiz-OSCAR
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 12680424
num_examples: 19612617
- config_name: temiz-mC4
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 12680424
num_examples: 55882753
configs:
- config_name: AkademikDerlem
data_files:
- split: train
path: akademik-derlem/*.jsonl
- config_name: OzenliDerlem
data_files:
- split: train
path: ozenli-derlem/*.jsonl
- config_name: temiz-OSCAR
data_files:
- split: train
path: temiz-oscar/*.jsonl
- config_name: temiz-mC4
data_files:
- split: train
path: temiz-mC4/*.jsonl
size_categories:
- 100M<n<1B
---
# Dataset Card for BellaTurca
BellaTurca is the first large-scale Turkish corpus collection for training language models. The total size is around 245GB, roughly 30 billion words. BellaTurca's focus is high quality and diversity, as well as size.
This collection is made up of five datasets: AkademikDerlem, OzenliDerlem, ForumSohbetleri, Temiz OSCAR and Temiz mC4. Originally a book corpus was included as well, but it has been excluded because it contains copyrighted material.
- **AkademikDerlem** is compiled from online publications and theses that come with a permissive licence, mostly from Dergipark.
- **OzenliDerlem** is compiled from the web, from a selection of websites curated with great care ("özen") around the curation topics to ensure quality. The topics include travel, books, fairy tales, movie reviews, technology and writing; in general, highbrow ("entel dantel" 😬😬😬) topics.
- **ForumSohbetleri** is compiled from popular forum websites such as memurlar.net, kadinlarklubu.com and more.
- **Temiz OSCAR** includes several OSCAR corpora, cleaned extensively to ensure quality.
- **Temiz mC4** is a version of CulturaX that is cleaned further to ensure quality.
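Each config can be loaded with the `datasets` library. A minimal sketch (the repo id below is a placeholder for this repository's actual id; streaming is used because the subcorpora are large):

```python
from datasets import load_dataset

# "user/BellaTurca" is a placeholder; substitute this repository's id.
# streaming=True avoids downloading the whole subcorpus up front.
oscar = load_dataset("user/BellaTurca", "temiz-OSCAR",
                     split="train", streaming=True)

for example in oscar:
    print(example["text"][:200])  # each instance has a single "text" field
    break
```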
Each subcorpus has its own HF repo. The cleaning of AkademikDerlem mostly focuses on fixing OCR mistakes.
The cleaning pipelines of Temiz mC4 and Temiz OSCAR focus on filtering out low-quality content, such as ads, repetitive content and adult content.
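The concrete filters live in the released cleaning code; as a rough illustration, quality filtering of this kind usually combines phrase blocklists with repetition heuristics. In the sketch below, the Turkish ad phrases and all thresholds are our own assumptions, not the pipeline's:

```python
import re

# Illustrative heuristics only; the actual filters are in the released
# cleaning code. Ad phrases: "free shipping", "buy now", "click".
AD_PHRASES = re.compile(r"ücretsiz kargo|hemen satın al|tıklayın", re.IGNORECASE)

def looks_low_quality(text: str) -> bool:
    words = text.split()
    if len(words) < 20:                     # too short to carry content
        return True
    if AD_PHRASES.search(text):             # ad-like boilerplate
        return True
    if len(set(words)) / len(words) < 0.3:  # highly repetitive text
        return True
    return False
```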
Both of these datasets are deduplicated. Temiz OSCAR is deduplicated per split and then cross-split, as later OSCAR releases usually contain parts of previous ones. Temiz mC4 is likewise deduplicated within itself. We then cross-deduplicated Temiz OSCAR against Temiz mC4, since both datasets come from Common Crawl and may have instances in common.
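The exact recipe is described in the paper and the released code; the sketch below only illustrates the idea of exact-match deduplication carried across splits and corpora (the function name and hashing choice are ours):

```python
import hashlib

def iter_deduplicated(streams):
    """Yield texts from several corpora in order, skipping exact duplicates.

    `streams` is an iterable of iterables of strings, e.g. OSCAR splits
    followed by mC4 shards, so each stream is also deduplicated against
    everything that came before it (cross-split, then cross-corpus).
    """
    seen = set()
    for stream in streams:
        for text in stream:
            # Store fixed-size hashes instead of full documents
            # to keep the lookup set small in memory.
            digest = hashlib.sha256(text.strip().encode("utf-8")).digest()
            if digest not in seen:
                seen.add(digest)
                yield text
```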
OzenliDerlem's text was already drawn from websites that promise high-quality content, yet we still ran a few rounds of cleaning for quality. ForumSohbetleri was cleaned carefully, especially for undesired characters, without losing emoji characters.
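As a sketch of that character cleaning (the allowlist and emoji ranges below are illustrative assumptions; the exact character classes are defined in the released code):

```python
import re

# Keep word characters, whitespace, common punctuation and emoji;
# strip everything else. The ranges cover only the main emoji blocks.
EMOJI_RANGES = r"\U0001F300-\U0001FAFF\u2600-\u27BF"
NOT_ALLOWED = re.compile(r"[^\w\s.,;:!?'\"()%/&@#+\-" + EMOJI_RANGES + r"]")

def clean_text(text: str) -> str:
    return NOT_ALLOWED.sub("", text)
```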
For more details on the cleaning pipelines, the compilation process and the rest of BellaTurca, please refer to the publication as well as the blog post.
| Dataset | Num. instances | Size | Num. words |
|---|---|---|---|
| AkademikDerlem | 668,109 | 3.8GB | 430M |
| OzenliDerlem | 1,391,239 | 4.6GB | 557M |
| ForumSohbetleri | 2,925,975 | 13.41GB | 1.7B |
| temiz-OSCAR | 19,612,617 | 48GB | 5.91B |
| temiz-mC4 | 55,882,753 | 122GB | 15.4B |
| Total | 105,157,983 | 246.5GB | 30.89B |
Due to its structured nature, the ForumSohbetleri subset is served in its own dedicated HF repo; please visit that repo for more information.
## Instances
A typical instance from the dataset looks like:
```json
{
  "text": "Türkiyenin önde gelen ilaç şirketlerinden Nobel İlaç, enfeksiyonlardan korunma yolları içerikli eğitim programı ile, Junior Chamber International (JCI) tarafından 2016 Uluslararası Kurumsal Sosyal Sorumluluk ödülüne layık görüldü. Yarım asrı aşkın süredir insan sağlığının korunması ve iyileştirilmesi alanında çalışan Türkiyenin önde gelen ilaç firmalarından, %100 Türk sermayeli Nobel İlaç, kurumsal sosyal sorumluluk alanındaki kararlılığını bu proje ile bir kez daha gösterdi. Sağlıklı yaşam bilincinin erken yaşlarda eğitim yoluyla geliştirilmesi gerektiğine inanan Nobel İlaç ve gönüllü çalışanları, 7 farklı oturumda ilkokul çağındaki 120 çocuğa ulaşarak enfeksiyonlardan korunma yolları içerikli eğitimler verdiler. 114 ülkede 169.000 üyesi bulunan, toplumlarda pozitif değişime ve gelişime katkıda bulunmak için gençlerin liderlik, girişimcilik becerilerini ve sosyal sorumluluk bilincini geliştirme misyonunu üstlenen Junior Chamber International (JCI), Nobel İlaçın bu eğitim programını 2016 Uluslararası Kurumsal Sosyal Sorumluluk Ödülüne layık gördü. Nobel İlaç bu proje ile ülkemizin geleceğini şekillendirecek çocuklarımıza, nitelikli ve kaliteli eğitim verilmesine destek olarak, sağlıklı ve bilinçli bireyler yetişmesine katkı sağlamayı hedeflemiştir. Aynı zamanda ülkemiz çocuklarında farkındalık uyandırarak, gelecekte yapılacak benzer toplumsal projelerde aktif görev almaları için onlara rol model olmayı amaçlamıştır."
}
```
## Citation

```bibtex
@InProceedings{10.1007/978-3-031-70563-2_16,
author="Altinok, Duygu",
editor="N{\"o}th, Elmar
and Hor{\'a}k, Ale{\v{s}}
and Sojka, Petr",
title="Bella Turca: A Large-Scale Dataset of Diverse Text Sources for Turkish Language Modeling",
booktitle="Text, Speech, and Dialogue",
year="2024",
publisher="Springer Nature Switzerland",
address="Cham",
pages="196--213",
abstract="In recent studies, it has been demonstrated that incorporating diverse training datasets enhances the overall knowledge and generalization capabilities of large-scale language models, especially in cross-domain scenarios. In line with this, we introduce Bella Turca: a comprehensive Turkish text corpus, totaling 265GB, specifically curated for training language models. Bella Turca encompasses 25 distinct subsets of 4 genre, carefully chosen to ensure diversity and high quality. While Turkish is spoken widely across three continents, it suffers from a dearth of robust data resources for language modelling. Existing transformers and language models have primarily relied on repetitive corpora such as OSCAR and/or Wiki, which lack the desired diversity. Our work aims to break free from this monotony by introducing a fresh perspective to Turkish corpora resources. To the best of our knowledge, this release marks the first instance of such a vast and diverse dataset tailored for the Turkish language. Additionally, we contribute to the community by providing the code used in the dataset's construction and cleaning, fostering collaboration and knowledge sharing.",
isbn="978-3-031-70563-2"
}
```
## Acknowledgments
This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).