Dataset Card for distilled-yodas-spanish
Dataset Summary
Repository under construction: we are in the process of uploading the audio files. Sorry for the inconvenience.
Distilled YODAS Spanish is a high-quality subset of the Spanish portion of the YouTube-Oriented Dataset for Audio and Speech (YODAS). While the full YODAS corpus contains over 37,000 hours of Spanish speech across 43 million files, this dataset provides a distilled version of approximately 8,000 validated hours.
To construct this resource, we applied filtering steps to retain only utterances between 2 and 30 seconds long with at least three words in the transcription. These filtered segments were then validated using two dedicated Spanish verification models (Model A and Model B), alongside the automatic transcriptions originally provided by YODAS.
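A minimal sketch of this filter in Python; the function name and the boundary inclusivity are our assumptions, not the exact curation code:
def keep_segment(duration_s, transcription):
    # Retain utterances between 2 and 30 seconds with at least three words
    return 2.0 <= duration_s <= 30.0 and len(transcription.split()) >= 3

keep_segment(6.53, "y esta necesidad que tengo de construir")  # True
keep_segment(1.5, "hola")                                       # False: too short, too few words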
Consensus criteria were used to ensure transcription quality:
- ABR: triple matches among Model A, Model B, and the YODAS reference
- AB, AR, BR: two-source matches between the models and/or the reference
From the highest-confidence ABR subset, 30 hours were reserved for validation and another 30 hours for testing. The training splits combine all consensus categories; together with the validation and test sets, the corpus totals 7,997 hours of validated Spanish speech.
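To make the consensus labels concrete, here is an illustrative sketch (not the exact curation code), assuming agreement is checked on normalized transcriptions:
def consensus_category(a, b, r):
    # a, b: transcriptions from verification Models A and B; r: YODAS reference
    if a == b == r:
        return "ABR"  # triple match, highest confidence
    if a == b:
        return "AB"
    if a == r:
        return "AR"
    if b == r:
        return "BR"
    return None       # no two sources agree: segment is discarded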
This corpus enables large-scale, high-quality training and evaluation for Automatic Speech Recognition (ASR) and related tasks in Spanish.
Example Usage
The Distilled YODAS Spanish Corpus is divided into 6 loadable splits: train-ABR, train-AB, train-AR, train-BR, test-ABR and validation-ABR. To load the whole dataset, do:
from datasets import load_dataset
ds_dcys = load_dataset("langtech-veu/distilled-yodas-spanish")
To load a specific split (for example, the split with the best quality transcripts), do:
from datasets import load_dataset
ds_dcys_pm = load_dataset("langtech-veu/distilled-yodas-spanish", split="train-ABR")
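The dataset can also be streamed, which avoids downloading the archives up front (in streaming mode the audio path is relative to its archive; see "Data Fields" below):
from datasets import load_dataset

ds_stream = load_dataset("langtech-veu/distilled-yodas-spanish", split="train-ABR", streaming=True)
sample = next(iter(ds_stream))  # fetch one example without downloading the full split
print(sample["normalized_text"])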
Supported Tasks
automatic-speech-recognition: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe it to written text. The most common evaluation metric is the word error rate (WER).
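For example, WER can be computed with the external jiwer package (one option among several; not bundled with this dataset):
import jiwer

reference = "y esta necesidad que tengo de construir"
hypothesis = "y esta necesidad que tengo construir"
# WER = (substitutions + deletions + insertions) / reference word count
print(jiwer.wer(reference, hypothesis))  # one deleted word out of seven ≈ 0.143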
Languages
The audio is in Spanish.
Dataset Structure
Data Instances
{
  'audio_id': '0ApSOzuJ1z0-00001-00000971-00001510',
  'audio': {
    'path': '/home/carlos/.cache/HuggingFace/datasets/downloads/extracted/0b7d329065bdd7754db2d5938929a50e2c441b15c2b809b70dd19f7c3550938a/train-ABR_001/0AtKppuIzVA/0ApSOzuJ1z0-00001-00000971-00001510.flac',
    'array': array([0.0007019 , 0.00039673, 0.00064087, ..., 0.0012207 , 0.00146484, 0.0017395 ]),
    'sampling_rate': 16000
  },
  'corpus_id': 'distilled-yodas-spanish',
  'split': 'train-ABR',
  'language': 'Spanish',
  'duration': 6.5279998779296875,
  'video_id': '0AtKppuIzVA',
  'consensus': 'ABR',
  'normalized_text': 'y esta necesidad que tengo de construir'
}
Data Fields
- audio_id (string): Unique identifier for the audio segment.
- audio (datasets.Audio): A dictionary containing the path to the audio file, the decoded audio array, and the sampling rate. In non-streaming mode (default), the path points to the locally extracted audio file. In streaming mode, it corresponds to the relative path of the audio inside its archive, as files are not extracted locally.
- corpus_id (string): Identifier of the dataset.
- split (string): Indicates both the dataset split (train, validation, or test) and the consensus category (ABR, AB, AR, BR).
- language (string): Language of the speech segment. In this dataset, all segments are in Spanish.
- duration (float32): Duration of the audio file in seconds.
- video_id (string): YouTube video identifier. When appended to https://www.youtube.com/watch?v=, it leads to the original video.
- consensus (string): Indicates which sources agreed on the transcription. For instance, AB means that Model A and Model B both generated the same transcription.
- normalized_text (string): Final transcription after normalization (e.g., lowercasing and punctuation removal).
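As an illustration, the following snippet reads these fields from one example and rebuilds the source video URL, assuming ds_dcys_pm was loaded as in "Example Usage":
sample = ds_dcys_pm[0]
# Basic metadata for the segment
print(sample["audio_id"], sample["duration"], sample["audio"]["sampling_rate"])
print(sample["normalized_text"])
# Reconstruct the link to the original YouTube video
print("https://www.youtube.com/watch?v=" + sample["video_id"])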
Data Splits
The corpus is divided into the following splits.
| Split | Duration | Files | Consensus |
|---|---|---|---|
| test-ABR | 30h 42m | 33,757 | ABR |
| validation-ABR | 30h 00m | 33,005 | ABR |
| train-ABR | 1,512h 15m | 1,703,686 | ABR |
| train-AB | 3,550h 38m | 4,004,768 | AB |
| train-AR | 1,071h 09m | 1,219,697 | AR |
| train-BR | 1,803h 28m | 2,194,531 | BR |
To load a specific split, see the "Example Usage" section above.
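To train on all consensus categories at once, the four train splits can be concatenated, for example with datasets.concatenate_datasets:
from datasets import load_dataset, concatenate_datasets

splits = ["train-ABR", "train-AB", "train-AR", "train-BR"]
parts = [load_dataset("langtech-veu/distilled-yodas-spanish", split=s) for s in splits]
full_train = concatenate_datasets(parts)  # all consensus categories combined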
Dataset Creation
Curation Rationale
The motivation for curating this dataset stems from the need for high-quality ASR training data in Spanish. While the original YODAS corpus provides large quantities of speech, its transcriptions vary in quality.
To distill the most reliable segments, we trained two independent ASR systems (verification models A and B) and selected transcriptions based on system agreement. Perfect agreement was used as a strong indicator of correctness.
This approach enables the creation of a high-confidence dataset with minimal human effort, making it especially valuable for training robust ASR models in under-resourced languages.
Source Data
Initial Data Collection and Normalization
The audio data in this corpus was sourced directly from the YODAS dataset developed by ESPnet. No additional recordings were collected.
We did not alter the original segmentation or reprocess the audio files. Instead, we focused on curating the transcriptions by applying an automatic verification strategy.
Specifically, the corpus was processed using Verification Models A and B, trained independently on different datasets. Segments were retained based on model agreement, i.e., identical transcriptions across sources. This process produced a high-confidence subset of the original corpus.
Annotations
Annotation process
To further evaluate the effectiveness of our automatic validation protocol, we conducted human verification on a randomly selected subset of utterances from each consensus type (AB, AR, BR, ABR): one hundred audio segments per consensus category, totaling 400 samples. Each sample was reviewed independently by three members of our annotation team. The annotators listened to the audio recordings and marked the transcription as correct or incorrect. Any deviation in words, missing content, or misrecognized speech was flagged as incorrect.
Who are the annotators?
The annotators are part of the annotation team of the Barcelona Supercomputing Center and they were led by Carme Armentano-Oller. The annotation team consists of Paula Arnas, Marc Casadesús, Núria Poch, Carles-Andreu Rodríguez, and Carla Sanjuan.
Personal and Sensitive Information
The dataset consists of public YouTube videos released under a CC license. By using this dataset, you agree not to attempt to determine the identity of speakers in it.
Considerations for Using the Data
Social Impact of Dataset
The Distilled YODAS Spanish Corpus is a source of spontaneous speech data that will be valuable in the development of speech technologies for Spanish.
Discussion of Biases
The language is limited to the YouTube videos used to create the corpus and may not be representative of all domains.
Other Known Limitations
While the Distilled YODAS Spanish dataset provides nearly 8,000 hours of validated speech, a few limitations should be noted:
- Automatic transcription origin: The original YODAS transcriptions are automatically generated, which means that some residual errors may persist even after consensus validation.
- Domain bias: Since the source data comes from YouTube, the speech may not be fully representative of other domains such as telephone conversations, formal meetings, or broadcast news.
- Consensus filtering: The dataset only includes segments where at least two transcriptions matched. This improves reliability but may also discard useful speech segments where models disagreed.
- Language variety: The dataset focuses on Spanish, but it may not equally represent all dialectal varieties across the Spanish-speaking world.
- Speaker diarization: No speaker diarization or speaker count verification was performed. Some audio segments may feature multiple speakers.
- Background conditions: Noise levels, music presence, and overlapping speech were not systematically assessed or annotated.
- Code-switching: No explicit detection or annotation of code-switching was applied. Some segments may include speech in other languages (e.g., English or Catalan), without estimation of frequency or distribution.
Additional Information
Dataset Curators
The corpus was curated by Carlos Daniel Hernández Mena in 2025 in the Language Technologies Laboratory of the Barcelona Supercomputing Center under the supervision of Cristina España-Bonet.
Contact
For further information, please email [email protected].
Licensing Information
CC-BY-3.0 (same as the source).
Citation Information
@misc{BSC-disyodases-2025,
  title = {The Distilled YODAS Spanish Corpus},
  author = {Hern{\'a}ndez Mena, Carlos Daniel and Armentano-Oller, Carme and Espa{\~n}a-Bonet, Cristina},
  year = {2025},
  howpublished = {\url{https://huggingface.co/datasets/BSC-LT/distilled-yodas-spanish}},
  note = {Barcelona Supercomputing Center}
}
Funding
This work is funded by the Ministerio para la Transformación Digital y de la Función Pública and by the EU – NextGenerationEU, within the framework of the project Desarrollo de Modelos ALIA.
The training of the verification models was possible thanks to the computing time provided by the Barcelona Supercomputing Center through MareNostrum 5.
We acknowledge the EuroHPC Joint Undertaking for awarding us access to MareNostrum 5, hosted by BSC, Spain.
Special thanks to Irene Baucells and Joan Llop who conducted experiments with two Speech-LLMs that will be reported in the paper presenting this dataset.