---
dataset_info:
features:
- name: speaker
dtype: string
- name: prompt_text
dtype: string
- name: chosen_text
dtype: string
- name: rejected_text
dtype: string
- name: prompt
dtype: audio
- name: chosen
dtype: audio
- name: rejected
dtype: audio
- name: auto_bleu2
dtype: float64
splits:
- name: validation
num_bytes: 12199479621.038
num_examples: 20006
- name: train
num_bytes: 28797300145.392
num_examples: 47928
download_size: 36106016770
dataset_size: 40996779766.43
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
license: mit
task_categories:
- audio-to-audio
language:
- en
size_categories:
- 10K<n<100K
---
# SpokenSwag
We present SpokenSwag, as described in the paper ["_Slamming_: Training a Speech Language Model on One GPU in a Day"](https://arxiv.org/abs/2502.15814).
This dataset is based on [allenai/swag](https://huggingface.co/datasets/allenai/swag) and synthesised with 4 speakers from [hexgrad/Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M).
We show that performing DPO over this dataset can notably improve the performance of Speech Language Models.
We encourage you to also see the following resources for further information:
**Project Page:** https://pages.cs.huji.ac.il/adiyoss-lab/slamming/ \
**Paper:** https://arxiv.org/abs/2502.15814 \
**Code:** https://github.com/slp-rl/slamkit
If you use our dataset, please cite the paper as follows:
```
@misc{maimon2025slamming,
title={Slamming: Training a Speech Language Model on One GPU in a Day},
author={Gallil Maimon and Avishai Elmakies and Yossi Adi},
year={2025},
eprint={2502.15814},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2502.15814},
}
```
## Dataset Summary
A dataset used for post-training spoken language models with DPO, which was shown to notably improve semantic abilities.
Specifically, the dataset is based on the text-only dataset [allenai/swag](https://huggingface.co/datasets/allenai/swag), taking the correct
ending as the chosen continuation and a random wrong ending as the rejected one. These were then synthesised into speech with the TTS model
[hexgrad/Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M). We use 4 speakers - 2 male and 2 female. We generate both train and
validation splits from the original dataset.
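As an illustration, the sketch below shows how such preference pairs can be derived from SWAG. It assumes the `regular` configuration with its standard `startphrase`, `ending0`-`ending3` and `label` columns, and is not the authors' exact pipeline.
```python
import random
from datasets import load_dataset

# Assumption: SWAG's "regular" configuration with startphrase / ending0..ending3 / label columns.
swag = load_dataset('allenai/swag', 'regular', split='train')

def to_preference_pair(example):
    endings = [example['ending0'], example['ending1'],
               example['ending2'], example['ending3']]
    chosen = endings[example['label']]                       # the correct continuation
    wrong = [e for i, e in enumerate(endings) if i != example['label']]
    rejected = random.choice(wrong)                          # a random wrong continuation
    return {'prompt_text': example['startphrase'],
            'chosen_text': chosen,
            'rejected_text': rejected}

pairs = swag.map(to_preference_pair)
```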
## Download
#### Using 🤗 Datasets
```python
from datasets import load_dataset
# entire dataset
spoken_swag = load_dataset('slprl/SpokenSwag')
```
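If you only need one split, or want to avoid downloading the full ~36 GB of audio up front, the following is a minimal sketch using standard 🤗 Datasets options:
```python
from datasets import load_dataset

# load a single split
validation = load_dataset('slprl/SpokenSwag', split='validation')

# or stream examples without downloading everything first
streamed = load_dataset('slprl/SpokenSwag', split='train', streaming=True)
first_example = next(iter(streamed))
```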
We refer you to the _SlamKit_ [codebase](https://github.com/slp-rl/slamkit) to see how you can train a SpeechLM with DPO over the dataset.
## Data Fields
The data has several fields:
- `speaker`: One of the Kokoro voices - https://huggingface.co/hexgrad/Kokoro-82M/blob/main/VOICES.md
- `prompt_text`: The text of the prompt recording.
- `chosen_text`: The text of the chosen recording.
- `rejected_text`: The text of the rejected recording.
- `prompt`: The prompt audio sample.
  - `array`: Array of audio samples.
  - `sampling_rate`: Audio sampling rate.
  - `path`: Path to the saved audio file.
- `chosen`: The chosen audio sample.
  - `array`: Array of audio samples.
  - `sampling_rate`: Audio sampling rate.
  - `path`: Path to the saved audio file.
- `rejected`: The rejected audio sample.
  - `array`: Array of audio samples.
  - `sampling_rate`: Audio sampling rate.
  - `path`: Path to the saved audio file.
- `auto_bleu2`: The Auto-BLEU score with bi-grams, used to detect and filter repetitive samples.
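For reference, here is a minimal sketch of accessing these fields and filtering on `auto_bleu2` (the 0.3 threshold below is purely illustrative, not the value used to build the dataset):
```python
from datasets import load_dataset

ds = load_dataset('slprl/SpokenSwag', split='validation')

example = ds[0]
print(example['speaker'], example['prompt_text'])
print(example['chosen_text'], '|', example['rejected_text'])

chosen_audio = example['chosen']          # dict with 'array', 'sampling_rate', 'path'
print(chosen_audio['sampling_rate'], len(chosen_audio['array']))

# Illustrative filter: keep only samples with low bi-gram Auto-BLEU (less repetitive).
filtered = ds.filter(lambda ex: ex['auto_bleu2'] < 0.3)
```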