---
license: mit
task_categories:
- audio-classification
language:
- en
- zh
tags:
- keyword-spotting
- tts-synthetic
- multilingual
- speech-commands
- tinyml
- edge-ai
---
# SynTTS-Commands: A Multilingual Synthetic Speech Command Dataset
<!-- Badges Row -->
[Paper (arXiv:2511.07821)](https://arxiv.org/abs/2511.07821) ·
[Benchmarks](https://huggingface.co/lugan/SynTTS-Commands-Media-Benchmarks) ·
[Code](https://github.com/lugan113/SynTTS-Commands-Official) ·
[License: MIT](https://opensource.org/licenses/MIT)
## Introduction
**SynTTS-Commands** is a large-scale, multilingual synthetic speech command dataset specifically designed for low-power Keyword Spotting (KWS) and speech command recognition tasks. As presented in the paper [SynTTS-Commands: A Public Dataset for On-Device KWS via TTS-Synthesized Multilingual Speech](https://huggingface.co/papers/2511.07821), this dataset is generated using advanced Text-to-Speech (TTS) technologies, aiming to address the scarcity of high-quality training data in the fields of TinyML and Edge AI.
### Core Features
- **Multilingual Coverage**: Includes bilingual speech commands in both **Chinese** and **English**.
- **High-Quality Synthesis**: Generated via advanced TTS technology, ensuring high naturalness in speech.
- **Speaker Diversity**: Incorporates multiple acoustic feature sources to ensure a rich variety of speaker styles.
- **Real-World Scenarios**: Commands are designed for practical applications, including smart homes, in-car systems, and multimedia control.
- **Rigorous Quality Assurance**: All speech data has been screened via ASR models combined with manual human verification.
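The ASR-based screening step described above can be sketched as follows. This is an illustrative approximation, not the authors' exact pipeline: the character-error-rate threshold and the matching rule are assumptions, and `transcript` stands in for the output of whatever ASR model is used.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def passes_asr_check(target: str, transcript: str, max_cer: float = 0.2) -> bool:
    """Keep a synthesized clip only if the ASR transcript is close to the
    intended command text (character error rate below a threshold).
    The 0.2 threshold is a hypothetical value for illustration."""
    t, h = target.lower().strip(), transcript.lower().strip()
    if not t:
        return False
    return edit_distance(t, h) / len(t) <= max_cer
```

Clips that fail such a check would land in a rejected pool (cf. the `reviewed_bad/` directory below), with manual verification as the final gate.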
## Dataset Overview
### Statistics
The **SynTTS-Commands-Media-Dataset** contains a total of **384,621 speech samples**, covering **48 distinct multimedia control commands**. It is divided into four subsets with the following distribution:
| Subset | Speakers | Commands | Samples | Duration (hrs) | Size (GB) |
|------|----------|--------|----------|------------|----------|
| Free-ST-Chinese | 855 | 25 | 21,214 | 6.82 | 2.19 |
| Free-ST-English | 855 | 23 | 19,228 | 4.88 | 1.57 |
| VoxCeleb1&2-Chinese | 7,245 | 25 | 180,331 | 58.03 | 18.6 |
| VoxCeleb1&2-English | 7,245 | 23 | 163,848 | 41.6 | 13.4 |
| **Total** | **8,100** | **48** | **384,621** | **111.33** | **35.76** |
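The totals in the table can be cross-checked programmatically; the per-subset figures below are copied verbatim from the table above:

```python
# Per-subset statistics from the table: (speakers, commands, samples, hours, size_gb)
subsets = {
    "Free-ST-Chinese":     (855,   25, 21_214,  6.82,  2.19),
    "Free-ST-English":     (855,   23, 19_228,  4.88,  1.57),
    "VoxCeleb1&2-Chinese": (7_245, 25, 180_331, 58.03, 18.6),
    "VoxCeleb1&2-English": (7_245, 23, 163_848, 41.6,  13.4),
}

total_samples = sum(s[2] for s in subsets.values())
total_hours   = round(sum(s[3] for s in subsets.values()), 2)
total_gb      = round(sum(s[4] for s in subsets.values()), 2)

print(total_samples, total_hours, total_gb)  # 384621 111.33 35.76
```

Note that the 8,100-speaker total counts each speaker once, since the Free-ST and VoxCeleb speaker pools are shared between the Chinese and English subsets.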
### Dataset Highlights
- **Massive Scale**: Totaling **111.33 hours** and **35.76 GB** of synthetic speech data, making it one of the largest synthetic speech command datasets for academic research.
- **Extensive Speaker Diversity**: Covers **8,100 unique speakers**, spanning various accent groups, age ranges, and recording conditions.
- **Multi-Dimensional Research Support**: The four-subset structure enables research into cross-lingual speaker adaptation, speaker diversity effects, and acoustic robustness in different recording environments.
- **Application-Oriented**: Specifically focused on multimedia playback control scenarios, providing high-quality training data for real-world deployment.
### Directory Structure
```text
SynTTS-Commands-Media-Dataset/
├── Free_ST_Chinese/              # 21,214 Chinese media control samples (855 speakers)
├── Free_ST_English/              # 19,228 English media control samples (855 speakers)
├── VoxCeleb1&2_Chinese/          # 180,331 Chinese media control samples (7,245 speakers)
├── VoxCeleb1&2_English/          # 163,848 English media control samples (7,245 speakers)
├── reviewed_bad/                 # Rejected speech samples (failed quality audit)
├── splits_by_language/           # Dataset splits organized by language
│   ├── train/                    # Training set
│   ├── val/                      # Validation set
│   └── test/                     # Test set
└── comprehensive_metadata.csv    # Complete metadata file
```
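Given this layout, subset, language, and label information can often be recovered from a sample's relative path. The sketch below assumes per-command subdirectories (e.g. `Free_ST_English/Play/0001.wav`); the actual file naming may differ, and `comprehensive_metadata.csv` remains the authoritative source for labels.

```python
from pathlib import PurePosixPath

def parse_sample_path(rel_path: str) -> dict:
    """Recover subset, language, and command label from a relative path.

    Assumes a <subset_dir>/<command_label>/<file>.wav layout -- a guess;
    consult comprehensive_metadata.csv for the authoritative mapping.
    """
    parts = PurePosixPath(rel_path).parts
    subset, label = parts[0], parts[1]
    language = "zh" if subset.endswith("Chinese") else "en"
    return {"subset": subset, "language": language,
            "label": label, "file": parts[-1]}
```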
## Media Command Categories
### English Media Control Commands (23 Classes)
- **Playback Control**: "Play", "Pause", "Resume", "Play from start", "Repeat song"
- **Navigation**: "Previous track", "Next track", "Last song", "Skip song", "Jump to first track"
- **Volume Control**: "Volume up", "Volume down", "Mute", "Set volume to 50%", "Max volume"
- **Communication**: "Answer call", "Hang up", "Decline call"
- **Wake Words**: "Hey Siri", "OK Google", "Hey Google", "Alexa", "Hi Bixby"
### Chinese Media Control Commands (25 Classes)
- **Playback Control**: "播放" (Play), "暂停" (Pause), "继续播放" (Resume), "从头播放" (Play from start), "单曲循环" (Repeat song)
- **Navigation**: "上一首" (Previous track), "下一首" (Next track), "上一曲" (Last song), "下一曲" (Next song), "跳到第一首" (Jump to first track), "播放下一张专辑" (Play next album)
- **Volume Control**: "增大音量" (Volume up), "减小音量" (Volume down), "静音" (Mute), "音量调到50%" (Set volume to 50%), "音量最大" (Max volume)
- **Communication**: "接听电话" (Answer call), "挂断电话" (Hang up), "拒接来电" (Decline call)
- **Wake Words**: "小爱同学" (Xiao AI), "Hello 小智", "小艺小艺" (Xiaoyi), "嗨 三星小贝" (Hi Bixby), "小度小度" (Xiaodu), "天猫精灵" (Tmall Genie)
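For training a classifier, the command categories above map naturally to integer class IDs. A minimal label vocabulary, transcribed from the English list (the Chinese side would be built the same way from its 25 commands):

```python
EN_COMMANDS = [
    # Playback control
    "Play", "Pause", "Resume", "Play from start", "Repeat song",
    # Navigation
    "Previous track", "Next track", "Last song", "Skip song",
    "Jump to first track",
    # Volume control
    "Volume up", "Volume down", "Mute", "Set volume to 50%", "Max volume",
    # Communication
    "Answer call", "Hang up", "Decline call",
    # Wake words
    "Hey Siri", "OK Google", "Hey Google", "Alexa", "Hi Bixby",
]

# Map each command string to a stable integer class ID.
LABEL2ID = {cmd: i for i, cmd in enumerate(EN_COMMANDS)}
```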
## Benchmark Results and Analysis
We present a comprehensive benchmark of **six representative acoustic models** on the SynTTS-Commands-Media Dataset across both English (EN) and Chinese (ZH) subsets. All models are evaluated in terms of **classification accuracy**, **cross-entropy loss**, and **parameter count**, providing insights into the trade-offs between performance and model complexity in multilingual voice command recognition.
### Performance Summary
| Model | EN Loss | EN Accuracy | EN Params | ZH Loss | ZH Accuracy | ZH Params |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| **MicroCNN** | 0.2304 | 93.22% | 4,189 | 0.5579 | 80.14% | 4,255 |
| **DS-CNN** | 0.0166 | 99.46% | 30,103 | 0.0677 | 97.18% | 30,361 |
| **TC-ResNet** | 0.0347 | 98.87% | 68,431 | 0.0884 | 96.56% | 68,561 |
| **CRNN** | **0.0163** | **99.50%** | 1.08M | 0.0636 | **97.42%** | 1.08M |
| **MobileNet-V1** | 0.0167 | **99.50%** | 2.65M | **0.0552** | 97.92% | 2.65M |
| **EfficientNet** | 0.0182 | 99.41% | 4.72M | 0.0701 | 97.93% | 4.72M |
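DS-CNN's small footprint relative to its accuracy comes from depthwise-separable convolutions, which factor a standard convolution into a depthwise and a pointwise step. The parameter arithmetic (illustrative layer sizes, not the exact benchmarked architectures) looks like this:

```python
def conv_params(c_in: int, c_out: int, k: int) -> int:
    """Weights in a standard k x k convolution (bias terms omitted)."""
    return k * k * c_in * c_out

def ds_conv_params(c_in: int, c_out: int, k: int) -> int:
    """Depthwise k x k conv (k*k*c_in weights) followed by a
    1x1 pointwise conv (c_in*c_out weights)."""
    return k * k * c_in + c_in * c_out

# Illustrative layer: 64 -> 64 channels, 3x3 kernel
print(conv_params(64, 64, 3))     # 36864
print(ds_conv_params(64, 64, 3))  # 4672
```

This roughly 8x per-layer reduction is consistent with the table's pattern: DS-CNN reaches near-CRNN accuracy at a fraction of the parameter count.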
## Roadmap & Future Expansion
We are expanding SynTTS-Commands beyond multimedia to support broader Edge AI applications.
**[Click here to view our detailed Future Work Plan & Command List](https://github.com/lugan113/SynTTS-Commands-Official/blob/main/Future_Work_Plan.md)**
Our upcoming domains include:
* **Smart Home:** Far-field commands for lighting and appliances.
* **In-Vehicle:** Robust commands optimized for high-noise driving environments.
* **Urgent Assistance:** Safety-critical keywords (e.g., "Call 911", "Help me") focusing on high recall.
We invite the community to review our [Command Roadmap](https://github.com/lugan113/SynTTS-Commands-Official/blob/main/Future_Work_Plan.md) and suggest additional keywords!
## Citation
If you use this dataset in your research, please cite our paper:
```bibtex
@misc{gan2025synttscommands,
      title={SynTTS-Commands: A Public Dataset for On-Device KWS via TTS-Synthesized Multilingual Speech},
      author={Lu Gan and Xi Li},
      year={2025},
      eprint={2511.07821},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2511.07821},
      doi={10.48550/arXiv.2511.07821}
}
```