Parakeet-TDT-0.6B Slovenian

NVIDIA Parakeet-TDT-0.6B-v3 fine-tuned for Slovenian automatic speech recognition, with training data augmented by TTS-generated synthetic speech.

This model is part of the paper: "Synthetic Speech Augmentation for Low-Resource Estonian and Slovenian ASR: Comparing Parakeet-TDT and Whisper" (Interspeech 2026). Paper coming soon.

Model Description

  • Architecture: FastConformer encoder + Token-and-Duration Transducer (TDT) decoder
  • Parameters: 0.6B
  • Tokenizer: 8,192-token SentencePiece BPE
  • Base model: nvidia/parakeet-tdt-0.6b-v3
  • Fine-tuning data: CommonVoice 17.0 Slovenian + ~5,850 synthetic sentences (LLM-generated text + OpenAI TTS)
  • Training config: CV + Synth Unfiltered (full synthetic corpus without quality filtering)

Evaluation Results

Raw WER/CER (no text normalization)

Test Set              WER (%)   CER (%)
CommonVoice 17 Test    11.40      2.83
CommonVoice 17 Val     10.63      2.45
FLEURS Test            34.89      9.88
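WER and CER are edit-distance metrics; a minimal sketch of word error rate via Levenshtein distance is below (illustrative only — the paper presumably uses a standard scoring toolkit):

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over token sequences,
    # using a single rolling row for O(len(hyp)) memory.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1]

def wer(ref: str, hyp: str) -> float:
    # Word error rate: word-level edits divided by reference length, in percent.
    ref_words, hyp_words = ref.split(), hyp.split()
    return 100.0 * edit_distance(ref_words, hyp_words) / len(ref_words)
```

CER is computed the same way over characters instead of words.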

Normalized WER/CER (lowercase + punctuation removal)

Test Set              WER (%)   CER (%)
CommonVoice 17 Test     8.81      2.39
CommonVoice 17 Val      8.50      2.08
FLEURS Test            17.74      6.22
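The normalized scores apply lowercasing and punctuation removal before scoring. A minimal sketch of such a normalizer follows; the exact rules used in the paper are not specified here, so this is an assumption:

```python
import string
import unicodedata

def normalize(text: str) -> str:
    # Lowercase, drop punctuation, and collapse whitespace before scoring.
    text = text.lower()
    # Remove both ASCII punctuation and any Unicode punctuation character
    # (category starting with "P"), which matters for Slovenian quotation marks.
    text = "".join(
        ch for ch in text
        if ch not in string.punctuation
        and not unicodedata.category(ch).startswith("P")
    )
    return " ".join(text.split())
```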

Improvement over baselines

Comparison                 CV17 Test (WER)   FLEURS Test (WER)
vs. Zero-shot                  -38.83 pp          -5.28 pp
vs. CV-only fine-tuning         -2.68 pp          -3.68 pp

All improvements are statistically significant (paired bootstrap, p < 0.001, n = 100,000).
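The significance test can be sketched as a paired bootstrap over per-utterance error counts (a minimal illustration; the paper's exact resampling procedure may differ):

```python
import random

def paired_bootstrap_p(errors_a, errors_b, n_resamples=100_000, seed=42):
    # errors_a / errors_b: per-utterance error counts for systems A and B
    # on the same test set. Returns the fraction of resamples in which
    # A fails to beat B, an estimate of the one-sided p-value.
    rng = random.Random(seed)
    diffs = [a - b for a, b in zip(errors_a, errors_b)]
    n = len(diffs)
    not_better = 0
    for _ in range(n_resamples):
        sample = [diffs[rng.randrange(n)] for _ in range(n)]
        if sum(sample) >= 0:  # A did not have fewer errors on this resample
            not_better += 1
    return not_better / n_resamples
```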

Usage

import nemo.collections.asr as nemo_asr

# Load model
model = nemo_asr.models.ASRModel.from_pretrained("yuriyvnv/parakeet-tdt-0.6b-slovenian")

# Transcribe
transcriptions = model.transcribe(["audio.wav"])
print(transcriptions[0].text)

Training Details

  • Optimizer: AdamW (lr=5e-5, betas=[0.9, 0.98], weight_decay=0.001)
  • Schedule: Cosine annealing with 10% linear warmup
  • Batch size: 32
  • Early stopping: patience 10 epochs on val_wer
  • Best epoch: 74 (val_wer = 0.1085)
  • Precision: bf16-mixed
  • Seed: 42
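The learning-rate schedule above (cosine annealing with 10% linear warmup from a base LR of 5e-5) can be sketched as a pure function of the training step; this is an illustration, as NeMo configures the schedule through its own scheduler classes:

```python
import math

def lr_at_step(step, total_steps, base_lr=5e-5, warmup_frac=0.10):
    # Linear warmup over the first 10% of steps, then cosine decay to zero.
    warmup_steps = max(1, int(total_steps * warmup_frac))
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    progress = (step - warmup_steps) / (total_steps - warmup_steps)
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * progress))
```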

Synthetic Data Augmentation

The synthetic training data was generated using a three-stage pipeline:

  1. Text generation: GPT-5-mini generates diverse sentences across paraphrase, domain expansion, and morphological categories
  2. LLM-as-judge validation: Each sentence validated for grammaticality, naturalness, and language purity
  3. Speech synthesis: OpenAI gpt-4o-mini-tts with 11-voice rotation

Dataset: yuriyvnv/synthetic_asr_et_sl
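The 11-voice rotation in the speech-synthesis stage can be sketched as a simple round-robin assignment; the voice names below are hypothetical placeholders, and the actual TTS call is omitted:

```python
from itertools import cycle

# Hypothetical voice inventory; the 11 actual voices used are not listed here.
VOICES = [f"voice_{i}" for i in range(11)]

def assign_voices(sentences, voices=VOICES):
    # Round-robin rotation: sentence i is paired with voice i mod len(voices),
    # so each voice is used roughly equally across the corpus.
    rotation = cycle(voices)
    return [(sentence, next(rotation)) for sentence in sentences]
```

Each (sentence, voice) pair would then be sent to the TTS model to produce an audio file for the synthetic training set.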
