# Whisper Large v3 Turbo zh-TW (GGML)

GGML-format models for whisper.cpp, converted from JacobLinCool/whisper-large-v3-turbo-common_voice_19_0-zh-TW.
## Model Information
- Base Model: OpenAI Whisper Large v3 Turbo (0.8B parameters)
- Fine-tuned on: Common Voice 19.0 zh-TW (44 hours of audio)
- Language: Traditional Chinese (Taiwan)
- CER: 8.6% (Character Error Rate)
- License: MIT
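Character error rate is the character-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of the metric (illustrative only, not the evaluation script behind the reported 8.6%):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: Levenshtein distance over reference length."""
    r, h = list(reference), list(hypothesis)
    # Dynamic-programming edit distance between the two character sequences.
    prev = list(range(len(h) + 1))
    for i, rc in enumerate(r, 1):
        curr = [i] + [0] * len(h)
        for j, hc in enumerate(h, 1):
            curr[j] = min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (rc != hc),  # substitution
            )
        prev = curr
    return prev[-1] / len(r)

print(cer("今天天氣很好", "今天天氣真好"))  # one substitution over six characters
```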
## Available Files

| File | Size | Format | Description |
|---|---|---|---|
| ggml-whisper-large-v3-turbo-zh-TW.bin | 1.6 GB | F16 | Full precision, best quality |
| ggml-whisper-large-v3-turbo-zh-TW-q8_0.bin | 874 MB | Q8_0 | Quantized, recommended for mobile devices |
## Usage with whisper.cpp

```bash
# Clone whisper.cpp
git clone https://github.com/ggml-org/whisper.cpp
cd whisper.cpp

# Build
cmake -B build && cmake --build build --config Release

# Download model
wget https://huggingface.co/leaker/whisper-large-v3-turbo-zh-TW-ggml/resolve/main/ggml-whisper-large-v3-turbo-zh-TW-q8_0.bin

# Run inference
./build/bin/whisper-cli -m ggml-whisper-large-v3-turbo-zh-TW-q8_0.bin -f audio.wav -l zh
```
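whisper-cli expects 16 kHz, 16-bit mono PCM WAV input; other formats should be converted first. A small stdlib sketch for checking a file before transcription (the helper name is illustrative, but the 16 kHz/mono/16-bit requirement is whisper.cpp's):

```python
import wave

def is_whisper_ready(path: str) -> bool:
    """Check that a WAV file is 16 kHz, mono, 16-bit PCM as whisper.cpp expects."""
    with wave.open(path, "rb") as wav:
        return (
            wav.getframerate() == 16000
            and wav.getnchannels() == 1
            and wav.getsampwidth() == 2  # sample width in bytes: 2 = 16-bit
        )
```

If the check fails, `ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le audio.wav` produces a compatible file.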
## Conversion

These models were converted using the convert-h5-to-ggml.py script from whisper.cpp:

```bash
python3 ./models/convert-h5-to-ggml.py \
  /path/to/JacobLinCool/whisper-large-v3-turbo-common_voice_19_0-zh-TW \
  /path/to/openai/whisper \
  /output/path

# Quantize to Q8_0
./build/bin/whisper-quantize input.bin output-q8_0.bin q8_0
```
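Q8_0 stores weights in blocks of 32 int8 values with one scale factor per block, which is why the quantized file is roughly half the F16 size. A rough sketch of the scheme (illustrative only, not the actual ggml implementation):

```python
def quantize_q8_0(block):
    """Quantize one block of 32 floats to int8 plus a per-block scale (Q8_0-style)."""
    amax = max(abs(x) for x in block)
    scale = amax / 127.0 if amax else 0.0
    q = [round(x / scale) if scale else 0 for x in block]
    return scale, q

def dequantize_q8_0(scale, q):
    """Reconstruct approximate floats from int8 values and the block scale."""
    return [scale * v for v in q]

block = [0.5, -1.0, 0.25, 0.0] * 8  # 32 values
scale, q = quantize_q8_0(block)
restored = dequantize_q8_0(scale, q)
# Round-trip error is bounded by about half a quantization step.
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(block, restored))
```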
## Credits
- Original model: JacobLinCool/whisper-large-v3-turbo-common_voice_19_0-zh-TW
- whisper.cpp: ggml-org/whisper.cpp