Whisper-Small-Quantized: Optimized for Qualcomm Devices

The Hugging Face Whisper-Small ASR (Automatic Speech Recognition) model is a state-of-the-art system for transcribing spoken language into written text. It is based on the transformer architecture and has been optimized for edge inference by replacing Multi-Head Attention (MHA) with Single-Head Attention (SHA) and linear layers with convolutional (conv) layers; on top of that, we have applied w8a16 quantization to improve performance and efficiency. The model is robust in realistic, noisy environments, making it reliable for real-world applications, and it excels at long-form transcription, accurately transcribing audio clips up to 30 seconds long. Time to first token is the encoder's latency; time per additional token is the decoder's latency, assuming the maximum decoded sequence length specified below.
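
Given those two latencies, a back-of-the-envelope estimate of end-to-end latency for one 30-second chunk is the encoder time plus tokens × decoder time. A minimal sketch (the function name and the worst-case assumption that the full 200-token budget is decoded are ours; the numbers in the usage comment come from the performance table below):

```python
def estimate_latency_ms(encoder_ms: float, decoder_ms_per_token: float,
                        num_tokens: int = 200) -> float:
    """Rough end-to-end latency for one 30 s audio chunk: one encoder
    pass, then one decoder pass per autoregressively generated token."""
    return encoder_ms + num_tokens * decoder_ms_per_token

# Snapdragon 8 Elite Gen 5 (PRECOMPILED_QNN_ONNX): encoder 176.471 ms,
# decoder 3.988 ms/token -> ~974 ms worst case for a 200-token transcript.
print(estimate_latency_ms(176.471, 3.988))
```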

This model is based on the implementation of Whisper-Small-Quantized found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can also use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm® AI Hub Models uses Qualcomm® AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
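
If you want to reproduce the profiling flow yourself, the qai-hub Python client can submit jobs against hosted devices. A minimal sketch, assuming you have installed qai-hub, configured an API token, and already exported a model; the file name and device name below are illustrative:

```python
import qai_hub as hub

# Pick any hosted device; hub.get_devices() lists what is available.
device = hub.Device("Snapdragon 8 Elite QRD")  # illustrative name

# Profile a previously exported model on that device.
profile_job = hub.submit_profile_job(
    model="build/whisper_small_quantized_decoder.onnx",  # illustrative path
    device=device,
)
profile_job.wait()
profile = profile_job.download_profile()  # per-layer timings and memory stats
```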

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® X2 Elite | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® X Elite | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCS8550 (Proxy) | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCM6690 | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCS9075 | QAIRT 2.42, ONNX Runtime 1.24.3 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Snapdragon® X2 Elite | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Snapdragon® X Elite | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCS8550 (Proxy) | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Qualcomm® SA8775P | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 7 Gen 4 Mobile | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Qualcomm® SA7255P | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCM6690 | QAIRT 2.45 | Download |
| QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCS9075 | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Snapdragon® X2 Elite | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Snapdragon® X Elite | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Snapdragon® 8 Gen 3 Mobile | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Qualcomm® QCS8550 (Proxy) | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Qualcomm® SA8775P | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Snapdragon® 7 Gen 4 Mobile | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Qualcomm® SA7255P | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Qualcomm® QCM6690 | QAIRT 2.45 | Download |
| VOICE_AI | w8a16 | Qualcomm® QCS9075 | QAIRT 2.45 | Download |

For more device-specific assets and performance metrics, visit Whisper-Small-Quantized on Qualcomm® AI Hub.
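
To run a downloaded PRECOMPILED_QNN_ONNX asset on-device, load it with ONNX Runtime's QNN execution provider so inference is routed to the Hexagon NPU. A minimal sketch, assuming the onnxruntime-qnn package is installed; the file name and zero-filled placeholder input are illustrative:

```python
import numpy as np
import onnxruntime as ort

# "QnnHtp.dll" is the Windows HTP (NPU) backend; on Android/Linux,
# point backend_path at "libQnnHtp.so" instead.
session = ort.InferenceSession(
    "whisper_small_quantized_encoder.onnx",  # illustrative file name
    providers=[("QNNExecutionProvider", {"backend_path": "QnnHtp.dll"})],
)

# The encoder expects an 80x3000 log-mel spectrogram (30 s of audio).
mel = np.zeros((1, 80, 3000), dtype=np.float32)  # placeholder input
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: mel})
```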

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See the Whisper-Small-Quantized repository on GitHub for usage instructions.
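
For orientation, AI Hub Models exposes one export module per model, so the programmatic flow typically looks like the sketch below. The module path and keyword arguments are assumptions based on the library's naming convention; verify them against the GitHub repository:

```python
# Assumed module path, following the library's one-export-module-per-model
# convention; confirm the exact name in the qai_hub_models repository.
from qai_hub_models.models.whisper_small.export import export_model

# Compile, profile, and download assets for a chosen hosted device.
export_model(
    device="Snapdragon 8 Elite QRD",  # illustrative hosted device name
    output_dir="build",
)
```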

Model Details

Model Type: Speech recognition

Model Stats:

  • Model checkpoint: openai/whisper-small
  • Input resolution: 80x3000 log-mel spectrogram (30 seconds of audio)
  • Max decoded sequence length: 200 tokens
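
For reference, the 80x3000 input corresponds to the standard Whisper preprocessing: pad or trim the waveform to 30 seconds at 16 kHz, then compute an 80-bin log-mel spectrogram. A minimal sketch using the openai-whisper package (the package choice is ours; the exported pipeline may bundle its own preprocessing):

```python
import whisper  # pip install openai-whisper

audio = whisper.load_audio("sample.wav")    # resampled to 16 kHz mono
audio = whisper.pad_or_trim(audio)          # exactly 30 s = 480,000 samples
mel = whisper.log_mel_spectrogram(audio)    # torch tensor, shape (80, 3000)
mel = mel.unsqueeze(0).numpy()              # add batch dim -> (1, 80, 3000)
```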

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 3.988 | 38 - 47 | NPU |
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® X2 Elite | 3.818 | 185 - 185 | NPU |
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® X Elite | 7.87 | 185 - 185 | NPU |
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Gen 3 Mobile | 6.403 | 40 - 48 | NPU |
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCS8550 (Proxy) | 8.293 | 27 - 30 | NPU |
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCS9075 | 9.073 | 29 - 62 | NPU |
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCM6690 | 30.751 | 30 - 40 | NPU |
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 4.776 | 19 - 32 | NPU |
| decoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | 10.92 | 29 - 35 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 3.924 | 30 - 41 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® X2 Elite | 4.228 | 30 - 30 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® X Elite | 7.137 | 30 - 30 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Gen 3 Mobile | 6.045 | 30 - 38 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCS8275 (Proxy) | 12.714 | 21 - 29 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCS8550 (Proxy) | 7.823 | 30 - 34 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® SA8775P | 8.931 | 30 - 40 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCS9075 | 8.683 | 25 - 60 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCM6690 | 30.252 | 29 - 35 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® SA7255P | 12.714 | 21 - 29 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 4.648 | 28 - 41 | NPU |
| decoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 7 Gen 4 Mobile | 10.377 | 30 - 37 | NPU |
| decoder | VOICE_AI | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 3.916 | 30 - 40 | NPU |
| decoder | VOICE_AI | w8a16 | Snapdragon® X2 Elite | 4.178 | 30 - 30 | NPU |
| decoder | VOICE_AI | w8a16 | Snapdragon® X Elite | 7.367 | 30 - 30 | NPU |
| decoder | VOICE_AI | w8a16 | Snapdragon® 8 Gen 3 Mobile | 6.108 | 30 - 38 | NPU |
| decoder | VOICE_AI | w8a16 | Qualcomm® QCS8275 (Proxy) | 12.664 | 30 - 39 | NPU |
| decoder | VOICE_AI | w8a16 | Qualcomm® QCS8550 (Proxy) | 7.905 | 30 - 32 | NPU |
| decoder | VOICE_AI | w8a16 | Qualcomm® SA8775P | 8.949 | 20 - 29 | NPU |
| decoder | VOICE_AI | w8a16 | Qualcomm® QCS9075 | 8.689 | 25 - 60 | NPU |
| decoder | VOICE_AI | w8a16 | Qualcomm® QCM6690 | 30.19 | 29 - 36 | NPU |
| decoder | VOICE_AI | w8a16 | Qualcomm® SA7255P | 12.664 | 30 - 39 | NPU |
| decoder | VOICE_AI | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 4.64 | 21 - 35 | NPU |
| decoder | VOICE_AI | w8a16 | Snapdragon® 7 Gen 4 Mobile | 10.381 | 30 - 37 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 176.471 | 63 - 73 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® X2 Elite | 208.403 | 127 - 127 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® X Elite | 266.415 | 127 - 127 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Gen 3 Mobile | 241.09 | 63 - 74 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCS8550 (Proxy) | 332.61 | 55 - 57 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCS9075 | 255.874 | 63 - 66 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Qualcomm® QCM6690 | 4300.869 | 2 - 12 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 198.665 | 63 - 74 | NPU |
| encoder | PRECOMPILED_QNN_ONNX | w8a16 | Snapdragon® 7 Gen 4 Mobile | 462.524 | 56 - 66 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 174.444 | 1 - 10 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® X2 Elite | 157.097 | 0 - 0 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® X Elite | 303.245 | 0 - 0 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Gen 3 Mobile | 275.913 | 3 - 10 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCS8275 (Proxy) | 517.442 | 1 - 9 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCS8550 (Proxy) | 369.446 | 1 - 20 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® SA8775P | 316.07 | 0 - 9 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCS9075 | 296.96 | 0 - 29 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® QCM6690 | 4451.53 | 1 - 12 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Qualcomm® SA7255P | 517.442 | 1 - 9 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 228.972 | 1 - 13 | NPU |
| encoder | QNN_CONTEXT_BINARY | w8a16 | Snapdragon® 7 Gen 4 Mobile | 480.563 | 0 - 7 | NPU |
| encoder | VOICE_AI | w8a16 | Snapdragon® 8 Elite Gen 5 Mobile | 190.029 | 1 - 10 | NPU |
| encoder | VOICE_AI | w8a16 | Snapdragon® X2 Elite | 156.256 | 0 - 0 | NPU |
| encoder | VOICE_AI | w8a16 | Snapdragon® X Elite | 302.218 | 0 - 0 | NPU |
| encoder | VOICE_AI | w8a16 | Snapdragon® 8 Gen 3 Mobile | 269.314 | 1 - 8 | NPU |
| encoder | VOICE_AI | w8a16 | Qualcomm® QCS8275 (Proxy) | 518.78 | 1 - 9 | NPU |
| encoder | VOICE_AI | w8a16 | Qualcomm® QCS8550 (Proxy) | 366.243 | 1 - 3 | NPU |
| encoder | VOICE_AI | w8a16 | Qualcomm® SA8775P | 316.776 | 1 - 10 | NPU |
| encoder | VOICE_AI | w8a16 | Qualcomm® QCS9075 | 296.507 | 0 - 29 | NPU |
| encoder | VOICE_AI | w8a16 | Qualcomm® QCM6690 | 3954.641 | 1 - 12 | NPU |
| encoder | VOICE_AI | w8a16 | Qualcomm® SA7255P | 518.78 | 1 - 9 | NPU |
| encoder | VOICE_AI | w8a16 | Snapdragon® 8 Elite For Galaxy Mobile | 229.249 | 1 - 14 | NPU |
| encoder | VOICE_AI | w8a16 | Snapdragon® 7 Gen 4 Mobile | 486.936 | 1 - 7 | NPU |

License

  • The license for the original implementation of Whisper-Small-Quantized can be found here.
