# Catalyst SHD SNN Benchmark

A spiking neural network (SNN) for spoken-digit classification on the Spiking Heidelberg Digits (SHD) dataset.
## Model Description
- Architecture: 700 → 1024 (recurrent adLIF) → 20
- Neuron model: Adaptive Leaky Integrate-and-Fire (adLIF) with Symplectic Euler discretization
- Training: Surrogate gradient BPTT, fast-sigmoid surrogate (scale=25)
- Hardware target: Catalyst N1/N2/N3 neuromorphic processors
- Quantization: float32 weights → int16; membrane decay → 12-bit fixed-point
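To make the neuron model concrete, here is a minimal NumPy sketch of one adLIF update step with symplectic Euler discretization (the adaptation variable is updated using the *new* membrane state), plus the fast-sigmoid surrogate derivative used during BPTT. The decay and coupling constants (`alpha`, `beta`, `a`, `b`, `theta`) are illustrative placeholders, not the trained model's parameters:

```python
import numpy as np

def adlif_step(u, w, s, I, alpha=0.95, beta=0.9, a=0.5, b=1.0, theta=1.0):
    """One symplectic-Euler step of an adaptive LIF neuron.

    u: membrane potential, w: adaptation current, s: previous spikes,
    I: synaptic input. All constants are illustrative, not the
    checkpoint's learned values.
    """
    # Membrane update sees the previous adaptation value.
    u = alpha * u + (1.0 - alpha) * (I - w) - theta * s
    s = (u >= theta).astype(u.dtype)  # hard threshold on the forward pass
    u = u * (1.0 - s)                 # reset membrane to zero after a spike
    # Symplectic Euler: adaptation update uses the *updated* membrane state.
    w = beta * w + (1.0 - beta) * (a * u + b * s)
    return u, w, s

def fast_sigmoid_grad(u, theta=1.0, scale=25.0):
    """Surrogate derivative of the spike nonlinearity (fast sigmoid, scale=25)."""
    return 1.0 / (1.0 + scale * np.abs(u - theta)) ** 2
```

On the backward pass, `fast_sigmoid_grad` replaces the zero-almost-everywhere derivative of the hard threshold, which is what makes surrogate-gradient BPTT possible.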
## Results
| Metric | Value |
|---|---|
| Float accuracy | 90.68% |
| Quantized accuracy (int16) | 90.2% |
| Parameters | 1,789,972 |
| Quantization loss | 0.5% |
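The float → int16 weight path can be sketched as a symmetric per-tensor quantizer. This is a hypothetical illustration of the scheme, assuming a max-abs scale; the actual on-device format (and the 12-bit fixed-point decay path) may differ:

```python
import numpy as np

def quantize_int16(weights):
    """Symmetric per-tensor quantization of float32 weights to int16.

    Illustrative sketch only; assumes a max-abs scale, which may not
    match the benchmark's deployed quantizer.
    """
    scale = np.abs(weights).max() / 32767.0
    q = np.clip(np.round(weights / scale), -32767, 32767).astype(np.int16)
    return q, scale

def dequantize_int16(q, scale):
    """Recover approximate float32 weights from int16 codes."""
    return q.astype(np.float32) * scale
```

With int16's ~32k positive levels, the round-trip error per weight is at most half a quantization step, which is consistent with the small (~0.5%) accuracy drop reported above.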
## Reproduce

```bash
git clone https://github.com/catalyst-neuromorphic/catalyst-benchmarks.git
cd catalyst-benchmarks
pip install -e .
python shd/train.py --device cuda:0
```
## Deploy to Catalyst Hardware
```python
import catalyst_cloud

client = catalyst_cloud.Client()
result = client.simulate(
    model="catalyst-neuromorphic/shd-snn-benchmark",
    input_data=your_spikes,  # your spike raster input
    processor="n2",
)
```
## Links
- Benchmark repo: catalyst-neuromorphic/catalyst-benchmarks
- Cloud API: catalyst-neuromorphic.com
- N2 paper: Zenodo DOI 10.5281/zenodo.18728256
- N1 paper: Zenodo DOI 10.5281/zenodo.18727094
## Citation

```bibtex
@misc{catalyst-benchmarks-2026,
author = {Shulayev Barnes, Henry},
title = {Catalyst Neuromorphic Benchmarks},
year = {2026},
url = {https://github.com/catalyst-neuromorphic/catalyst-benchmarks}
}
```