ASR WPM and Background Noise Evaluation Dataset

A dataset of annotated audio recordings for evaluating how different recording conditions affect transcription accuracy in Whisper and other ASR/STT systems.

Purpose

This dataset provides controlled audio samples with annotations to evaluate ASR performance across:

  • Speaking pace (fast, normal, slow, mumbled, whispered, weird voices)
  • Background noise (cafe, music, conversations in various languages, traffic, sirens, etc.)
  • Microphone distance (close, normal, far)

Dataset Structure

Each sample includes:

  • A WAV audio file (16kHz mono)
  • Metadata with annotations describing recording conditions

Features

| Feature | Type | Description |
|---|---|---|
| id | string | 4-character hex identifier |
| audio | audio | Path to WAV file |
| sample | string | Text sample identifier |
| sample_file | string | Source text filename |
| word_count | int | Number of words in the sample |
| duration_seconds | float | Recording duration in seconds |
| recorded_at | string | Timestamp (YYYYMMDD_HHMMSS) |
| annotations.pace | string | Speaking pace category |
| annotations.mic_distance | string | Microphone distance |
| annotations.background_noise | string | Background noise type |
| annotations.notes | string | Additional notes |
| equipment.microphone | string | Recording device |
| equipment.sample_rate | int | Audio sample rate (16000) |
| equipment.channels | int | Audio channels (1 = mono) |
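
For orientation, a single metadata record might look like the sketch below; every field value here is invented for illustration, and only the structure mirrors the schema above.

# Illustrative record only: the id, timestamp, and other values are made up.
example = {
    "id": "a3f1",
    "audio": "audio/a3f1.wav",
    "sample": "sample_03",
    "sample_file": "sample_03.txt",
    "word_count": 142,
    "duration_seconds": 58.4,
    "recorded_at": "20250101_120000",
    "annotations": {
        "pace": "normal",
        "mic_distance": "close",
        "background_noise": "cafe",
        "notes": "",
    },
    "equipment": {
        "microphone": "Samson Q2U USB Microphone",
        "sample_rate": 16000,
        "channels": 1,
    },
}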

Annotation Categories

Speaking Pace:

  • fast - As fast as possible
  • quick - Quicker than normal
  • normal - Normal/conversational
  • slow - Deliberately slow
  • whispered - Whispered speech
  • loud - Louder than normal
  • weird_voices - Altered/unusual voice patterns

Microphone Distance:

  • close - Less than 6 inches
  • normal - 6-12 inches
  • far - Greater than 12 inches

Background Noise:

  • none - Silence
  • cafe - Coffee shop ambience
  • music - Background music (various genres)
  • convo_same - Same-language conversation
  • convo_other - Other-language conversation (Spanish, Arabic, Korean, Japanese, Mandarin, Cantonese, Irish English)
  • convo_mixed - Mixed language babble
  • transit - Airport/transportation sounds
  • honking - Traffic/horns
  • siren - Emergency vehicle sirens
  • dogs - Dog barking
  • baby - Baby sounds
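
These category values can be used to slice the dataset for targeted evaluation. A minimal sketch using the datasets filter API follows; the chosen pace and noise values are just examples:

from datasets import load_dataset

dataset = load_dataset("danielrosehill/ASR-WPM-And-Background-Noise-Eval")

# Keep only whispered speech recorded over cafe noise (example categories).
subset = dataset["train"].filter(
    lambda s: s["annotations"]["pace"] == "whispered"
    and s["annotations"]["background_noise"] == "cafe"
)
print(len(subset))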

Audio Specifications

  • Format: WAV
  • Sample Rate: 16kHz
  • Channels: Mono
  • Equipment: Samson Q2U USB Microphone

Usage

from datasets import load_dataset

dataset = load_dataset("danielrosehill/ASR-WPM-And-Background-Noise-Eval")

# Access audio and metadata
for sample in dataset["train"]:
    audio = sample["audio"]                             # decoded audio (waveform and sampling rate)
    pace = sample["annotations"]["pace"]                # e.g. "fast", "normal", "whispered"
    noise = sample["annotations"]["background_noise"]   # e.g. "none", "cafe", "siren"

Use Cases

  • Benchmarking ASR/STT models under varying conditions (see the sketch after this list)
  • Evaluating robustness to background noise
  • Testing speech recognition at different speaking rates
  • Comparing transcription accuracy across challenging audio scenarios
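
As a rough sketch of the benchmarking use case, the snippet below transcribes each sample with the openai-whisper package and groups the output by pace and background noise. The model size and grouping keys are arbitrary choices, the snippet assumes the decoded audio exposes an "array" key (this depends on your datasets version), and you would need to supply your own reference texts for scoring, since transcripts are not a dataset field.

import whisper                      # openai-whisper package (assumed installed)
from collections import defaultdict
from datasets import load_dataset

dataset = load_dataset("danielrosehill/ASR-WPM-And-Background-Noise-Eval")
model = whisper.load_model("base")  # model size is an arbitrary choice

transcripts = defaultdict(list)
for sample in dataset["train"]:
    # The decoded waveform is already 16 kHz mono, which Whisper accepts directly.
    waveform = sample["audio"]["array"].astype("float32")
    result = model.transcribe(waveform)
    condition = (sample["annotations"]["pace"], sample["annotations"]["background_noise"])
    transcripts[condition].append(result["text"])

# Score each condition against your own reference texts (e.g. with jiwer)
# to obtain per-condition word error rates.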

Source

Recording tools and methodology: Whisper-WPM-Eval

License

MIT
