
Dataset Specifications

Contains the entire CIFAR10 dataset, downloaded via PyTorch, then split and saved as .png files representing 32x32 RGB images.

There are three splits, all perfectly balanced class-wise:

  • train: 49,000 out of the original 50,000 samples from the training set of CIFAR10;
  • calibration: 1,000 left-out samples from the training set;
  • test: 10,000 samples, the entire original test set.
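Assuming the 10 standard CIFAR10 classes, the per-class counts implied by these split sizes can be sanity-checked with simple arithmetic (this snippet is an illustration, not part of the dataset's tooling):

```python
# Per-class sample counts implied by the split sizes above,
# assuming the 10 standard CIFAR10 classes and perfect balance.
NUM_CLASSES = 10

split_sizes = {"train": 49_000, "calibration": 1_000, "test": 10_000}

per_class = {split: total // NUM_CLASSES for split, total in split_sizes.items()}
print(per_class)  # {'train': 4900, 'calibration': 100, 'test': 1000}

# train + calibration together recover the original 50,000-sample training set.
assert split_sizes["train"] + split_sizes["calibration"] == 50_000
```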

File Structure

Files are stored as archives <split>/<classname>.zip. Each <classname>.zip is a flat archive containing the associated images, named XXXX.png.

For a given class, every filename XXXX.png is unique, with XXXX ranging:

  • from "0000" to "0999" for test samples,
  • from "1000" to "1099" for calibration samples,
  • from "1100" to "5999" for train samples.
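Under this numbering scheme, a filename alone determines which split a sample belongs to. A minimal helper sketching the mapping (hypothetical, not shipped with the dataset):

```python
def split_of(filename: str) -> str:
    """Return the split a sample belongs to, given its 'XXXX.png' filename."""
    index = int(filename.removesuffix(".png"))
    if 0 <= index <= 999:
        return "test"
    if 1000 <= index <= 1099:
        return "calibration"
    if 1100 <= index <= 5999:
        return "train"
    raise ValueError(f"index {index} is outside the documented 0000-5999 range")

print(split_of("0042.png"))  # test
print(split_of("1050.png"))  # calibration
print(split_of("5999.png"))  # train
```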

Use with PyTorch

As a helper, you can use the following snippet to iterate through a specific split:

from datasets import load_dataset
from torch.utils.data import DataLoader

dataset = load_dataset("ego-thales/cifar10", name="no_bird_calibration")
dataset = dataset["unique_split"]  # Comment out if you're in case 1. below
dataloader = DataLoader(dataset.with_format("torch"), batch_size=300)
for batch in dataloader:
    # batch["image"]:     tensor with shape `(300, 3, 32, 32)` (`torch.uint8` values between 0 and 255)
    # batch["label"]:     tensor with shape `(300,)` (`torch.int64` values between 0 and 9)
    # batch["classname"]: list of length `300` with classnames as `str`
    ...

Loading arguments

Until this question finds a reasonable answer, there are two cases to consider.

1. If you wish to download the full dataset

  • name (Optional): Either "complete" or "no_<classname>".
  • split (Optional): One of "train", "calibration" or "test". If name is of the format "no_<classname>", then the following values are also allowed: "left_out_train", "left_out_calibration", "left_out_test" and "left_out" (for all left-out samples).

2. If you wish to only use a single split

Since streaming=True apparently still goes through every archive to gather metadata regardless of the requested split, as a workaround we defined many different configurations that act as splits. So use:

  • name: One of "<classname>_<split>" or "no_<classname>_<split>" with <split> either "train", "calibration" or "test".
  • split: Leave empty.
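Assuming the class names are the 10 standard CIFAR10 ones, the full set of case-2 configuration names can be enumerated as follows (a sketch; verify the names against the repository before relying on them):

```python
# Standard CIFAR10 class names, assumed to match this repository's.
CLASSNAMES = [
    "airplane", "automobile", "bird", "cat", "deer",
    "dog", "frog", "horse", "ship", "truck",
]
SPLITS = ["train", "calibration", "test"]

# "<classname>_<split>" keeps only one class; "no_<classname>_<split>" leaves it out.
config_names = [
    f"{prefix}{classname}_{split}"
    for prefix in ("", "no_")
    for classname in CLASSNAMES
    for split in SPLITS
]
print(len(config_names))  # 60: 10 classes x 3 splits x {kept, left-out}
print(config_names[:3])
```

For example, "no_bird_calibration" (used in the snippet above) selects the calibration samples of every class except bird.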

Good to know

The .zip archives are unzipped every time they are needed during iteration. As such, we recommend adapting batch_size to the number of samples in each <classname>.zip.
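For instance, assuming the balanced per-class archive sizes implied above (4,900 train, 100 calibration and 1,000 test samples per class), a batch size that divides all of them evenly avoids batches straddling archive boundaries more than necessary (a heuristic, not an official recommendation):

```python
from math import gcd

# Per-class samples in each <classname>.zip, derived from the split sizes above.
per_archive = {"train": 4_900, "calibration": 100, "test": 1_000}

# Largest batch size dividing every per-class archive size exactly.
common = gcd(*per_archive.values())
print(common)  # 100
```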
