# Waste Benchmark Dataset

## Overview

The Waste Benchmark is a specialized computer vision dataset built to test and validate waste computation models. It consists of tray image pairs (before and after consumption) processed through a custom technical workflow involving automated pre-annotation and manual expert refinement. Note that this dataset is still under construction and may not be the final version.
## Expert Labeling Criteria & Disclaimer

Annotations were generated by expert annotators using the Food Waste Annotation Tool. Marta López Poch (mlopez@proppos.com) and Ambia Mohammad Bibi are qualified dietitians and nutritionists, and are therefore considered experts for this task.
## Process & Methodology

- Workflow: Tasks are imported into Label Studio after receiving initial masks from a SegFormer model (`proppos/segformer_food`) to speed up the process.
- Manual Refinement: Experts manually refine these masks and assign waste ratios based on visual volume estimation.
- Site Context: The data primarily originates from the Germans Trias hospital.
## Bias and Ground Truth Warning

Important: These labels should not be interpreted as absolute, objective ground truth.
- Expert annotators possess individual biases regarding volume and waste perception.
- The labels represent an expert's visual judgment.
- Users should account for human subjectivity when measuring model accuracy against these labels.
## Usage Guide

### Dataset Splits

This repository contains two distinct subsets to facilitate both model training and error analysis:

- `benchmark`: The primary dataset (1,926 examples). This split represents the "standard" high-quality annotations intended for model evaluation and benchmarking.
- `variability_study`: A secondary subset (947 examples) designed specifically to study variability among annotators. This split is used to conduct quality studies and analyze the variance of human/expert annotators to understand the limits of manual waste estimation.
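Assuming the `variability_study` split contains multiple annotations of the same tray by different annotators, per-tray disagreement can be summarized along these lines (the rows below are toy values, not real data, and the annotator identifiers are invented):

```python
import statistics
from collections import defaultdict

# Toy rows mimicking the split's fields; all values are invented.
rows = [
    {"pair_id": "gt_001", "annotator": "A", "total_waste_ratio": 0.30},
    {"pair_id": "gt_001", "annotator": "B", "total_waste_ratio": 0.40},
    {"pair_id": "gt_002", "annotator": "A", "total_waste_ratio": 0.10},
    {"pair_id": "gt_002", "annotator": "B", "total_waste_ratio": 0.12},
]

# Group the waste ratios by tray, then take the population standard
# deviation as a simple measure of annotator disagreement per tray.
by_pair = defaultdict(list)
for row in rows:
    by_pair[row["pair_id"]].append(row["total_waste_ratio"])

spread = {pid: statistics.pstdev(vals) for pid, vals in by_pair.items()}
```

The same grouping applies directly to the real split once it is loaded with `datasets`.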
### Dataset Features
Each example in the dataset contains the following features:
- `before_image` (Image): A PIL Image object representing the meal tray before consumption. These were originally sourced from S3 and represent the baseline for waste computation.
- `after_image` (Image): A PIL Image object representing the same tray after consumption. Comparison with the 'before' image allows for change detection.
- `before_mask` (String): An RLE-encoded string representing the segmentation masks in the `before_image`. These were initialized via SegFormer and manually refined by experts.
- `after_mask` (String): An RLE-encoded string for the food items in the `after_image`.
- Mask Categories (only on the `benchmark` subset): Both `before_mask` and `after_mask` contain two distinct semantic categories:
  - Main Dish: The primary protein or central component of the meal.
  - Side Dish: Accompanying items such as vegetables, starches, or salads.
- `total_waste_ratio` (Float32): The primary label indicating the percentage of total food wasted, calculated as a ratio between 0.0 and 1.0.
- `pair_id` (String): A unique identifier for the specific meal tray event (e.g., `gt_001`), used to trace data back to original source logs or hospital sites.
- `annotator` (String): The identifier for the expert who completed the task refinement. Crucial for the `variability_study` split to track inter-annotator variance.
- `notes` (String): Manual comments provided by the annotator during the refinement process, including details on image quality or labeling challenges.
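Putting the fields together, a single record looks roughly like this. All values below are invented placeholders: the image fields hold `PIL.Image.Image` objects in the real dataset, and the masks are much longer RLE strings.

```python
# Illustrative record shape only; none of these values come from the dataset.
example = {
    "before_image": None,             # PIL.Image.Image in the real dataset
    "after_image": None,              # PIL.Image.Image in the real dataset
    "before_mask": "[0, 12, 40, 8]",  # RLE-encoded string (hypothetical)
    "after_mask": "[0, 5]",           # RLE-encoded string (hypothetical)
    "total_waste_ratio": 0.35,        # Float32 in [0.0, 1.0]
    "pair_id": "gt_001",
    "annotator": "annotator_a",       # hypothetical identifier
    "notes": "minor glare on the after image",
}
```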
### Loading the Dataset

To access this private dataset, ensure you have a Hugging Face token with appropriate permissions.

```python
from datasets import load_dataset

# Load the main benchmark split (use `token=True` on recent versions of
# `datasets`, where `use_auth_token` is deprecated)
ds = load_dataset("proppos/waste-benchmark", split="benchmark", use_auth_token=True)
```
### Decoding Masks (RLE)

The `before_mask` and `after_mask` properties are stored as Run-Length Encoded (RLE) strings. To use these for training, they must be decoded into binary masks.
```python
import json  # the mask fields are strings; parse them before decoding
import numpy as np

def rle_to_multiclass_mask(annotation_list, height, width):
    """
    Decodes the raw Label Studio JSON list into a single multiclass mask.
    Main Dish = 1, Side Dish = 2
    """
    mask = np.zeros((height, width), dtype=np.uint8)
    label_map = {"Main Dish": 1, "Side Dish": 2}

    for item in annotation_list:
        if item['type'] != 'brushlabels':
            continue

        label = item['value']['brushlabels'][0]
        category_id = label_map.get(label, 0)
        rle = item['value']['rle']

        # RLE here is a flat list of (start, run-length) pairs over the
        # flattened (height * width) pixel array.
        flat_mask = np.zeros(height * width, dtype=np.uint8)
        for i in range(0, len(rle), 2):
            flat_mask[rle[i] : rle[i] + rle[i + 1]] = category_id

        # Merge into the main mask
        mask = np.maximum(mask, flat_mask.reshape((height, width)))

    return mask
```
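For intuition about the encoding the decoder consumes, here is the (start, run-length) pair format in miniature. The dimensions and RLE values below are invented for illustration; in practice the mask string would first be parsed (e.g. with `json.loads`) before decoding.

```python
import numpy as np

height, width = 4, 5

# Invented (start, run-length) pairs: pixels 0-2 and 10-11 belong to the mask.
rle = [0, 3, 10, 2]

flat = np.zeros(height * width, dtype=np.uint8)
for i in range(0, len(rle), 2):
    flat[rle[i] : rle[i] + rle[i + 1]] = 1

mask = flat.reshape((height, width))
# mask rows: [1,1,1,0,0], [0,0,0,0,0], [1,1,0,0,0], [0,0,0,0,0]
```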
## Contact & Maintenance
Main Maintainer: Genís Láinez (glainez@proppos.com).