---
license: mit
task_categories:
- time-series-classification
tags:
- human-activity-recognition
- sensor-data
- federated-learning
- mobile-sensing
- accelerometer
- gyroscope
size_categories:
- 10K<n<100K
---
# UCI Human Activity Recognition (HAR) Dataset

## Dataset Description
The UCI Human Activity Recognition dataset is a widely-used benchmark for human activity recognition using smartphone sensors. This dataset contains sensor readings from accelerometers and gyroscopes of smartphones worn by volunteers performing six different activities.
## Activities
The dataset includes the following 6 activities:
- 1: WALKING
- 2: WALKING_UPSTAIRS
- 3: WALKING_DOWNSTAIRS
- 4: SITTING
- 5: STANDING
- 6: LAYING
## Dataset Statistics
- Number of subjects: 30
- Number of activities: 6
- Number of features: 561
- Training samples: 7,352
- Test samples: 2,947
- Total samples: 10,299
## Dataset Structure

### Data Fields
- `0` to `560`: Individual sensor feature columns (561 float values in total)
- `target`: Integer ID of the activity (1-6)
- `activity_label`: String label of the activity
- `subject_id`: Integer ID of the subject
### Data Splits
- Train: 7,352 samples
- Test: 2,947 samples
## Usage with Flower Datasets
This dataset is optimized for federated learning scenarios. Here's how to use it with Flower:
```python
from flwr_datasets import FederatedDataset

# Load the dataset and split the train set across 30 clients (IID by default)
fds = FederatedDataset(dataset="your-username/uci-har", partitioners={"train": 30})

# Get the partition assigned to a specific client
client_data = fds.load_partition(0)
```
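The returned partition is a regular Hugging Face `datasets.Dataset`, so it can be filtered, mapped, and converted to NumPy or PyTorch formats in the usual way.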
## Federated Learning Scenarios
This dataset supports several FL scenarios:
- Subject-based partitioning: Each client represents one subject (the natural FL setting; see the sketch after this list)
- Activity-based partitioning: Clients have different activity distributions
- Sensor heterogeneity: Simulate different device capabilities
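For the subject-based scenario, here is a minimal sketch using `NaturalIdPartitioner` from `flwr_datasets.partitioner`, assuming the dataset is published as `your-username/uci-har` as above:

```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import NaturalIdPartitioner

# One partition per subject, grouped by the subject_id column
partitioner = NaturalIdPartitioner(partition_by="subject_id")
fds = FederatedDataset(dataset="your-username/uci-har", partitioners={"train": partitioner})

# Each partition holds all training windows from a single subject
subject_data = fds.load_partition(0)
print(set(subject_data["subject_id"]))  # a single subject ID
```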
## Citation
```bibtex
@misc{anguita2013public,
  title={A public domain dataset for human activity recognition using smartphones},
  author={Anguita, Davide and Ghio, Alessandro and Oneto, Luca and Parra, Xavier and Reyes-Ortiz, Jorge Luis},
  year={2013},
  publisher={European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning}
}
```
## Original Source
- Repository: UCI Machine Learning Repository
- License: MIT
## Example Usage
```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-username/uci-har")

# Access the train split
train_data = dataset["train"]
print(f"Number of training samples: {len(train_data)}")

# Access a single sample
sample = train_data[0]
print("Number of feature columns: 561")
print(f"Activity: {sample['activity_label']}")
print(f"Target: {sample['target']}")
print(f"Subject: {sample['subject_id']}")
print(f"First few features: {sample['0']:.3f}, {sample['1']:.3f}, {sample['2']:.3f}")
```
## Federated Learning Use Cases
This dataset is particularly suitable for:
- Cross-silo FL: Different organizations with sensor data
- Cross-device FL: Mobile devices performing activity recognition
- Personalized models: Subject-specific activity patterns
- Non-IID scenarios: Different subjects have different activity patterns
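To simulate label-skewed (non-IID) clients instead of the natural per-subject split, the activity labels can be partitioned with `DirichletPartitioner`; a minimal sketch (the number of clients and `alpha` are illustrative choices):

```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import DirichletPartitioner

# Skew the activity distribution across 10 clients; smaller alpha means more skew
partitioner = DirichletPartitioner(num_partitions=10, partition_by="target", alpha=0.3)
fds = FederatedDataset(dataset="your-username/uci-har", partitioners={"train": partitioner})
skewed_partition = fds.load_partition(0)
```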
## Data Preprocessing
The sensor signals were preprocessed by:
- Noise filtering with a median filter and a 3rd-order low-pass Butterworth filter
- Sampling at a fixed rate of 50 Hz in fixed-width sliding windows of 2.56 s (128 readings per window)
- 50% overlap between consecutive windows
- Separation of gravitational and body motion components
- Jerk signals derived from body linear acceleration and angular velocity
- Fast Fourier Transform (FFT) applied to some signals
- Feature vector extraction from time and frequency domain variables
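For reference, here is a minimal sketch of the windowing step described above (2.56 s windows of 128 readings with 50% overlap), assuming a raw 50 Hz signal stored in a NumPy array; this is an illustration, not the original authors' pipeline:

```python
import numpy as np

def sliding_windows(signal: np.ndarray, window: int = 128, overlap: float = 0.5) -> np.ndarray:
    """Split a (n_samples, n_channels) signal into overlapping fixed-width windows."""
    step = int(window * (1 - overlap))  # 64 samples = 1.28 s at 50 Hz
    starts = range(0, len(signal) - window + 1, step)
    return np.stack([signal[s:s + window] for s in starts])

# Example: 10 s of synthetic 3-axis accelerometer data at 50 Hz
raw = np.random.randn(500, 3)
windows = sliding_windows(raw)
print(windows.shape)  # (6, 128, 3)
```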