---
license: mit
task_categories:
  - time-series-classification
tags:
  - human-activity-recognition
  - sensor-data
  - federated-learning
  - mobile-sensing
  - accelerometer
  - gyroscope
size_categories:
  - 10K<n<100K
---

# UCI Human Activity Recognition (HAR) Dataset

## Dataset Description

The UCI Human Activity Recognition dataset is a widely used benchmark for human activity recognition from smartphone sensors. It contains accelerometer and gyroscope readings from smartphones worn by 30 volunteers while performing six different activities.

### Activities

The dataset covers the following six activities:

- 1: WALKING
- 2: WALKING_UPSTAIRS
- 3: WALKING_DOWNSTAIRS
- 4: SITTING
- 5: STANDING
- 6: LAYING

### Dataset Statistics

- Number of subjects: 30
- Number of activities: 6
- Number of features: 561
- Training samples: 7,352
- Test samples: 2,947
- Total samples: 10,299

## Dataset Structure

### Data Fields

- `0` to `560`: individual sensor feature columns (561 float values in total)
- `target`: integer ID of the activity (1-6)
- `activity_label`: string label of the activity
- `subject_id`: integer ID of the subject

### Data Splits

- Train: 7,352 samples
- Test: 2,947 samples

## Usage with Flower Datasets

This dataset is designed with federated learning in mind. Here's how to load it with Flower Datasets, partitioning the train split by subject so that each client holds exactly one subject's data:

```python
from flwr_datasets import FederatedDataset
from flwr_datasets.partitioner import NaturalIdPartitioner

# Partition the train split by subject_id (30 subjects -> 30 clients)
partitioner = NaturalIdPartitioner(partition_by="subject_id")
fds = FederatedDataset(dataset="your-username/uci-har", partitioners={"train": partitioner})

# Get the data for a specific client (subject)
client_data = fds.load_partition(0)
```

## Federated Learning Scenarios

This dataset supports several FL scenarios:

1. Subject-based partitioning: each client represents one subject (the natural FL setting)
2. Activity-based partitioning: clients have different activity distributions
3. Sensor heterogeneity: simulate different device capabilities
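
Scenario 1 boils down to grouping sample indices by `subject_id`. A minimal sketch of that grouping logic, using a handful of toy records (the values below are made up, not real dataset rows):

```python
from collections import defaultdict

# Toy records standing in for dataset rows (synthetic, for illustration)
records = [
    {"subject_id": 1, "target": 1},
    {"subject_id": 1, "target": 4},
    {"subject_id": 2, "target": 6},
    {"subject_id": 3, "target": 2},
    {"subject_id": 2, "target": 5},
]

# Subject-based partitioning: each client gets exactly one subject's row indices
partitions = defaultdict(list)
for idx, row in enumerate(records):
    partitions[row["subject_id"]].append(idx)

print(sorted(partitions.items()))  # [(1, [0, 1]), (2, [2, 4]), (3, [3])]
```

This is essentially what a natural-ID partitioner does under the hood: the number of clients falls out of the data (30 subjects here) rather than being chosen up front.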

## Citation

```bibtex
@inproceedings{anguita2013public,
  title={A public domain dataset for human activity recognition using smartphones},
  author={Anguita, Davide and Ghio, Alessandro and Oneto, Luca and Parra, Xavier and Reyes-Ortiz, Jorge Luis},
  booktitle={European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN)},
  year={2013}
}
```

## Original Source

The original data come from the "Human Activity Recognition Using Smartphones" dataset hosted by the UCI Machine Learning Repository.

## Example Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-username/uci-har")

# Access the train split
train_data = dataset["train"]
print(f"Number of training samples: {len(train_data)}")

# Access a single sample
sample = train_data[0]
print(f"Number of feature columns: {sum(key.isdigit() for key in sample)}")
print(f"Activity: {sample['activity_label']}")
print(f"Target: {sample['target']}")
print(f"Subject: {sample['subject_id']}")
print(f"First few features: {sample['0']:.3f}, {sample['1']:.3f}, {sample['2']:.3f}")
```

## Federated Learning Use Cases

This dataset is particularly suitable for:

- Cross-silo FL: different organizations holding sensor data
- Cross-device FL: mobile devices performing on-device activity recognition
- Personalized models: subject-specific activity patterns
- Non-IID scenarios: different subjects exhibit different activity patterns
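
The non-IID point can be made concrete by comparing per-client label histograms. A small sketch with synthetic activity labels (the subjects and label sequences below are invented for illustration):

```python
from collections import Counter

# Synthetic activity labels per client (illustrative only, not real data)
client_labels = {
    "subject_1": [1, 1, 1, 4, 5],
    "subject_2": [6, 6, 2, 2, 2],
}

# A per-client histogram makes label skew visible at a glance
histograms = {
    client: dict(sorted(Counter(labels).items()))
    for client, labels in client_labels.items()
}
for client, hist in histograms.items():
    print(client, hist)  # e.g. subject_1 {1: 3, 4: 1, 5: 1}
```

With the real dataset, the same comparison over the `target` column of each subject's partition quantifies how far the partitions are from IID.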

## Data Preprocessing

The raw sensor signals were preprocessed as follows:

- Noise filtering with a median filter and a 3rd-order low-pass Butterworth filter
- Sampling in fixed-width sliding windows of 2.56 s at 50 Hz (128 readings per window)
- 50% overlap between consecutive windows
- Separation of the gravitational and body-motion components of the acceleration signal
- Jerk signals derived from body linear acceleration and angular velocity
- Fast Fourier Transform (FFT) applied to a subset of the signals
- Extraction of the 561-dimensional feature vector from time- and frequency-domain variables
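
The windowing step can be sketched with NumPy: at 50 Hz, a 2.56 s window spans 128 samples, and 50% overlap means a hop of 64 samples between window starts (the signal below is random noise standing in for a real accelerometer trace):

```python
import numpy as np

FS = 50          # sampling rate (Hz)
WIN = 128        # 2.56 s window at 50 Hz
STEP = WIN // 2  # 50% overlap -> hop of 64 samples

def segment(signal):
    """Split a 1-D signal into fixed-width overlapping windows."""
    n_windows = 1 + (len(signal) - WIN) // STEP
    return np.stack([signal[i * STEP : i * STEP + WIN] for i in range(n_windows)])

signal = np.random.randn(10 * FS)  # 10 s of fake accelerometer data
windows = segment(signal)
print(windows.shape)  # (6, 128): six overlapping windows of 128 readings
```

Each window then yields one 561-dimensional feature vector, i.e. one row of this dataset.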