# MicroAGI01: Egocentric Manipulation Dataset

License: see the LICENSE file in the repository.

MicroAGI01 is an egocentric RGB-D dataset of human household manipulation with full pose annotations: 676 recordings spanning 137 task types across 14 activity categories.
## What's Included Per Recording
- RGB + depth streams
- Camera pose (6DoF)
- Hand poses (3D landmarks)
- Task segmentation with text annotations
## Quick Facts

| Item | Value |
|---|---|
| Recordings | 676 mcaps (283 cut, 393 uncut) |
| Task types | 137 |
| Container | .mcap |
| Previews | 1 sample .mp4 file |
## Folder Structure

```
MicroAGI01/
├── uncut_mcaps/                    # Full-length recordings, ≥80% hands validity
├── cut_mcaps/                      # Shorter semantic chunks, ≥95% hands validity
├── task_mapping.csv                # Task labels per recording
├── microagi01viewerfoxglove.json
└── LICENSE
```
Start with uncut_mcaps: full-length recordings with all annotations included.
cut_mcaps contains shorter, semantically complete segments with stricter hand-tracking validity.
## Task Categories

- Kitchen: kitchen_cooking, kitchen_prep, kitchen_dishes, kitchen_organization, kitchen_dining, kitchen_general
- Cleaning: cleaning_general, cleaning_floor
- Laundry: laundry
- Organization: general_organization, general_household
- Rooms: bedroom, bathroom, living_room
## Topic Structure

### Overview

- Meta: /meta
- Camera: /tf_static; /camera/color/image, /camera/color/info (+ /camera/color/health); /camera/depth/image, /camera/depth/info, /camera/depth/unit_of_depth_in_mm
- SLAM: /tf/camera (+ .../health, .../state)
- Hands: /tf/hands, /hands/left, /hands/right (+ .../health)
- IMU: /imu/accel/sample, /imu/gyro/sample
- Task: /task (includes task_title)
### Descriptions (of relevant topics)

- /meta: information about the mcap and the operator (e.g. operator_height_in_m, metadata for the general task description)
- /tf_static: static transforms (including those between the camera, imu, depth, and color frames)
- /camera/.../image: JPEG@90 for color, PNG for depth
- /camera/.../info: sensor parameters (especially intrinsics)
- /camera/depth/unit_of_depth_in_mm: defines the depth unit conversion. Currently set to 1, meaning raw pixel values in the depth image are measured directly in millimeters (e.g. a pixel value of 1000 equals 1 meter)
- /camera/color/health: flags bad images, e.g. ones that are too dark or blurry
- /tf/camera: pose of the camera. A pose is only valid if a message on .../health with the same timestamp has valid == true; otherwise it should be ignored. Poses are only coherent with other poses in the same block of valid poses.
- /tf/camera/health: marks regions with successful tracking
- /tf/hands: poses of the left and right wrists
- /hands/...: positions of hand keypoints (in the wrist frame)
- /hands/.../health: signals whether the hand positions can be trusted
- /imu/.../sample: raw IMU samples
- /task: description of the current task (includes task_title)
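As a sketch of the depth convention described above, a decoded depth image can be converted to meters using the scale from /camera/depth/unit_of_depth_in_mm. The depth array and scale value here are synthetic stand-ins, not values decoded from an actual recording:

```python
import numpy as np

# Synthetic stand-in for a decoded depth PNG (uint16, one value per pixel).
depth_raw = np.array([[0, 500],
                      [1000, 4000]], dtype=np.uint16)

# Scale from /camera/depth/unit_of_depth_in_mm; currently 1, i.e. raw
# pixel values are already millimeters.
unit_of_depth_in_mm = 1.0

# Convert raw pixel values to meters.
depth_m = depth_raw.astype(np.float32) * unit_of_depth_in_mm / 1000.0
# e.g. a raw value of 1000 -> 1.0 m
```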
### TF Tree (across all tf (static) topics)

All coordinate systems are right-handed:

```
world                 # On the ground; z is up, gravity aligned
└── camera            # Center of camera; z is up, x is front
    │                 # Camera data:
    ├── depth         # Reference for the depth image; x to the right, y is down
    ├── accel         # Reference for the accel
    ├── gyro          # Reference for the gyro
    └── color         # Reference for the color image; x to the right, y is down
left_wrist            # x points from pinky to thumb, z along the arm
right_wrist           # x points from pinky to thumb, z along the arm
```
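Points can be moved between the frames above by composing homogeneous transforms. A minimal numpy sketch with hypothetical, made-up poses (real values come from /tf/camera and /tf/hands), taking a hand keypoint from the wrist frame into the world frame:

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses (identity rotations for clarity).
world_T_camera = make_T(np.eye(3), [0.0, 0.0, 1.5])  # camera 1.5 m above ground
camera_T_wrist = make_T(np.eye(3), [0.2, 0.0, 0.5])  # wrist relative to camera

# A keypoint (as on /hands/left), given in the wrist frame, in
# homogeneous coordinates.
keypoint_wrist = np.array([0.05, 0.0, 0.0, 1.0])

keypoint_world = world_T_camera @ camera_T_wrist @ keypoint_wrist
# -> x=0.25, y=0.0, z=2.0 in the world frame
```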
## Download

Everything:

```shell
huggingface-cli download MicroAGI-Labs/MicroAGI01 --repo-type dataset --local-dir ./MicroAGI01
```

Single file:

```shell
huggingface-cli download MicroAGI-Labs/MicroAGI01 uncut_mcaps/open-source-06.mcap --repo-type dataset --local-dir ./
```
## Viewing

We use Foxglove. A layout template is included in the repo:

1. Open Foxglove
2. Layout → Import layout → select microagi01viewerfoxglove.json
3. Load any .mcap file

This sets up the 3D view, camera feed, hand-validity state transitions, and task annotations panel.
## Extracting Protobuf

An extraction script is included in our GitHub repo.
## Intended Uses

- Policy and skill learning (robotics / VLA)
- Action detection and segmentation
- Hand/pose estimation and grasp analysis
- World-model pre- and post-training
## Attribution
This work uses the MicroAGI01 dataset (MicroAGI, Inc. 2026).
## Contact

- Questions: info@micro-agi.com
- Custom data or derived signals: data@micro-agi.com