---
license: bsd-3-clause
pipeline_tag: robotics
tags:
- fingernet
- asfinger
- multimodal
- onnx
- pytorch
library_name: transformers
datasets:
- asRobotics/fingernet-100k
---
# Model Card for FingerNet
## Table of Contents

- [Model Description](#model-description)
- [Intended Use](#intended-use)
- [Training Data](#training-data)
- [Citation](#citation)
## Model Description
FingerNet is an MLP model designed for the asFinger. Given the finger's 6D motion as input, it predicts both the resulting 6D force and the 3D shape (mesh node positions) of the finger.
Try it out on the Spaces Demo!
- Developers: Xudong Han, Ning Guo, Xiaobo Liu, Tianyu Wu, Fang Wan, and Chaoyang Song.
- Model type: MLP
- License: BSD-3-Clause
- Resources for more information: the Spaces demo and project page referenced in this card.
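
The trained architecture ships as remote code in this repository; the sketch below is only a mental model of how an MLP of this kind could map a 6-D motion vector to a 6-D force and a set of mesh nodes. The hidden width and node count here are illustrative assumptions, not the trained model's values.

```python
# Illustrative only: NOT the shipped FingerNet architecture.
# Hidden width (256) and node count (100) are assumed for the sketch.
import torch
import torch.nn as nn

class FingerNetSketch(nn.Module):
    def __init__(self, num_nodes: int = 100, hidden: int = 256):
        super().__init__()
        self.num_nodes = num_nodes
        self.trunk = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.force_head = nn.Linear(hidden, 6)              # 6D force
        self.shape_head = nn.Linear(hidden, num_nodes * 3)  # flattened mesh nodes

    def forward(self, motion: torch.Tensor):
        h = self.trunk(motion)                                   # (batch, hidden)
        force = self.force_head(h)                               # (batch, 6)
        shape = self.shape_head(h).view(-1, self.num_nodes, 3)   # (batch, N, 3)
        return force, shape
```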
## Intended Use
This model is intended for researchers and developers working in robotics and tactile sensing. It can be used to enhance the capabilities of asFingers in applications such as robotic manipulation, haptics, and human-robot interaction.
See the project page for more details.
To load the model:
```python
# Example code to load the safetensors weights via transformers
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("asRobotics/fingernet", trust_remote_code=True)

x = torch.zeros((1, 6))  # Example input: batch size of 1, 6D motion
output = model(x)
```
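
The exact structure of `output` (a single tensor, tuple, or dict) is defined by the repository's remote code, so inspect it before downstream use. For inference-only calls, the usual PyTorch practice of disabling gradient tracking applies:

```python
# Inference-only: eval mode plus no_grad skips gradient bookkeeping.
# Reuses `model` and `x` from the snippet above.
model.eval()
with torch.no_grad():
    output = model(x)
```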
Or to load the ONNX version:
```python
# Example code to load the ONNX export with onnxruntime
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

onnx_model_path = hf_hub_download(repo_id="asRobotics/fingernet", filename="model.onnx")
ort_session = ort.InferenceSession(onnx_model_path)

x = np.zeros((1, 6), dtype=np.float32)  # Example input: batch size of 1, 6D motion
outputs = ort_session.run(None, {"motion": x})
```
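
If the input or output tensor names are ever in doubt, onnxruntime can report the graph's signature directly; this snippet just prints what the exported model declares:

```python
# Print the declared inputs/outputs of the loaded ONNX graph.
for inp in ort_session.get_inputs():
    print("input:", inp.name, inp.shape, inp.type)
for out in ort_session.get_outputs():
    print("output:", out.name, out.shape, out.type)
```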
## Training Data
The model was trained on [FingerNet-100K](https://huggingface.co/datasets/asRobotics/fingernet-100k), a dataset of motion, force, and shape samples collected from finite element simulations.
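
Since the dataset is hosted on the Hub, it can in principle be pulled with the `datasets` library. This is a hypothetical sketch that assumes the repository is in a format `load_dataset` can parse; check the dataset card for the actual splits and fields.

```python
# Hypothetical: load FingerNet-100K from the Hub; split and column
# names are assumptions, verify them on the dataset card.
from datasets import load_dataset

ds = load_dataset("asRobotics/fingernet-100k")
print(ds)  # shows the available splits and columns
```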
## Citation
If you use this model in your research, please cite the following papers:
```bibtex
@article{liu2024proprioceptive,
  title={Proprioceptive learning with soft polyhedral networks},
  author={Liu, Xiaobo and Han, Xudong and Hong, Wei and Wan, Fang and Song, Chaoyang},
  journal={The International Journal of Robotics Research},
  volume={43},
  number={12},
  pages={1916--1935},
  year={2024},
  publisher={SAGE Publications Sage UK: London, England},
  doi={10.1177/02783649241238765}
}

@article{wu2025magiclaw,
  title={MagiClaw: A Dual-Use, Vision-Based Soft Gripper for Bridging the Human Demonstration to Robotic Deployment Gap},
  author={Wu, Tianyu and Han, Xudong and Sun, Haoran and Zhang, Zishang and Huang, Bangchao and Song, Chaoyang and Wan, Fang},
  journal={arXiv preprint arXiv:2509.19169},
  year={2025}
}
```