---
license: bsd-3-clause
pipeline_tag: robotics
tags:
- fingernet
- asfinger
- multimodal
- onnx
- pytorch
library_name: transformers
datasets:
- asRobotics/fingernet-100k
---

# Model Card for FingerNet

## Table of Contents

- [Model Card for FingerNet](#model-card-for-fingernet)
- [Table of Contents](#table-of-contents)
- [Model Description](#model-description)
- [Intended Use](#intended-use)
- [Training Data](#training-data)
- [Citation](#citation)
## Model Description

FingerNet is an MLP model designed for the asFinger. It predicts both the 6D force and the 3D shape (mesh node positions) of the asFinger from its 6D motion.
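
For intuition about the interface, the sketch below shows a minimal two-head MLP that maps a 6D motion vector to a 6D force vector plus a set of 3D mesh node positions. The hidden sizes, the number of mesh nodes (`NUM_NODES`), and the two-head layout are illustrative assumptions, not the actual FingerNet architecture; see the loading examples below for the real checkpoint.

```python
import torch
import torch.nn as nn

NUM_NODES = 100  # hypothetical node count; the real value depends on the asFinger mesh

class TwoHeadMLP(nn.Module):
    """Illustrative stand-in: 6D motion -> (6D force, NUM_NODES x 3 shape)."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.force_head = nn.Linear(hidden, 6)               # 6D force/torque
        self.shape_head = nn.Linear(hidden, NUM_NODES * 3)   # flattened 3D node positions

    def forward(self, motion: torch.Tensor):
        h = self.backbone(motion)
        force = self.force_head(h)                           # (batch, 6)
        shape = self.shape_head(h).view(-1, NUM_NODES, 3)    # (batch, NUM_NODES, 3)
        return force, shape

# Example: one 6D motion sample in, force and shape predictions out
force, shape = TwoHeadMLP()(torch.zeros(1, 6))
```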
|
|
|
|
|
Try it out on the [Spaces Demo](https://huggingface.co/spaces/asRobotics/fingernet-demo)!

- Developers: Xudong Han, Ning Guo, Xiaobo Liu, Tianyu Wu, Fang Wan, and Chaoyang Song
- Model type: MLP
- License: BSD-3-Clause
- Resources for more information:
  - [Project page](https://doc.ancoraspring.com/asfinger)
  - [GitHub repo](https://github.com/AncoraSpring/metafinger)
## Intended Use

This model is intended for researchers and developers working in robotics and tactile sensing. It can be used to enhance the capabilities of asFingers in applications such as robotic manipulation, haptics, and human-robot interaction.

See the [project page](https://doc.ancoraspring.com/asfinger) for more details.

To load the model with `transformers`:
```python
# Load the safetensors checkpoint via transformers (trust_remote_code is needed for the custom model class)
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("asRobotics/fingernet", trust_remote_code=True)

x = torch.zeros((1, 6))  # example input: batch size of 1, 6D motion
output = model(x)
```
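For inference-only use, it is usually worth switching the model to evaluation mode and disabling gradient tracking (standard PyTorch practice; the exact structure of `output` depends on the checkpoint's custom modeling code):

```python
import torch

model.eval()  # disable any training-only behavior such as dropout
with torch.no_grad():  # no gradients needed for pure inference
    output = model(torch.zeros((1, 6)))
```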
|
|
|
|
|
Or, to load the ONNX version:

```python
# Download and run the ONNX export with onnxruntime
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

onnx_model_path = hf_hub_download(repo_id="asRobotics/fingernet", filename="model.onnx")
ort_session = ort.InferenceSession(onnx_model_path)

x = np.zeros((1, 6), dtype=np.float32)  # example input: batch size of 1, 6D motion
outputs = ort_session.run(None, {"motion": x})  # "motion" is the ONNX input name
```
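The input name `"motion"` matches the export used above; if in doubt, the graph's actual input and output names and shapes can be listed from the session:

```python
# Inspect the exported graph's I/O names and shapes before binding inputs
print([(i.name, i.shape) for i in ort_session.get_inputs()])
print([(o.name, o.shape) for o in ort_session.get_outputs()])
```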
|
|
|
|
|
## Training Data

The model was trained on the [FingerNet-100K](https://huggingface.co/datasets/asRobotics/fingernet-100k) dataset, which contains motion, force, and shape data generated by finite element simulations.
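
Assuming the dataset is stored in a format the Hugging Face `datasets` library can load directly, it can be pulled from the Hub as sketched below; the printed summary shows the actual splits and feature names, which are not documented here.

```python
from datasets import load_dataset

# Pull FingerNet-100K from the Hub and inspect its splits and features
ds = load_dataset("asRobotics/fingernet-100k")
print(ds)
```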
|
|
|
|
|
## Citation

If you use this model in your research, please cite the following papers:
```bibtex
@article{liu2024proprioceptive,
  title={Proprioceptive learning with soft polyhedral networks},
  author={Liu, Xiaobo and Han, Xudong and Hong, Wei and Wan, Fang and Song, Chaoyang},
  journal={The International Journal of Robotics Research},
  volume={43},
  number={12},
  pages={1916--1935},
  year={2024},
  publisher={SAGE Publications Sage UK: London, England},
  doi={10.1177/02783649241238765}
}
```
[arXiv:2308.08538](https://arxiv.org/abs/2308.08538)
```bibtex
@article{wu2025magiclaw,
  title={MagiClaw: A Dual-Use, Vision-Based Soft Gripper for Bridging the Human Demonstration to Robotic Deployment Gap},
  author={Wu, Tianyu and Han, Xudong and Sun, Haoran and Zhang, Zishang and Huang, Bangchao and Song, Chaoyang and Wan, Fang},
  journal={arXiv preprint arXiv:2509.19169},
  year={2025}
}
```