HealthAILab / nielsr (HF Staff) committed
Commit 2bac606 · verified · 1 Parent(s): 45d4977

Improve model card: Add pipeline tag, library name, paper/project/code links (#1)




Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +60 -3
README.md CHANGED
---
license: apache-2.0
pipeline_tag: text-classification
library_name: transformers
---

# Menta: A Small Language Model for On-Device Mental Health Prediction

Menta is an optimized small language model (SLM) fine-tuned specifically for multi-task mental health prediction from social media data. As presented in the paper [Menta: A Small Language Model for On-Device Mental Health Prediction](https://huggingface.co/papers/2512.02716), it addresses the need for privacy-preserving and efficient mental health assessment on mobile devices.

- **Paper**: [Menta: A Small Language Model for On-Device Mental Health Prediction](https://huggingface.co/papers/2512.02716)
- **Project Page**: https://xxue752-nz.github.io/menta-project/
- **Code Repository**: https://github.com/xxue752-nz/Menta

<div align="center">
<img src="https://github.com/xxue752-nz/Menta/raw/main/workflow.png" alt="Menta Workflow" width="100%">

**Privacy-Preserving Mental Health Assessment Using Small Language Models on Mobile Devices**

</div>

## Overview

Menta is an optimized small language model for multi-task mental health prediction from social media. It is trained with a LoRA-based cross-dataset regimen and a balanced-accuracy-oriented objective across six classification tasks. Compared with nine state-of-the-art small language model baselines, Menta delivers an average improvement of 15.2% over the best SLM without fine-tuning, and it surpasses 13B-parameter large language models on depression and stress detection while remaining about 3.25 times smaller. We also demonstrate real-time on-device inference on an iPhone 15 Pro Max using about 3 GB of RAM, enabling scalable and privacy-preserving mental health monitoring.

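To make the balanced-accuracy objective concrete, here is a minimal sketch of the metric itself (mean per-class recall, the standard definition) on an imbalanced toy task. This is purely illustrative and is not the authors' training code:

```python
from collections import defaultdict

def balanced_accuracy(y_true, y_pred):
    """Balanced accuracy = mean of per-class recalls.

    Standard definition (same as scikit-learn's balanced_accuracy_score);
    shown only to illustrate the metric the objective targets.
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        if t == p:
            correct[t] += 1
    recalls = [correct[c] / total[c] for c in total]
    return sum(recalls) / len(recalls)

# Imbalanced toy labels: plain accuracy is 9/10 = 0.9,
# but balanced accuracy exposes the weak minority class.
y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
print(balanced_accuracy(y_true, y_pred))  # 0.75 = (8/8 + 1/2) / 2
```

Optimizing this quantity rather than raw accuracy matters because mental health datasets are typically heavily skewed toward the negative class.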
## Key Features

- **Privacy-First**: All processing happens on-device; no data leaves your device
- **Mobile-Optimized**: Designed specifically for iOS devices with efficient resource usage
- **Multi-Dimensional Analysis**: Evaluates depression, stress, and suicidal thoughts
- **Real-Time Monitoring**: Provides immediate in-situ predictions
- **High Accuracy**: Fine-tuned SLMs for mental health assessment tasks

## Technical Stack

### Deployment
- **Language**: Swift, SwiftUI
- **Platform**: iOS 15.0+
- **ML Framework**: `llama.cpp` (C++ inference)
- **Model Format**: GGUF (quantized models)

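The GGUF files consumed by `llama.cpp` start with a small fixed preamble, which is handy for sanity-checking a quantized model before shipping it. The sketch below reads that preamble with the standard library, assuming the published GGUF layout (magic `GGUF`, then little-endian uint32 version, uint64 tensor count, uint64 metadata key/value count); it is a format illustration, not part of the Menta tooling:

```python
import os
import struct
import tempfile

def read_gguf_header(path):
    """Read the fixed GGUF preamble: 4-byte magic b'GGUF', then
    little-endian uint32 version, uint64 tensor count, and uint64
    metadata key/value count (per the GGUF spec used by llama.cpp)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
    return {"version": version, "tensors": n_tensors, "metadata_kv": n_kv}

# Demo against a synthetic header so no real model file is required:
with tempfile.NamedTemporaryFile(delete=False, suffix=".gguf") as tmp:
    tmp.write(b"GGUF" + struct.pack("<IQQ", 3, 291, 24))
    path = tmp.name
print(read_gguf_header(path))  # {'version': 3, 'tensors': 291, 'metadata_kv': 24}
os.remove(path)
```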
### Training
- **Language**: Python 3.8+
- **Frameworks**: PyTorch, Transformers
- **Techniques**: LoRA fine-tuning, multi-task learning
- **Base Models**: Small Language Models (SLMs)

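The core idea behind the LoRA fine-tuning listed above is to keep the pretrained weight matrix frozen and learn only a low-rank update, `W + (alpha/r) * B @ A`, which can later be merged back for zero-overhead inference. A NumPy sketch of that math (toy dimensions; the actual rank and scaling used for Menta are not specified here):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 16, 4, 8  # toy sizes, not Menta's config

W = rng.normal(size=(d_out, d_in))     # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection; zero-init
                                       # so training starts exactly at W

def lora_forward(x):
    # h = W x + (alpha / r) * B (A x); only A and B would receive gradients
    return W @ x + (alpha / r) * (B @ (A @ x))

# After training, the adapter merges into a single dense matrix:
W_merged = W + (alpha / r) * (B @ A)

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W_merged @ x)
```

Because only `A` and `B` are trained, the approach keeps the per-task trainable parameter count tiny, which is what makes cross-dataset multi-task fine-tuning of an SLM cheap enough to be practical.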
For more detailed deployment and training instructions, please refer to the [GitHub repository](https://github.com/xxue752-nz/Menta).

## Citation

If you find our work helpful or inspiring, please feel free to cite it:
```bibtex
@inproceedings{menta2025menta,
  title={Menta: A Small Language Model for On-Device Mental Health Prediction},
  author={},
  booktitle={Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://arxiv.org/abs/2512.02716},
}
```