---
license: apache-2.0
datasets:
- arafatanam/Mental-Health-Counseling
- arafatanam/Student-Mental-Health-Counseling-10K
language:
- en
base_model:
- unsloth/llama-3-8b-Instruct-bnb-4bit
tags:
- mental-health
- student-focused
- chatbot
---
# LLaMA-3-8B-Instruct Fine-Tuned for Mental Health Counseling
## Model Overview
This is a fine-tuned version of [`unsloth/llama-3-8b-Instruct-bnb-4bit`](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit), adapted for mental health counseling applications. It is designed to provide thoughtful, relevant, and compassionate responses.
## Dataset
- **Amod/mental_health_counseling_conversations** (cleaned version: [`arafatanam/Mental-Health-Counseling`](https://huggingface.co/datasets/arafatanam/Mental-Health-Counseling)) - **2,752 rows**
- **chillies/student-mental-health-counseling-vn** (translated version: [`arafatanam/Student-Mental-Health-Counseling-10K`](https://huggingface.co/datasets/arafatanam/Student-Mental-Health-Counseling-10K)) - **7,500 rows**
- **Total dataset size**: 10,252 rows (see the loading sketch below)
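The combined training set can be reproduced with the `datasets` library, as in the minimal sketch below. It assumes both repositories expose a single `train` split and share compatible column names; verify this against the dataset cards before merging.

```python
from datasets import load_dataset, concatenate_datasets

# Cleaned counseling conversations (~2,752 rows); a "train" split is assumed
counseling = load_dataset("arafatanam/Mental-Health-Counseling", split="train")

# Translated student counseling data (~7,500 rows); a "train" split is assumed
student = load_dataset("arafatanam/Student-Mental-Health-Counseling-10K", split="train")

# Merge into the ~10,252-row training set; columns must match across both sets
train_dataset = concatenate_datasets([counseling, student])
print(train_dataset)
```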
## Training Details
- **Hardware**: Kaggle Notebooks (GPU T4 x2)
- **Fine-tuning framework**: `Unsloth` with `LoRA`
- **Training settings** (see the configuration sketch after this list):
- `max_seq_length = 512`
- `batch_size = 8`
- `gradient_accumulation_steps = 4`
- `num_train_epochs = 2`
- `learning_rate = 5e-5`
- `optimizer = adamw_8bit`
- `lr_scheduler = cosine`
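A minimal configuration sketch under these settings is shown below. The LoRA rank, alpha, target modules, the text field name, and the reading of `batch_size = 8` as per-device are assumptions not stated in this card.

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments

# Load the 4-bit base model with the sequence length listed above
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=512,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank, alpha, and target modules are illustrative guesses
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_dataset,      # merged dataset from the sketch above
    dataset_text_field="text",        # assumed name of the formatted prompt column
    max_seq_length=512,
    args=TrainingArguments(
        per_device_train_batch_size=8,   # "batch_size = 8" read as per-device
        gradient_accumulation_steps=4,
        num_train_epochs=2,
        learning_rate=5e-5,
        optim="adamw_8bit",
        lr_scheduler_type="cosine",
        output_dir="outputs",
    ),
)
trainer.train()
```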
## Training Results
- **Final training loss**: `1.2433`
- **Total steps**: `640`
- **Trainable parameters**: `0.52%` of the model
- **Validation loss**: `1.182`
- **Evaluation metric** (perplexity): `3.15`
## Usage
This model can be applied to:
- AI-driven mental health chatbots
- Personalized therapy assistance
- Generating mental health support content
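
For any of these applications, the model can be loaded and prompted through the Llama-3 chat template as in the sketch below. The repository ID is a placeholder, since this card does not state the final model name; substitute the actual ID of this fine-tuned model (or load the base model together with the LoRA adapter).

```python
from unsloth import FastLanguageModel

# Placeholder repo ID -- replace with this model's actual Hugging Face ID
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="your-username/llama-3-8b-mental-health-counseling",
    max_seq_length=512,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # enable faster generation

messages = [
    {"role": "user", "content": "I've been feeling overwhelmed by exams and can't sleep. What can I do?"},
]
inputs = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids=inputs, max_new_tokens=256, do_sample=True, temperature=0.7
)
# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```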