# Fine-tuned Model: Dindan

## 📚 Training Configuration

- **data_path**: `QomSSLab/MMPD_QA_v2`
- **output_dir**: `gemma312b_lora_chckpnts-dindan`
- **new_model_name**: `Dindan`
- **data_ratio**: `1.0`
- **model_name**: `QomSSLab/gemma-3-12b-it`
- **use_4bit**: `False`
- **use_lora**: `False`
- **max_seq_length**: `2048`
- **batch_size**: `16`
- **gradient_accu**: `8`
- **epochs**: `1`
- **learning_rate**: `2e-05`
- **lora_alpha**: `256`
- **lora_drop**: `0.05`
- **lora_r**: `256`
- **tune_embedding_layer**: `False`
- **hf_token**: `********`
- **resume_from_checkpoint**: `False`
- **use_8bit_optimizer**: `True`
- **push_to_hub**: `True`
- **push_lora_only**: `False`
- **train_only_on_assistant**: `False`
- **last_response**: `False`

---

Auto-generated after training.
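A minimal sketch of how the batching hyperparameters above combine, assuming `batch_size` is the per-device micro-batch size and `gradient_accu` is the gradient-accumulation factor (the usual convention in Hugging Face training scripts; this run's exact semantics are not stated in the card):

```python
# Hypothetical helper: compute the effective batch size per optimizer
# update from the hyperparameters listed in the configuration above.
# Assumes a single device; multiply by world size for multi-GPU runs.
config = {
    "batch_size": 16,      # per-device micro-batch size (from the card)
    "gradient_accu": 8,    # gradient-accumulation steps (from the card)
    "epochs": 1,
    "learning_rate": 2e-05,
}

def effective_batch_size(cfg: dict) -> int:
    """Samples contributing to each optimizer step."""
    return cfg["batch_size"] * cfg["gradient_accu"]

print(effective_batch_size(config))  # → 128
```

With these settings each optimizer step sees 128 samples, which is why a small `learning_rate` of `2e-05` over a single epoch can still train stably.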