# HyperParam Tuning LoRA (max_seq_len=2048)

LoRA adapter fine-tuned from Qwen/Qwen3-4B-Instruct-2507 using QLoRA (4-bit quantization, Unsloth).
## Training Configuration
- Base model: Qwen/Qwen3-4B-Instruct-2507
- Dataset: u-10bei/structured_data_with_cot_dataset_v2
- Method: QLoRA (4-bit)
- Max sequence length: 2048
- Epochs: 3
- Learning rate: 0.0001
- LoRA: r=64, alpha=128
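The hyperparameters above can be sketched as a QLoRA setup using `peft` and `bitsandbytes` config objects. This is an illustrative reconstruction, not the exact Unsloth training script; in particular, `target_modules` is an assumed choice common for Qwen-family LoRA runs and is not stated in this card:

```python
# Illustrative QLoRA configuration matching the card's hyperparameters.
# NOTE: a sketch with peft/transformers; the original run used Unsloth,
# and target_modules is an assumption, not taken from the model card.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # QLoRA: 4-bit base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

lora_config = LoraConfig(
    r=64,                                   # LoRA rank (from the card)
    lora_alpha=128,                         # alpha = 2 * r (from the card)
    task_type="CAUSAL_LM",
    target_modules=[                        # assumed, typical for Qwen
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)

# Trainer-side settings from the card: 3 epochs, lr=1e-4, max_seq_len=2048.
```

The remaining settings (epochs, learning rate, sequence length) would be passed to the trainer, e.g. via TRL's `SFTConfig`.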
## Sources & License
- Training Data: u-10bei/structured_data_with_cot_dataset_v2
- Dataset License: CC-BY-4.0. The dataset is used and may be redistributed under the terms of the CC-BY-4.0 license.
- Compliance: Users must comply with both the dataset's attribution requirements and the base model's original terms of use.