# Prompt Evolution Qwen3.5-2B Adapter (train_aligned)
This repository provides a LoRA adapter fine-tuned from Qwen/Qwen3.5-2B for Chinese prompt-rewriting coaching (train_aligned style).
## What this is
- Format: PEFT LoRA adapter (`adapter_model.safetensors`)
- Base model required: `Qwen/Qwen3.5-2B`
- Not GGUF / not Ollama one-click by default (see the merge sketch after this list)
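If you want a standalone checkpoint (for example, as a first step toward a GGUF conversion), you can fold the adapter into the base weights with PEFT's `merge_and_unload()`. This is a minimal sketch, assuming the same adapter repo name as the Quick Start below and enough memory to load the base model in fp16 (merging is not supported on 4-bit weights):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "Qwen/Qwen3.5-2B"
adapter_repo = "<your-username>/prompt-evolution-qwen35-2b-adapter"

# Load the base model in half precision (not quantized), then attach the adapter.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(model, adapter_repo)

# Fold the LoRA deltas into the base weights and save a standalone checkpoint.
merged = model.merge_and_unload()
merged.save_pretrained("prompt-evolution-qwen35-2b-merged")
AutoTokenizer.from_pretrained(base_model, trust_remote_code=True).save_pretrained(
    "prompt-evolution-qwen35-2b-merged"
)
```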
## Quick Start (Transformers + PEFT)
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base_model = "Qwen/Qwen3.5-2B"
adapter_repo = "<your-username>/prompt-evolution-qwen35-2b-adapter"

# 4-bit NF4 quantization keeps the 2B base model within a single consumer GPU.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=True,
)

tok = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
if tok.pad_token is None:
    tok.pad_token = tok.eos_token or tok.unk_token

model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

# Attach the LoRA adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, adapter_repo)
model.eval()

# Example prompt (Chinese): "You are Prompt Evolution's prompt-correction coach.
# Only optimize the prompt; do not do the task yourself.
# Original prompt: Write the weekly report, use your own judgment, make it quick."
prompt = "你是 Prompt Evolution 的提示词纠偏教练。请只做提示词优化,不要直接代做任务。\n原始提示词:写周报,你看着办就行,快一点。"

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=220, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
```
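Qwen-family instruct checkpoints are usually prompted through their chat template rather than raw text. Whether this adapter was trained on raw prompts or templated turns is not stated here, so treat the following as an assumed variation on the Quick Start above; it reuses the `tok` and `model` objects from that snippet and splits the example into system and user roles:

```python
# Optional: drive the same model through the tokenizer's chat template.
# Assumes the base checkpoint ships a chat template; skip this if it does not.
messages = [
    {"role": "system", "content": "你是 Prompt Evolution 的提示词纠偏教练。请只做提示词优化,不要直接代做任务。"},
    {"role": "user", "content": "原始提示词:写周报,你看着办就行,快一点。"},
]
chat_inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(chat_inputs, max_new_tokens=220, do_sample=False)
# Decode only the newly generated tokens, not the templated prompt.
print(tok.decode(out[0][chat_inputs.shape[-1]:], skip_special_tokens=True))
```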
## Training Snapshot
- Prompt style: `train_aligned`
- Steps: 100
- Selected layers: top 8 transformer layers
- LoRA targets: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj` (an illustrative config sketch follows this list)
- See `run_summary.json`, `run_config.json`, and `metrics_summary.json` for details.
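For reference, the listed target modules map onto a PEFT `LoraConfig` along these lines. The rank, alpha, and dropout values below are illustrative placeholders, not the values used in training; consult `run_config.json` for the actual settings:

```python
from peft import LoraConfig

# Illustrative only: r / lora_alpha / lora_dropout are placeholders,
# not the values recorded in run_config.json.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    # The run restricted LoRA to the top 8 transformer layers; in LoraConfig this
    # is expressed via layers_to_transform (indices depend on the model depth).
    task_type="CAUSAL_LM",
)
```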
## Notes
- Follow the base model license and usage terms from `Qwen/Qwen3.5-2B`.
- This adapter is intended for prompt-rewriting assistance, not fact injection or task substitution.