PMTX1-2B Merged

PMTX1-2B Merged is a full-weight model produced by merging the PMTX1 LoRA adapter into the Qwen/Qwen3.5-2B base model.

Repository Type

This is a merged full-weight repo (not a LoRA adapter repo).

Contains at least:

  • model.safetensors
  • config.json
  • generation_config.json
  • tokenizer.json
  • tokenizer_config.json
  • chat_template.jinja

One-Command Pull (Transformers)

pip install -U torch transformers accelerate
python -c "from huggingface_hub import snapshot_download; snapshot_download('silas114514/PMTX1-2B-merged')"

Quick Inference (Transformers)

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "silas114514/PMTX1-2B-merged"

tok = AutoTokenizer.from_pretrained(repo, trust_remote_code=True)
if tok.pad_token is None:
    # Fall back to the EOS (or UNK) token so padding/batching works
    tok.pad_token = tok.eos_token or tok.unk_token

model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

prompt = "You are the prompt-correction coach for Prompt Evolution. Only optimize the prompt; do not perform the task yourself.\nOriginal prompt: Write the weekly report, use your judgment, and be quick about it."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))

One-Command Pull (vLLM)

vllm serve silas114514/PMTX1-2B-merged --trust-remote-code
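
Once the server is up, it exposes vLLM's OpenAI-compatible API. A minimal client sketch, assuming the default address http://localhost:8000/v1 (adjust if you passed --host/--port to vllm serve):

```python
import json
import urllib.request

def build_chat_request(prompt: str, model: str = "silas114514/PMTX1-2B-merged") -> dict:
    """Build a /v1/chat/completions payload for the merged model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 200,
        "temperature": 0.0,  # greedy decoding, matching do_sample=False above
    }

def chat(prompt: str, base_url: str = "http://localhost:8000/v1") -> str:
    """POST a chat request to the vLLM server and return the reply text."""
    payload = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Using the chat endpoint (rather than raw completions) lets the server apply the repo's chat_template.jinja for you.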

Base Model and Training Profile

  • Base model: Qwen/Qwen3.5-2B
  • Prompt style: train_aligned
  • LoRA layers: top 8
  • LoRA targets: q_proj,k_proj,v_proj,o_proj,gate_proj,up_proj,down_proj
  • Training summary files included: run_summary.json, run_config.json, metrics_summary.json

Known Limitations

  • This model is optimized for prompt rewriting guidance, not unrestricted factual generation.
  • Some prompts may still need strict output post-processing for production format guarantees.
  • For Ollama: the official qwen3.5 base can be pulled directly, but porting this custom Qwen3.5 fine-tune to Ollama may require additional conversion tooling (such as a GGUF export) and validation.

License and Compliance

  • Follow upstream Qwen/Qwen3.5-2B license and usage terms.
  • This merged model is a derivative artifact from that base model.