🩺 Liquid-LFM-1.2B-Medical-Doctor
This is a fine-tuned version of LiquidAI/LFM2.5-1.2B-Instruct specialized for medical dialogue and clinical assessment.
It was trained to adopt the persona of an empathetic, professional physician that analyzes patient symptoms and structures its responses in a clinical format ("Assessment", "Plan", etc.).
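Because replies follow that clinical structure, downstream code can split them into named sections. Below is a minimal, hypothetical helper (not part of this repository) that assumes section headers like "Assessment:" appear at the start of a line:

```python
import re

def parse_clinical_sections(reply: str) -> dict:
    """Split a model reply into clinical sections.

    Hypothetical helper: assumes headers such as "Assessment:" and
    "Plan:" appear at the start of their own lines.
    """
    sections = {"Preamble": []}
    current = "Preamble"
    for line in reply.splitlines():
        m = re.match(r"^\s*(Assessment|Plan|History|Recommendations)\s*:\s*(.*)$",
                     line, re.IGNORECASE)
        if m:
            current = m.group(1).capitalize()
            sections[current] = [m.group(2)] if m.group(2) else []
        else:
            sections[current].append(line)
    return {k: "\n".join(v).strip() for k, v in sections.items()}

reply = "Assessment: Likely tension headache.\nPlan: Hydration, rest, OTC analgesics."
print(parse_clinical_sections(reply))
```

The exact header names the model emits may vary; adjust the regex to the sections you observe in practice.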
💻 Hardware & Training
- Hardware: Trained on 3x NVIDIA H100 80GB GPUs.
- Base Model: Liquid LFM 1.2B (Lightweight, efficient architecture).
- Dataset: UCSD Medical Dialogue (English).
- Method: QLoRA / LoRA fine-tuning via `trl` and `peft`.
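The card does not publish the training hyperparameters, but a `trl`/`peft` LoRA setup of the kind described typically looks like the sketch below. All values (rank, alpha, target modules, output directory) are illustrative assumptions, and `dataset` is a placeholder for the preprocessed UCSD Medical Dialogue data:

```python
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Illustrative values only; the actual hyperparameters are not published.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

trainer = SFTTrainer(
    model="LiquidAI/LFM2.5-1.2B-Instruct",
    train_dataset=dataset,  # placeholder: UCSD Medical Dialogue in chat format
    peft_config=lora_config,
    args=SFTConfig(output_dir="lfm-medical-lora", bf16=True),
)
trainer.train()
```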
⚠️ Medical Disclaimer
This model is an AI research artifact and IS NOT a substitute for professional medical advice, diagnosis, or treatment. It has been trained on historical medical dialogues but can hallucinate or provide incorrect information. Always consult a qualified healthcare provider for personal medical needs.
🚀 Quick Usage
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# 1. Load the base model
base_model = AutoModelForCausalLM.from_pretrained(
    "LiquidAI/LFM2.5-1.2B-Instruct",
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# 2. Load the LoRA adapter and merge it into the base weights
adapter_id = "5ivatej/Liquid-LFM-1.2B-Medical-Doctor"
model = PeftModel.from_pretrained(base_model, adapter_id)
model = model.merge_and_unload()

# 3. Run inference
tokenizer = AutoTokenizer.from_pretrained("LiquidAI/LFM2.5-1.2B-Instruct")
messages = [{"role": "user", "content": "I have a headache."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
Model tree for 5ivatej/Liquid-LFM-1.2B-Medical-Doctor
- Base model: LiquidAI/LFM2.5-1.2B-Base
- Fine-tuned from: LiquidAI/LFM2.5-1.2B-Instruct