GovOn Civil Response Adapter (EXAONE 4.0-32B QLoRA)

A QLoRA adapter for EXAONE 4.0-32B that drafts Korean government civil complaint responses with the appropriate tone, structure, and regulatory awareness.

Part of the GovOn agentic platform | Documentation | Training Data

Why This Exists

South Korean government civil servants handle thousands of citizen complaints daily. Each response requires understanding citizen intent, searching relevant regulations across multiple systems, drafting in an appropriate administrative tone, and citing legal grounds -- all for a single complaint. Most of this work is repetitive, yet demands precision and empathy.

GovOn is an agentic CLI shell built with LangGraph that automates this pipeline while keeping the human in full control. Every AI action requires explicit approval before execution. This adapter is the component responsible for generating draft responses to citizen complaints, designed to be dynamically loaded only when needed.

Architecture: Where This Adapter Fits

GovOn uses a Multi-LoRA architecture: the base model handles planning and intent classification, while specialized adapters are attached per request for specific capabilities.

```
User Query
    |
    v
[session_load] --> [planner (base EXAONE 4.0-32B)]
                        |
                   [approval_wait] -- human approves -->
                        |
                   [tool_execute]
                     /        \
           civil-adapter    legal-adapter
           (this model)     (legal citations)
                     \        /
                   [synthesis]
                        |
                   [persist] --> Response to citizen
```

  • Base model: EXAONE 4.0-32B handles intent classification and tool selection
  • This adapter: dynamically attached only when drafting citizen complaint responses
  • Legal adapter: a companion LoRA for legal citation augmentation (govon-legal-adapter)
  • Serving: vLLM with per-request LoRA switching (Multi-LoRA)

Quick Start

Standalone with PEFT

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach the LoRA adapter on top.
base = AutoModelForCausalLM.from_pretrained(
    "LGAI-EXAONE/EXAONE-4.0-32B",
    torch_dtype="bfloat16",
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, "umyunsang/govon-civil-adapter")
tokenizer = AutoTokenizer.from_pretrained("umyunsang/govon-civil-adapter")

# System prompt: "You are an expert consultant for South Korean government
# civil complaints. Please answer citizens' questions accurately and kindly."
messages = [
    {"role": "system", "content": "당신은 대한민국 정부 민원 상담 전문가입니다. 시민의 질문에 정확하고 친절하게 답변해 주세요."},
    # "Please explain the procedure for reissuing a resident registration card."
    {"role": "user", "content": "주민등록증 재발급 절차를 알려주세요."},
]

input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids.to(model.device), max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Production: vLLM Multi-LoRA

Launch the server with the adapter registered:

```bash
vllm serve LGAI-EXAONE/EXAONE-4.0-32B-AWQ \
  --enable-lora \
  --lora-modules civil=umyunsang/govon-civil-adapter \
  --max-lora-rank 16
```

Then call it using the OpenAI-compatible API:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1")

response = client.chat.completions.create(
    model="civil",  # LoRA adapter name registered at launch
    messages=[
        {"role": "system", "content": "당신은 대한민국 정부 민원 상담 전문가입니다."},
        {"role": "user", "content": "주민등록증 재발급 절차를 알려주세요."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```
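vLLM's OpenAI-compatible server also lists registered LoRA modules alongside the base model in its model list (`client.models.list()`), which makes it straightforward to fall back to the base model when an adapter was not registered at launch. A minimal sketch, with `pick_model` as a hypothetical helper:

```python
# Hypothetical fallback helper: choose the adapter name if the server
# registered it, otherwise fall back to the base model.
BASE = "LGAI-EXAONE/EXAONE-4.0-32B-AWQ"

def pick_model(served: list[str], adapter: str = "civil", base: str = BASE) -> str:
    """Return the model name to send in a chat completion request."""
    return adapter if adapter in served else base
```

In practice, `served` would come from `[m.id for m in client.models.list().data]` against a running server.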

Adapter Configuration

| Parameter | Value |
| --- | --- |
| PEFT type | LoRA |
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Task type | CAUSAL_LM |
| Bias | none |
| PEFT version | 0.18.1 |
| Adapter size | 534 MB |
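The rank/alpha pair above fixes both the update scaling (alpha / r = 2.0) and the parameter overhead per target layer: for a frozen weight of shape (d_out, d_in), LoRA learns two small matrices totaling r × (d_in + d_out) parameters. A back-of-envelope sketch (the 5120-dim projection is illustrative, not the actual EXAONE layer shape):

```python
# LoRA overhead per target layer: A (r x d_in) plus B (d_out x r).
r, alpha = 16, 32
scaling = alpha / r  # the learned update B @ A is scaled by alpha/r = 2.0

def lora_params(d_out: int, d_in: int, rank: int = r) -> int:
    """Extra trainable parameters LoRA adds to one linear layer."""
    return rank * (d_in + d_out)

# Illustrative square projection (hypothetical dimensions):
print(lora_params(5120, 5120))  # 163840 extra parameters for this layer
```

Summing this over all seven target modules in every transformer block is what yields the 534 MB adapter, versus the full 32B base.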

Training Details

| Parameter | Value |
| --- | --- |
| Base model | EXAONE 4.0-32B |
| Hardware | HuggingFace Spaces L40S (48 GB VRAM) |
| Quantization | 4-bit (Unsloth) |
| Optimizer | AdamW 8-bit |
| Learning rate | 2e-4 (cosine scheduler) |
| Batch size | 2 (gradient accumulation 8 = effective 16) |
| Epochs | 1 |
| Max sequence length | 1,024 |
| Precision | bf16 |
| Warmup ratio | 0.03 |
| Packing | Enabled |
| Optimizations | Unsloth gradient checkpointing, flash attention |
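The batch settings above can be written out as a plain config fragment; key names are illustrative (the actual run used Unsloth/TRL training arguments), but the values match the table, including the effective batch size of 16:

```python
# Illustrative hyperparameter fragment mirroring the training table.
train_cfg = {
    "learning_rate": 2e-4,
    "lr_scheduler_type": "cosine",
    "per_device_train_batch_size": 2,
    "gradient_accumulation_steps": 8,
    "num_train_epochs": 1,
    "max_seq_length": 1024,
    "bf16": True,
    "warmup_ratio": 0.03,
    "packing": True,
}

# Effective batch size = per-device batch x gradient accumulation steps.
effective_batch = (
    train_cfg["per_device_train_batch_size"]
    * train_cfg["gradient_accumulation_steps"]
)
print(effective_batch)  # 16
```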

Training Data

This adapter was trained on umyunsang/govon-civil-response-data, derived from AI Hub Dataset 71852 (Korean Public Civil Complaint QA). Administrative law data (Dataset 71847) was intentionally excluded and reserved for the companion legal-adapter.

Training used the EXAONE 4.0 native chat template via tokenizer.apply_chat_template() with the following system prompt:

당신은 대한민국 정부 민원 상담 전문가입니다. 시민의 질문에 정확하고 친절하게 답변해 주세요.

(English: "You are an expert consultant for South Korean government civil complaints. Please answer citizens' questions accurately and kindly.")
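A sketch of how one training record could be rendered into the messages format consumed by `tokenizer.apply_chat_template()`. The `question`/`answer` field names are hypothetical (the actual dataset schema may differ); the system prompt is the one shown above:

```python
# System prompt used during training (translation above).
SYSTEM_PROMPT = (
    "당신은 대한민국 정부 민원 상담 전문가입니다. "
    "시민의 질문에 정확하고 친절하게 답변해 주세요."
)

def to_chat(example: dict) -> list[dict]:
    """Render one QA record into the chat-messages format.

    The resulting list is what apply_chat_template() would consume
    to produce a single packed training sequence.
    """
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": example["question"]},
        {"role": "assistant", "content": example["answer"]},
    ]
```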

Intended Use and Limitations

Intended use

  • Drafting responses to Korean government civil complaints within the GovOn agentic workflow
  • Research and experimentation with domain-adapted Korean language models
  • Educational exploration of LoRA fine-tuning for government NLP tasks

Limitations

  • This adapter generates draft responses only. Human review is mandatory before any response is sent to citizens.
  • Quality may degrade for complaint types outside the training data distribution.
  • Designed specifically for Korean language government civil complaint responses. Performance on other languages or domains is not guaranteed.
  • For legal citation and evidence augmentation, use the companion legal-adapter.

License

Use of this adapter involves two licenses:

| Component | License |
| --- | --- |
| This adapter (LoRA weights) | MIT License (GovOn project) |
| Base model (EXAONE 4.0-32B) | EXAONE AI Model License 1.2 - NC |

The base model license permits non-commercial use for research and education. Commercial deployment is available through the FriendliAI platform. The base model may not be used to develop models that compete with EXAONE. For commercial licensing inquiries, contact contact_us@lgresearch.ai.

Users must comply with both licenses when using this adapter.

Related Resources

| Resource | Link |
| --- | --- |
| GovOn Platform | github.com/GovOn-Org/GovOn |
| Documentation | govon-org.github.io/GovOn |
| Training Data | umyunsang/govon-civil-response-data |
| Training Space | umyunsang/govon-civil-adapter-train |
| Legal Adapter | siwo/govon-legal-adapter |
| Team | GovOn-Org (On-Device AI Team) |

Citation

If you use this adapter in your research, please cite:

```bibtex
@misc{govon-civil-adapter,
  title={GovOn Civil Response Adapter: QLoRA for Korean Government Complaint Drafting},
  author={GovOn-Org},
  year={2026},
  url={https://huggingface.co/umyunsang/govon-civil-adapter},
}
```