Model Card for khazarai/Azerbaijani-math-1.7B

Model Details

This model is a fine-tuned version of Qwen3-1.7B, adapted for instruction following in Azerbaijani with a focus on mathematical problem solving. The fine-tuning improves the model’s ability to:

  • Understand and solve math problems written in Azerbaijani
  • Respond in a fluent, natural Azerbaijani style
  • Follow task-specific instructions with improved alignment and chat capability

Model Description

  • Developed by: Rustam Shiriyev
  • Language(s) (NLP): Azerbaijani
  • License: MIT
  • Finetuned from model: unsloth/Qwen3-1.7B

Uses

Direct Use

This model is best suited for:

  • Solving and explaining math problems in Azerbaijani
  • Educational assistants and tutoring bots for Azerbaijani students

Out-of-Scope Use

  • The model is not fine-tuned for factual correctness or safety filtering, so it should not be relied on for general knowledge queries or deployed without additional safeguards.

How to Get Started with the Model

Use the code below to get started with the model.

from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

# Load the tokenizer and base model, then attach the LoRA adapter.
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Qwen3-1.7B",
    device_map={"": 0},  # place the model on GPU 0
)

model = PeftModel.from_pretrained(base_model, "khazarai/Azerbaijani-math-1.7B")


question = "Bir f(x) funksiyası verilib: f(x) = 2x^2 + 3x + 4. Bu funksiyanın maksimum və ya minimum nöqtəsini hesablayın və nəticəni geniş izah edin."

messages = [
    {"role": "user", "content": question}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=False,  # disable Qwen3 "thinking" mode for a direct answer
)

# Stream the model's answer token by token as it is generated.
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=512,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
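
If you plan to serve the model without a PEFT dependency at inference time, the LoRA adapter can be merged into the base weights first. This is a minimal sketch using PEFT's standard merge_and_unload API; the output directory name is illustrative:

# Merge the LoRA adapter into the base weights so the result can later be
# loaded as a plain transformers model (no peft required at inference).
merged = model.merge_and_unload()
merged.save_pretrained("Azerbaijani-math-1.7B-merged")
tokenizer.save_pretrained("Azerbaijani-math-1.7B-merged")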

Alternatively, with a transformers pipeline:

from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen3-1.7B")
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Qwen3-1.7B")
model = PeftModel.from_pretrained(base_model, "khazarai/Azerbaijani-math-1.7B")

question ="""
Bir f(x) funksiyası verilib: f(x) = 2x^2 + 3x + 4. Bu funksiyanın maksimum və ya minimum nöqtəsini hesablayın və nəticəni geniş izah edin.
"""

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)
messages = [
    {"role": "user", "content": question}
]
pipe(messages)
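
The call above uses the pipeline's default generation settings. To match the sampling parameters from the streaming example, the standard transformers generation keywords can be passed directly to the call:

pipe(
    messages,
    max_new_tokens=512,
    do_sample=True,   # enable sampling so temperature/top_p/top_k take effect
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)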

Training Details

Training Data

The model was fine-tuned on a curated combination of the following datasets (a loading sketch follows the list):

  • OnlyCheeini/azerbaijani-math-gpt4o — 100,000 examples of Azerbaijani math instructions generated via GPT-4o, focused on algebra, geometry, and applied math.

  • mlabonne/FineTome-100k — 35,000 chat-style instruction samples (35% of the full dataset) to improve general-purpose instruction following and conversational ability.
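For reference, the mixture can be assembled along these lines with the datasets library. This is a minimal sketch, not the author's training script: the split name, sampling seed, and the assumption that both datasets are first converted to a shared chat schema are all illustrative:

from datasets import load_dataset, concatenate_datasets

# Full Azerbaijani math instruction set (~100k examples).
math_ds = load_dataset("OnlyCheeini/azerbaijani-math-gpt4o", split="train")

# 35% random sample of FineTome-100k for general instruction following.
chat_ds = load_dataset("mlabonne/FineTome-100k", split="train")
chat_ds = chat_ds.shuffle(seed=42).select(range(int(0.35 * len(chat_ds))))

# Both datasets must share the same column schema (e.g. a single
# conversations field) before they can be concatenated.
train_ds = concatenate_datasets([math_ds, chat_ds]).shuffle(seed=42)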

Framework versions

  • PEFT 0.14.0