Chat2Find-CPT: Qwen 3.5 4B (Sri Lankan Continued Pre-Training)

Chat2Find-CPT is a specialized version of the Qwen 3.5 4B model, enhanced via Continued Pre-Training (CPT) using QLoRA (4-bit) to excel in Sri Lankan linguistic and cultural contexts. It features robust proficiency in Sinhala, Tamil, and English.

Model Details

  • Developed by: Sentient (Chat2Find)
  • Base Model: Qwen 3.5-4B
  • Training Method: Continued Pre-Training (CPT) with QLoRA
  • Languages: Sinhala (si), Tamil (ta), English (en)
  • Quantization: 4-bit (bitsandbytes)

Technical Specifications

Training Environment

  • Frameworks: Unsloth, Hugging Face Transformers, PEFT

Training Hyperparameters

  • Method: QLoRA (Rank 32, Alpha 32)
  • Learning Rate: 5e-5 (Cosine Scheduler)
  • Optimizer: AdamW (8-bit)
  • Epochs: 1.0
  • Sequence Length: 2048 tokens
  • Batch Size: 2 per device / 8 effective via gradient accumulation (see the configuration sketch below)
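For reference, here is a minimal sketch of how these hyperparameters map onto an Unsloth QLoRA setup. The base-model repo ID, the LoRA target modules (Unsloth's defaults), and the accumulation count (8 effective / 2 per device = 4 steps, assuming a single GPU) are assumptions, not values from this card.

from unsloth import FastLanguageModel
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Qwen/Qwen3-4B",  # hypothetical base repo ID
    max_seq_length = 2048,
    load_in_4bit = True,           # QLoRA: frozen 4-bit base weights
)
model = FastLanguageModel.get_peft_model(
    model,
    r = 32,           # LoRA rank
    lora_alpha = 32,  # LoRA alpha
)
training_args = TrainingArguments(
    output_dir = "chat2find-cpt",
    learning_rate = 5e-5,
    lr_scheduler_type = "cosine",
    optim = "adamw_8bit",             # 8-bit AdamW
    num_train_epochs = 1.0,
    per_device_train_batch_size = 2,
    gradient_accumulation_steps = 4,  # 2 x 4 = 8 effective (assumed)
)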

Dataset

The model underwent true Continued Pre-Training (next-token prediction on raw text, not instruction tuning) on a 1.38 GB unstructured text corpus. The raw text was packed into fixed-length training sequences (a minimal packing sketch follows the list):

  • Size: 270,000 packed sequences of 2048 tokens each (≈553 million Qwen tokens, approx. 255 million words).
  • Epochs: 1 epoch (standard pre-training practice to limit overfitting).
  • Content: Sri Lankan News & Media, Cultural Context, and domain-specific raw web data.
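For context, here is a minimal sketch of this kind of token packing, assuming a Hugging Face tokenizer and EOS-separated documents; the exact pipeline used for this model is not published in this card.

from itertools import chain

def pack_sequences(texts, tokenizer, block_size=2048):
    # Tokenize each document and join them all with an EOS separator.
    ids = list(chain.from_iterable(
        tokenizer(t)["input_ids"] + [tokenizer.eos_token_id] for t in texts
    ))
    # Drop the trailing remainder so every block is exactly block_size tokens.
    n = (len(ids) // block_size) * block_size
    return [ids[i:i + block_size] for i in range(0, n, block_size)]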

Capabilities

Chat2Find-CPT excels at:

  1. Sinhala & Tamil Generation: Fluent and contextually relevant text generation.
  2. Code-Switching: Handling natural language mixes common in Sri Lankan communication.
  3. Local Knowledge: Understanding entities, locations, and cultural references specific to Sri Lanka.

Usage

Using Unsloth (Recommended for Speed)

from unsloth import FastLanguageModel
import torch

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Chat2Find/Chat2Find-CPT",
    max_seq_length = 2048,
    load_in_4bit = True,  # 4-bit weights via bitsandbytes
)
FastLanguageModel.for_inference(model)  # enable Unsloth's fast inference path

prompt = "ශ්‍රී ලංකාව ගැන කෙටි විස්තරයක්:"

inputs = tokenizer(
    text=[prompt],
    return_tensors="pt"
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens=256)
# Decode the generated text
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
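
Optionally, you can stream tokens to stdout as they are generated. This add-on is illustrative (not part of the original card) and reuses the model, tokenizer, and inputs from the snippet above.

from transformers import TextStreamer

streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(**inputs, max_new_tokens=256, streamer=streamer)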

Using Standard Transformers (GPU)

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Chat2Find/Chat2Find-CPT"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

prompt = "ශ්‍රී ලංකාව ගැන කෙටි විස්තරයක්:"
inputs = tokenizer(text=[prompt], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# Note: the repository ships merged 16-bit weights.
# You can also load them in 4-bit/8-bit via bitsandbytes (see below).
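
As a sketch of the 4-bit path mentioned in the note, you can pass a BitsAndBytesConfig at load time. The NF4 quantization type and bfloat16 compute dtype below are common illustrative defaults, not settings taken from this card.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 quantization (assumed)
    bnb_4bit_compute_dtype=torch.bfloat16,  # bf16 compute (assumed)
)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=bnb_config,
    device_map="auto",
)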

Running on CPU Only

If you do not have a dedicated GPU, you can explicitly map the model to CPU. Note that inference will be significantly slower.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Chat2Find/Chat2Find-CPT"
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Force the model to load into CPU RAM
model = AutoModelForCausalLM.from_pretrained(
    model_name, 
    device_map="cpu", 
    torch_dtype="auto" # Loads in bfloat16 to save RAM
)

prompt = "ශ්‍රී ලංකාව ගැන කෙටි විස්තරයක්:"
inputs = tokenizer(text=[prompt], return_tensors="pt").to("cpu")

outputs = model.generate(**inputs, max_new_tokens=128)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
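
CPU throughput also depends on thread settings. As an optional, illustrative tweak (not from the original card), you can tell PyTorch how many threads to use for intra-op parallelism:

import os
import torch

# Use all available logical cores for intra-op parallelism.
torch.set_num_threads(os.cpu_count() or 1)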

Limitations & Bias

While Chat2Find-CPT is markedly stronger in Sinhala and Tamil than the base Qwen model, it may still reproduce biases present in the training data or inherited from the base model. Users are encouraged to perform their own safety checks for specific deployment scenarios.

License

This model is subject to the original Qwen License Agreement.
