gpt-oss-20b fine-tuned with multiple Opus 4.5 datasets

This repository contains a standalone checkpoint produced by merging a LoRA adapter into openai/gpt-oss-20b.
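The merge step can be sketched with Hugging Face `peft`. This is an illustrative outline, not the exact script used for this checkpoint: the adapter repo id below is a placeholder, since this card does not name the adapter.

```python
# Sketch of producing a merged standalone checkpoint from a LoRA adapter.
BASE_ID = "openai/gpt-oss-20b"
ADAPTER_ID = "path/to/lora-adapter"  # placeholder: the real adapter id is not given here
OUTPUT_DIR = "oss-20B-opus-distill"

def merge_adapter() -> None:
    # Heavy imports are kept inside the function so the constants above
    # remain importable without transformers/peft installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(BASE_ID, torch_dtype="bfloat16")
    model = PeftModel.from_pretrained(base, ADAPTER_ID)
    merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights
    merged.save_pretrained(OUTPUT_DIR, safe_serialization=True)  # Safetensors output
    AutoTokenizer.from_pretrained(BASE_ID).save_pretrained(OUTPUT_DIR)

if __name__ == "__main__":
    merge_adapter()
```

`merge_and_unload()` bakes the low-rank updates into the base weights, so the saved model needs neither `peft` nor the adapter at inference time.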

Intended Use

Use this model for text generation/chat-style inference with GPT-OSS-compatible prompt formatting.
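A minimal inference sketch with `transformers`, assuming the checkpoint's tokenizer ships a chat template compatible with GPT-OSS prompt formatting (the model id P0x0/oss-20B-opus-distill is taken from this page):

```python
# Hedged chat-style inference sketch for this checkpoint.
MODEL_ID = "P0x0/oss-20B-opus-distill"

def build_messages(user_prompt: str) -> list[dict]:
    """Chat-style message list consumed by tokenizer.apply_chat_template."""
    return [{"role": "user", "content": user_prompt}]

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    # Heavy imports live here so build_messages stays importable on its own.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="bfloat16", device_map="auto"
    )
    # The chat template applies the model's expected prompt formatting.
    inputs = tok.apply_chat_template(
        build_messages(prompt), add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the prompt.
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

if __name__ == "__main__":
    print(generate("Summarize what a LoRA adapter is."))
```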

Limitations

  • This model inherits the capabilities and limitations of the base model and the fine-tuning data/process.
  • This model card does not yet include benchmark or safety evaluation numbers for this specific checkpoint.
Checkpoint details

  • Format: Safetensors
  • Model size: 21B params
  • Tensor type: BF16

Model tree for P0x0/oss-20B-opus-distill

  • Base model: openai/gpt-oss-20b
  • Adapters built on this model: 2