Dataset Preview
The full dataset viewer is not available for this dataset; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      Schema at index 1 was different: 
config: struct<m: int64, m_max0: int64, ef_construction: int64, ef_search: int64, level_multiplier: double, metric: string>
nodes: list<item: struct<vector: list<item: double>, neighbors: list<item: list<item: int64>>, max_layer: int64, metadata: string>>
entry_point: int64
max_layer: int64
dim: int64
vs
vrom_id: string
version: string
description: string
source: string
embedding_spec: struct<model: string, model_source: string, dimensions: int64, quantization: string, distance_metric: string, normalized: bool>
hnsw_config: struct<m: int64, m_max0: int64, ef_construction: int64, ef_search: int64, level_multiplier: double, metric: string>
vector_count: int64
total_tokens: int64
total_chunks: int64
corpus_hash: string
created_at: timestamp[s]
chunk_strategy: struct<method: string, max_tokens: int64, overlap: int64, preserve_code_blocks: bool, linked_list_pointers: bool>
files: struct<index: string, chunks: string, manifest: string>
compatibility: struct<vecdb_wasm: string, load_method: string>
Traceback:    Traceback (most recent call last):
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1821, in _prepare_split_single
                  num_examples, num_bytes = writer.finalize()
                                            ^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 781, in finalize
                  self.write_rows_on_file()
                File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 662, in write_rows_on_file
                  table = pa.concat_tables(self.current_rows)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "pyarrow/table.pxi", line 6319, in pyarrow.lib.concat_tables
                File "pyarrow/error.pxi", line 155, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 92, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: Schema at index 1 was different: 
              config: struct<m: int64, m_max0: int64, ef_construction: int64, ef_search: int64, level_multiplier: double, metric: string>
              nodes: list<item: struct<vector: list<item: double>, neighbors: list<item: list<item: int64>>, max_layer: int64, metadata: string>>
              entry_point: int64
              max_layer: int64
              dim: int64
              vs
              vrom_id: string
              version: string
              description: string
              source: string
              embedding_spec: struct<model: string, model_source: string, dimensions: int64, quantization: string, distance_metric: string, normalized: bool>
              hnsw_config: struct<m: int64, m_max0: int64, ef_construction: int64, ef_search: int64, level_multiplier: double, metric: string>
              vector_count: int64
              total_tokens: int64
              total_chunks: int64
              corpus_hash: string
              created_at: timestamp[s]
              chunk_strategy: struct<method: string, max_tokens: int64, overlap: int64, preserve_code_blocks: bool, linked_list_pointers: bool>
              files: struct<index: string, chunks: string, manifest: string>
              compatibility: struct<vecdb_wasm: string, load_method: string>
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
                  builder.download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 882, in download_and_prepare
                  self._download_and_prepare(
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 943, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1646, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^
                File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1832, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

Need help making the dataset viewer work? Review the documentation on configuring the dataset viewer, and open a discussion on the dataset repository for direct support.
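For this particular failure, the traceback shows the viewer trying to merge two files with incompatible schemas: the first schema is the serialized HNSW index (`config`, `nodes`, `entry_point`, `max_layer`, `dim`), the second is the package manifest (`vrom_id`, `version`, `embedding_spec`, and so on). One plausible fix, sketched here under the assumption that the flat per-chunk records live in `chunks.json` (see the Files table at the bottom of this page), is to restrict the viewer to that file via the `configs` field in the dataset card's YAML front matter:

```yaml
# Sketch: point the viewer at the flat chunk records only, so it no longer
# tries to concatenate index.json and manifest.json into a single schema.
configs:
- config_name: default
  data_files: chunks.json
```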

| Column | Type |
| --- | --- |
| chunk_id | int64 |
| text | string |
| source_file | string |
| section_heading | string |
| char_start | int64 |
| char_end | int64 |
| token_estimate | int64 |
| prev_chunk_id | int64 |
| next_chunk_id | int64 |
| url | string |
| doc_title | string |
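Each preview row below lists these eleven values in column order, one value per line. As a concrete reference, here is the first row of the preview (chunk 0) rewritten as a record; the field names come from the column list above, and the `null` encoding of the pointer fields comes from the rows themselves:

```js
// Chunk 0 from the preview, reshaped into one record (illustration only).
const exampleChunk = {
  chunk_id: 0,
  text: '# TRL - Transformers Reinforcement Learning TRL is a full stack library ...',
  source_file: 'trl/index.md',
  section_heading: 'TRL - Transformers Reinforcement Learning',
  char_start: 0,
  char_end: 391,
  token_estimate: 97,
  prev_chunk_id: null, // linked-list pointer; null for the first chunk of a document
  next_chunk_id: 1,    // linked-list pointer to the following chunk
  url: 'https://huggingface.co/docs/trl/index',
  doc_title: 'TRL - Transformers Reinforcement Learning',
};
```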
0
# TRL - Transformers Reinforcement Learning TRL is a full stack library where we provide a set of tools to train transformer language models with methods like Supervised Fine-Tuning (SFT), Group Relative Policy Optimization (GRPO), Direct Preference Optimization (DPO), Reward Modeling, and more. The library is integra...
trl/index.md
TRL - Transformers Reinforcement Learning
0
391
97
null
1
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
1
## 🎉 What's New **TRL v1:** We released TRL v1 — a major milestone that marks a real shift in what TRL is. Read the [blog post](https://huggingface.co/blog/trl-v1) to learn more.
trl/index.md
🎉 What's New
393
572
44
0
2
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
2
## Taxonomy Below is the current list of TRL trainers, organized by method type (⚡️ = vLLM support; 🧪 = experimental).
trl/index.md
Taxonomy
574
693
29
1
3
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
3
### Online methods - [`GRPOTrainer`](grpo_trainer) ⚡️ - [`RLOOTrainer`](rloo_trainer) ⚡️ - [`OnlineDPOTrainer`](online_dpo_trainer) 🧪 ⚡️ - [`NashMDTrainer`](nash_md_trainer) 🧪 ⚡️ - [`PPOTrainer`](ppo_trainer) 🧪 - [`XPOTrainer`](xpo_trainer) 🧪 ⚡️
trl/index.md
Online methods
695
941
61
2
4
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
4
### Reward modeling - [`RewardTrainer`](reward_trainer) - [`PRMTrainer`](prm_trainer) 🧪
trl/index.md
Reward modeling
943
1,031
22
3
5
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
5
### Offline methods - [`SFTTrainer`](sft_trainer) - [`DPOTrainer`](dpo_trainer) - [`BCOTrainer`](bco_trainer) 🧪 - [`CPOTrainer`](cpo_trainer) 🧪 - [`KTOTrainer`](kto_trainer) 🧪 - [`ORPOTrainer`](orpo_trainer) 🧪
trl/index.md
Offline methods
1,033
1,243
52
4
6
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
6
### Knowledge distillation - [`GKDTrainer`](gkd_trainer) 🧪 - [`MiniLLMTrainer`](minillm_trainer) 🧪 You can also explore TRL-related models, datasets, and demos in the [TRL Hugging Face organization](https://huggingface.co/trl-lib).
trl/index.md
Knowledge distillation
1,245
1,478
58
5
7
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
7
## Learn Learn post-training with TRL and other libraries in 🤗 [smol course](https://github.com/huggingface/smol-course).
trl/index.md
Learn
1,480
1,602
30
6
8
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
8
## Contents The documentation is organized into the following sections: - **Getting Started**: installation and quickstart guide. - **Conceptual Guides**: dataset formats, training FAQ, and understanding logs. - **How-to Guides**: reducing memory usage, speeding up training, distributing training, etc. - **Integratio...
trl/index.md
Contents
1,604
2,058
113
7
9
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
9
## Blog posts
trl/index.md
Blog posts
2,060
2,073
3
8
10
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
10
Published March 27, 2026 TRL v1: Post-Training Library That Holds When the Field Invalidates Its Own Assumptions Published October 23, 2025 Building the Open Agent Ecosystem Together: Introducing OpenEnv Published on August 7, 2025 Vision Language Model Al...
trl/index.md
Blog posts
2,096
3,470
348
9
11
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
11
## Talks Talk given on October 30, 2025 Fine tuning with TRL
trl/index.md
Talks
3,480
3,568
22
10
null
https://huggingface.co/docs/trl/index
TRL - Transformers Reinforcement Learning
12
# SFT Trainer [![All_models-SFT-blue](https://img.shields.io/badge/All_models-SFT-blue)](https://huggingface.co/models?other=sft,trl) [![smol_course-Chapter_1-yellow](https://img.shields.io/badge/smol_course-Chapter_1-yellow)](https://github.com/huggingface/smol-course/tree/main/1_instruction_tuning)
trl/sft_trainer.md
SFT Trainer
0
302
75
null
13
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
13
## Overview TRL supports the Supervised Fine-Tuning (SFT) Trainer for training language models. This post-training method was contributed by [Younes Belkada](https://huggingface.co/ybelkada).
trl/sft_trainer.md
Overview
304
497
48
12
14
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
14
## Quick start This example demonstrates how to train a language model using the [SFTTrainer](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTTrainer) from TRL. We train a [Qwen 3 0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) model on the [Capybara dataset](https://huggingface.co/datasets/trl-lib/Capybara), a compact, diverse ...
trl/sft_trainer.md
Quick start
499
1,093
148
13
15
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
15
## Expected dataset type and format SFT supports both [language modeling](dataset_formats#language-modeling) and [prompt-completion](dataset_formats#prompt-completion) datasets. The [SFTTrainer](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTTrainer) is compatible with both [standard](dataset_formats#standard) and [conversati...
trl/sft_trainer.md
Expected dataset type and format
1,095
1,596
125
14
16
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
16
# Standard language modeling {"text": "The sky is blue."}
trl/sft_trainer.md
Standard language modeling
1,597
1,654
14
15
17
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
17
# Conversational language modeling {"messages": [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is blue."}]}
trl/sft_trainer.md
Conversational language modeling
1,656
1,823
41
16
18
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
18
# Standard prompt-completion {"prompt": "The sky is", "completion": " blue."}
trl/sft_trainer.md
Standard prompt-completion
1,825
1,903
19
17
19
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
19
# Conversational prompt-completion {"prompt": [{"role": "user", "content": "What color is the sky?"}], "completion": [{"role": "assistant", "content": "It is blue."}]} ``` If your dataset is not in one of these formats, you can preprocess it to convert it into the expected format. Here is an example with the [Freedom...
trl/sft_trainer.md
Conversational prompt-completion
1,905
2,727
205
18
20
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
20
dataset = dataset.map(preprocess_function, remove_columns=["Question", "Response", "Complex_CoT"]) print(next(iter(dataset["train"]))) ``` ```json { "prompt": [ { "content": "Given the symptoms of sudden weakness in the left arm and leg, recent long-distance travel, and the presence of swollen ...
trl/sft_trainer.md
Conversational prompt-completion
2,729
3,568
209
19
21
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
21
## Looking deeper into the SFT method Supervised Fine-Tuning (SFT) is the simplest and most commonly used method to adapt a language model to a target dataset. The model is trained in a fully supervised fashion using pairs of input and output sequences. The goal is to minimize the negative log-likelihood (NLL) of the ...
trl/sft_trainer.md
Looking deeper into the SFT method
3,570
4,072
125
20
22
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
22
### Preprocessing and tokenization During training, each example is expected to contain a **text field** or a **(prompt, completion)** pair, depending on the dataset format. For more details on the expected formats, see [Dataset formats](dataset_formats). The [SFTTrainer](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTTrainer...
trl/sft_trainer.md
Preprocessing and tokenization
4,074
4,543
117
21
23
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
23
### Computing the loss ![sft_figure](https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/sft_figure.png) The loss used in SFT is the **token-level cross-entropy loss**, defined as: $$ \mathcal{L}_{\text{SFT}}(\theta) = - \sum_{t=1}^{T} \log p_\theta(y_t \mid y_{<t}) $$ > [!TIP] > The paper [On the Gener...
trl/sft_trainer.md
Computing the loss
4,545
5,422
219
22
24
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
24
### Label shifting and masking During training, the loss is computed using a **one-token shift**: the model is trained to predict each token in the sequence based on all previous tokens. Specifically, the input sequence is shifted right by one position to form the target labels. Padding tokens (if present) are ignored...
trl/sft_trainer.md
Label shifting and masking
5,424
5,921
124
23
25
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
25
## Logged metrics While training and evaluating we record the following reward metrics: * `global_step`: The total number of optimizer steps taken so far. * `epoch`: The current epoch number, based on dataset iteration. * `num_tokens`: The total number of tokens processed so far. * `loss`: The average cross-entropy l...
trl/sft_trainer.md
Logged metrics
5,923
6,723
200
24
26
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
26
## Customization
trl/sft_trainer.md
Customization
6,725
6,741
4
25
27
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
27
### Model initialization You can directly pass the kwargs of the `from_pretrained()` method to the [SFTConfig](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTConfig). For example, if you want to load a model in a different precision, analogous to ```python model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", dtype...
trl/sft_trainer.md
Model initialization
6,743
7,426
170
26
28
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
28
### Packing [SFTTrainer](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTTrainer) supports _example packing_, where multiple examples are packed in the same input sequence to increase training efficiency. To enable packing, simply pass `packing=True` to the [SFTConfig](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTConfig) constructor...
trl/sft_trainer.md
Packing
7,428
7,880
113
27
29
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
29
### Train on assistant messages only To train on assistant messages only, use a [conversational](dataset_formats#conversational) dataset and set `assistant_only_loss=True` in the [SFTConfig](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTConfig). This setting ensures that loss is computed **only** on the assistant responses, ...
trl/sft_trainer.md
Train on assistant messages only
7,882
8,890
252
28
30
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
30
### Train on completion only To train on completion only, use a [prompt-completion](dataset_formats#prompt-completion) dataset. By default, the trainer computes the loss on the completion tokens only, ignoring the prompt tokens. If you want to train on the full sequence, set `completion_only_loss=False` in the [SFTCon...
trl/sft_trainer.md
Train on completion only
8,892
9,347
113
29
31
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
31
# Load a prompt-completion dataset; loss is computed on the completion only by default dataset = load_dataset("trl-lib/kto-mix-14k", split="train") trainer = SFTTrainer( model="Qwen/Qwen2.5-0.5B-Instruct", args=SFTConfig(completion_only_loss=True), # True by default for prompt-completion datasets train_da...
trl/sft_trainer.md
Load a prompt-completion dataset; loss is computed on the completion only by default
9,349
10,158
202
30
32
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
32
### Train adapters with PEFT We support tight integration with 🤗 PEFT library, allowing any user to conveniently train adapters and share them on the Hub, rather than training the entire model. ```python from datasets import load_dataset from trl import SFTTrainer from peft import LoraConfig dataset = load_dataset(...
trl/sft_trainer.md
Train adapters with PEFT
10,160
10,878
179
31
33
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
33
```python from datasets import load_dataset from trl import SFTTrainer from peft import AutoPeftModelForCausalLM model = AutoPeftModelForCausalLM.from_pretrained("trl-lib/Qwen3-4B-LoRA", is_trainable=True) dataset = load_dataset("trl-lib/Capybara", split="train") trainer = SFTTrainer( model=model, train_datas...
trl/sft_trainer.md
Train adapters with PEFT
10,880
11,421
135
32
34
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
34
### Train with Liger Kernel Liger Kernel is a collection of Triton kernels for LLM training that boosts multi-GPU throughput by 20%, cuts memory use by 60% (enabling up to 4× longer context), and works seamlessly with tools like FlashAttention, PyTorch FSDP, and DeepSpeed. For more information, see [Liger Kernel Integ...
trl/sft_trainer.md
Train with Liger Kernel
11,423
11,777
88
33
35
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
35
### Rapid Experimentation for SFT RapidFire AI is an open-source experimentation engine that sits on top of TRL and lets you launch multiple SFT configurations at once, even on a single GPU. Instead of trying configurations sequentially, RapidFire lets you **see all their learning curves earlier, stop underperforming ...
trl/sft_trainer.md
Rapid Experimentation for SFT
11,779
12,256
119
34
36
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
36
### Train with Unsloth Unsloth is an open‑source framework for fine‑tuning and reinforcement learning that trains LLMs (like Llama, Mistral, Gemma, DeepSeek, and more) up to 2× faster with up to 70% less VRAM, while providing a streamlined, Hugging Face–compatible workflow for training, evaluation, and deployment. For...
trl/sft_trainer.md
Train with Unsloth
12,258
12,644
96
35
37
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
37
## Instruction tuning example **Instruction tuning** teaches a base language model to follow user instructions and engage in conversations. This requires: 1. **Chat template**: Defines how to structure conversations into text sequences, including role markers (user/assistant), special tokens, and turn boundaries. Rea...
trl/sft_trainer.md
Instruction tuning example
12,646
13,566
230
36
38
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
38
```python from trl import SFTConfig, SFTTrainer from datasets import load_dataset trainer = SFTTrainer( model="Qwen/Qwen3-0.6B-Base", args=SFTConfig( output_dir="Qwen3-0.6B-Instruct", chat_template_path="HuggingFaceTB/SmolLM3-3B", ), train_dataset=load_dataset("trl-lib/Capybara", split=...
trl/sft_trainer.md
Instruction tuning example
13,568
14,555
246
37
39
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
39
```python >>> from transformers import pipeline >>> pipe = pipeline("text-generation", model="Qwen3-0.6B-Instruct/checkpoint-5000") >>> prompt = "user\nWhat is the capital of France? Answer in one word.\nassistant\n" >>> response = pipe(prompt) >>> response[0]["generated_text"] 'user\nWhat is the capital of France? Ans...
trl/sft_trainer.md
Instruction tuning example
14,557
15,339
195
38
40
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
40
## Tool Calling with SFT The [SFTTrainer](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTTrainer) fully supports fine-tuning models with _tool calling_ capabilities. In this case, each dataset example should include: * The conversation messages, including any tool calls (`tool_calls`) and tool responses (`tool` role messages...
trl/sft_trainer.md
Tool Calling with SFT
15,341
15,877
134
39
41
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
41
## Training Vision Language Models [SFTTrainer](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTTrainer) fully supports training Vision-Language Models (VLMs). To train a VLM, provide a dataset with either an `image` column (single image per sample) or an `images` column (list of images per sample). For more information on the...
trl/sft_trainer.md
Training Vision Language Models
15,879
16,711
208
40
42
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
42
> [!TIP] > For VLMs, truncating may remove image tokens, leading to errors during training. To avoid this, set `max_length=None` in the [SFTConfig](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTConfig). This allows the model to process the full sequence length without truncating image tokens. > > ```python > SFTConfig(max_len...
trl/sft_trainer.md
Training Vision Language Models
16,713
17,166
113
41
43
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
43
## SFTTrainer[[trl.SFTTrainer]]
trl/sft_trainer.md
SFTTrainer[[trl.SFTTrainer]]
17,168
17,199
7
42
44
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
44
#### trl.SFTTrainer[[trl.SFTTrainer]] [Source](https://github.com/huggingface/trl/blob/v1.2.0/trl/trainer/sft_trainer.py#L543) Trainer for Supervised Fine-Tuning (SFT) method. This class is a wrapper around the [Trainer](https://huggingface.co/docs/transformers/v5.5.4/en/main_classes/trainer#transformers.Trainer) cl...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
17,201
17,836
158
43
45
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
45
traintrl.SFTTrainer.trainhttps://github.com/huggingface/trl/blob/v1.2.0/transformers/trainer.py#L1323[{"name": "resume_from_checkpoint", "val": ": str | bool | None = None"}, {"name": "trial", "val": ": optuna.Trial | dict[str, Any] | None = None"}, {"name": "ignore_keys_for_eval", "val": ": list[str] | None = None"}]-...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
17,838
18,976
284
44
46
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
46
Main training entry point. **Parameters:**
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
18,978
19,021
10
45
47
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
47
model (`str` or [PreTrainedModel](https://huggingface.co/docs/transformers/v5.5.4/en/main_classes/model#transformers.PreTrainedModel) or `PeftModel`) : Model to be trained. Can be either: - A string, being the *model id* of a pretrained model hosted inside a model repo on huggingface.co, or a path to a *directory* con...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
19,023
20,138
278
46
48
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
48
args ([SFTConfig](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTConfig), *optional*) : Configuration for this trainer. If `None`, a default configuration is used. data_collator (`DataCollator`, *optional*) : Function to use to form a batch from a list of elements of the processed `train_dataset` or `eval_dataset`. Will defau...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
20,140
20,728
147
47
49
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
49
train_dataset ([Dataset](https://huggingface.co/docs/datasets/v4.8.4/en/package_reference/main_classes#datasets.Dataset) or [IterableDataset](https://huggingface.co/docs/datasets/v4.8.4/en/package_reference/main_classes#datasets.IterableDataset)) : Dataset to use for training. This trainer supports both [language model...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
20,730
21,467
184
48
50
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
50
eval_dataset ([Dataset](https://huggingface.co/docs/datasets/v4.8.4/en/package_reference/main_classes#datasets.Dataset), [IterableDataset](https://huggingface.co/docs/datasets/v4.8.4/en/package_reference/main_classes#datasets.IterableDataset) or `dict[str, Dataset | IterableDataset]`) : Dataset to use for evaluation. I...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
21,469
21,842
93
49
51
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
51
processing_class ([PreTrainedTokenizerBase](https://huggingface.co/docs/transformers/v5.5.4/en/internal/tokenization_utils#transformers.PreTrainedTokenizerBase), [ProcessorMixin](https://huggingface.co/docs/transformers/v5.5.4/en/main_classes/processors#transformers.ProcessorMixin), *optional*) : Processing class used ...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
21,844
22,539
173
50
52
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
52
compute_loss_func (`Callable`, *optional*) : A function that accepts the raw model outputs, labels, and the number of items in the entire accumulated batch (batch_size * gradient_accumulation_steps) and returns the loss. For example, see the default [loss function](https://github.com/huggingface/transformers/blob/052e6...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
22,541
22,950
102
51
53
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
53
compute_metrics (`Callable[[EvalPrediction], dict]`, *optional*) : The function that will be used to compute metrics at evaluation. Must take a [EvalPrediction](https://huggingface.co/docs/transformers/v5.5.4/en/internal/trainer_utils#transformers.EvalPrediction) and return a dictionary string to metric values. When pa...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
22,952
23,646
173
52
54
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
54
callbacks (list of [TrainerCallback](https://huggingface.co/docs/transformers/v5.5.4/en/main_classes/callback#transformers.TrainerCallback), *optional*) : List of callbacks to customize the training loop. Will add those to the list of default callbacks detailed in [here](https://huggingface.co/docs/transformers/main_cl...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
23,648
24,645
249
53
55
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
55
optimizer_cls_and_kwargs (`tuple[Type[torch.optim.Optimizer], Dict[str, Any]]`, *optional*) : A tuple containing the optimizer class and keyword arguments to use. Overrides `optim` and `optim_args` in `args`. Incompatible with the `optimizers` argument. Unlike `optimizers`, this argument avoids the need to place model...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
24,647
25,645
249
54
56
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
56
formatting_func (`Callable`, *optional*) : Formatting function applied to the dataset before tokenization. Applying the formatting function explicitly converts the dataset into a [language modeling](#language-modeling) type. **Returns:** ``~trainer_utils.TrainOutput`` Object containing the global step count, trainin...
trl/sft_trainer.md
trl.SFTTrainer[[trl.SFTTrainer]]
25,647
25,987
85
55
57
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
57
#### save_model[[trl.SFTTrainer.save_model]] [Source](https://github.com/huggingface/trl/blob/v1.2.0/transformers/trainer.py#L3746) Will save the model, so you can reload it using `from_pretrained()`. Will only save from the main process.
trl/sft_trainer.md
save_model[[trl.SFTTrainer.save_model]]
25,988
26,229
60
56
58
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
58
#### push_to_hub[[trl.SFTTrainer.push_to_hub]] [Source](https://github.com/huggingface/trl/blob/v1.2.0/transformers/trainer.py#L3993) Upload `self.model` and `self.processing_class` to the 🤗 model hub on the repo `self.args.hub_model_id`. **Parameters:** commit_message (`str`, *optional*, defaults to `"End of trai...
trl/sft_trainer.md
push_to_hub[[trl.SFTTrainer.push_to_hub]]
26,230
27,223
248
57
59
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
59
## SFTConfig[[trl.SFTConfig]]
trl/sft_trainer.md
SFTConfig[[trl.SFTConfig]]
27,225
27,254
7
58
60
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
60
#### trl.SFTConfig[[trl.SFTConfig]] [Source](https://github.com/huggingface/trl/blob/v1.2.0/trl/trainer/sft_config.py#L23) Configuration class for the [SFTTrainer](/docs/trl/v1.2.0/en/sft_trainer#trl.SFTTrainer). This class includes only the parameters that are specific to SFT training. For a full list of training a...
trl/sft_trainer.md
trl.SFTConfig[[trl.SFTConfig]]
27,256
28,216
240
59
61
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
61
> [!NOTE] > These parameters have default values different from [TrainingArguments](https://huggingface.co/docs/transformers/v5.5.4/en/main_classes/trainer#transformers.TrainingArguments): > - `logging_steps`: Defaults to `10` instead of `500`. > - `gradient_checkpointing`: Defaults to `True` instead of `False`. > - `b...
trl/sft_trainer.md
trl.SFTConfig[[trl.SFTConfig]]
28,218
28,663
111
60
null
https://huggingface.co/docs/trl/sft_trainer
SFT Trainer
62
# DPO Trainer [![All_models-DPO-blue](https://img.shields.io/badge/All_models-DPO-blue)](https://huggingface.co/models?other=dpo,trl) [![smol_course-Chapter_2-yellow](https://img.shields.io/badge/smol_course-Chapter_2-yellow)](https://github.com/huggingface/smol-course/tree/main/2_preference_alignment)
trl/dpo_trainer.md
DPO Trainer
0
304
76
null
63
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
63
## Overview TRL supports the Direct Preference Optimization (DPO) Trainer for training language models, as described in the paper [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290) by [Rafael Rafailov](https://huggingface.co/rmrafailov), Archit Sh...
trl/dpo_trainer.md
Overview
306
848
135
62
64
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
64
> While large-scale unsupervised language models (LMs) learn broad world knowledge and some reasoning skills, achieving precise control of their behavior is difficult due to the completely unsupervised nature of their training. Existing methods for gaining such steerability collect human labels of the relative quality ...
trl/dpo_trainer.md
Overview
850
2,456
401
63
65
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
65
This post-training method was contributed by [Kashif Rasul](https://huggingface.co/kashif) and later refactored by [Quentin Gallouédec](https://huggingface.co/qgallouedec).
trl/dpo_trainer.md
Overview
2,458
2,630
43
64
66
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
66
## Quick start This example demonstrates how to train a language model using the [DPOTrainer](/docs/trl/v1.2.0/en/bema_for_reference_model#trl.DPOTrainer) from TRL. We train a [Qwen 3 0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) model on the [UltraFeedback dataset](https://huggingface.co/datasets/openbmb/UltraFeedbac...
trl/dpo_trainer.md
Quick start
2,632
3,183
137
65
67
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
67
## Expected dataset type and format DPO requires a [preference](dataset_formats#preference) dataset. The [DPOTrainer](/docs/trl/v1.2.0/en/bema_for_reference_model#trl.DPOTrainer) is compatible with both [standard](dataset_formats#standard) and [conversational](dataset_formats#conversational) dataset formats. When prov...
trl/dpo_trainer.md
Expected dataset type and format
3,185
3,622
109
66
68
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
68
# Standard format
trl/dpo_trainer.md
Standard format
3,623
3,640
4
67
69
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
69
## Explicit prompt (recommended) preference_example = {"prompt": "The sky is", "chosen": " blue.", "rejected": " green."}
trl/dpo_trainer.md
Explicit prompt (recommended)
3,641
3,762
30
68
70
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
70
# Implicit prompt preference_example = {"chosen": "The sky is blue.", "rejected": "The sky is green."}
trl/dpo_trainer.md
Implicit prompt
3,763
3,865
25
69
71
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
71
# Conversational format
trl/dpo_trainer.md
Conversational format
3,867
3,890
5
70
72
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
72
## Explicit prompt (recommended) preference_example = {"prompt": [{"role": "user", "content": "What color is the sky?"}], "chosen": [{"role": "assistant", "content": "It is blue."}], "rejected": [{"role": "assistant", "content": "It is green."}]}
trl/dpo_trainer.md
Explicit prompt (recommended)
3,891
4,181
72
71
73
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
73
## Implicit prompt preference_example = {"chosen": [{"role": "user", "content": "What color is the sky?"}, {"role": "assistant", "content": "It is blue."}], "rejected": [{"role": "user", "content": "What color is the sky?"}, {"rol...
trl/dpo_trainer.md
Implicit prompt
4,182
5,166
246
72
74
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
74
dataset = dataset.map(preprocess_function, remove_columns=["instruction", "input", "accepted", "ID"]) print(next(iter(dataset["train"]))) ``` ```json { "prompt": [{"role": "user", "content": "Create a nested loop to print every combination of numbers [...]"}], "chosen": [{"role": "assistant", "content": "Here ...
trl/dpo_trainer.md
Implicit prompt
5,168
5,651
120
73
75
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
75
## Looking deeper into the DPO method Direct Preference Optimization (DPO) is a training method designed to align a language model with preference data. Instead of supervised input–output pairs, the model is trained on pairs of completions to the same prompt, where one completion is preferred over the other. The objec...
trl/dpo_trainer.md
Looking deeper into the DPO method
5,653
6,451
199
74
76
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
76
### Preprocessing and tokenization During training, each example is expected to contain a prompt along with a preferred (`chosen`) and a dispreferred (`rejected`) completion. For more details on the expected formats, see [Dataset formats](dataset_formats). The [DPOTrainer](/docs/trl/v1.2.0/en/bema_for_reference_model#...
trl/dpo_trainer.md
Preprocessing and tokenization
6,453
6,838
96
75
77
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
77
### Computing the loss ![dpo_figure](https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/dpo_figure.png) The loss used in DPO is defined as follows: $$ \mathcal{L}_{\mathrm{DPO}}(\theta) = -\mathbb{E}_{(x,y^{+},y^{-})}\!\left[\log \sigma\!\left(\beta\Big(\log\frac{\pi_{\theta}(y^{+}\!\mid x)}{\p...
trl/dpo_trainer.md
Computing the loss
6,840
7,666
206
76
78
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
78
#### Loss Types Several formulations of the objective have been proposed in the literature. Initially, the objective of DPO was defined as presented above.
trl/dpo_trainer.md
Loss Types
7,668
7,824
39
77
79
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
79
| `loss_type=` | Description | | --- | --- | | `"sigmoid"` (default) | Given the preference data, we can fit a binary classifier according to the Bradley-Terry model and in fact the [DPO](https://huggingface.co/papers/2305.18290) authors propose the sigmoid loss on the normalized likelihood via the `logsigmoid` to fit ...
trl/dpo_trainer.md
Loss Types
7,826
11,496
917
78
80
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
80
## Logged metrics While training and evaluating we record the following reward metrics:
trl/dpo_trainer.md
Logged metrics
11,498
11,586
22
79
81
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
81
* `global_step`: The total number of optimizer steps taken so far. * `epoch`: The current epoch number, based on dataset iteration. * `num_tokens`: The total number of tokens processed so far. * `loss`: The average cross-entropy loss computed over non-masked tokens in the current logging interval. * `entropy`: The aver...
trl/dpo_trainer.md
Logged metrics
11,588
13,379
447
80
82
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
82
## Customization
trl/dpo_trainer.md
Customization
13,381
13,397
4
81
83
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
83
### Compatibility and constraints Some argument combinations are intentionally restricted in the current [DPOTrainer](/docs/trl/v1.2.0/en/bema_for_reference_model#trl.DPOTrainer) implementation: * `use_weighting=True` is not supported with `loss_type="aot"` or `loss_type="aot_unpaired"`. * With `use_liger_kernel=True...
trl/dpo_trainer.md
Compatibility and constraints
13,399
14,146
186
82
84
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
84
### Multi-loss combinations The DPO trainer supports combining multiple loss functions with different weights, enabling more sophisticated optimization strategies. This is particularly useful for implementing algorithms like MPO (Mixed Preference Optimization). MPO is a training approach that combines multiple optimiz...
trl/dpo_trainer.md
Multi-loss combinations
14,148
14,757
152
83
85
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
85
# MPO: Combines DPO (sigmoid) for preference and BCO (bco_pair) for quality training_args = DPOConfig( loss_type=["sigmoid", "bco_pair", "sft"], # loss types to combine loss_weights=[0.8, 0.2, 1.0] # corresponding weights, as used in the MPO paper ) ```
trl/dpo_trainer.md
MPO: Combines DPO (sigmoid) for preference and BCO (bco_pair) for quality
14,758
15,021
65
84
86
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
86
### Model initialization You can directly pass the kwargs of the `from_pretrained()` method to the [DPOConfig](/docs/trl/v1.2.0/en/dpo_trainer#trl.DPOConfig). For example, if you want to load a model in a different precision, analogous to ```python model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B", dtype...
trl/dpo_trainer.md
Model initialization
15,023
15,706
170
85
87
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
87
### Train adapters with PEFT We support tight integration with 🤗 PEFT library, allowing any user to conveniently train adapters and share them on the Hub, rather than training the entire model. ```python from datasets import load_dataset from trl import DPOTrainer from peft import LoraConfig dataset = load_dataset(...
trl/dpo_trainer.md
Train adapters with PEFT
15,708
16,454
186
86
88
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
88
```python from datasets import load_dataset from trl import DPOTrainer from peft import AutoPeftModelForCausalLM model = AutoPeftModelForCausalLM.from_pretrained("trl-lib/Qwen3-4B-LoRA", is_trainable=True) dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train") trainer = DPOTrainer( model=model, ...
trl/dpo_trainer.md
Train adapters with PEFT
16,456
17,034
144
87
89
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
89
### Train with Liger Kernel Liger Kernel is a collection of Triton kernels for LLM training that boosts multi-GPU throughput by 20%, cuts memory use by 60% (enabling up to 4× longer context), and works seamlessly with tools like FlashAttention, PyTorch FSDP, and DeepSpeed. For more information, see [Liger Kernel Integ...
trl/dpo_trainer.md
Train with Liger Kernel
17,036
17,390
88
88
90
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
90
### Rapid Experimentation for DPO RapidFire AI is an open-source experimentation engine that sits on top of TRL and lets you launch multiple DPO configurations at once, even on a single GPU. Instead of trying configurations sequentially, RapidFire lets you **see all their learning curves earlier, stop underperforming ...
trl/dpo_trainer.md
Rapid Experimentation for DPO
17,392
17,869
119
89
91
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
91
### Train with Unsloth Unsloth is an open‑source framework for fine‑tuning and reinforcement learning that trains LLMs (like Llama, Mistral, Gemma, DeepSeek, and more) up to 2× faster with up to 70% less VRAM, while providing a streamlined, Hugging Face–compatible workflow for training, evaluation, and deployment. For...
trl/dpo_trainer.md
Train with Unsloth
17,871
18,257
96
90
92
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
92
## Tool Calling with DPO The [DPOTrainer](/docs/trl/v1.2.0/en/bema_for_reference_model#trl.DPOTrainer) fully supports fine-tuning models with _tool calling_ capabilities. In this case, each dataset example should include: * The conversation messages (prompt, chosen and rejected), including any tool calls (`tool_calls...
trl/dpo_trainer.md
Tool Calling with DPO
18,259
18,838
144
91
93
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
93
## Training Vision Language Models [DPOTrainer](/docs/trl/v1.2.0/en/bema_for_reference_model#trl.DPOTrainer) fully supports training Vision-Language Models (VLMs). To train a VLM, provide a dataset with either an `image` column (single image per sample) or an `images` column (list of images per sample). For more infor...
trl/dpo_trainer.md
Training Vision Language Models
18,840
19,700
215
92
94
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
94
> [!TIP] > For VLMs, truncating may remove image tokens, leading to errors during training. To avoid this, set `max_length=None` in the [DPOConfig](/docs/trl/v1.2.0/en/dpo_trainer#trl.DPOConfig). This allows the model to process the full sequence length without truncating image tokens. > > ```python > DPOConfig(max_len...
trl/dpo_trainer.md
Training Vision Language Models
19,702
20,155
113
93
95
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
95
## DPOTrainer[[trl.DPOTrainer]]
trl/dpo_trainer.md
DPOTrainer[[trl.DPOTrainer]]
20,157
20,188
7
94
96
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
96
#### trl.DPOTrainer[[trl.DPOTrainer]] [Source](https://github.com/huggingface/trl/blob/v1.2.0/trl/trainer/dpo_trainer.py#L406) Trainer for Direct Preference Optimization (DPO) method. This algorithm was initially proposed in the paper [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](ht...
trl/dpo_trainer.md
trl.DPOTrainer[[trl.DPOTrainer]]
20,190
21,011
205
95
97
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
97
traintrl.DPOTrainer.trainhttps://github.com/huggingface/trl/blob/v1.2.0/transformers/trainer.py#L1323[{"name": "resume_from_checkpoint", "val": ": str | bool | None = None"}, {"name": "trial", "val": ": optuna.Trial | dict[str, Any] | None = None"}, {"name": "ignore_keys_for_eval", "val": ": list[str] | None = None"}]-...
trl/dpo_trainer.md
trl.DPOTrainer[[trl.DPOTrainer]]
21,013
22,151
284
96
98
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
98
Main training entry point. **Parameters:** model (`str` or [PreTrainedModel](https://huggingface.co/docs/transformers/v5.5.4/en/main_classes/model#transformers.PreTrainedModel) or `PeftModel`) : Model to be trained. Can be either: - A string, being the *model id* of a pretrained model hosted inside a model repo on h...
trl/dpo_trainer.md
trl.DPOTrainer[[trl.DPOTrainer]]
22,153
23,104
237
97
99
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
99
ref_model ([PreTrainedModel](https://huggingface.co/docs/transformers/v5.5.4/en/main_classes/model#transformers.PreTrainedModel), *optional*) : Reference model used to compute the reference log probabilities. - If provided, this model is used directly as the reference policy. - If `None`, the trainer will automaticall...
trl/dpo_trainer.md
trl.DPOTrainer[[trl.DPOTrainer]]
23,106
24,103
249
98
100
https://huggingface.co/docs/trl/dpo_trainer
DPO Trainer
End of preview.

# 🧩 vROM: ML Training Stack (TRL + PEFT + Datasets)

*Vector Read-Only Memory — pre-computed HNSW index for instant in-browser RAG*


## What is this?

A plug-and-play RAG cartridge containing pre-embedded documentation for the ML training stack:

- TRL — SFT, DPO, GRPO, PPO, Reward, KTO, ORPO, CPO trainers
- PEFT — LoRA, adapters, parameter-efficient fine-tuning
- Datasets — Loading, processing, streaming, creating, uploading

Load directly into VecDB-WASM for instant vector search: the corpus embeddings are precomputed, so no embedding compute is required on the client.

| Metric | Value |
| --- | --- |
| Vectors | 629 |
| Dimensions | 384 |
| Total Tokens | ~100K |
| Index Size | 5.8 MB |
| Embedding Model | Xenova/all-MiniLM-L6-v2 (q8) |
| Distance Metric | Cosine |

## Quick Start

```js
import init, { VectorDB } from 'vecdb-wasm';
import { pipeline } from '@huggingface/transformers';

await init();

// Load the vROM (5.8 MB)
const resp = await fetch(
  'https://huggingface.co/datasets/philipp-zettl/vrom-ml-training/resolve/main/index.json'
);
const db = VectorDB.load(await resp.text());

// Embed & search
const extractor = await pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', { dtype: 'q8' });
const emb = await extractor('how to train with DPO', { pooling: 'mean', normalize: true });
const results = JSON.parse(db.search(new Float32Array(emb.data), 5));
```
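The `results` returned above come straight from the HNSW index; to display something readable, you will usually join them back to the chunk metadata in `chunks.json`. The sketch below assumes each hit exposes the matching `chunk_id` as `id` and that `chunks.json` is an array indexed by `chunk_id`; both are assumptions about vecdb-wasm's output and the file layout, not documented behavior. The `prev_chunk_id`/`next_chunk_id` pointers are taken from the chunk schema shown in the preview above.

```js
// Join search hits back to readable chunk metadata (sketch; see assumptions above).
const chunks = await (await fetch(
  'https://huggingface.co/datasets/philipp-zettl/vrom-ml-training/resolve/main/chunks.json'
)).json();

for (const hit of results) {
  const chunk = chunks[hit.id];
  console.log(`${chunk.doc_title} › ${chunk.section_heading}\n${chunk.url}`);
  // Optionally widen the context window via the linked-list pointers.
  const neighbors = [chunk.prev_chunk_id, chunk.next_chunk_id]
    .filter((id) => id !== null)
    .map((id) => chunks[id].text);
  console.log(`(${neighbors.length} neighboring chunks available for context)`);
}
```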

## Files

| File | Size | Description |
| --- | --- | --- |
| index.json | 5.8 MB | HNSW index (`VectorDB.load()`) |
| chunks.json | 626 KB | Chunk metadata array |
| manifest.json | 1.2 KB | Package spec |
| tools/vrom_builder.py | 25 KB | Builder tool for custom vROMs |
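Before loading the index, it can be worth sanity-checking `manifest.json` against the settings the client will embed with. The field names below are taken from the manifest schema printed in the viewer error at the top of this page, and the expected values from the metrics table; the exact strings stored in the manifest are an assumption, so treat this as a sketch:

```js
// Sanity-check the package manifest before loading the index (sketch).
const manifest = await (await fetch(
  'https://huggingface.co/datasets/philipp-zettl/vrom-ml-training/resolve/main/manifest.json'
)).json();

console.assert(manifest.embedding_spec.model === 'Xenova/all-MiniLM-L6-v2');
console.assert(manifest.embedding_spec.dimensions === 384);
```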

## Part of the vROM Ecosystem

See also: vrom-hf-docs (Transformers + Hub docs)

Built with VecDB-WASM.
