# 🧩 vROM: HF Transformers & Hub Documentation

*Vector Read-Only Memory — a pre-computed HNSW index for instant in-browser RAG.*


## What is a vROM?

A vROM (Vector Read-Only Memory) is a pre-computed, serialized HNSW index package that can be loaded directly into VecDB-WASM for instant vector search in the browser — no corpus embedding or index construction is required on the client side; only the query string is embedded at runtime.

Think of it as a plug-and-play RAG cartridge: download, load, and search in milliseconds.

## This vROM

Contains pre-embedded documentation from:

- **Hugging Face Transformers (v5.6)** — Installation, Quick Start, Pipeline API, Training, Fine-tuning, Tasks, Quantization, API Reference
- **Hugging Face Hub** — Repositories, Models, Datasets, Spaces, Uploading, Downloading

| Metric | Value |
|---|---|
| Vectors | 1,356 |
| Dimensions | 384 |
| Total Tokens | ~233K |
| Index Size | 12.6 MB |
| Embedding Model | Xenova/all-MiniLM-L6-v2 (q8) |
| Distance Metric | Cosine |
| HNSW M | 16 |
| HNSW efConstruction | 128 |

## Quick Start

### Browser (with VecDB-WASM)

```js
import init, { VectorDB } from 'vecdb-wasm';
import { pipeline } from '@huggingface/transformers';

// 1. Initialize WASM
await init();

// 2. Fetch and load the vROM
const response = await fetch(
  'https://huggingface.co/datasets/philipp-zettl/vrom-hf-docs/resolve/main/index.json'
);
const indexJson = await response.text();
const db = VectorDB.load(indexJson);

console.log(`Loaded ${db.len()} vectors (${db.dim()}d)`);

// 3. Embed a query with Transformers.js
const extractor = await pipeline(
  'feature-extraction',
  'Xenova/all-MiniLM-L6-v2',
  { dtype: 'q8' }
);
const output = await extractor('how to fine-tune a model', {
  pooling: 'mean',
  normalize: true
});

// 4. Search!
const results = JSON.parse(
  db.search(new Float32Array(output.data), 5)
);

for (const { id, distance, metadata } of results) {
  const meta = JSON.parse(metadata);
  console.log(`[${distance.toFixed(3)}] ${meta.section_heading}`);
  console.log(`  ${meta.text.slice(0, 100)}...`);
  console.log(`  Source: ${meta.url}`);
}
```
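
From here, a minimal RAG step is to stitch the retrieved chunks into a grounded prompt for whatever LLM you pair this with. A sketch reusing `results` from above (the prompt wording and separator are illustrative, not part of the package):

```js
// Assemble a context block from the top hits; each hit's metadata
// carries the chunk text plus its source URL for attribution.
const context = results
  .map((hit) => {
    const meta = JSON.parse(hit.metadata);
    return `[${meta.doc_title} > ${meta.section_heading}]\n${meta.text}\nSource: ${meta.url}`;
  })
  .join('\n\n---\n\n');

const prompt = `Answer using only the context below.\n\n${context}\n\nQuestion: how to fine-tune a model`;
```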

## Context Expansion (The Linked-List Trick)

Each chunk has `prev_chunk_id` and `next_chunk_id` pointers. After finding a relevant chunk, expand context by following the chain:

```js
function expandContext(db, chunkId, windowSize = 2) {
  // Metadata is stored as a JSON string (see schema below), so one parse suffices.
  const chunk = JSON.parse(db.get_metadata(chunkId));

  // Walk backwards
  let prevId = chunk.prev_chunk_id;
  const before = [];
  for (let i = 0; i < windowSize && prevId !== null; i++) {
    const prevMeta = JSON.parse(db.get_metadata(prevId));
    before.unshift(prevMeta);
    prevId = prevMeta.prev_chunk_id;
  }

  // Walk forwards
  let nextId = chunk.next_chunk_id;
  const after = [];
  for (let i = 0; i < windowSize && nextId !== null; i++) {
    const nextMeta = JSON.parse(db.get_metadata(nextId));
    after.push(nextMeta);
    nextId = nextMeta.next_chunk_id;
  }

  return [...before, chunk, ...after];
}
```
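
For example, to widen the best hit from the Quick Start search into a five-chunk passage (a sketch; assumes the `db` and `results` variables from above):

```js
// Expand the top hit by two chunks in each direction. Chunks are
// contiguous in the source document, so joining their text yields
// a coherent passage around the match.
const windowChunks = expandContext(db, results[0].id, 2);
const passage = windowChunks.map((c) => c.text).join('\n');
console.log(passage);
```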

## Files

| File | Size | Description |
|---|---|---|
| `index.json` | 12.6 MB | HNSW index (loadable by `VectorDB.load()`) |
| `chunks.json` | 1.5 MB | Chunk metadata array for browsing/filtering (sketched below) |
| `manifest.json` | 1.2 KB | Package specification |
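
Because `chunks.json` is a plain metadata array, it also supports non-vector access. A sketch that lists every chunk from one source file (assuming the array entries follow the metadata schema below):

```js
const chunks = await fetch(
  'https://huggingface.co/datasets/philipp-zettl/vrom-hf-docs/resolve/main/chunks.json'
).then((r) => r.json());

// Browse without any vector search: filter by source file.
const pipelineChunks = chunks.filter(
  (c) => c.source_file === 'transformers/pipeline_tutorial.md'
);
console.log(pipelineChunks.map((c) => c.section_heading));
```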

## Chunk Metadata Schema

Each vector's metadata (accessible via `db.get_metadata(id)`) is a JSON string:

```json
{
  "chunk_id": 42,
  "text": "The actual chunk text...",
  "source_file": "transformers/pipeline_tutorial.md",
  "section_heading": "Pipeline API",
  "char_start": 10267,
  "char_end": 11637,
  "token_estimate": 342,
  "prev_chunk_id": 41,
  "next_chunk_id": 43,
  "url": "https://huggingface.co/docs/transformers/pipeline_tutorial",
  "doc_title": "Pipeline"
}
```

## Chunking Strategy

- **Method:** Section-aware; splits on markdown headings (sketched after this list)
- **Target size:** 256 tokens per chunk
- **Overlap:** 0 (research shows overlap adds cost without improving retrieval)
- **Code blocks:** Preserved intact within chunks
- **Linked list:** `prev_chunk_id` / `next_chunk_id` for context traversal
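
The build pipeline itself isn't shipped with the package, but the strategy is simple to picture. A rough sketch of section-aware chunking with linked-list pointers (illustrative only; it approximates tokens as words and omits the code-block preservation the real builder performs):

```js
// Split markdown on headings, cap each section at ~256 "tokens"
// (approximated as whitespace-delimited words), then wire up the
// prev/next pointers that make context expansion possible.
function chunkMarkdown(markdown, maxTokens = 256) {
  const sections = markdown.split(/^(?=#{1,6} )/m);
  const pieces = [];
  for (const section of sections) {
    const words = section.split(/\s+/).filter(Boolean);
    for (let i = 0; i < words.length; i += maxTokens) {
      pieces.push(words.slice(i, i + maxTokens).join(' '));
    }
  }
  return pieces.map((text, i) => ({
    chunk_id: i,
    text,
    prev_chunk_id: i > 0 ? i - 1 : null,
    next_chunk_id: i < pieces.length - 1 ? i + 1 : null,
  }));
}
```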

## Compatibility

- **VecDB-WASM:** ≥0.1.0
- **Load method:** `VectorDB.load(json_string)`
- **Browser embedding model:** `Xenova/all-MiniLM-L6-v2` with `{ pooling: 'mean', normalize: true }`
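
Before loading, `manifest.json` can serve as a version guard; its `compatibility` object carries `vecdb_wasm` and `load_method` fields. A sketch of reading it:

```js
const manifest = await fetch(
  'https://huggingface.co/datasets/philipp-zettl/vrom-hf-docs/resolve/main/manifest.json'
).then((r) => r.json());

// Inspect the declared engine requirement before calling VectorDB.load().
console.log(manifest.compatibility.vecdb_wasm);  // minimum VecDB-WASM version
console.log(manifest.compatibility.load_method); // how to load the index
```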

## Build Info

## Part of the vROM Ecosystem

This is an official first-party vROM built by the VecDB-WASM team. See our VecDB-WASM Space for the core engine.

vROMs transform VecDB-WASM from a standalone database engine into a distribution hub for plug-and-play RAG architectures — the "NPM for AI Agent Memory."
