---
license: other
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- document-qa
- ocr
- extractive-qa
- nanochat
- sft
pretty_name: DocVQA for Nanochat
size_categories:
- 10K<n<100K
source_datasets:
- pixparse/docvqa-single-page-questions
---
# DocVQA for Nanochat
Single-page document QA dataset processed for nanochat fine-tuning.
## Description

This dataset is derived from [pixparse/docvqa-single-page-questions](https://huggingface.co/datasets/pixparse/docvqa-single-page-questions) and has been processed for efficient fine-tuning of small language models with limited context windows.
## Modifications from Source

- **OCR truncation**: Answer-priority truncation ensures the answer is always present in the truncated context. Lines containing the answer are prioritized, then surrounding context is added until the token budget is reached.
- **Page numbers**: Added a "Page X" header at the top of each document, taken from `other_metadata.ucsf_document_page_no`
- **Token budget**: Documents truncated to fit within 1750 tokens (for models with a 2048-token context window)
- **Short answers**: Filtered to answers ≤ 150 characters
- **Format**: Conversation format compatible with nanochat's `CustomJSON` task loader
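The answer-priority truncation described above can be sketched as follows. This is an illustrative re-implementation, not the exact script used to build the dataset; `count_tokens` stands in for the real tokenizer:

```python
def truncate_with_answer(lines, answer, budget, count_tokens):
    """Answer-priority truncation (illustrative sketch).

    Keep every line containing the answer, then grow outward,
    nearest lines first, until adding another line would exceed
    the token budget.
    """
    answer_lower = answer.lower()
    keep = {i for i, line in enumerate(lines) if answer_lower in line.lower()}
    if not keep:
        keep = {0}  # no answer match: fall back to the document start
    used = sum(count_tokens(lines[i]) for i in keep)

    # Candidate lines, sorted by distance to the nearest kept line
    candidates = sorted(
        set(range(len(lines))) - keep,
        key=lambda i: min(abs(i - k) for k in keep),
    )
    for i in candidates:
        cost = count_tokens(lines[i])
        if used + cost > budget:
            break
        keep.add(i)
        used += cost

    # Preserve original document order
    return "\n".join(lines[i] for i in sorted(keep))
```

The key property is that answer-bearing lines are reserved before any budget is spent on surrounding context, so the answer can never be truncated away.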
## Statistics
| Split | Samples | Total Tokens | Avg Tokens |
|---|---|---|---|
| Train | 39,455 | 15,495,380 | 393 |
| Validation | 5,349 | 2,218,651 | 415 |
| Total | 44,804 | 17,714,031 | - |
## Tokenizer

Token counts were computed with tiktoken's `cl100k_base` encoding (GPT-4's tokenizer), a GPT-4-style BPE tokenizer similar to the one nanochat uses.
## Schema

| Field | Type | Description |
|---|---|---|
| `question_id` | int | Original question ID from DocVQA |
| `question` | str | The question to answer |
| `answer` | str | The extracted answer (or "Not found in document.") |
| `document_text` | str | OCR text with the page number prepended |
| `page` | int | Page number from OCR results (always 1 for single-page) |
| `other_metadata` | dict | Full metadata from source (`ucsf_document_id`, `doc_id`, etc.) |
| `num_tokens` | int | Exact token count (tiktoken `cl100k_base`) |
| `match_type` | str | How the answer was matched: "exact", "fuzzy", or "none" |
| `messages` | list | Conversation format for training |
## Usage

### With HuggingFace Datasets

```python
from datasets import load_dataset

ds = load_dataset("morgan/docvqa-nanochat")

# Access a sample
sample = ds["train"][0]
print(f"Question: {sample['question']}")
print(f"Answer: {sample['answer']}")
print(f"Tokens: {sample['num_tokens']}")
```
### For Nanochat Training

The `messages` field is formatted for nanochat's `CustomJSON` task:

```python
# Download and convert to JSONL
import json

from datasets import load_dataset

ds = load_dataset("morgan/docvqa-nanochat", split="train")
with open("docvqa_train.jsonl", "w") as f:
    for row in ds:
        f.write(json.dumps(row["messages"]) + "\n")
```

```python
# Then use with CustomJSON
from tasks.customjson import CustomJSON

train_ds = CustomJSON(filepath="docvqa_train.jsonl")
```
## Document Format

Each document is formatted as:

```
Document:
Page 4
R. J. REYNOLDS TOBACCO COMPANY
RETAIL PARTNERS MARKETING PLAN CONTRACT
...

Question: When is the contract effective date?
```
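Assuming the layout shown above, the user-turn prompt can be recreated from the `document_text` and `question` fields. This is a hypothetical helper for illustration; the actual training prompt lives in each row's `messages` field:

```python
def format_prompt(document_text: str, question: str) -> str:
    # Mirrors the documented layout: "Document:" header, OCR text
    # (which already starts with the "Page X" line), a blank line,
    # then the question. Hypothetical helper, not dataset code.
    return f"Document:\n{document_text}\n\nQuestion: {question}"

prompt = format_prompt(
    "Page 4\nR. J. REYNOLDS TOBACCO COMPANY",
    "When is the contract effective date?",
)
print(prompt)
```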
## License
Same as source dataset (pixparse/docvqa-single-page-questions).
## Citation

If you use this dataset, please cite the original DocVQA paper:

```bibtex
@inproceedings{mathew2021docvqa,
  title={DocVQA: A Dataset for VQA on Document Images},
  author={Mathew, Minesh and Karatzas, Dimosthenis and Jawahar, CV},
  booktitle={WACV},
  year={2021}
}
```