vincentkoc committed
Commit 676569d (verified) · 1 parent: e79b1f2

Upload README.md with huggingface_hub

Files changed (1):
  README.md +95 -65
README.md CHANGED
@@ -19,7 +19,7 @@ task_categories:
  task_ids:
  - fact-checking-retrieval
  paperswithcode_id: hover
- pretty_name: HoVer
  dataset_info:
  features:
  - name: id
@@ -54,13 +54,16 @@ dataset_info:
  - name: test
  num_bytes: 927513
  num_examples: 4000
- download_size: 12257835
  dataset_size: 7758943
  ---

- # Dataset Card for HoVer

  ## Table of Contents
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
@@ -73,119 +76,146 @@ dataset_info:
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
- - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
- - [Social Impact of Dataset](#social-impact-of-dataset)
- - [Discussion of Biases](#discussion-of-biases)
- - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
- - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

  ## Dataset Description

  - **Homepage:** https://hover-nlp.github.io/
  - **Repository:** https://github.com/hover-nlp/hover
  - **Paper:** https://arxiv.org/abs/2011.03088
  - **Leaderboard:** https://hover-nlp.github.io/
- - **Point of Contact:** [More Information Needed]

  ### Dataset Summary

- [More Information Needed]

  ### Supported Tasks and Leaderboards

- [More Information Needed]

  ### Languages

- [More Information Needed]

  ## Dataset Structure

  ### Data Instances

- A sample training set is provided below
-
- ```
- {'id': 14856, 'uid': 'a0cf45ea-b5cd-4c4e-9ffa-73b39ebd78ce', 'claim': 'The park at which Tivolis Koncertsal is located opened on 15 August 1843.', 'supporting_facts': [{'key': 'Tivolis Koncertsal', 'value': 0}, {'key': 'Tivoli Gardens', 'value': 1}], 'label': 'SUPPORTED', 'num_hops': 2, 'hpqa_id': '5abca1a55542993a06baf937'}
  ```

- Please note that in test set sentence only id, uid and claim are available. Labels are not available in test set and are represented by -1.
-

  ### Data Fields

- [More Information Needed]

  ### Data Splits

- [More Information Needed]

  ## Dataset Creation

  ### Curation Rationale

- [More Information Needed]

  ### Source Data

- [More Information Needed]
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed]
-
- #### Who are the source language producers?
-
- [More Information Needed]

  ### Annotations

- [More Information Needed]
-
- #### Annotation process
-
- [More Information Needed]
-
- #### Who are the annotators?
-
- [More Information Needed]
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]

  ## Additional Information

- ### Dataset Curators
-
- [More Information Needed]
-

  ### Licensing Information

- [More Information Needed]

  ### Citation Information

- [More Information Needed]

  ### Contributions

- Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding this dataset.
 
  task_ids:
  - fact-checking-retrieval
  paperswithcode_id: hover
+ pretty_name: HoVer (Parquet Format)
  dataset_info:
  features:
  - name: id

  - name: test
  num_bytes: 927513
  num_examples: 4000
+ download_size: 3428352
  dataset_size: 7758943
  ---

+ # Dataset Card for HoVer (Parquet Format)
+
+ > **Note**: This is a scriptless, Parquet-based version of the HoVer dataset for seamless integration with the Hugging Face `datasets` library. No `trust_remote_code` required!

  ## Table of Contents
+ - [Quick Start](#quick-start)
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)

  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Additional Information](#additional-information)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

+ ## Quick Start
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the dataset (no trust_remote_code needed!)
+ dataset = load_dataset("vincentkoc/hover-parquet")
+
+ # Access splits
+ train = dataset["train"]
+ validation = dataset["validation"]
+ test = dataset["test"]
+
+ # Example usage
+ print(train[0])
+ # {
+ #   'id': 0,
+ #   'uid': '330ca632-e83f-4011-b11b-0d0158145036',
+ #   'claim': 'Skagen Painter Peder Severin Krøyer favored naturalism...',
+ #   'supporting_facts': [{'key': 'Kristian Zahrtmann', 'value': 0}, ...],
+ #   'label': 1,  # 0: NOT_SUPPORTED, 1: SUPPORTED
+ #   'num_hops': 3,
+ #   'hpqa_id': '5ab7a86d5542995dae37e986'
+ # }
+ ```
  ## Dataset Description

  - **Homepage:** https://hover-nlp.github.io/
  - **Repository:** https://github.com/hover-nlp/hover
  - **Paper:** https://arxiv.org/abs/2011.03088
  - **Leaderboard:** https://hover-nlp.github.io/
+ - **Original Dataset:** https://huggingface.co/datasets/hover
+ - **Parquet Version:** https://huggingface.co/datasets/vincentkoc/hover-parquet

  ### Dataset Summary

+ HoVer (HOP VERification) is an open-domain, many-hop fact extraction and claim verification dataset built on the Wikipedia corpus. The dataset contains claims that require reasoning over multiple documents (multi-hop) to verify whether they are supported or not supported by the evidence.
+
+ The original 2-hop claims are adapted from question-answer pairs in HotpotQA. The dataset was collected by a team of NLP researchers at UNC Chapel Hill and Verisk Analytics.
+
+ This version provides the dataset in Parquet format for efficient loading and compatibility with modern data processing pipelines, eliminating the need for custom loading scripts.

  ### Supported Tasks and Leaderboards

+ - **Fact Verification**: Determine whether a claim is SUPPORTED or NOT_SUPPORTED based on evidence from Wikipedia articles
+ - **Multi-hop Reasoning**: Claims require reasoning across multiple documents (indicated by the `num_hops` field)
+ - **Evidence Retrieval**: Identify relevant supporting facts from source documents
+
+ The official leaderboard is available at https://hover-nlp.github.io/
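
As a small illustration of the multi-hop aspect, the `num_hops` field can be aggregated to see how many claims need each hop count. This is a plain-Python sketch; `sample` below is an illustrative stand-in for real rows such as those in `dataset["train"]`:

```python
from collections import Counter

def hop_distribution(examples):
    """Count how many claims require each number of reasoning hops.

    `examples` is any iterable of HoVer-style rows with a `num_hops` key.
    """
    return Counter(example["num_hops"] for example in examples)

# Tiny in-memory sample standing in for real dataset rows:
sample = [{"num_hops": 2}, {"num_hops": 3}, {"num_hops": 2}, {"num_hops": 4}]
print(hop_distribution(sample))  # Counter({2: 2, 3: 1, 4: 1})
```

The same function works unchanged on a loaded split, since `datasets` rows behave like dicts.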
  ### Languages

+ English (en)

  ## Dataset Structure

  ### Data Instances

+ A sample training set example:
+
+ ```json
+ {
+   "id": 14856,
+   "uid": "a0cf45ea-b5cd-4c4e-9ffa-73b39ebd78ce",
+   "claim": "The park at which Tivolis Koncertsal is located opened on 15 August 1843.",
+   "supporting_facts": [
+     {"key": "Tivolis Koncertsal", "value": 0},
+     {"key": "Tivoli Gardens", "value": 1}
+   ],
+   "label": 1,
+   "num_hops": 2,
+   "hpqa_id": "5abca1a55542993a06baf937"
+ }
  ```

+ **Note**: In the test set, only the `id`, `uid`, and `claim` fields contain meaningful data. The `label` is set to `-1`, `num_hops` to `-1`, `hpqa_id` to `"None"`, and `supporting_facts` is an empty list, as these are withheld for evaluation purposes.
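
A minimal helper reflecting the note above: test rows can be recognized by their withheld `label` of `-1`. The row literals below are illustrative stand-ins, not real dataset rows:

```python
def has_gold_annotations(example: dict) -> bool:
    """True for train/validation rows with real labels; False for test
    rows, where the label is withheld and set to -1."""
    return example["label"] != -1

# A test-split row shaped as described in the note above:
test_row = {"id": 0, "uid": "some-uuid", "claim": "Some claim.",
            "supporting_facts": [], "label": -1, "num_hops": -1,
            "hpqa_id": "None"}
# A labeled row as it appears in train/validation:
train_row = {"id": 14856, "label": 1, "num_hops": 2}

print(has_gold_annotations(test_row))   # False
print(has_gold_annotations(train_row))  # True
```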
 
  ### Data Fields

+ - **id** (`int32`): Sequential identifier for the example within its split
+ - **uid** (`string`): Unique identifier (UUID) for the claim
+ - **claim** (`string`): The claim statement to be verified
+ - **supporting_facts** (`list`): List of evidence facts, where each fact contains:
+   - **key** (`string`): Title of the Wikipedia article
+   - **value** (`int32`): Sentence index within that article
+ - **label** (`ClassLabel`): Verification label with values:
+   - `0`: NOT_SUPPORTED - the claim is not supported by the evidence
+   - `1`: SUPPORTED - the claim is supported by the evidence
+   - `-1`: unknown (used in the test set)
+ - **num_hops** (`int32`): Number of reasoning hops required (typically 2-4 for this dataset)
+ - **hpqa_id** (`string`): Original HotpotQA question ID from which the claim was derived
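
When loaded via `datasets`, the integer label can be decoded with the `label` feature's `int2str` method; here is the same mapping as a dependency-free sketch, with names taken from the field list above:

```python
# Mirrors the ClassLabel above; -1 marks withheld test-set labels.
LABEL_NAMES = {0: "NOT_SUPPORTED", 1: "SUPPORTED"}

def decode_label(label: int) -> str:
    """Map an integer label to its name; anything unmapped (e.g. -1) is UNKNOWN."""
    return LABEL_NAMES.get(label, "UNKNOWN")

print(decode_label(1))   # SUPPORTED
print(decode_label(0))   # NOT_SUPPORTED
print(decode_label(-1))  # UNKNOWN
```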
  ### Data Splits

+ | Split      | Examples   |
+ |------------|------------|
+ | Train      | 18,171     |
+ | Validation | 4,000      |
+ | Test       | 4,000      |
+ | **Total**  | **26,171** |
+
+ The splits maintain the original distribution from the HoVer dataset.
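
A small sanity check against the table above; the dict passed in mirrors what `load_dataset(...).num_rows` would return (hypothetical usage, not executed here):

```python
# Documented split sizes for this Parquet version (from the table above).
EXPECTED_ROWS = {"train": 18_171, "validation": 4_000, "test": 4_000}

def splits_match(num_rows: dict) -> bool:
    """Compare observed per-split row counts against the documented ones."""
    return num_rows == EXPECTED_ROWS

# The documented totals are internally consistent:
print(sum(EXPECTED_ROWS.values()))  # 26171
print(splits_match({"train": 18_171, "validation": 4_000, "test": 4_000}))  # True
```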
  ## Dataset Creation

  ### Curation Rationale

+ HoVer was created to address the challenge of multi-hop fact verification, where claims require reasoning across multiple documents. The dataset was built to push the boundaries of claim verification systems beyond single-document fact-checking.

  ### Source Data

+ The dataset is built upon Wikipedia as the knowledge source. Claims are adapted from HotpotQA question-answer pairs and modified to create verification statements that require multi-hop reasoning.

  ### Annotations

+ The dataset was annotated by expert annotators who identified supporting facts across multiple Wikipedia articles and determined whether claims were supported or not supported by the evidence.

  ## Additional Information

  ### Licensing Information

+ This dataset is licensed under CC-BY-SA 4.0 (Creative Commons Attribution-ShareAlike 4.0 International).

  ### Citation Information

+ ```bibtex
+ @inproceedings{jiang2020hover,
+   title={{HoVer}: A Dataset for Many-Hop Fact Extraction And Claim Verification},
+   author={Yichen Jiang and Shikha Bordia and Zheng Zhong and Charles Dognin and Maneesh Singh and Mohit Bansal},
+   booktitle={Findings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)},
+   year={2020}
+ }
+ ```

  ### Contributions

+ Thanks to [@abhishekkrthakur](https://github.com/abhishekkrthakur) for adding the original dataset and [@vincentkoc](https://github.com/vincentkoc) for creating this Parquet version.