Felladrin committed
Commit 9faa2f4 · verified · 1 Parent(s): 801f454

Upload folder using huggingface_hub

README.md ADDED
@@ -0,0 +1,185 @@
+ ---
language: en
datasets:
- conll2003
license: mit
model-index:
- name: dslim/bert-base-NER
  results:
  - task:
      type: token-classification
      name: Token Classification
    dataset:
      name: conll2003
      type: conll2003
      config: conll2003
      split: test
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9118041001560013
      verified: true
    - name: Precision
      type: precision
      value: 0.9211550382257732
      verified: true
    - name: Recall
      type: recall
      value: 0.9306415698281261
      verified: true
    - name: F1
      type: f1
      value: 0.9258740048459675
      verified: true
    - name: loss
      type: loss
      value: 0.48325642943382263
      verified: true
library_name: transformers.js
base_model:
- dslim/bert-base-NER
pipeline_tag: token-classification
---

# bert-base-NER (ONNX)

This is an ONNX version of [dslim/bert-base-NER](https://huggingface.co/dslim/bert-base-NER). It was automatically converted and uploaded using [this Hugging Face Space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).

## Usage with Transformers.js

See the pipeline documentation for `token-classification`: https://huggingface.co/docs/transformers.js/api/pipelines#module_pipelines.TokenClassificationPipeline
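
A minimal usage sketch with the Transformers.js `pipeline` API is shown below. It assumes Transformers.js v3, which is published as `@huggingface/transformers` (earlier releases ship as `@xenova/transformers`), and the model id below is only a placeholder for this repository's actual path.

```js
import { pipeline } from "@huggingface/transformers";

// "Felladrin/bert-base-NER-ONNX" is a placeholder id; point it at this repository's path.
const classifier = await pipeline("token-classification", "Felladrin/bert-base-NER-ONNX");

const output = await classifier("My name is Wolfgang and I live in Berlin");
console.log(output);
// Expected to be an array of per-token predictions, roughly of the form
// { entity: 'B-PER', score: ..., index: 4, word: 'Wolfgang' }.
```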

---

# bert-base-NER

If my open source models have been useful to you, please consider supporting me in building small, useful AI models for everyone (and help me afford med school / help out my parents financially). Thanks!

<a href="https://www.buymeacoffee.com/dslim" target="_blank"><img src="https://cdn.buymeacoffee.com/buttons/v2/arial-yellow.png" alt="Buy Me A Coffee" style="height: 60px !important;width: 217px !important;" ></a>

## Model description

**bert-base-NER** is a fine-tuned BERT model that is ready to use for **Named Entity Recognition** and achieves **state-of-the-art performance** for the NER task. It has been trained to recognize four types of entities: location (LOC), organization (ORG), person (PER), and miscellaneous (MISC).

Specifically, this model is a *bert-base-cased* model that was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.

If you'd like a larger model fine-tuned on the same dataset, a [**bert-large-NER**](https://huggingface.co/dslim/bert-large-NER/) version is also available.

### Available NER models

| Model Name | Description | Parameters |
|------------|-------------|------------|
| [distilbert-NER](https://huggingface.co/dslim/distilbert-NER) **(NEW!)** | Fine-tuned DistilBERT - a smaller, faster, lighter version of BERT | 66M |
| [bert-large-NER](https://huggingface.co/dslim/bert-large-NER/) | Fine-tuned bert-large-cased - a larger model with slightly better performance | 340M |
| [bert-base-NER](https://huggingface.co/dslim/bert-base-NER) ([uncased](https://huggingface.co/dslim/bert-base-NER-uncased)) | Fine-tuned bert-base, available in both cased and uncased versions | 110M |

## Intended uses & limitations

#### How to use

You can use this model with the Transformers *pipeline* for NER.

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")

nlp = pipeline("ner", model=model, tokenizer=tokenizer)
example = "My name is Wolfgang and I live in Berlin"

ner_results = nlp(example)
print(ner_results)
```

#### Limitations and bias

This model is limited by its training dataset of entity-annotated news articles from a specific span of time, so it may not generalize well to all use cases in other domains. Furthermore, the model occasionally tags subword tokens as entities, and post-processing of the results may be necessary to handle those cases.
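
The sketch below illustrates one possible post-processing step in the same JavaScript setting as the Transformers.js example above: merging WordPiece subword predictions (emitted with a `##` prefix) back onto the preceding token. The `{ entity, score, index, word }` field names are assumptions about the pipeline's output shape rather than guarantees.

```js
// Rough merging sketch: glue "##" subword pieces back onto the previous prediction,
// so that e.g. "Wolf" + "##gang" is reported as a single "Wolfgang" entity.
// Assumes items shaped like { entity, score, index, word }, ordered by token index.
function mergeSubwords(entities) {
  const merged = [];
  for (const item of entities) {
    const prev = merged[merged.length - 1];
    if (prev && item.word.startsWith("##")) {
      prev.word += item.word.slice(2);               // append the subword text
      prev.score = Math.min(prev.score, item.score); // keep the more conservative score
    } else {
      merged.push({ ...item });                      // start a new entity span
    }
  }
  return merged;
}
```

With the Python *pipeline* shown above, the built-in `aggregation_strategy="simple"` option performs similar grouping automatically.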

## Training data

This model was fine-tuned on the English version of the standard [CoNLL-2003 Named Entity Recognition](https://www.aclweb.org/anthology/W03-0419.pdf) dataset.

The training dataset distinguishes between the beginning and continuation of an entity, so that when there are back-to-back entities of the same type, the model can output where the second entity begins. For example, if two person names appear directly after one another, the first token of the second name is tagged B-PER rather than I-PER. As in the dataset, each token will be classified as one of the following classes:

Abbreviation|Description
-|-
O|Outside of a named entity
B-MISC|Beginning of a miscellaneous entity right after another miscellaneous entity
I-MISC|Miscellaneous entity
B-PER|Beginning of a person’s name right after another person’s name
I-PER|Person’s name
B-ORG|Beginning of an organization right after another organization
I-ORG|Organization
B-LOC|Beginning of a location right after another location
I-LOC|Location

### CoNLL-2003 English Dataset Statistics

This dataset was derived from the Reuters corpus, which consists of Reuters news stories. You can read more about how this dataset was created in the CoNLL-2003 paper.

#### Number of training examples per entity type

Dataset|LOC|MISC|ORG|PER
-|-|-|-|-
Train|7140|3438|6321|6600
Dev|1837|922|1341|1842
Test|1668|702|1661|1617

#### Number of articles/sentences/tokens per dataset

Dataset|Articles|Sentences|Tokens
-|-|-|-
Train|946|14,987|203,621
Dev|216|3,466|51,362
Test|231|3,684|46,435

## Training procedure

This model was trained on a single NVIDIA V100 GPU with the recommended hyperparameters from the [original BERT paper](https://arxiv.org/pdf/1810.04805), which trained and evaluated the model on the CoNLL-2003 NER task.

## Eval results

Metric|Dev|Test
-|-|-
F1|95.1|91.3
Precision|95.0|90.7
Recall|95.3|91.9

The test metrics are a little lower than the official Google BERT results, which encoded document context and experimented with a CRF. More on replicating the original results [here](https://github.com/google-research/bert/issues/223).

### BibTeX entry and citation info

```
@article{DBLP:journals/corr/abs-1810-04805,
  author    = {Jacob Devlin and
               Ming{-}Wei Chang and
               Kenton Lee and
               Kristina Toutanova},
  title     = {{BERT:} Pre-training of Deep Bidirectional Transformers for Language
               Understanding},
  journal   = {CoRR},
  volume    = {abs/1810.04805},
  year      = {2018},
  url       = {http://arxiv.org/abs/1810.04805},
  archivePrefix = {arXiv},
  eprint    = {1810.04805},
  timestamp = {Tue, 30 Oct 2018 20:39:56 +0100},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1810-04805.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
```
@inproceedings{tjong-kim-sang-de-meulder-2003-introduction,
  title     = "Introduction to the {C}o{NLL}-2003 Shared Task: Language-Independent Named Entity Recognition",
  author    = "Tjong Kim Sang, Erik F. and
               De Meulder, Fien",
  booktitle = "Proceedings of the Seventh Conference on Natural Language Learning at {HLT}-{NAACL} 2003",
  year      = "2003",
  url       = "https://www.aclweb.org/anthology/W03-0419",
  pages     = "142--147",
}
```
config.json ADDED
@@ -0,0 +1,50 @@
{
  "_attn_implementation_autoset": true,
  "_name_or_path": "dslim/bert-base-NER",
  "_num_labels": 9,
  "architectures": [
    "BertForTokenClassification"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "B-MISC",
    "2": "I-MISC",
    "3": "B-PER",
    "4": "I-PER",
    "5": "B-ORG",
    "6": "I-ORG",
    "7": "B-LOC",
    "8": "I-LOC"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "B-LOC": 7,
    "B-MISC": 1,
    "B-ORG": 5,
    "B-PER": 3,
    "I-LOC": 8,
    "I-MISC": 2,
    "I-ORG": 6,
    "I-PER": 4,
    "O": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "output_past": true,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.49.0",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 28996
}
onnx/model.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8c7333b3c8345c86535d8739470fb65011aea09943375ea8a89e008c6463c018
size 431214198
onnx/model_bnb4.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c6fb7bbae9351c29a7f83b54e4e00d6eb07320d0c988add52bea1290ff35e96d
size 139238911
onnx/model_fp16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:3738498eae2bf81f247386667fff2074cbdc0085e5b8c155afd44883fb4762dd
size 215761276
onnx/model_int8.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b324e829f1fad3b897f926d1a1d1372803c6d04546831a3a2ed103652b916adf
size 108908107
onnx/model_q4.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8d370232c24058eae9e27b05845c406a103ad0793dbd88e1547dd49e4f8276e3
size 144547224
onnx/model_q4f16.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:15efa3b81446543cc2e255825e452e66766aaadfe3ab11e21a3ca2984db5666d
size 93668689
onnx/model_quantized.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b324e829f1fad3b897f926d1a1d1372803c6d04546831a3a2ed103652b916adf
size 108908107
onnx/model_uint8.onnx ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:82205ea6f29ad865269c1a0b0258e83e39d70097d81065e8cbff1980050f2eb7
size 108908107
quantize_config.json ADDED
@@ -0,0 +1,18 @@
{
  "modes": [
    "fp16",
    "q8",
    "int8",
    "uint8",
    "q4",
    "q4f16",
    "bnb4"
  ],
  "per_channel": true,
  "reduce_range": true,
  "block_size": null,
  "is_symmetric": true,
  "accuracy_level": null,
  "quant_type": 1,
  "op_block_list": null
}
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
  "cls_token": "[CLS]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,59 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_basic_tokenize": true,
  "do_lower_case": false,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_len": 512,
  "model_max_length": 512,
  "never_split": null,
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "unk_token": "[UNK]"
}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff