Trouter-Library committed on
Commit 720e2e5 · verified · 1 parent: 7f19f89

Create model_card.yaml

Files changed (1): model_card.yaml (+208, −0)
model_card.yaml ADDED
@@ -0,0 +1,208 @@
---
language:
- en
- es
- fr
- de
- it
- pt
- nl
- ru
- zh
- ja
- ko
- ar
- multilingual
license: apache-2.0
library_name: transformers
tags:
- text-generation
- image-text-to-text
- multimodal
- vision
- causal-lm
- long-context
- reasoning
- thinking
- conversational
- safe-ai
- 200k-context
- instruct
- chat
- llama
- llava
- function-calling
- tool-use
- structured-output
- json-mode
- ocr
- vqa
- visual-reasoning
- code-generation
- rag
pipeline_tag: image-text-to-text
datasets:
- common_crawl
- wikipedia
- books
- arxiv
- github
- stack_exchange
- laion
- coyo
- vqa-v2
- textvqa
- chartqa
- docvqa
- grit
- sharegpt
model-index:
- name: Helion-V2.0-Thinking
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU
      type: mmlu
    metrics:
    - type: accuracy
      value: 72.4
      name: Accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag
      type: hellaswag
    metrics:
    - type: accuracy
      value: 84.3
      name: Accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ARC-Challenge
      type: arc_challenge
    metrics:
    - type: accuracy
      value: 68.9
      name: Accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA
      type: truthful_qa
    metrics:
    - type: accuracy
      value: 58.7
      name: Accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande
      type: winogrande
    metrics:
    - type: accuracy
      value: 79.2
      name: Accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8K
      type: gsm8k
    metrics:
    - type: accuracy
      value: 64.8
      name: Accuracy
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HumanEval
      type: humaneval
    metrics:
    - type: pass@1
      value: 48.2
      name: Pass@1
  - task:
      type: image-text-to-text
      name: Visual Question Answering
    dataset:
      name: VQA v2
      type: vqa_v2
    metrics:
    - type: accuracy
      value: 89.2
      name: Accuracy
  - task:
      type: image-text-to-text
      name: Text-based VQA
    dataset:
      name: TextVQA
      type: textvqa
    metrics:
    - type: accuracy
      value: 76.8
      name: Accuracy
  - task:
      type: image-text-to-text
      name: Chart Question Answering
    dataset:
      name: ChartQA
      type: chartqa
    metrics:
    - type: accuracy
      value: 81.4
      name: Accuracy
  - task:
      type: image-text-to-text
      name: Document VQA
    dataset:
      name: DocVQA
      type: docvqa
    metrics:
    - type: accuracy
      value: 88.7
      name: Accuracy
  - task:
      type: function-calling
      name: Function Calling
    dataset:
      name: Berkeley Function Calling
      type: bfcl
    metrics:
    - type: accuracy
      value: 94.3
      name: Accuracy
  - task:
      type: structured-output
      name: Structured Output
    dataset:
      name: JSON Schema Adherence
      type: json_schema
    metrics:
    - type: accuracy
      value: 97.1
      name: Accuracy
base_model: llama
model_type: llava
inference: true
widget:
- text: "Explain the theory of relativity in simple terms:"
  example_title: "Science Explanation"
- text: "Write a short story about a robot learning to paint:"
  example_title: "Creative Writing"
- text: "What objects are in this image?"
  src: "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/ai2d-demo.jpg"
  example_title: "Image Analysis"
- text: "Extract the text from this image:"
  src: "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/docvqa_example.png"
  example_title: "OCR Task"
- text: "Use the calculator tool to compute: (45 * 23) + 156"
  example_title: "Tool Usage"
---
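The `model-index` block above follows the Hugging Face metadata convention of nested `task`/`dataset`/`metrics` entries. As a minimal sketch of how a consumer might flatten it into benchmark rows, the dict literal below mirrors two entries from this card (values copied from it); a real consumer would obtain the same structure by YAML-parsing the front matter rather than hand-writing it:

```python
# Minimal sketch: flatten a model-index structure (as it would look after
# parsing the YAML front matter) into flat benchmark rows.
# The dict literal mirrors two entries from the card above.

model_index = [
    {
        "name": "Helion-V2.0-Thinking",
        "results": [
            {
                "task": {"type": "text-generation", "name": "Text Generation"},
                "dataset": {"name": "MMLU", "type": "mmlu"},
                "metrics": [{"type": "accuracy", "value": 72.4, "name": "Accuracy"}],
            },
            {
                "task": {"type": "function-calling", "name": "Function Calling"},
                "dataset": {"name": "Berkeley Function Calling", "type": "bfcl"},
                "metrics": [{"type": "accuracy", "value": 94.3, "name": "Accuracy"}],
            },
        ],
    }
]

def flatten(index):
    """Yield (model, task_type, dataset_name, metric_type, value) rows."""
    for model in index:
        for result in model["results"]:
            for metric in result["metrics"]:
                yield (
                    model["name"],
                    result["task"]["type"],
                    result["dataset"]["name"],
                    metric["type"],
                    metric["value"],
                )

rows = list(flatten(model_index))
for row in rows:
    print(row)
```

This is the same shape the Hub uses to render the evaluation table on a model page; each row pairs one metric with its task and dataset.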