Severino committed · verified
Commit 27ea7f3 · 1 parent: 879cb97

Final checks.

Files changed (1): README.md (+38, -77)
README.md CHANGED
@@ -3,64 +3,24 @@ license: apache-2.0
  library_name: transformers
  pipeline_tag: text-generation
  language:
- - bg
  - ca
- - code
- - cs
- - cy
- - da
- - de
- - el
  - en
  - es
- - et
  - eu
- - fi
- - fr
- - ga
  - gl
- - hr
- - hu
- - it
- - lt
- - lv
- - mt
- - nl
- - nn
- - \no
- - oc
- - pl
- - pt
- - ro
- - ru
- - sh
- - sk
- - sl
- - sr
- - sv
- - uk
  datasets:
- - oscar-corpus/colossal-oscar-1.0
+ - CohereLabs/aya_dataset
+ - projecte-aina/CoQCat
+ - databricks/databricks-dolly-15k
+ - projecte-aina/dolly3k_ca
+ - projecte-aina/MentorES
+ - projecte-aina/MentorCA
+ - HuggingFaceH4/no_robots
+ - projecte-aina/RAG_Multilingual
+ - Unbabel/TowerBlocks-v0.2
+ - OpenAssistant/oasst2
+ - open-r1/OpenR1-Math-220k
  - HuggingFaceFW/fineweb-edu
- - joelniklaus/eurlex_resources
- - joelniklaus/legal-mc4
- - projecte-aina/CATalog
- - UFRGS/brwac
- - community-datasets/hrwac
- - danish-foundation-models/danish-gigaword
- - HiTZ/euscrawl
- - PleIAs/French-PD-Newspapers
- - PleIAs/French-PD-Books
- - AI-team-UoA/greek_legal_code
- - HiTZ/latxa-corpus-v1.1
- - allenai/peS2o
- - pile-of-law/pile-of-law
- - PORTULAN/parlamento-pt
- - hoskinson-center/proof-pile
- - togethercomputer/RedPajama-Data-1T
- - bigcode/starcoderdata
- - bjoernp/tagesschau-2018-2023
- - EleutherAI/the_pile_deduplicated
  base_model:
  - BSC-LT/ALIA-40b
  ---
@@ -72,11 +32,11 @@ base_model:

  # ALIA-40b-instruct Model Card

- The ALIA-40b-instruct model is an instructed variant of a context-extended [base ALIA-40b model](https://huggingface.co/BSC-LT/ALIA-40b), which was pre-trained from scratch on 9.83 trillion tokens of carefully curated data spanning 35 European languages (including code). This instructed version is optimized to follow user prompts and engage in dialogue. It supports a broad range of languages (e.g. Spanish, Catalan, Basque, English, etc.) and is capable of text generation, translation, summarization, and question-answering in these languages. This version has also been gone through a preliminary alignment phase for helpfulness and safety with synthetically generated preference pairs.
+ The ALIA-40b-instruct model is an instructed variant of a context-extended [base ALIA-40b model](https://huggingface.co/BSC-LT/ALIA-40b), which was pre-trained from scratch on 9.83 trillion tokens of carefully curated data spanning 35 European languages (including code). This instructed version is optimized to follow user prompts and engage in dialogue. It supports a broad range of languages (e.g. Spanish, Catalan, Basque, English, etc.) and is capable of text generation, translation, summarization, and question-answering in these languages. This version has also gone through a preliminary alignment phase for helpfulness and safety with synthetically generated preference pairs.

- In keeping with our commitment to open-source development, all tools and sources used to process and create the training data are open-licensed. For clarity, our definition of open-licensed excludes any source, tool, model, or dataset whose terms of use impose restrictive conditions that impede standard open reuse.
+ In keeping with our commitment to open-source development, all tools and sources used to process and create the training data are open-licensed. For clarity, our definition of open-licensed excludes any source, tool, model, or dataset whose terms of use impose restrictive conditions that impede standard open reuse.

- This model is released under the apermissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). Along with the open weights, all training scripts and configuration files are made publicly available in [this GitHub repository](https://github.com/langtech-bsc/alia).
+ This model is released under the permissive [Apache 2.0 license](https://www.apache.org/licenses/LICENSE-2.0). Along with the open weights, all training scripts and configuration files are made publicly available in [this GitHub repository](https://github.com/langtech-bsc/alia).

  To visit the model cards of other model versions, please refer to the [Model Index](https://www.notion.so/Alia-2025-09-29-Model-Card-27db93cf5c1b808aa1f1fc8229255f24?pvs=21).

@@ -87,15 +47,15 @@ To visit the model cards of other model versions, please refer to the [Model Ind

  ### Description

- The ALIA-40b is a transformer-based, decoder-only language model tha was pre-trained from scratch on 9.37 trillion tokens of meticulously curated data. It subsequently underwent continued pretraining on additional 424 billion high-quality tokens, and was further extended with a supplementary 39 biilion tokens drawn from a similarly diverse mixture, totalling 9.83 trillion tokens.
+ The ALIA-40b is a transformer-based, decoder-only language model that was pre-trained from scratch on 9.37 trillion tokens of meticulously curated data. It subsequently underwent continued pretraining on an additional 424 billion high-quality tokens, and was further extended with a supplementary 39 billion tokens drawn from a similarly diverse mixture, totalling 9.83 trillion tokens.

- ALIA-40b-Instruct is an instructed variant of this latest ALIA-40b version. Its post-training process comprises three consecutive stages, each targetting a specific capility: (1) long-context adaptation to extend the model’s context window, (2) supervised fine-tuning to improve instruction following capabilities, and (3), a preliminary alignment stage to better match human preferences and safety.
+ ALIA-40b-Instruct is an instructed variant of this latest ALIA-40b version. Its post-training process comprises three consecutive stages, each targeting a specific capability: (1) long-context adaptation to extend the model’s context window, (2) supervised fine-tuning to improve instruction following capabilities, and (3) a preliminary alignment stage to better match human preferences and safety.

- After the long-context adaptation, the model enters the supervised fine-tuning (SFT) stage. This stage is implemented in two phases for efficiency reasons: a short-context SFT con 469k conversation examples to strengthen instruction following, followed by a long-context SFT con 14k long-context instances. We separate these phases because full-context fine-tuning is computationally expensive.
+ After the long-context adaptation, the model enters the supervised fine-tuning (SFT) stage. This stage is implemented in two phases for efficiency reasons: a short-context SFT with 469k conversation examples to strengthen instruction following, followed by a long-context SFT with 9k long-context instances. We separate these phases because full-context fine-tuning is computationally expensive.

- In the third stage, the model is aligned with human preferences through Direct Policy Optimization (DPO) using a mixture of 401k preference pairs. Of this mixture, approximately 82% of the pairs target general model helpfulness, while 18% focus on response safety. This aligment stage is preliminary, and further work is ongoin to strengthen safety and reliability.
+ In the third stage, the model is aligned with human preferences through Direct Preference Optimization (DPO) using a mixture of 403k preference pairs. Of this mixture, approximately 82% of the pairs target general model helpfulness, while 18% focus on response safety. This alignment stage is preliminary, and further work is ongoing to strengthen safety and reliability.

- Although the base model is highly multilingual, the post-training process concentrated primarily on Spanish, Catalan, Basque, Galician, and English. We also incoporated data from other related languages where inclusion empirically improved the performance on the target languages. However, performance in those additional languages is not guaranteed duet to the limited amount of available data and the scarcity of evaluation resources.
+ Although the base model is highly multilingual, the post-training process concentrated primarily on Spanish, Catalan, Basque, Galician, and English. We also incorporated data from other related languages where inclusion empirically improved the performance on the target languages. However, performance in those additional languages is not guaranteed due to the limited amount of available data and the scarcity of evaluation resources.

  ### Hyperparameters

@@ -132,7 +92,7 @@ Here we list the specific hyperparameters used during the different training sta
  | Epochs | 1 |
  | LR Scheduler | Cosine |
  | Warmup Ratio | 0.03 |
- | Number of Samples | 12,928 |
+ | Number of Samples | 9,380 |

  #### Alignment

@@ -140,7 +100,7 @@ Here we list the specific hyperparameters used during the different training sta
  | --- | --- |
  | Learning rate | 2e-6 |
  | Batch size | 1024 |
- | Epoch | 2 |
+ | Epochs | 2 |
  | LR Scheduler | Linear |
  | Number of samples | 402,917 |

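As context for the alignment hyperparameters above: the alignment stage uses Direct Preference Optimization, and a minimal sketch of the DPO objective for a batch of preference pairs is given below (plain PyTorch; the `beta` value is an assumption, since the card does not report it).

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """Direct Preference Optimization loss for a batch of preference pairs.

    Each argument is the summed log-probability a model assigns to the chosen
    or rejected response; beta (assumed here, not reported in the card)
    controls how far the policy may drift from the reference model.
    """
    chosen_rewards = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_rewards = beta * (policy_rejected_logp - ref_rejected_logp)
    # Push the policy to prefer the chosen response over the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy usage with random log-probabilities for a batch of 4 pairs.
logps = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*logps).item())
```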
@@ -181,7 +141,7 @@ The model is not intended for malicious activities, such as harming others or vi

  ### Training Framework

- The post-training process was conducted using three complementary frameworkds, each selecto to best support its corresponding stage:
+ The post-training process was conducted using three complementary frameworks, each selected to best support its corresponding stage:

  - Supervised Fine-Tuning (SFT): Conducted with an internal fork of the FastChat codebase, adapted to our infrastructure and optimized for stability and efficiency in our use case.
  - Long-Context SFT: Performed using NeMo-Aligner, chosen to ensure compatibility with extended-context training while maintaining consistency with the FastChat-based SFT.
@@ -198,7 +158,7 @@ The accelerated partition is composed of 1,120 nodes with the following specific
  - 2x Intel Sapphire Rapids 8460Y+ at 2.3Ghz and 32c each (64 cores)
  - 4x NDR200 (BW per node 800Gb/s)
  - 512 GB of Main memory (DDR5)
- - 460GB on NVMe storage
+ - 460GB of NVMe storage

  The table below specifies the number of nodes and GPUs employed for each post-training stage:

@@ -247,8 +207,8 @@ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

  ---
- Using this template, each turn in the conversation is preceded by a `<|im_start|>` delimiter indicating the beginning of a message, followwed by and the role of the entity
- (either `user`, for content supplied by the user, or `assistant` for models responses), and finished with the `<|im_end|>` token:
+ Using this template, each turn in the conversation is preceded by a `<|im_start|>` delimiter indicating the beginning of a message, followed by the role of the entity
+ (either `user`, for content supplied by the user, or `assistant` for the model's responses), and finished with the `<|im_end|>` token:

  ```
  <s><|im_start|>user
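For reference, the same ChatML-style prompt can be built programmatically with the `transformers` chat-template API; a minimal sketch follows (the instruct checkpoint id is an assumption, and the template is expected to ship with the tokenizer).

```python
from transformers import AutoTokenizer

# Assumed repository id for the instructed checkpoint.
tokenizer = AutoTokenizer.from_pretrained("BSC-LT/ALIA-40b-instruct")

messages = [{"role": "user", "content": "What is the capital of Galicia?"}]

# Renders the <|im_start|>/<|im_end|> delimited turns and appends the
# assistant header so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```

The rendered string should match the `<|im_start|>`/`<|im_end|>` layout shown in the template example above.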
@@ -265,7 +225,7 @@ The dataset used in the initial supervised fine-tuning stage consists of 469k co

  The synthetic conversations are generated using [DeepSeek-V3-0324](https://huggingface.co/deepseek-ai/DeepSeek-V3-0324), leveraging seed data and prompts from pre-training corpora, as well as other openly available instruction datasets.

- The table below provides a detailed breakdown of the datasets included in this mixture, specifying their origin, type, license, and contribution to the overall corpus.:
+ The table below provides a detailed breakdown of the datasets included in this mixture, specifying their origin, type, license, and contribution to the overall corpus:

  | **Dataset** | **ca** | **en** | **es** | **eu** | **gl** | **pt** | **Total Conversations** |
  | --- | --- | --- | --- | --- | --- | --- | --- |
@@ -286,7 +246,7 @@ The table below provides a detailed breakdown of the datasets included in this m
  | **fineweb-edu_qa** | 23374 | 20803 | 23311 | 22284 | 22307 | | 112079 |
  | **Total** | **81633** | **199730** | **89313** | **49265** | **36605** | **21711** | **478257** |

- Following the short-context supervised fine-tuning, a second stage was introduced using the remaining 12k short-context samples from our mix, together with 480 long-context samples.
+ Following the short-context supervised fine-tuning, a second stage was introduced using the remaining 9k short-context samples from our mix, together with 480 long-context samples.

  The long-context data was synthetically generated with Salamandra-7B using source texts from FineWebEdu, FineWeb2, and Wikipedia. The length of the examples varies between 16k and 160k tokens. The resulting outputs were subsequently filtered with the same DeepSeek-V3-0324 model to ensure quality and consistency.

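As an illustration of the 16k to 160k token band mentioned above, a minimal sketch of selecting source texts by token count is shown below (the actual filtering pipeline is not published; the tokenizer id and the use of the bounds as hard limits are assumptions).

```python
from transformers import AutoTokenizer

# Assumed tokenizer; the card does not state which tokenizer counted the lengths.
tokenizer = AutoTokenizer.from_pretrained("BSC-LT/ALIA-40b")

def in_long_context_band(text, low=16_000, high=160_000):
    """Return True if the text falls inside the 16k-160k token range."""
    n_tokens = len(tokenizer(text, add_special_tokens=False)["input_ids"])
    return low <= n_tokens <= high

# Toy usage: keep only documents long enough to seed long-context examples.
docs = ["a short document", "a very long document ..."]
selected = [d for d in docs if in_long_context_band(d)]
```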
@@ -378,7 +338,7 @@ The following table provides a detailed overview of the supervised fine-tuning d
  <td>tower-blocks</td>
  <td>Mixture</td>
  <td>Various licenses (only open licensed instances are used)</td>
- <td><a href="https://huggingface.co/datasets/projecte-aina/RAG_Multilingual">TowerBlocks-v0.2</a> filtered by subdataset license and the languages of interest.</td>
+ <td><a href="https://huggingface.co/datasets/Unbabel/TowerBlocks-v0.2">TowerBlocks-v0.2</a> filtered by subdataset license and the languages of interest.</td>
  </tr>
  <tr>
  <td>oasst2_self-identity-rephrase</td>
@@ -419,14 +379,15 @@ The following table provides a detailed overview of the supervised fine-tuning d

  ### Alignment Data

- The alignment data was synthetically generated from a corpus of approximately 401k prompts designed to improve both helpfulness and safety.
+ The alignment data was synthetically generated from a corpus of approximately 403k prompts designed to improve both helpfulness and safety.

  - **Helpfulness**: Prompts include instruction following, mathematics, question answering, and reasoning tasks across Catalan, Spanish, English, Euskera, and Galician. Additionally, M-Personas conversations, a resource specifically generated for this project, were incorporated and will also be released.
  - **Safety**: Prompts were synthetically generated from seed prompts written by human annotators, covering nine harm categories to ensure broad coverage of safety-related scenarios.

- Following approaches similar to UltraFeedback and PKU, each instrucition underwent the following process:
+ Following approaches similar to UltraFeedback and PKU, each instruction underwent the following process:

  1. Multiple responses were produced using a pool of permissively licensed models (see [Model Pool](#model-pool-for-synthetic-data-generation)) on helpfulness or safety, depending on the prompt.
+ 2. These responses were rated by a judge (Deepseek-V3-0324). Helpfulness responses were given an overall rating, while safety responses were given a score based on their level of severity over a list of harm categories.
  3. Preference pairs were constructed from these ratings. This phase should be considered preliminary, as future versions of the model will incorporate human annotators to refine and curate the generation and evaluation pipeline.

  The table below presents the distribution of helpfulness prompts by language, detailing the number of examples contributed from each language:
@@ -487,7 +448,7 @@ In the table below, we list the permissively licensed models that were used to g
  <tr>
  <td>Deepseek</td>
  <td>DeepSeek-V3-0324</td>
- <td>671</td>
+ <td>685</td>
  <td>aligned</td>
  <td>MIT</td>
  </tr>
@@ -528,14 +489,14 @@ In the table below, we list the permissively licensed models that were used to g
  </tr>
  <tr>
  <td>Mistral</td>
- <td>Mixtral-8x7B-Instruct-v0_1</td>
+ <td>Mixtral-8x7B-Instruct-v0.1</td>
  <td>56</td>
  <td>aligned</td>
  <td>Apache 2.0</td>
  </tr>
  <tr>
  <td></td>
- <td>Mistral-7B-Instruct-v0_3</td>
+ <td>Mistral-7B-Instruct-v0.3</td>
  <td>7</td>
  <td>aligned</td>
  <td>Apache 2.0</td>
@@ -579,7 +540,7 @@ In the table below, we list the permissively licensed models that were used to g
  <td>FLOR_BSC</td>
  <td>Aitana_6_3B_BSC_Instructed</td>
  <td>6.3</td>
- <td>base</td>
+ <td>instructed</td>
  <td>Apache 2.0</td>
  </tr>
  <tr>
@@ -640,7 +601,7 @@ In the table below, we list the permissively licensed models that were used to g

  ### Gold-standard benchmarks

- Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from [SpanishBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/spanish_bench), [CatalanBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/catalan_bench), [BasqueBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/basque_bench) and [GalicianBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/galician_bench), as well as existing English tasks available in the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. The tables below report results for a representative selection of evaluation datasets, capturing models performance across a variety of tasks within these benchmarks.
+ Evaluation is done using the Language Model Evaluation Harness (Gao et al., 2024). We evaluate on a set of tasks taken from [SpanishBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/spanish_bench), [CatalanBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/catalan_bench), [BasqueBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/basque_bench) and [GalicianBench](https://github.com/EleutherAI/lm-evaluation-harness/tree/main/lm_eval/tasks/galician_bench), as well as existing English tasks available in the LM Evaluation Harness. These benchmarks include both new and existing tasks and datasets. The tables below report results for a representative selection of evaluation datasets, capturing the model's performance across a variety of tasks within these benchmarks.

  Only tasks that are human-generated, human-translated, or involve a strong human-in-the-loop process (i.e., machine translation followed by professional revision or machine generation followed by human revision and annotation) were used. This approach explains the variation in the number of tasks reported across languages. As additional high-quality tasks are published, we will update the evaluation results accordingly. We also plan to expand evaluation to other languages, provided that the datasets meet our quality standards.

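A minimal sketch of running one of the linked benchmark groups through the LM Evaluation Harness Python API is shown below (the checkpoint id and the `catalan_bench` group name are assumptions based on the links above).

```python
import lm_eval

# Evaluate the instructed checkpoint on an assumed CatalanBench task group.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=BSC-LT/ALIA-40b-instruct,dtype=bfloat16",
    tasks=["catalan_bench"],  # assumed group name; see the linked task folder
    batch_size=1,
)
print(results["results"])
```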
@@ -740,7 +701,7 @@ To assess the long-context capabilities of our model, we performed a "needle in
  - **Needle Phrase**: *"The best thing to do in San Francisco is eat a sandwich and sit in Dolores Park on a sunny day."*
  - **System Prompt:** “You are a helpful AI bot that answers questions for a user. Keep your response short and direct”
  - **Retrieval Question**: *"What is the best thing to do in San Francisco?"*
- - **Evaluator**: [prometheus-8x7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-8x7b-v2.0), used as the evaluation judge to determine wheter the model correctly retrieved and utilized the long-context information.
+ - **Evaluator**: [prometheus-8x7b-v2.0](https://huggingface.co/prometheus-eval/prometheus-8x7b-v2.0), used as the evaluation judge to determine whether the model correctly retrieved and utilized the long-context information.

  This test specifically targets the model’s ability to retain and access information across very long sequences, providing a benchmark for evaluating its extended-context reasoning and retrieval performance.

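A minimal sketch of how such a needle-in-a-haystack probe can be assembled is shown below (the filler text and insertion depth are illustrative, and the prometheus-8x7b-v2.0 judging step is omitted).

```python
NEEDLE = ("The best thing to do in San Francisco is eat a sandwich "
          "and sit in Dolores Park on a sunny day.")
SYSTEM = ("You are a helpful AI bot that answers questions for a user. "
          "Keep your response short and direct")
QUESTION = "What is the best thing to do in San Francisco?"

def build_niah_messages(haystack_paragraphs, depth=0.5):
    """Hide the needle at a relative depth inside filler text, then ask for it."""
    cut = int(len(haystack_paragraphs) * depth)
    context = "\n".join(
        haystack_paragraphs[:cut] + [NEEDLE] + haystack_paragraphs[cut:]
    )
    return [
        {"role": "system", "content": SYSTEM},
        {"role": "user", "content": f"{context}\n\n{QUESTION}"},
    ]

# Toy usage: a stand-in haystack with the needle placed a quarter of the way in.
filler = ["Filler paragraph."] * 1000
messages = build_niah_messages(filler, depth=0.25)
```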
@@ -795,7 +756,7 @@ This project has benefited from the contributions of numerous teams and institut

  We are especially grateful to our ILENIA project partners: CENID, HiTZ and CiTIUS for their participation. We also extend our genuine gratitude to the Spanish Senate and Congress, Fundación Dialnet, and the ‘Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)’ of the University of Las Palmas de Gran Canaria. Many other institutions have been involved in the project. Our thanks to Òmnium Cultural, Parlament de Catalunya, Institut d'Estudis Aranesos, Racó Català, Vilaweb, ACN, Nació Digital, El món and Aquí Berguedà. We thank the Welsh government, DFKI, Occiglot project, especially Malte Ostendorff, and The Common Crawl Foundation, especially Pedro Ortiz, for their collaboration.

- We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, specially to: Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipes Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.
+ We would also like to give special thanks to the NVIDIA team, with whom we have met regularly, especially to: Ignacio Sarasua, Adam Henryk Grzywaczewski, Oleg Sudakov, Sergio Perez, Miguel Martinez, Felipe Soares and Meriem Bendris. Their constant support has been especially appreciated throughout the entire process.

  Their valuable efforts have been instrumental in the development of this work.

 