Update src/about.py
src/about.py (+2 -2)
@@ -50,8 +50,8 @@ Per tutti questi task, a un punteggio migliore corrisponde una performance maggiore

 ## Reproducibility
 Per riprodurre i risultati scaricate la <a href="https://github.com/EleutherAI/lm-evaluation-harness" target="_blank"> Eleuther AI Language Model Evaluation Harness </a> ed eseguite:
-* lm-eval --model hf --model_args pretrained=<vostro modello> --tasks hellaswag_it,arc_it --device cuda:0 --batch_size
-* lm-eval --model hf --model_args pretrained=<vostro modello>, --tasks m_mmlu_it --num_fewshot 5 --device cuda:0 --batch_size
+* lm-eval --model hf --model_args pretrained=<vostro modello> --tasks hellaswag_it,arc_it --device cuda:0 --batch_size 1;
+* lm-eval --model hf --model_args pretrained=<vostro modello>, --tasks m_mmlu_it --num_fewshot 5 --device cuda:0 --batch_size 1
 """

 EVALUATION_QUEUE_TEXT = """
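In English, the updated Reproducibility section tells users to download the Eleuther AI Language Model Evaluation Harness and run the two commands above to reproduce the leaderboard scores; the commit fills in the previously missing `--batch_size` value. Below is a minimal sketch of the corrected invocations with the `<vostro modello>` ("your model") placeholder filled in; the model ID `my-org/my-italian-model` is purely illustrative and not part of the original text.

```sh
# Reproduce the Italian benchmark scores with lm-evaluation-harness.
# The model ID is a hypothetical stand-in for <vostro modello>.
lm-eval --model hf \
  --model_args pretrained=my-org/my-italian-model \
  --tasks hellaswag_it,arc_it \
  --device cuda:0 --batch_size 1

lm-eval --model hf \
  --model_args pretrained=my-org/my-italian-model \
  --tasks m_mmlu_it --num_fewshot 5 \
  --device cuda:0 --batch_size 1
```

Both runs assume a single CUDA device; recent releases of the harness also accept `--batch_size auto`, but the commands above mirror the fixed batch size used in about.py.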