Update README.md
README.md
CHANGED
@@ -1,10 +1,17 @@
 ---
 title: README
-emoji:
+emoji: ⚖️
 colorFrom: blue
 colorTo: indigo
 sdk: static
 pinned: false
 ---
 
-
+Hi! Welcome to the org page of the Evaluation team at HuggingFace.
+We want to support the community in building and sharing quality evaluations, for reproducible and fair model comparisons, to cut through the hype of releases and better understand actual model capabilities.
+
+We're behind:
+- the [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) (over 11K models evaluated since 2023)
+- the [lighteval](https://github.com/huggingface/lighteval) LLM evaluation suite, fast and filled with the SOTA benchmarks you might want
+- the [evaluation guidebook](https://github.com/huggingface/evaluation-guidebook), your reference for LLM evals
+- the [leaderboards on the hub](https://huggingface.co/blog?tag=leaderboard) initiative, to encourage people to build more leaderboards in the open for more reproducible evaluation. You'll find some documentation [here](https://huggingface.co/docs/leaderboards/index) to build your own!