Update README.md
README.md
@@ -14,6 +14,6 @@ We're behind the:
 - [Open LLM Leaderboard](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/) (over 11K models evaluated since 2023)
 - [lighteval](https://github.com/huggingface/lighteval) LLM evaluation suite, fast and filled with the SOTA benchmarks you might want
 - [evaluation guidebook](https://github.com/huggingface/evaluation-guidebook), your reference for LLM evals
-- [leaderboards on the hub](https://huggingface.co/blog?tag=leaderboard) initiative, to encourage people to build more leaderboards in the open for more reproducible evaluation. You'll find some doc [here](https://huggingface.co/docs/leaderboards/index) to build your own!
+- [leaderboards on the hub](https://huggingface.co/blog?tag=leaderboard) initiative, to encourage people to build more leaderboards in the open for more reproducible evaluation. You'll find some docs [here](https://huggingface.co/docs/leaderboards/index) to build your own, and you can look for the best leaderboard for your use case [here](https://huggingface.co/spaces/OpenEvals/find-a-leaderboard)!

 We're not behind the [evaluate metrics guide](https://huggingface.co/evaluate-metric), but if you want to understand metrics better we really recommend checking it out!
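If you'd rather search programmatically than browse the find-a-leaderboard Space, here's a minimal sketch using `huggingface_hub`'s `list_spaces` (assuming a recent `huggingface_hub` release; this is a plain keyword search over public Spaces, not the Space's own curation logic):

```python
from huggingface_hub import list_spaces

# Keyword-search public Spaces on the Hub for leaderboards.
# Assumes `pip install huggingface_hub`; prints the first 10 matches.
for space in list_spaces(search="leaderboard", limit=10):
    print(space.id)
```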