Updated README.md to include the official repo.
README.md CHANGED
@@ -10,6 +10,10 @@ tags:
 - moe
 ---
 # Model Card for Mixtral-8x22B
+HuggingFace staff cloned this repo to an official new repo [mistral-community/Mixtral-8x22B-v0.1](https://huggingface.co/mistral-community/Mixtral-8x22B-v0.1), you can download from there if you want to. \
+Thanks HF staff for crediting me! \
+Also [here's a very owo music](https://www.youtube.com/watch?v=dGYYzLLuYfs)! owo...
+
 Converted to HuggingFace Transformers format using the script [here](https://huggingface.co/v2ray/Mixtral-8x22B-v0.1/blob/main/convert.py).
 
 The Mixtral-8x22B Large Language Model (LLM) is a pretrained generative Sparse Mixture of Experts.
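For convenience, here is a minimal sketch of loading the model from the official repo with the standard transformers Auto classes; the `torch_dtype`, `device_map`, and prompt values are illustrative assumptions, not part of the diff.

```python
# Minimal sketch: load Mixtral-8x22B-v0.1 from the official mistral-community repo.
# device_map="auto" requires the accelerate package; adjust dtype/device for your hardware.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistral-community/Mixtral-8x22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Quick generation check with an arbitrary prompt.
inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```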