---
language:
- en
- zh
- id
- th
- vi
- ms
- lo
datasets:
- cerebras/SlimPajama-627B
- Skywork/SkyPile-150B
- allenai/MADLAD-400
- cc100
tags:
- multilingual
- sea
- sailor
license: apache-2.0
base_model: Qwen/Qwen1.5-14B
inference: false
---

<div align="center">
<img src="banner_sailor.jpg" width="700"/>
</div>

Sailor is a suite of open language models tailored for South-East Asia (SEA), focusing on languages such as 🇮🇩Indonesian, 🇹🇭Thai, 🇻🇳Vietnamese, 🇲🇾Malay, and 🇱🇦Lao.
Developed with careful data curation, Sailor models are designed to understand and generate text across the diverse linguistic landscape of the SEA region.
Built from [Qwen 1.5](https://huggingface.co/collections/Qwen/qwen15-65c0a2f577b1ecb76d786524), Sailor encompasses models of varying sizes, spanning from 0.5B to 14B, to suit different requirements.
We further fine-tune the base models on open-source datasets to obtain instruction-tuned models, named Sailor-Chat.
Benchmarking results demonstrate Sailor's proficiency in question answering, commonsense reasoning, and other tasks in SEA languages.

> The logo was generated by MidJourney

## Model Summary
- **Model Collections:** [Base Model & Chat Model](https://huggingface.co/collections/sail/sailor-65e19a749f978976f1959825)
- **Project Website:** [sea-sailor.github.io/blog/sailor1/](https://sea-sailor.github.io/blog/sailor1/)
- **Codebase:** [github.com/sail-sg/sailor-llm](https://github.com/sail-sg/sailor-llm)
- **Technical Report:** [arxiv.org/pdf/2404.03608.pdf](https://arxiv.org/pdf/2404.03608.pdf)

## Training details
Sailor is crafted by continually pre-training from language models such as Qwen 1.5, which already performs well on SEA languages.
The pre-training corpus heavily leverages publicly available corpora, including
[SlimPajama](https://huggingface.co/datasets/cerebras/SlimPajama-627B),
[SkyPile](https://huggingface.co/datasets/Skywork/SkyPile-150B),
[CC100](https://huggingface.co/datasets/cc100) and [MADLAD-400](https://huggingface.co/datasets/allenai/MADLAD-400).

By employing aggressive data deduplication and careful data cleaning on the collected corpus, we obtained a high-quality dataset spanning various languages.
Through systematic experiments to determine the sampling weights of different languages, Sailor models are trained on 200B to 400B tokens, depending on model size.
This approach boosts their performance on SEA languages while maintaining proficiency in English and Chinese without significant compromise.
Specifically, we continually pre-train Qwen1.5-0.5B on 400 billion tokens and the other models on 200 billion tokens to obtain the Sailor models.
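
The per-language sampling weights determine how often each corpus contributes tokens during continual pre-training. As a rough illustration of the idea (not the actual Sailor pipeline, which lives in the [sailor-llm](https://github.com/sail-sg/sailor-llm) codebase, and with placeholder weights rather than the tuned ones), 🤗 `datasets` can interleave streaming corpora by probability:

```python
# Illustrative sketch only: placeholder sources and weights, not the
# tuned language mixture from the Sailor technical report.
from datasets import load_dataset, interleave_datasets

# Stream the corpora to avoid full downloads; keep only the text column so
# the schemas match. (Script-based datasets such as cc100 may additionally
# need trust_remote_code=True depending on your `datasets` version.)
english = load_dataset(
    "cerebras/SlimPajama-627B", split="train", streaming=True
).select_columns(["text"])
indonesian = load_dataset(
    "cc100", lang="id", split="train", streaming=True
).select_columns(["text"])

# Draw from each source with a fixed probability per sample.
mixed = interleave_datasets(
    [english, indonesian],
    probabilities=[0.6, 0.4],  # placeholder weights
    seed=42,
)

for sample in mixed.take(3):
    print(sample["text"][:100])
```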

## Requirements
The code for Sailor is included in the latest Hugging Face `transformers`, and we advise you to install `transformers>=4.37.0`.
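
A quick, optional sanity check (a minimal sketch; `packaging` ships as a dependency of `transformers`):

```python
# Verify the installed transformers version meets the minimum requirement.
from packaging import version
import transformers

assert version.parse(transformers.__version__) >= version.parse("4.37.0"), (
    f"transformers {transformers.__version__} is too old; "
    "run `pip install -U 'transformers>=4.37.0'`"
)
```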

## Quickstart

The following code snippet shows how to load the tokenizer and model, and how to generate text.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("sail/Sailor-14B", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("sail/Sailor-14B")

input_message = "Model bahasa adalah model probabilistik"
### The given Indonesian input translates to 'A language model is a probabilistic model'.

model_inputs = tokenizer([input_message], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=64
)

# Strip the prompt tokens so only the newly generated continuation is decoded.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
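
Since Sailor-14B is a base model, greedy decoding can get repetitive; sampled decoding often reads better. A minimal variation on the snippet above (the hyperparameter values are illustrative, not tuned recommendations):

```python
# Continuing from the snippet above: `model`, `tokenizer`, and
# `model_inputs` are already defined. The values below are illustrative.
generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=64,
    do_sample=True,         # sample instead of greedy decoding
    temperature=0.7,        # soften the token distribution
    top_p=0.9,              # nucleus sampling
    repetition_penalty=1.1, # discourage loops, common with base models
)

generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0])
```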

## License

Sailor is distributed under the terms of the Apache License 2.0.
There are no restrictions on research or commercial use, but usage should comply with the [Qwen License](https://huggingface.co/Qwen/Qwen1.5-1.8B/blob/main/LICENSE).

## Citation

If you find Sailor useful, please cite our work as follows:

```
@inproceedings{dou-etal-2024-sailor,
  title = "Sailor: Open Language Models for South-{E}ast {A}sia",
  author = "Dou, Longxu and Liu, Qian and Zeng, Guangtao and Guo, Jia and Zhou, Jiahui and Mao, Xin and Jin, Ziqi and Lu, Wei and Lin, Min",
  booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
  year = "2024",
}
```

## Contact Us

If you have any questions, please raise an issue or contact us at [doulx@sea.com](mailto:doulx@sea.com) or [liuqian.sea@gmail.com](mailto:liuqian.sea@gmail.com).