This is a replica of https://huggingface.co/Open-Orca/Mistral-7B-OpenOrca, converted to the safetensors format.
## Prompt Template
For further training and inference, format prompts with OpenAI's Chat Markup Language (ChatML); the `<|im_start|>` and `<|im_end|>` tokens have been added to the tokenizer to support this.
This means that, e.g., in oobabooga the "MPT-Chat" instruction template should work, as it also uses ChatML.
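If you are assembling prompts by hand (outside a UI that already supports ChatML), a minimal sketch of the layout looks like this, matching the template output shown later in this section; the system message here is just an example:

```python
def chatml_prompt(system: str, user: str) -> str:
    # Build a single-turn ChatML prompt. Note: in this repo's template the
    # system turn's <|im_end|> appears on its own line, as shown below.
    # The trailing "<|im_start|>assistant\n" cues the model to respond.
    return (
        f"<|im_start|>system\n{system}\n<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

print(chatml_prompt("You are MistralOrca, a helpful assistant.", "How are you?"))
```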
This formatting is also available via a pre-defined Transformers chat template, which means that lists of messages can be formatted for you with the `apply_chat_template()` method:
```python
from transformers import AutoTokenizer

# Load the tokenizer (this repo is a safetensors replica of the original).
tokenizer = AutoTokenizer.from_pretrained("Open-Orca/Mistral-7B-OpenOrca")

chat = [
    {"role": "system", "content": "You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!"},
    {"role": "user", "content": "How are you?"},
    {"role": "assistant", "content": "I am doing well!"},
    {"role": "user", "content": "Please tell me about how mistral winds have attracted super-orcas."},
]
tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
which will yield:
```
<|im_start|>system
You are MistralOrca, a large language model trained by Alignment Lab AI. Write out your reasoning step-by-step to be sure you get the right answers!
<|im_end|>
<|im_start|>user
How are you?<|im_end|>
<|im_start|>assistant
I am doing well!<|im_end|>
<|im_start|>user
Please tell me about how mistral winds have attracted super-orcas.<|im_end|>
<|im_start|>assistant
```
If you use `tokenize=True` and `return_tensors="pt"` instead, you will get a tokenized and formatted conversation ready to pass to `model.generate()`.
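As a minimal end-to-end sketch (assuming enough GPU memory for fp16 weights and that the `accelerate` package is installed for `device_map="auto"`; swap in this repo's ID for the original as needed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Open-Orca/Mistral-7B-OpenOrca"  # assumption: replace with this repo's ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

chat = [
    {"role": "user", "content": "How are you?"},
]
# tokenize=True returns input IDs; add_generation_prompt appends the final
# "<|im_start|>assistant\n" so the model continues as the assistant.
input_ids = tokenizer.apply_chat_template(
    chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, dropping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```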
## Inference
See this notebook for inference details.
Note that you currently need a development snapshot of Transformers, as support for Mistral has not yet been released on PyPI:
```
pip install git+https://github.com/huggingface/transformers
```
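To verify that the snapshot is in place, a quick sanity check (the exact dev version string will vary):

```python
import transformers

# Mistral support landed in transformers 4.34; a git snapshot reports
# a dev version such as "4.35.0.dev0".
print(transformers.__version__)
from transformers import MistralForCausalLM  # raises ImportError on older releases
```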