diff --git a/README.md b/README.md
index 8409dfd..061f838 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@ Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://
 
 ## Installation
 
-It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference)
+It is recommended to use `mistralai/Mistral-7B-Instruct-v0.3` with [mistral-inference](https://github.com/mistralai/mistral-inference). For HF transformers code snippets, please keep scrolling.
 
 ```
 pip install mistral_inference
@@ -115,6 +115,21 @@ result = tokenizer.instruct_tokenizer.tokenizer.decode(out_tokens[0])
 print(result)
 ```
 
+## Generate with `transformers`
+
+If you want to use Hugging Face `transformers` to generate text, you can do something like this.
+
+```py
+from transformers import pipeline
+
+messages = [
+    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
+    {"role": "user", "content": "Who are you?"},
+]
+chatbot = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.3")
+chatbot(messages)
+```
+
 ## Limitations
 
 The Mistral 7B Instruct model is a quick demonstration that the base model can be easily fine-tuned to achieve compelling performance.