From 83e9aa141f2e28c82232fea5325f54edf17c43de Mon Sep 17 00:00:00 2001
From: Patrick von Platen
Date: Wed, 22 May 2024 16:34:42 +0000
Subject: [PATCH] Add minor reference to transformers (#7)

- Add minor reference to transformers (6d09a724d62369e03273260e91471f90885a3626)
- Update README.md (28ef5dc97e80c593a11786c42a287da995ff6c87)

Co-authored-by: Omar Sanseviero
---
 README.md | 17 ++++++++++++++++-
 1 file changed, 16 insertions(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 8409dfd..061f838 100644
--- a/README.md
+++ b/README.md
@@ -13,7 +13,7 @@ Mistral-7B-v0.3 has the following changes compared to [Mistral-7B-v0.2](https://
 ## Installation