From ddec6b4d38acd3445fb3f2e5bc0b0e116e3e05f2 Mon Sep 17 00:00:00 2001
From: groupuser
Date: Wed, 3 Jul 2024 23:11:32 +0000
Subject: [PATCH] Update README.md

---
 README.md | 15 ++++-----------
 1 file changed, 4 insertions(+), 11 deletions(-)

diff --git a/README.md b/README.md
index f5af30d..c8cf9b3 100644
--- a/README.md
+++ b/README.md
@@ -4,19 +4,12 @@ pipeline_tag: text-generation
 tags:
 - finetuned
 inference: true
+widget:
+- messages:
+  - role: user
+    content: What is your favorite condiment?
 ---
-# Model Card for Mistral-7B-Instruct-v0.2
-
-The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.
-
-Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1
-- 32k context window (vs 8k context in v0.1)
-- Rope-theta = 1e6
-- No Sliding-Window Attention
-
-For full details of this model please read our [paper](https://arxiv.org/abs/2310.06825) and [release blog post](https://mistral.ai/news/la-plateforme/).
-
 ## Instruction format

 In order to leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin of sentence id. The next instructions should not. The assistant generation will be ended by the end-of-sentence token id.
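
The instruction format described in the patch's closing context can be sketched as a string-level prompt builder. This is a minimal illustration under the rules stated there (one begin-of-sentence marker at the start, `[INST]`/`[/INST]` around each user turn, end-of-sentence after each assistant reply); `build_prompt` is a hypothetical helper, and real use would go through the model's tokenizer, which handles BOS/EOS as token ids rather than literal strings.

```python
# Sketch of the Mistral instruct prompt format (string-level only).
# "<s>" appears once at the very start; each user instruction is wrapped
# in [INST] ... [/INST]; each assistant reply is closed with "</s>".
def build_prompt(messages):
    prompt = "<s>"  # begin-of-sentence marker, first instruction only
    for message in messages:
        if message["role"] == "user":
            prompt += f"[INST] {message['content']} [/INST]"
        else:  # assistant turn, terminated by the end-of-sentence marker
            prompt += f" {message['content']}</s>"
    return prompt

print(build_prompt([
    {"role": "user", "content": "What is your favorite condiment?"},
]))
# → <s>[INST] What is your favorite condiment? [/INST]
```

Note how a multi-turn conversation interleaves the markers: subsequent `[INST]` blocks get no extra `<s>`, matching "The next instructions should not."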