---
license: apache-2.0
pipeline_tag: text-generation
inference: true
---
# Model Card for Mistral-7B-Instruct-v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-7B-v0.2.

Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1:
- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention
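The changes above can be summarized as a small config sketch. The key names below (`max_position_embeddings`, `rope_theta`, `sliding_window`) follow the field names used by the `transformers` `MistralConfig`; treat this as an illustration of the listed differences, not the authoritative config file.

```python
# Sketch of the v0.2 architectural changes, using transformers-style
# MistralConfig field names (an assumption, for illustration only).
mistral_v0_2_changes = {
    "max_position_embeddings": 32768,  # 32k context window (vs 8k in v0.1)
    "rope_theta": 1e6,                 # RoPE base frequency raised to 1e6
    "sliding_window": None,            # sliding-window attention removed
}
```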
For full details of this model, please read our paper and release blog post.
## Instruction format
To leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence token id.
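The format described above can be sketched as a small helper that assembles a multi-turn prompt. The function name, the `(role, text)` message shape, and the example messages are illustrative assumptions; only the token placement follows the description in this section.

```python
def build_prompt(messages, bos="<s>", eos="</s>"):
    """Assemble a Mistral-instruct-style prompt from alternating
    (role, text) turns. Hypothetical helper illustrating the format:
    BOS appears once at the very start, each user turn is wrapped in
    [INST]...[/INST], and each assistant turn ends with EOS."""
    prompt = bos  # begin-of-sentence token only before the first instruction
    for role, text in messages:
        if role == "user":
            prompt += f"[INST] {text} [/INST]"
        else:  # assistant turn, terminated by the end-of-sentence token
            prompt += f" {text}{eos}"
    return prompt

print(build_prompt([
    ("user", "What is your favourite condiment?"),
    ("assistant", "I'm partial to fresh lemon juice."),
    ("user", "Do you have mayonnaise recipes?"),
]))
# → <s>[INST] What is your favourite condiment? [/INST] I'm partial to fresh lemon juice.</s>[INST] Do you have mayonnaise recipes? [/INST]
```

In practice, `tokenizer.apply_chat_template` from the `transformers` library produces this format for you; the sketch only makes the token placement explicit.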