
---
license: apache-2.0
pipeline_tag: text-generation
language:
- en
- ko
- cn
- jp
library_name: TensorFlow
tags:
- finetuned
- tag_test
- test1
- test2
- test3
inference: true
---

# Model Card for Mistral-7B-Instruct-v0.2

The Mistral-7B-Instruct-v0.2 Large Language Model (LLM) is an instruct fine-tuned version of the Mistral-7B-v0.2.

Mistral-7B-v0.2 has the following changes compared to Mistral-7B-v0.1:

- 32k context window (vs 8k context in v0.1)
- Rope-theta = 1e6
- No Sliding-Window Attention

For full details of this model, please read our paper and release blog post.

## Instruction format

To leverage instruction fine-tuning, your prompt should be surrounded by `[INST]` and `[/INST]` tokens. The very first instruction should begin with a begin-of-sentence (BOS) token id; subsequent instructions should not. The assistant generation will be ended by the end-of-sentence (EOS) token id.
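The format above can be sketched as a small helper that assembles a multi-turn prompt string. Note this is an illustrative sketch: the function name `build_prompt` is ours, and the `<s>`/`</s>` markers stand in for the BOS/EOS token ids, which a real tokenizer inserts as ids rather than literal text.

```python
def build_prompt(messages, bos="<s>", eos="</s>"):
    """Assemble a Mistral-style instruct prompt from (role, text) pairs.

    Roles must alternate starting with "user". Only the very first
    instruction is preceded by the BOS marker; each assistant turn is
    closed by the EOS marker.
    """
    prompt = bos  # BOS appears exactly once, before the first [INST]
    for role, text in messages:
        if role == "user":
            prompt += f"[INST] {text} [/INST]"
        else:  # assistant turn, terminated by EOS
            prompt += f" {text}{eos}"
    return prompt
```

For example, a two-turn exchange followed by a new user question would produce `<s>[INST] Hello [/INST] Hi there</s>[INST] How are you? [/INST]`, ready for the model to continue with the next assistant reply.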