From e1945c40cd546c78e41f1151f4db032b271faeaa Mon Sep 17 00:00:00 2001
From: Omar Sanseviero
Date: Wed, 29 May 2024 12:27:16 +0000
Subject: [PATCH] Update README.md (#75)

- Update README.md (129019f3f22ae389b7817877283f184dad062a9b)

Co-authored-by: Akshay L Aradhya
---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 523b6f8..5984295 100644
--- a/README.md
+++ b/README.md
@@ -447,7 +447,7 @@ For Hugging Face support, we recommend using transformers or TGI, but a similar
 
 **Overview** Llama 3 was pretrained on over 15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 10M human-annotated examples. Neither the pretraining nor the fine-tuning datasets include Meta user data.
 
-**Data Freshness** The pretraining data has a cutoff of March 2023 for the 7B and December 2023 for the 70B models respectively.
+**Data Freshness** The pretraining data has a cutoff of March 2023 for the 8B and December 2023 for the 70B models respectively.
 
 ## Benchmarks