Update README.md
commit 16f78806b1 (parent 33923f685b)
@@ -108,7 +108,7 @@ The dataset is comprised of a filtered mixture of open-source large-scale datasets

 ### Training Procedure

-The model is pre-trained on the aforementioned datasets in `bfloat16` precision, optimized with AdamW, and trained using the Arcade100k tokenizer with a vocabulary size of 100,352. We outline the complete hyperparameter choices in the project's [GitHub repository - config*](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-2-1.6b.yml). The final checkpoint of pre-training, before cooldown, is provided in the `global_step420000` [branch](https://huggingface.co/stabilityai/stablelm-2-1_6b/blob/global_step420000/README.md).
+The model is pre-trained on the aforementioned datasets in `bfloat16` precision, optimized with AdamW, and trained using the Arcade100k tokenizer with a vocabulary size of 100,352. We outline the complete hyperparameter choices in the project's [GitHub repository - config*](https://github.com/Stability-AI/StableLM/blob/main/configs/stablelm-2-1_6b.yml). The final checkpoint of pre-training, before cooldown, is provided in the `global_step420000` [branch](https://huggingface.co/stabilityai/stablelm-2-1_6b/blob/global_step420000/README.md).

 ### Training Infrastructure

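For context on the paragraph being edited, here is a minimal, hypothetical sketch of loading this checkpoint with the Hugging Face `transformers` Auto classes in the `bfloat16` precision the model card describes. The vocabulary-size check and the `global_step420000` revision come from the text above; whether `trust_remote_code=True` is required depends on your `transformers` version, so treat this as an illustration under those assumptions, not the model card's official usage snippet.

```python
# Sketch: load stablelm-2-1_6b as described in the README paragraph above.
# Assumption: the Auto classes can resolve this repo directly; older
# transformers releases may additionally require trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "stabilityai/stablelm-2-1_6b"

# The Arcade100k tokenizer ships with the model repository.
tokenizer = AutoTokenizer.from_pretrained(repo_id)
print(tokenizer.vocab_size)  # expected to be near 100,352 (added tokens may be counted separately)

# Load weights in the same bfloat16 precision used for pre-training.
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)

# To inspect the pre-cooldown checkpoint mentioned above, point `revision`
# at the `global_step420000` branch instead of `main`:
# model = AutoModelForCausalLM.from_pretrained(
#     repo_id, revision="global_step420000", torch_dtype=torch.bfloat16
# )
```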