From 524aad8e8f5a1f2361d9b0065dd98b0b0c57aff6 Mon Sep 17 00:00:00 2001
From: Niels Horn
Date: Tue, 27 Feb 2024 21:45:50 +0000
Subject: [PATCH] Update README.md

---
 README.md | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index f6cba97..a0a97aa 100644
--- a/README.md
+++ b/README.md
@@ -20,11 +20,9 @@ model-index:
       value: 0.5792084706530948
 ---
 
-
-
-
 # mistral-1L-tiny
 
+A tiny single-layer Mistral model with 35.1M parameters, a hidden size of 512, and an MLP intermediate size of 1024.
 This model is trained on the roneneldan/TinyStories dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.6868
@@ -32,18 +30,17 @@ It achieves the following results on the evaluation set:
 
 ## Model description
 
-More information needed
+This work is inspired by the 21M-parameter one-layer GPT-Neo from the [TinyStories paper](https://arxiv.org/abs/2305.07759).
+Its results are reproduced here to acquire high-frequency checkpoints for further analysis.
 
 ## Intended uses & limitations
 
-More information needed
-
-## Training and evaluation data
-
-More information needed
+Intended for analysis of feature dynamics and emergence in real-world language models.
 
 ## Training procedure
 
+The model was trained for 90,171 steps, corresponding to ~2 hours on a single H100.
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -57,7 +54,7 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-
+The model generates quite consistent English text.
 
 ### Framework versions
 
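For reference, the architecture described in the new README text can be sanity-checked against the quoted 35.1M parameter count. Below is a minimal sketch using `transformers`; the patch only states the layer count, hidden size, and MLP intermediate size, so the vocabulary size and attention head counts here are assumptions chosen to land near the quoted total, not values taken from the card.

```python
from transformers import MistralConfig, MistralForCausalLM

# Values stated in the card: 1 layer, hidden size 512, MLP intermediate size 1024.
# vocab_size, num_attention_heads, and num_key_value_heads are NOT stated in the
# patch; they are assumptions for illustration only.
config = MistralConfig(
    hidden_size=512,
    intermediate_size=1024,
    num_hidden_layers=1,
    num_attention_heads=8,
    num_key_value_heads=4,
    vocab_size=32000,
)

model = MistralForCausalLM(config)
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")  # ~35.1M under these assumptions
```

With untied input and output embeddings (the `transformers` default for Mistral), the two 32000x512 embedding matrices account for roughly 32.8M of that total, leaving about 2.4M parameters for the single transformer block itself.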