---
library_name: transformers
pipeline_tag: text-generation
---
This repository is an early access checkpoint for Llama 3.2 1B Instruct.

This repo contains two versions of the model: one for use with `transformers`, and one for use with the original `llama3` codebase (under the `original` directory).
## Use with transformers

Here is an example of simple usage with `transformers`:
```python
import torch
from transformers import pipeline

model_id = "nltpt/Llama-3.2-1B-Instruct"
pipe = pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
messages = [
    {"role": "user", "content": "What is the capital of France?"},
]
# Use max_new_tokens rather than max_length: max_length counts the
# prompt tokens too, so a chat-templated prompt can exhaust it before
# any new text is generated.
outputs = pipe(messages, max_new_tokens=50)
print(outputs[0]["generated_text"][-1])
```
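Under the hood, the pipeline renders the `messages` list into a prompt string using the model's chat template. As a rough illustration only, a minimal sketch of the Llama 3 instruct format might look like the following; the special tokens here are assumptions, and the authoritative template is the one shipped in this repo's `tokenizer_config.json`:

```python
# Minimal sketch of the Llama 3 instruct chat format (assumed special
# tokens; the real template lives in tokenizer_config.json).
def to_llama3_prompt(messages):
    prompt = "<|begin_of_text|>"
    for msg in messages:
        prompt += f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n"
        prompt += msg["content"] + "<|eot_id|>"
    # Open the assistant header to cue the model to generate its reply.
    prompt += "<|start_header_id|>assistant<|end_header_id|>\n\n"
    return prompt

messages = [{"role": "user", "content": "What is the capital of France?"}]
print(to_llama3_prompt(messages))
```

In practice you should not build prompts by hand; call `tokenizer.apply_chat_template(messages)` so the exact template bundled with the checkpoint is used.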
## Use with llama3

Please follow the instructions in the `llama3` repository. To download the original checkpoints from the Hub, see this example command using `huggingface-cli`:
```shell
huggingface-cli download nltpt/Llama-3.2-1B-Instruct --include "original/*" --local-dir Llama-3.2-1B-Instruct
```