Update README.md
commit c9231f629c (parent 3db5329733)
@@ -279,7 +279,7 @@ See the snippet below for usage with Transformers:
 import transformers
 import torch
 
-model_id = "meta-llama/Meta-Llama-3-70B-Instruct"
+model_id = "meta-llama/Meta-Llama-3-8B-Instruct"
 
 pipeline = transformers.pipeline(
     "text-generation",
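The hunk above only shows the top of the Transformers snippet. As a reading aid, here is a minimal sketch of the full 8B usage it belongs to, assuming the standard chat-template flow implied by the `print(outputs[0]["generated_text"][len(prompt):])` context line in the next hunk; `torch_dtype`, `device_map`, and the sampling settings are illustrative and not taken from this diff.

```python
import transformers
import torch

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},  # illustrative dtype choice
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who are you?"},
]

# Build the prompt with the model's chat template so the slice below
# can strip the prompt from the generated text.
prompt = pipeline.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Llama 3 uses <|eot_id|> as an end-of-turn marker in addition to eos.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```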
@@ -317,12 +317,12 @@ print(outputs[0]["generated_text"][len(prompt):])
 
 ### Use with `llama3`
 
-Please, follow the instructions in the repository
+Please, follow the instructions in the [repository](https://github.com/meta-llama/llama3)
 
 To download Original checkpoints, see the example command below leveraging `huggingface-cli`:
 
 ```
-huggingface-cli download meta-llama/Meta-Llama-3-70B-Instruct --include "original/*" --local-dir Meta-Llama-3-70B-Instruct
+huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
 ```
 
 For Hugging Face support, we recommend using transformers or TGI, but a similar command works.
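For completeness, the same download can also be driven from Python with `huggingface_hub.snapshot_download`, the library behind `huggingface-cli`. This is a hedged sketch that mirrors the CLI flags in the hunk above; it is not part of the README being edited.

```python
from huggingface_hub import snapshot_download

# Mirrors: huggingface-cli download meta-llama/Meta-Llama-3-8B-Instruct \
#              --include "original/*" --local-dir Meta-Llama-3-8B-Instruct
snapshot_download(
    repo_id="meta-llama/Meta-Llama-3-8B-Instruct",
    allow_patterns=["original/*"],         # only the original checkpoint files
    local_dir="Meta-Llama-3-8B-Instruct",  # download target directory
)
```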