diff --git a/README.md b/README.md
index 8ea750a..fb18818 100644
--- a/README.md
+++ b/README.md
@@ -74,6 +74,7 @@ With instruction-tuned models, you need to use chat templates to process our inp
 ```python
 from transformers import pipeline
+import torch

 pipe = pipeline("text-generation", model="google/gemma-3-1b-it", device="cuda", torch_dtype=torch.bfloat16)