Update README.md
Add pipeline usage instructions
This commit is contained in:
parent b7fbc6d46a
commit 6b63048562

README.md: 26 additions

@@ -24,6 +24,32 @@ high quality instruction following behavior not characteristic of the foundation
[EleutherAI’s](https://www.eleuther.ai/) [Pythia-12b](https://huggingface.co/EleutherAI/pythia-12b) and fine-tuned
on a ~15K record instruction corpus generated by Databricks employees and released under a permissive license (CC-BY-SA)

## Usage

To use the model with the `transformers` library on a machine with GPUs:

```
from transformers import pipeline

instruct_pipeline = pipeline(model="databricks/dolly-v2-12b", trust_remote_code=True, device_map="auto")
```

You can then use the pipeline to answer instructions:

```
instruct_pipeline("Explain to me the difference between nuclear fission and fusion.")
```
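
The custom pipeline code pulled in via `trust_remote_code=True` wraps your instruction in the prompt template the model was fine-tuned on before generating. If you want to drive the model without that helper, you can build the prompt yourself. The sketch below is illustrative only: the exact template strings and the `build_prompt` helper are assumptions about the Dolly fine-tuning format, not part of the `transformers` API.

```python
# Sketch of an instruction prompt template; the strings below are
# assumptions about the format used during Dolly fine-tuning.
INTRO = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request."
)
INSTRUCTION_KEY = "### Instruction:"
RESPONSE_KEY = "### Response:"

def build_prompt(instruction: str) -> str:
    # Assemble the full prompt; the model's answer is generated
    # as a continuation after RESPONSE_KEY.
    return f"{INTRO}\n\n{INSTRUCTION_KEY}\n{instruction}\n\n{RESPONSE_KEY}\n"

prompt = build_prompt("Explain to me the difference between nuclear fission and fusion.")
print(prompt)
```

A string built this way can be fed to a plain `text-generation` pipeline in place of the bare instruction.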

To reduce memory usage you can load the model with `bfloat16`:

```
import torch
from transformers import pipeline
instruct_pipeline = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")
```
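
To see why `bfloat16` helps, here is a back-of-the-envelope estimate of the memory needed for the weights alone (activations and the attention cache add more on top; the ~12B parameter count is taken from the model name):

```python
# Rough memory needed just to hold the weights of a ~12B-parameter model.
n_params = 12_000_000_000

gb_fp32 = n_params * 4 / 1e9  # float32: 4 bytes per parameter
gb_bf16 = n_params * 2 / 1e9  # bfloat16: 2 bytes per parameter

print(f"float32 weights:  ~{gb_fp32:.0f} GB")   # ~48 GB
print(f"bfloat16 weights: ~{gb_bf16:.0f} GB")   # ~24 GB
```

Halving the per-parameter storage is what lets the model fit on substantially smaller GPU configurations.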
## Known Limitations
### Performance Limitations