Add transformers usage (#7)
- Add transformers usage (d8ea9c90c084a1f46d532ac6238e5d1df177a78e)
- update README with examples (89d04b32c4f6f5abefa48e973a95a85ad9ce73ad)
- update with bark-small reference (ee9b2d6e28d1b5d5e7a150bf7bf70fae0dfdb05d)
- update Bark.generate_speech -> generate (0fb30c75ea4361fa4520550405ed6243360331f5)

Co-authored-by: Yoach Lacombe <ylacombe@users.noreply.huggingface.co>
This model is meant for research purposes only. The model output is not censored and the authors do not endorse the opinions in the generated content. Use at your own risk.

The following is additional information about the models released here.

Two checkpoints are released:

- [small](https://huggingface.co/suno/bark-small)
- [**large** (this checkpoint)](https://huggingface.co/suno/bark)
## Example

Try out Bark yourself!

* Bark Colab:

  <a target="_blank" href="https://colab.research.google.com/drive/1eJfA2XUa-mXwdMy7DoYKVYHI1iTd9Vkt?usp=sharing">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
  </a>

* Hugging Face Colab:

  <a target="_blank" href="https://colab.research.google.com/drive/1dWWkZzvu7L9Bunq9zvD-W02RFUXoW-Pd?usp=sharing">
    <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/>
  </a>

* Hugging Face Demo:

  <a target="_blank" href="https://huggingface.co/spaces/suno/bark">
    <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HuggingFace"/>
  </a>
## 🤗 Transformers Usage

You can run Bark locally with the 🤗 Transformers library from version 4.31.0 onwards.

1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main:

```
pip install git+https://github.com/huggingface/transformers.git
```

2. Run the following Python code to generate speech samples:

```python
from transformers import AutoProcessor, AutoModel

processor = AutoProcessor.from_pretrained("suno/bark-small")
model = AutoModel.from_pretrained("suno/bark-small")

inputs = processor(
    text=["Hello, my name is Suno. And, uh — and I like pizza. [laughs] But I also have other interests such as playing tic tac toe."],
    return_tensors="pt",
)

speech_values = model.generate(**inputs, do_sample=True)
```
3. Listen to the speech samples either in an ipynb notebook:

```python
from IPython.display import Audio

sampling_rate = model.generation_config.sample_rate
Audio(speech_values.cpu().numpy().squeeze(), rate=sampling_rate)
```

Or save them as a `.wav` file using a third-party library, e.g. `scipy`:

```python
import scipy

sampling_rate = model.generation_config.sample_rate
scipy.io.wavfile.write("bark_out.wav", rate=sampling_rate, data=speech_values.cpu().numpy().squeeze())
```
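Note that `scipy` writes a `float32` array as a 32-bit float WAV, which some players do not handle; converting to 16-bit PCM first is a common workaround. A minimal sketch, assuming the generated waveform is a float array roughly in `[-1, 1]` (the array below is a silent placeholder standing in for `speech_values.cpu().numpy().squeeze()`, not real Bark output):

```python
import numpy as np
from scipy.io import wavfile

def to_int16(waveform):
    """Clip a float waveform to [-1, 1] and scale it to 16-bit PCM."""
    clipped = np.clip(waveform, -1.0, 1.0)
    return (clipped * 32767.0).astype(np.int16)

# placeholder for the generated waveform: one second of silence at 24 kHz
sample_rate = 24_000
speech = np.zeros(sample_rate, dtype=np.float32)

wavfile.write("bark_out_16bit.wav", rate=sample_rate, data=to_int16(speech))
```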
For more details on using the Bark model for inference using the 🤗 Transformers library, refer to the [Bark docs](https://huggingface.co/docs/transformers/model_doc/bark).
## Suno Usage

You can also run Bark locally through the original [Bark library](https://github.com/suno-ai/bark):
1. First install the [`bark` library](https://github.com/suno-ai/bark).

2. Run the following Python code:
```python
from bark import SAMPLE_RATE, generate_audio, preload_models
from IPython.display import Audio

# download and load all models
preload_models()

# generate audio from text
text_prompt = """
    Hello, my name is Suno. And, uh — and I like pizza. [laughs]
    But I also have other interests such as playing tic tac toe.
"""
speech_array = generate_audio(text_prompt)

# play text in notebook
Audio(speech_array, rate=SAMPLE_RATE)
```
[pizza.webm](https://user-images.githubusercontent.com/5068315/230490503-417e688d-5115-4eee-9550-b46a2b465ee3.webm)
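The generated array can also be saved to disk with scipy's `write_wav`, as the README shows elsewhere. A minimal runnable sketch, with a silent placeholder array standing in for the output of `generate_audio(text_prompt)`:

```python
import numpy as np
from scipy.io.wavfile import write as write_wav

SAMPLE_RATE = 24_000  # Bark's output sample rate (bark.SAMPLE_RATE)

# placeholder standing in for the array returned by generate_audio(text_prompt)
speech_array = np.zeros(SAMPLE_RATE // 2, dtype=np.float32)

write_wav("bark_generation.wav", SAMPLE_RATE, speech_array)
```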
## Model Details

Bark is a series of three transformer models that turn text into audio.
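The three stages can be pictured as a simple composition, where each stage's tokens feed the next. The functions below are purely illustrative placeholders (the real stages are transformer models), just to show the shape of the pipeline:

```python
# Purely illustrative placeholders for Bark's three stages;
# the real stages are transformer models, not these toy functions.

def text_to_semantic(text):
    # stage 1: text -> "semantic" tokens (fake tokens derived from characters)
    return [ord(c) % 1000 for c in text]

def semantic_to_coarse(semantic_tokens):
    # stage 2: semantic tokens -> coarse audio tokens
    return [t % 100 for t in semantic_tokens]

def coarse_to_fine(coarse_tokens):
    # stage 3: coarse tokens -> fine audio tokens
    return [t % 10 for t in coarse_tokens]

audio_tokens = coarse_to_fine(semantic_to_coarse(text_to_semantic("hello")))
```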
### Text to semantic tokens
While we hope that this release will enable users to express their creativity and build applications that are a force for good, we acknowledge that any text to audio model has the potential for dual use. While it is not straightforward to voice clone known people with Bark, it can still be used for nefarious purposes. To further reduce the chances of unintended use of Bark, we also release a simple classifier to detect Bark-generated audio with high accuracy (see notebooks section of the main repository).