# Chatterbox TTS
- License: MIT
- GitHub Repo: https://github.com/resemble-ai/chatterbox
- Demo Page
## Introduction
We're excited to introduce Chatterbox, Resemble AI's first production-grade open-source TTS model. Licensed under MIT, Chatterbox has been benchmarked against leading closed-source systems like ElevenLabs and is consistently preferred in side-by-side evaluations. Whether you're working on memes, videos, games, or AI agents, Chatterbox brings your content to life. It's also the first open-source TTS model to support emotion exaggeration control, a powerful feature that makes voices stand out. Try it now on our Hugging Face Gradio app!
## Key Details
- State-of-the-art zero-shot TTS
- 0.5B Llama backbone
- Unique exaggeration/intensity control
- Ultra-stable with alignment-informed inference
- Trained on 0.5M hours of cleaned data
- Watermarked outputs
- Easy voice conversion script
- Outperforms ElevenLabs
## Tips
- General Use: Default settings (`exaggeration=0.5`, `cfg_weight=0.5`) work well. If the reference speaker has a fast speaking style, lowering `cfg_weight` to around `0.3` can improve pacing.
- Expressive or Dramatic Speech: Use lower `cfg_weight` values (e.g. ~`0.3`) and increase `exaggeration` to around `0.7` or higher.
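The tips above can be captured in a small helper. This is a hypothetical convenience function, not part of the chatterbox API; only the parameter names (`exaggeration`, `cfg_weight`) and values come from the tips.

```python
# Hypothetical helper encoding the tips above: pick generation settings
# based on the desired delivery style and the reference speaker's pace.
# The function itself is illustrative and not part of chatterbox.

def suggest_settings(style="general", fast_reference_speaker=False):
    """Return suggested exaggeration/cfg_weight values for generation."""
    if style == "dramatic":
        # Expressive speech: lower cfg_weight, raise exaggeration
        return {"exaggeration": 0.7, "cfg_weight": 0.3}
    settings = {"exaggeration": 0.5, "cfg_weight": 0.5}  # defaults
    if fast_reference_speaker:
        # Lowering cfg_weight improves pacing for fast reference speakers
        settings["cfg_weight"] = 0.3
    return settings
```

These values could then be passed through to generation, e.g. `model.generate(text, **suggest_settings("dramatic"))`, assuming `generate` accepts these keyword arguments.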
## Installation

```shell
pip install chatterbox-tts
```

Or, install from source:

```shell
conda create -yn chatterbox python=3.11
conda activate chatterbox
git clone https://github.com/resemble-ai/chatterbox.git
cd chatterbox
pip install -e .
```
## Usage

```python
import torchaudio as ta

from chatterbox.tts import ChatterboxTTS

model = ChatterboxTTS.from_pretrained(device="cuda")

text = "Ezreal and Jinx teamed up with Ahri, Yasuo, and Teemo to take down the enemy's Nexus in an epic late-game pentakill."
wav = model.generate(text)
ta.save("test-1.wav", wav, model.sr)
```
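The README also mentions an easy voice conversion script (`example_vc.py`). A minimal sketch of how that might look, assuming `chatterbox.vc` exposes a `ChatterboxVC` class whose `from_pretrained`/`generate` interface mirrors the TTS usage above (the class name, module path, and `target_voice_path` argument are assumptions here):

```python
# Sketch of voice conversion, assuming a ChatterboxVC class analogous to
# ChatterboxTTS. Names and signatures below are assumptions, not the
# confirmed chatterbox API; see example_vc.py in the repo for the real one.

def convert_voice(model, source_audio_path, target_voice_path):
    """Re-render the speech in source_audio_path using the target voice."""
    return model.generate(source_audio_path, target_voice_path=target_voice_path)

def main():  # not run at import time; requires a GPU and chatterbox installed
    import torchaudio as ta
    from chatterbox.vc import ChatterboxVC

    model = ChatterboxVC.from_pretrained(device="cuda")
    wav = convert_voice(model, "source.wav", "target_voice.wav")
    ta.save("converted.wav", wav, model.sr)
```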
## Supported Languages
Currently, only English.
## Acknowledgements
Cosyvoice, Real-Time-Voice-Cloning, HiFT-GAN, Llama 3, S3Tokenizer

## Built-in PerTh Watermarking for Responsible AI

### Extracting Watermark
```python
import librosa
import perth

AUDIO_PATH = "YOUR_FILE.wav"

# Load the watermarked audio at its native sample rate
watermarked_audio, sr = librosa.load(AUDIO_PATH, sr=None)

# Initialize the watermarker and extract the watermark
watermarker = perth.PerthImplicitWatermarker()
watermark = watermarker.get_watermark(watermarked_audio, sample_rate=sr)
print(f"Extracted watermark: {watermark}")
```
## Official Discord
Join us on Discord and let's build something awesome together!
## Citation
If you find this model useful, please consider citing:
```bibtex
@misc{chatterboxtts2025,
  author = {{Resemble AI}},
  title = {{Chatterbox-TTS}},
  year = {2025},
  howpublished = {\url{https://github.com/resemble-ai/chatterbox}},
  note = {GitHub repository}
}
```
## Disclaimer
Don't use this model to do bad things. Prompts are sourced from freely available data on the internet.
Check out the repository for more details.
Resemble AI Releases SoTA Open-Source TTS Model, Chatterbox
Resemble AI has introduced Chatterbox, a production-grade open-source Text-to-Speech (TTS) model. Chatterbox is based on a 0.5B Llama backbone, uses alignment-informed inference for stability, and has been trained on 0.5M hours of cleaned data. It also includes a unique exaggeration/intensity control feature and is licensed under the MIT license. The model is benchmarked against leading closed-source systems like ElevenLabs and has consistently been preferred in side-by-side evaluations. A Hugging Face Gradio app is available for trying out the model.
Key features of Chatterbox include SoTA zeroshot TTS, a 0.5B Llama backbone, unique exaggeration/intensity control, ultra-stable alignment-informed inference, training on 0.5M hours of cleaned data, watermarked outputs for responsible AI, and easy voice conversion.
Chatterbox can be installed via pip or from source. The default settings work well for most prompts, but lowering `cfg_weight` and increasing `exaggeration` can improve results for expressive or dramatic speech. The model has been developed and tested on Python 3.11 on Debian 11, and usage examples can be found in `example_tts.py` and `example_vc.py`. Currently, only English is supported.
Resemble AI's Perth (Perceptual Threshold) Watermarker, which is built into Chatterbox, uses imperceptible neural watermarks that survive MP3 compression, audio editing, and common manipulations while maintaining nearly 100% detection accuracy. Watermark extraction can be performed using a provided script.
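A round-trip check makes the detection claim concrete: embed a watermark, then confirm it can be read back. The `get_watermark` call matches the extraction snippet earlier in this document; `apply_watermark` and its arguments are assumed counterparts in the `perth` package, so treat this as a sketch rather than a confirmed API.

```python
# Sketch of a watermark round-trip check. get_watermark mirrors the
# extraction snippet above; apply_watermark is an assumed embedding
# counterpart in the perth package.

def roundtrip_watermark(watermarker, audio, sr):
    """Embed a watermark into audio, then extract it from the result."""
    marked = watermarker.apply_watermark(audio, watermark=None, sample_rate=sr)
    return watermarker.get_watermark(marked, sample_rate=sr)

def main():  # not run at import time; requires perth and an audio file
    import librosa
    import perth

    audio, sr = librosa.load("YOUR_FILE.wav", sr=None)
    watermarker = perth.PerthImplicitWatermarker()
    print(f"Recovered watermark: {roundtrip_watermark(watermarker, audio, sr)}")
```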