Is there a maximum limit of characters for a single inference? #10
Comments
Yeah, the model was trained on clips of at most 30 seconds, so long prompts can be out of distribution. The way we handle this on the website is by chunking the text and generating audio for the chunks one at a time.
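Below is a minimal sketch of the chunk-and-concatenate approach described above. The sentence-splitting heuristic, the 200-character cap, and the `generate_audio` call are assumptions standing in for whatever inference function the repo actually exposes; this is not the website's implementation.

```python
import re

import numpy as np

MAX_CHARS = 200  # assumption: keep each chunk well under the ~30 s training limit


def chunk_text(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Split text on sentence boundaries and pack sentences into chunks."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # Flush the current chunk if adding this sentence would exceed the cap.
        # Note: a single sentence longer than max_chars still becomes its own chunk.
        if current and len(current) + len(sentence) + 1 > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks


def synthesize_long(text: str) -> np.ndarray:
    """Generate audio chunk by chunk and concatenate the waveforms."""
    # `generate_audio` is a hypothetical single-call inference function that
    # returns a 1-D waveform array for a short piece of text.
    pieces = [generate_audio(chunk) for chunk in chunk_text(text)]
    return np.concatenate(pieces)
```

Splitting on sentence boundaries rather than at a fixed character offset keeps each chunk's prosody coherent, which matters more than hitting the cap exactly.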
In the Gradio interface, I get a loud, constant buzz like a sawtooth wave for anything past the 30-second mark. The same thing happens after 30 seconds with the same text in the sample Python code on the project page.
Can you add this auto-chunking to the Gradio interface please? ty
We may add it to the public inference code at some point, but it's not a priority. You're invited to contribute it, however.
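For anyone picking this up, here is a rough sketch of how the chunked generation above might be exposed through a Gradio text box and audio output. It is not the linked PR; `synthesize_long` is the hypothetical helper from the earlier sketch, and the 24 kHz sample rate is an assumption rather than the model's actual rate.

```python
import gradio as gr

# Assumes `synthesize_long` from the earlier sketch is importable in this module.


def tts_endpoint(text: str):
    # gr.Audio accepts a (sample_rate, waveform) tuple for playback.
    return 24_000, synthesize_long(text)


demo = gr.Interface(
    fn=tts_endpoint,
    inputs=gr.Textbox(lines=8, label="Text (any length, chunked automatically)"),
    outputs=gr.Audio(label="Generated speech"),
)

if __name__ == "__main__":
    demo.launch()
```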
I have a PR for this here.
Running TTS on text under 200 characters seems fine, but on a larger test (~1600 characters) the result is a jumble of phrases in random order.