Roadmap for training XTTS for custom language #5
Right, the DVAE does not need fine-tuning. You only need to fine-tune the GPT part and HiFiGAN. This repo is only for fine-tuning HiFiGAN.
If you want to fine-tune the GPT part on a language that is not in the original XTTS model, you have to make some changes to the training code; see the sketch below.
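In practice those changes usually start with the tokenizer: XTTS stores its BPE tokenizer in a vocab.json that the Hugging Face tokenizers library can load, and a new language needs at least its own language tag token (plus, ideally, extra BPE tokens trained on text in that language). A minimal sketch, assuming a hypothetical [uz] tag and local file paths:

```python
# Minimal sketch: add a new language tag to the XTTS BPE vocab.
# The paths and the "[uz]" token are assumptions for illustration only.
from tokenizers import Tokenizer

tokenizer = Tokenizer.from_file("XTTS-v2/vocab.json")  # original XTTS vocab

# Register the new language tag the same way the existing ones ("[en]", "[vi]", ...) are stored.
added = tokenizer.add_special_tokens(["[uz]"])
print(f"added {added} special token(s), vocab size is now {tokenizer.get_vocab_size()}")

tokenizer.save("XTTS-v2/vocab_extended.json")
# Remember: the GPT text embedding and output layers must be resized to the new
# vocab size before fine-tuning, and the tokenizer's preprocess_text() needs a
# branch for the new language code, otherwise it will reject it.
```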
Hello @tuanh123789, @VafaKnm. I am going to train XTTS for the Uzbek language, but I can't find any example of training XTTS for a new language. If you have full code for training XTTS for another language, could you please share it?
You can check out this repo, where the author fine-tunes on a new language, Vietnamese: https://github.com/thinhlpg/TTS/tree/add-vietnamese-xtts. From the commit history you can see that the author only needed to add the vi language to the tokenizer part. Then you can fine-tune the GPT part following the XTTS documentation, and afterwards fine-tune HiFiGAN for a better-sounding result in your language (I expect so).
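As a quick, hedged sanity check of that tokenizer change (assuming preprocess_text() in the XTTS tokenizer has already been given a branch for the new language code, as in the Vietnamese fork; the sample sentence and file path below are placeholders):

```python
# Sketch: sanity-check the extended tokenizer on the new language.
# Assumes vocab_extended.json exists and that VoiceBpeTokenizer.preprocess_text()
# has been patched to accept the new language code (as in the Vietnamese fork);
# the stock tokenizer raises NotImplementedError for unknown languages.
from TTS.tts.layers.xtts.tokenizer import VoiceBpeTokenizer

tokenizer = VoiceBpeTokenizer(vocab_file="XTTS-v2/vocab_extended.json")

text = "xin chào, đây là một câu kiểm tra"  # placeholder Vietnamese sentence
token_ids = tokenizer.encode(text, lang="vi")
print(len(token_ids), token_ids[:10])
```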
@mpquochung
Hi @VafaKnm @RifatMamayusupov, below is my code for fine-tuning XTTS for a new language. It works well in my case with over 100 hours of audio.
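The commenter's script itself is not reproduced in this thread. As a rough stand-in, the public Coqui TTS recipe for XTTS GPT fine-tuning (recipes/ljspeech/xtts_v2/train_gpt_xtts.py) looks roughly like the sketch below; the dataset definition, paths, and hyperparameters are placeholders, not the commenter's values:

```python
# Rough sketch of an XTTS GPT fine-tuning entry point, following the public
# Coqui TTS recipe. Paths, dataset settings, and hyperparameters are placeholders.
from trainer import Trainer, TrainerArgs

from TTS.config.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.layers.xtts.trainer.gpt_trainer import (
    GPTArgs,
    GPTTrainer,
    GPTTrainerConfig,
    XttsAudioConfig,
)

dataset_config = BaseDatasetConfig(
    formatter="ljspeech",            # placeholder: pick a formatter matching your metadata
    dataset_name="my_new_language",  # placeholder
    path="data/my_new_language/",    # placeholder
    meta_file_train="metadata.csv",
    language="uz",                   # the new language code added to the tokenizer
)

model_args = GPTArgs(
    max_conditioning_length=132300,  # ~6 s at 22.05 kHz, as in the recipe
    min_conditioning_length=66150,
    max_wav_length=255995,
    max_text_length=200,
    mel_norm_file="XTTS-v2/mel_stats.pth",
    dvae_checkpoint="XTTS-v2/dvae.pth",          # frozen; used to compute audio codes
    xtts_checkpoint="XTTS-v2/model.pth",         # pretrained weights to start from
    tokenizer_file="XTTS-v2/vocab_extended.json",
    gpt_num_audio_tokens=1026,
    gpt_start_audio_token=1024,
    gpt_stop_audio_token=1025,
)
audio_config = XttsAudioConfig(sample_rate=22050, dvae_sample_rate=22050, output_sample_rate=24000)

config = GPTTrainerConfig(
    output_path="runs/xtts_uz",
    model_args=model_args,
    audio=audio_config,
    batch_size=3,
    num_loader_workers=4,
    epochs=10,
    lr=5e-6,
    optimizer="AdamW",
    optimizer_params={"betas": [0.9, 0.96], "eps": 1e-8, "weight_decay": 1e-2},
)

model = GPTTrainer.init_from_config(config)
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True, eval_split_size=0.02)

trainer = Trainer(
    TrainerArgs(grad_accum_steps=84),  # the recipe uses gradient accumulation for a large effective batch
    config,
    output_path=config.output_path,
    model=model,
    train_samples=train_samples,
    eval_samples=eval_samples,
)
trainer.fit()
```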
Hi! First, I want to thank you for making this repo and sharing your experiences.
I am trying to train an XTTS model for the Persian language. I read the discussion in coqui-ai/TTS#3704 and found that there are 3 steps to get an XTTS model: DVAE -> GPT-2 -> HiFiGAN.
According to the discussions there, I understand that the DVAE does not need fine-tuning because it is independent of the language. Assuming this is true, the next step is fine-tuning GPT-2. Does the repo you provide (this repo) include fine-tuning GPT-2 as well?
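For reference, a compact way to keep that three-stage split straight against the files shipped with the pretrained XTTS-v2 checkpoint (file names follow the public release; the fine-tune flags reflect the advice given in this thread, not an official recommendation):

```python
# Sketch: the three XTTS stages mapped to the pretrained checkpoint files,
# annotated with the fine-tuning advice from this thread. Adjust paths to
# wherever the XTTS-v2 checkpoint is downloaded.
PIPELINE = [
    # (stage, checkpoint file, fine-tune for a new language?)
    ("DVAE (mel -> discrete codes)",                "XTTS-v2/dvae.pth",  False),  # language-independent
    ("GPT-2 (text -> audio codes)",                 "XTTS-v2/model.pth", True),   # needs the extended tokenizer
    ("HiFiGAN decoder (latents -> waveform)",       "XTTS-v2/model.pth", True),   # optional quality polish
]

for stage, ckpt, finetune in PIPELINE:
    print(f"{stage:45s} {ckpt:22s} fine-tune: {finetune}")
```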