Training produced lora with zero effect #66
To use the trained LoRA in ComfyUI etc., you need to convert the LoRA after training. This may have been done automatically by the GUI. If we converted the trained LoRA by default, it would not be readable by the other scripts in this repository, so we do not plan to convert it by default. Please ask the inference tool to support LoRA from Musubi Tuner.
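For reference, the conversion is done with the repository's convert_lora.py script. The invocation below is a sketch from memory of the README, so the exact script name and flags should be double-checked there:

```
python convert_lora.py --input path/to/trained_lora.safetensors --output path/to/converted_lora.safetensors --target other
```

With `--target other` the LoRA is written in the format expected by other tools such as ComfyUI, while the unconverted file stays usable with this repository's own scripts.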
Oh, I hadn't thought of that! Might be. If this is how an unconverted LoRA is expected to behave, then that is definitely it! Thanks. By the way, what should I expect if I continue training by loading weights from a converted LoRA, or from diffusion-pipe LoRAs? I have done both, and it seems fine, but I'm not sure.
In principle, it should not be a problem. However, since alpha will be the same as dim (rank) in the converted LoRA, the learning rate should be lower than when training from scratch with Musubi Tuner.
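To make the learning-rate point concrete, here is a small illustration (not code from this repository) assuming the common LoRA convention where the update is scaled by alpha / rank. When alpha equals rank, the scale is 1.0, so the same learning rate moves the weights much further than a run started with a smaller alpha:

```python
# Illustration only (not Musubi Tuner code), assuming the common LoRA
# convention delta_W = (alpha / rank) * (B @ A).
import numpy as np

rank = 32
A = np.random.randn(rank, 1024) * 0.01  # down projection
B = np.zeros((1024, rank))              # up projection, zero-initialized

def lora_delta(A, B, alpha, rank):
    # The effective update applied to the base weight is scaled by alpha / rank.
    return (alpha / rank) * (B @ A)

print("scale with alpha=1,  rank=32:", 1 / rank)     # 0.03125
print("scale with alpha=rank=32:   ", rank / rank)   # 1.0 -> lower the LR accordingly
```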
I had the same issue, unfortunately. In my last attempt I trained for about 4000 steps. After the .safetensors file in outputs failed to have any effect, I ran the conversion script and used the output in ComfyUI, but that did not change anything. I also generated videos using the script provided in the inference section of the readme; I ran with and without the LoRA I trained and saw no difference, even with the strength multiplied by 2 or 4. It's possible or even likely that I need to train more, but a complete lack of effect makes it seem like something is wrong.

I previously did another run using video, which ran for about 48 hours on a 3090 (about 128 epochs), and had the same issue there. I ran a similar dataset in diffusion-pipe previously, for only 2 epochs, and was able to see some change after that.

If I paste the info specific to my training attempt, is it possible to see if I used the wrong settings at some point? Thank you for taking a look if you're able.
^ Unfortunately I lost the output from these commands but they seemed to run as expected.
Currently I'm successfully training on a fork of Musubi Tuner with a basic GUI, but I might switch to this repo. However, when I tried training with this repo, it produced a LoRA that had no effect. In both cases I used the same images and even the same cache generated with this repo, with the same settings.
Why did training with this command produce an ineffective LoRA, while training in the GUI version with the same data and settings produced a working LoRA? I couldn't find a difference between this command and the one issued by the GUI, but I'm not sure. Is something wrong with it?: