
Artifacts and pixelated output #70

Open

abozahran opened this issue Feb 1, 2025 · 3 comments

Comments

@abozahran

I always get artifacts and pixelated output. I tried 768 and 1024.

[image attached]

The training is based on 13 images.

[image attached]


The following is the output:

[image attached]

The first two from the left are my LoRA, compared to other LoRAs found on Civitai.

[image attached]

@Sarania

Sarania commented Feb 2, 2025

That looks like an issue caused by a bug in an old version of musubi tuner:

#7

This has been solved for a while, so first please make sure that the version of musubi-tuner used by your GUI is fully up to date, then recache your latents and try again.
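
If it helps, here is a minimal sketch of the update-and-recache sequence, assuming a git checkout of musubi-tuner and its cache_latents.py script; the script name, flags, and VAE path follow the repo's documented workflow but may differ between versions:

```bat
REM Update the musubi-tuner checkout so the latent-caching fix is included
git pull

REM Recache latents so the cached files are rebuilt by the fixed code.
REM You may need to delete the old cache files first so they are regenerated.
REM The VAE path below is a placeholder; point it at your actual checkpoint.
python cache_latents.py ^
  --dataset_config path\to\dataset.toml ^
  --vae path\to\ckpts\hunyuan-video-t2v-720p\vae\pytorch_model.pt
```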

@abozahran
Author

abozahran commented Feb 2, 2025

I downloaded the latest repo, started fresh, and here is the result:

IMG-20250202-WA0135.jpg

IMG-20250202-WA0143.jpg

IMG-20250202-WA0133.jpg

IMG-20250202-WA0140.jpg

IMG-20250202-WA0145.jpg

And here is the real person

WhatsApp Image 2024-03-11 at 1.44.57 AM.jpeg

These are the training settings:

accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 hv_train_network.py ^
  --dit E:\AI\apps\hunyuan\Kohya-musubi-tuner-main\path\to\ckpts\hunyuan-video-t2v-720p\transformers\mp_rank_00_model_states.pt ^
  --dataset_config E:\AI\apps\hunyuan\Kohya-musubi-tuner-main\path\to\dataset.toml ^
  --sdpa --mixed_precision bf16 --fp8_base ^
  --optimizer_type adamw8bit --learning_rate 2e-4 ^
  --gradient_checkpointing ^
  --max_data_loader_n_workers 2 --persistent_data_loader_workers ^
  --network_module networks.lora --network_dim 32 ^
  --timestep_sampling shift --discrete_flow_shift 7.0 ^
  --max_train_epochs 200 --save_every_n_epochs 25 --seed 42 ^
  --output_dir E:\AI\apps\hunyuan\Kohya-musubi-tuner-main\path\to\output_dir ^
  --output_name name-of-lora

pause

My dataset is 13 high-resolution images.
I stopped the training at 75%, with 1800 total steps at 175 epochs.
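
For context, a hypothetical dataset.toml for a small image dataset like this, following the general shape of musubi-tuner's dataset config; the directory paths, resolution, and repeat count are illustrative placeholders, not the poster's actual values:

```toml
[general]
resolution = [1024, 1024]    # one of the sizes tried above
caption_extension = ".txt"   # one caption file per image
batch_size = 1
enable_bucket = true         # bucket images with differing aspect ratios

[[datasets]]
image_directory = 'E:\AI\datasets\person-13-images'         # placeholder path
cache_directory = 'E:\AI\datasets\person-13-images\cache'   # placeholder path
num_repeats = 1
```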

@Sylsatra

Hi, I hope you no longer have this problem. If you are still waiting for a solution, try using these parameters: --timestep_sampling sigmoid and --discrete_flow_shift 1.0. Additionally, you shouldn't cancel the training before it finishes when using --timestep_sampling shift and --discrete_flow_shift 7.0, because the end is where the magic happens! LOL!
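
Concretely, that suggestion amounts to swapping two flags in the command posted above; everything else stays the same (a sketch reusing the original poster's paths):

```bat
REM Same command as above, with the two suggested flags swapped in:
REM --timestep_sampling shift -> sigmoid, --discrete_flow_shift 7.0 -> 1.0
accelerate launch --num_cpu_threads_per_process 1 --mixed_precision bf16 hv_train_network.py ^
  --dit E:\AI\apps\hunyuan\Kohya-musubi-tuner-main\path\to\ckpts\hunyuan-video-t2v-720p\transformers\mp_rank_00_model_states.pt ^
  --dataset_config E:\AI\apps\hunyuan\Kohya-musubi-tuner-main\path\to\dataset.toml ^
  --sdpa --mixed_precision bf16 --fp8_base ^
  --optimizer_type adamw8bit --learning_rate 2e-4 ^
  --gradient_checkpointing ^
  --max_data_loader_n_workers 2 --persistent_data_loader_workers ^
  --network_module networks.lora --network_dim 32 ^
  --timestep_sampling sigmoid --discrete_flow_shift 1.0 ^
  --max_train_epochs 200 --save_every_n_epochs 25 --seed 42 ^
  --output_dir E:\AI\apps\hunyuan\Kohya-musubi-tuner-main\path\to\output_dir ^
  --output_name name-of-lora
```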
