
Support for loading a smaller model for sample generation? #91

Open · jnz86 opened this issue Feb 20, 2025 · 0 comments

jnz86 commented Feb 20, 2025

I have a 12GB card. The trainer reports that it's working, but the output is terrible; that's a separate issue.

I wanted to turn on sample generation, but I'm training the FP16 Hunyuan checkpoint with swapped blocks. That checkpoint will never run for inference on its own on my machine; it always hits out-of-memory (OOM), even for a 256x256x1 generation.

So, just like the sample settings that already point to a separate VAE and text encoders, would it be possible to point to a different model, such as an FP8 or GGUF checkpoint, just for sample generation?
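
For illustration, something like this in the config (a sketch only; `sample_checkpoint_path` is a hypothetical key I'm proposing, not an existing option, and the paths are placeholders):

```toml
# Existing-style sample settings that already point at separate components
vae_path = "/models/hunyuan_vae.safetensors"
text_encoder_path = "/models/text_encoder_fp16.safetensors"

# Proposed: an optional override so sampling loads a lighter checkpoint
# (e.g. FP8 or GGUF) instead of the FP16 model being trained.
# Key name is hypothetical.
sample_checkpoint_path = "/models/hunyuan_video_fp8.safetensors"
```

If the override is unset, sampling would fall back to the training checkpoint, so current configs would behave exactly as they do now.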
