
[Potential bug] Memory overflow when cutting small areas #42

Open
Shyryp opened this issue Nov 5, 2024 · 3 comments

Comments

Shyryp commented Nov 5, 2024

Memory consumption increases dramatically when cutting out very small areas of an image in the "ranged size" and "forced size" modes.

The problem is that the code first enlarges the entire image and only then cuts the necessary fragment out of it. This is not an optimal way to extract areas of an image in the "ranged size" and "forced size" modes.

I have not studied the whole codebase in detail, so I cannot quite see why the entire original image needs to be enlarged just to obtain fragments at a given resolution.

The code fragment that should be optimized (lines 406-428 in the inpaint_cropandstitch.py file):

        # Upscale image and masks if requested, they will be downsized at stitch phase
        if rescale_factor < 0.999 or rescale_factor > 1.001:
            samples = image
            samples = samples.movedim(-1, 1)
            width = round(samples.shape[3] * rescale_factor)
            height = round(samples.shape[2] * rescale_factor)
            samples = rescale(samples, width, height, rescale_algorithm)
            effective_upscale_factor_x = float(width)/float(original_width)
            effective_upscale_factor_y = float(height)/float(original_height)
            samples = samples.movedim(1, -1)
            image = samples

            samples = mask
            samples = samples.unsqueeze(1)
            samples = rescale(samples, width, height, "nearest")
            samples = samples.squeeze(1)
            mask = samples

            samples = blend_mask
            samples = samples.unsqueeze(1)
            samples = rescale(samples, width, height, "nearest")
            samples = samples.squeeze(1)
            blend_mask = samples
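To illustrate why the order of operations matters, here is a minimal sketch (plain Python, hypothetical helper names, not the node's actual API) comparing peak pixel allocation when scaling the whole image first versus cropping first:

```python
# Sketch: peak pixels allocated by each ordering in "forced size" mode.
# Assumes a square image and crop for simplicity; function names are
# hypothetical and do not exist in inpaint_cropandstitch.py.

def peak_pixels_scale_then_crop(image_side, crop_side, forced_side):
    """Current order: upscale the entire image by forced/crop, then cut."""
    scale = forced_side / crop_side
    upscaled_side = round(image_side * scale)
    return upscaled_side * upscaled_side

def peak_pixels_crop_then_scale(image_side, crop_side, forced_side):
    """Proposed order: cut the fragment first, then upscale only it,
    so peak allocation is bounded by the forced size."""
    return forced_side * forced_side

full = peak_pixels_scale_then_crop(2048, 50, 1024)   # whole image at 20.48x
small = peak_pixels_crop_then_scale(2048, 50, 1024)  # just a 1024x1024 tile
print(full, small, full // small)  # 1759215249 1048576 1677
```

For the 50x50-in-2048x2048 case from this issue, cropping first shrinks the peak allocation by a factor of roughly 1700.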

Example of reproducing the problem:
If I try to cut a 50x50 pixel fragment from an original 2048x2048 image in "forced size" mode with force width = 1024 and force height = 1024, I get a memory overflow during the operation: it is not the cut 50x50 fragment that gets enlarged, but the entire image, and by the factor needed to bring the 50x50 fragment up to 1024x1024.
That is, the following will happen:

  1. The magnification factor is 1024/50 = 20.48.
  2. The original image is enlarged 20.48 times: 2048 * 20.48 ≈ 41943 pixels in height, and the same in width.
  3. The enlarged image has a resolution of 41943x41943 and, at roughly 70 bytes per pixel, occupies 41943 * 41943 * 70 = 123,145,067,430 bytes, or about 123 gigabytes.
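The arithmetic in the steps above can be checked with a short script (the ~70 bytes-per-pixel figure is the one quoted in this comment; the real cost depends on dtype, channel count, and intermediate copies):

```python
# Verify the memory estimate from steps 1-3 above.
# 70 bytes/pixel is the figure quoted in the comment, not a derived constant.
scale = 1024 / 50                    # step 1: magnification factor, 20.48x
side = round(2048 * scale)           # step 2: upscaled side length, 41943 px
total_bytes = side * side * 70       # step 3: total memory at ~70 bytes/pixel
print(scale, side, total_bytes)      # 20.48 41943 123145067430
print(f"{total_bytes / 1e9:.0f} GB") # 123 GB
```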

123 gigabytes will not fit in the RAM of an average modern PC.

Even for larger cut-out fragments we run into memory pressure, so it is probably worth optimizing the code for the "ranged size" and "forced size" modes.

lquesada (Owner) commented Nov 5, 2024 via email

Shyryp (Author) commented Nov 5, 2024

Thank you!

Fixing this bug will be really useful and will greatly optimize your Inpaint Crop node, reducing its RAM impact in loops. At the moment, even a 3-5x enlargement of the original images leads to rapid memory overflow in the current, imperfect ComfyUI loops when the "ranged size" and "forced size" modes are used.

Your nodes are useful not only for regenerating certain areas of an image, but also for cutting out part of an image to feed to various auto-taggers or to LLaVA/Qwen-VL for image analysis, especially in combination with the "ranged size" and "forced size" modes. And that is not the only use.

It would probably even be useful to have an analog of Inpaint Crop that only crops the image according to a given mask, without creating a Stitch object.

In general, I try to avoid regenerating small objects in the picture when using Inpaint Crop in "ranged size" and "forced size" modes. But when regenerating eyes on hi-res images, for example (especially when they are detected automatically by other networks), problems can arise because the entire hi-res image is enlarged several times.

senendds commented
Hello,

Could you clarify what "set of issues" cropping before scaling presents?

And, as a mitigation of the problem, couldn't you add a parameter to limit the effective scale to apply to a certain maximum?

Thanks for the nodes.

3 participants