I am getting an out-of-memory (OOM) error when using ndmeasure.label with a large array.
Minimal Complete Verifiable Example:
import dask.array as da
import dask_image.ndmeasure

nx = 5120  # things are OK for nx < 2500
arr = da.random.random(size=(nx, nx, nx))
darr_bin = arr > 0.8

# The next line will fail
label_image, num_labels = dask_image.ndmeasure.label(darr_bin)
Note that this problem already occurs at the last line, i.e. when the task graph is built, and not when executing the computation via, e.g., num_labels.compute().
This also means that I hit the same problem when using a (large) cluster, as the OOM always occurs on node 1.
Environment:
[I could reproduce this problem on several machines, below is one particular environment]
Thanks a lot for reporting this, and sorry for the late reply here.
I could reproduce the issue. Essentially, dask_image.ndmeasure.label applies scipy.ndimage.label to the individual chunks of the input image. It then fuses the per-chunk labels after obtaining a list of equivalent labels by examining the boundaries between chunks. The current implementation doesn't scale well when there is a very large number of chunk boundaries to examine.
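For intuition, here is a minimal, hypothetical sketch of that two-phase idea in plain NumPy/SciPy, for just two 2-D chunks stacked along the first axis (the helper label_two_chunks is an illustrative assumption, not dask-image's actual code):

import numpy as np
import scipy.ndimage

def label_two_chunks(chunk_a, chunk_b):
    # Phase 1: label each chunk independently, then offset the second
    # chunk's labels so they don't collide with the first chunk's.
    labels_a, n_a = scipy.ndimage.label(chunk_a)
    labels_b, _ = scipy.ndimage.label(chunk_b)
    labels_b[labels_b > 0] += n_a

    # Phase 2: foreground labels touching across the shared face
    # (last row of chunk_a, first row of chunk_b) are equivalent
    # and must later be fused into one global label.
    equivalences = {(int(a), int(b))
                    for a, b in zip(labels_a[-1], labels_b[0])
                    if a > 0 and b > 0}
    return labels_a, labels_b, equivalences

The work in the merge phase grows with the total boundary area between chunks, which is why a large 3-D array split into many small chunks is the worst case here.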
Until we improve the implementation, I'd suggest trying to increase the chunk size of your input array, e.g.:
import dask.array as da
import dask_image.ndmeasure

nx = 5120  # things are OK for nx < 2500
arr = da.random.random(size=(nx, nx, nx), chunks=(800, 800, 800))
darr_bin = arr > 0.8

# This now succeeds with the larger chunks
label_image, num_labels = dask_image.ndmeasure.label(darr_bin)
The configuration above works well for me. For generic dask arrays as input you can use arr.rechunk to change the chunk sizes of existing arrays.
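As a sketch, assuming darr_bin is an existing boolean dask array with many small chunks, rechunking before labeling could look like this (the (800, 800, 800) chunk shape is just the value that worked above; tune it to your memory budget):

darr_bin = darr_bin.rechunk((800, 800, 800))  # fewer, larger chunks
label_image, num_labels = dask_image.ndmeasure.label(darr_bin)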