Performance of interp_like when converting dfsu to dfs2 for large files #413
😟 6h! More details needed:
@jsmariegaard Do we really need a loop here? (mikeio/mikeio/interpolation.py, lines 124 to 125 at c06b415)
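(For context, a rough sketch of how such a per-point loop could be vectorized; the array shapes here are assumptions modelled on an IDW interpolant, not the actual mikeio internals:)

```python
import numpy as np

# Toy stand-ins, assuming each target point has k nearest source elements:
# data is one item/time step (ne,), elem_ids is (n, k), weights is (n, k).
rng = np.random.default_rng(0)
data = rng.random(1000)
elem_ids = rng.integers(0, 1000, size=(200, 5))
weights = rng.random((200, 5))

# Loop version, roughly what the referenced lines do: one dot product per point.
idat_loop = np.array([np.dot(data[elem_ids[j]], weights[j]) for j in range(len(elem_ids))])

# Vectorized equivalent: gather all stencils at once, reduce over the k neighbours.
idat_vec = np.sum(data[elem_ids] * weights, axis=1)

assert np.allclose(idat_loop, idat_vec)
```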
Haha, yeah, it's a long time for a big file, but hopefully it can be improved nonetheless :)
Bummer. It would be very helpful if you could profile it, e.g. with cProfile and snakeviz. That would of course require a smaller file, or that you only consider a subset using the area argument...
But you're probably right about the element loop; it doesn't look healthy.
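(A minimal sketch of that profiling workflow, with hypothetical file and grid names, assuming a recent mikeio API where geometry exposes bbox and Dataset has interp_like:)

```python
import cProfile
import mikeio

# Hypothetical inputs; the area argument to read() could restrict this to a subset.
ds = mikeio.read("small.dfsu")
grid = mikeio.Grid2D(bbox=ds.geometry.bbox, dx=50.0)

cProfile.run("ds.interp_like(grid)", "interp_like.prof")
# Then inspect the profile interactively in the browser:
#   $ snakeviz interp_like.prof
```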
You could also try the variant below, which uses the nearest point instead of IDW interpolation.
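(The snippet itself didn't survive the scrape; presumably something like this, assuming interp_like forwards n_nearest to the 2d interpolant:)

```python
import mikeio

ds = mikeio.read("small.dfsu")                        # hypothetical filename
grid = mikeio.Grid2D(bbox=ds.geometry.bbox, dx=50.0)  # hypothetical resolution

# n_nearest=1 picks the single nearest element, so no IDW weighting is needed.
dsi = ds.interp_like(grid, n_nearest=1)
```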
Here's the profile from testing on a much smaller dfsu (report attached as zip). I'll run a test on a larger file overnight to see how scale changes the picture.
Hmm, yes, the contains() method (which checks whether points are inside the mesh) is actually what takes the most time in this case, closely followed by the dot product. It may be better to set extrapolate=True and then remove the points outside the domain afterwards.
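(A sketch of that idea, with hypothetical names, assuming a recent mikeio API and assuming extrapolate=True avoids the contains() pass during interpolation:)

```python
import numpy as np
import mikeio

ds = mikeio.read("big.dfsu")                          # hypothetical filename
grid = mikeio.Grid2D(bbox=ds.geometry.bbox, dx=10.0)  # hypothetical resolution

# extrapolate=True lets the interpolation skip the costly contains() check ...
dsi = ds.interp_like(grid, extrapolate=True)

# ... and points outside the mesh are blanked out afterwards,
# with a single contains() call on the grid points.
xg, yg = np.meshgrid(grid.x, grid.y)
inside = np.asarray(ds.geometry.contains(np.column_stack((xg.ravel(), yg.ravel()))))
outside = ~inside.reshape(xg.shape)
for da in dsi:
    da.values[:, outside] = np.nan
```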
The results from the larger dfsu file are in (no. of elements = 3,760,848). It looks like the contains() method is the main culprit. The block taking ~15% is the cKDTree, not _interp_itemstep (which is at 3% here).
Did you try the extrapolate=True suggestion above?
.. and can it really be true that this process takes that long in Python, when in MIKE Zero it is quite fast?
@cethceth, which version of MIKE IO are you using?
Solved it with 'extrapolate' instead of 'extrapolation' :)
Yes. But it works fine with extrapolate now. Any idea how to mask the output grid (fast)? I would prefer not having to load shapefiles.
@cethceth, maybe you can tell us a bit more about your use case. Not sure we can do anything about it right now, but it is useful to know. I suppose converting the dfsu to dfs2 is simply an intermediate step?
@ecomodeller, we want an automatic tool for converting dfsu files to tiff files, which we use for reporting, calculation of damage cost, etc. We have that now, but since the output tiff files are not masked, this has to be done as a post-processing step in GIS. I was hoping to gather all the processing in Python, and also to avoid having to load shapefiles in Python for masking the output. If there was a way to get the extent of the dfsu file, which could then be used when masking the dfs2/tiff file in Python, that would be really nice :)
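(One possible route, assuming mikeio's flexible-mesh geometry exposes a to_shapely helper; the file name is illustrative:)

```python
import mikeio

ds = mikeio.read("model_results.dfsu")  # hypothetical filename

# The mesh outline comes straight from the dfsu geometry; no shapefile needed.
domain = ds.geometry.to_shapely()       # shapely (Multi)Polygon of the mesh
print(domain.bounds)                    # extent: (xmin, ymin, xmax, ymax)
# domain could then be passed to e.g. rasterio.mask.mask when writing the tiff.
```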
Is your feature request related to a problem? Please describe.
Converting large dfsu files to dfs2 seems to take a long time (for a 120 MB item it took roughly 6 hours). I used the following code:
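(The actual snippet was lost in the scrape; a plausible reconstruction, assuming a recent mikeio API and hypothetical file names and resolution:)

```python
import mikeio

ds = mikeio.read("results.dfsu")                     # hypothetical filename
grid = mikeio.Grid2D(bbox=ds.geometry.bbox, dx=2.0)  # hypothetical resolution
dsi = ds.interp_like(grid)
dsi.to_dfs("results.dfs2")
```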
The PC has 64 GB of memory and its CPU is an Intel Xeon E5-2650 v4 @ 2.20 GHz.
Describe the solution you'd like
Is there anything I could do to make this run faster?
Describe alternatives you've considered
I haven't tried anything yet. One idea is to vectorize the for loop in _interp_itemstep.