
Resume _tmp_ file build #291

Open
biohoo opened this issue Jan 21, 2025 · 3 comments

@biohoo

biohoo commented Jan 21, 2025

When a particularly long job crashes or otherwise halts (e.g. when Colab compute tokens run out), the iw3 module doesn't let you pick up building the file where it left off; it starts over from scratch. It would be nice to implement a resume feature. There is a --resume flag, but it only skips already-completed files. I would like to see whether the --resume flag could be updated to both skip completed files AND resume in-progress tmp files.

@nagadomi
Owner

Basically I consider it impossible, but it becomes possible if you can accept the cost of extra HDD space and processing time. See #267

If you have a better way that works with all currently available codecs and containers, please describe it in detail.

@biohoo
Author

biohoo commented Jan 21, 2025

Thanks for the reply and the reference to issue #267!

I like the recommendation to split the file into multiple parts and stitch them back together. That might be sufficient for my purposes, and I can imagine making the function dynamic enough to split into 10, 100, or even 1000 segments depending on the length of the job and the available resources.

I'll see if I can implement a decent utility. At a high level, it'll be something like the following (rough sketch after the list):

  • Take a large file (e.g. 2 GB), detect that it's "large", and recommend splitting it into multiple parts.
  • The user selects a split factor (10x, 100x, 1000x).
  • iw3 is run (either in parallel or in series) on all of the split files. If the process breaks, I can add some logic to detect which files it has already completed (or simply use the --resume flag in the main method) and resume processing the remainder.
  • A final ffmpeg command detects the parts and stitches them back together into a unified file.
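
Something along these lines, as a very rough sketch (file names, the segment length, and the exact iw3 invocation are placeholders I'd still need to verify). The ffmpeg calls use the segment muxer to split and the concat demuxer to rejoin, both with stream copy:

```python
import subprocess
from pathlib import Path

INPUT = Path("input.mp4")       # source video (placeholder name)
PARTS_DIR = Path("parts")       # split segments go here
OUT_DIR = Path("parts_sbs")     # per-part iw3 output goes here
SEGMENT_SECONDS = 600           # rough part length; cuts land on keyframes

def split_parts():
    PARTS_DIR.mkdir(exist_ok=True)
    # Stream-copy split with the segment muxer: no re-encoding.
    subprocess.run([
        "ffmpeg", "-i", str(INPUT), "-c", "copy", "-map", "0",
        "-f", "segment", "-segment_time", str(SEGMENT_SECONDS),
        "-reset_timestamps", "1",
        str(PARTS_DIR / "part_%04d.mp4"),
    ], check=True)

def process_parts():
    OUT_DIR.mkdir(exist_ok=True)
    for part in sorted(PARTS_DIR.glob("part_*.mp4")):
        out = OUT_DIR / part.name
        if out.exists():
            continue  # crude resume: skip parts that already finished
        # Placeholder iw3 call; substitute whatever command line and
        # options you actually use for the conversion.
        subprocess.run(
            ["python", "-m", "iw3", "-i", str(part), "-o", str(out)],
            check=True,
        )

def concat_parts(output="output.mp4"):
    # concat demuxer + stream copy: stitch the parts without re-encoding.
    listfile = OUT_DIR / "concat.txt"
    listfile.write_text(
        "".join(f"file '{p.name}'\n" for p in sorted(OUT_DIR.glob("part_*.mp4")))
    )
    subprocess.run([
        "ffmpeg", "-f", "concat", "-safe", "0",
        "-i", str(listfile), "-c", "copy", output,
    ], check=True)

if __name__ == "__main__":
    split_parts()
    process_parts()
    concat_parts()
```

If that basic flow holds up, the split factor could be derived from the input duration instead of hard-coding a segment length.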

@nagadomi
Owner

Yes, it is already possible with the ffmpeg command.

The disadvantage is that it requires twice the disk space.
segment and concat are supposed to work without re-encoding, so processing time is not really an issue.

My one concern is whether it's actually possible to seamlessly stitch the video back together, including the audio. If that is not a problem, this could be integrated into iw3 as an optional conversion mode.
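
If audio discontinuities at the part boundaries turn out to be the problem, one possible workaround (untested sketch; assumes the original audio is AAC so it can be stream-copied into an .m4a, and that only the video needs iw3 processing) is to keep the original audio untouched and mux it back over the concatenated video at the end:

```python
import subprocess

SRC = "input.mp4"                        # original source (placeholder name)
CONCAT_VIDEO = "video_parts_joined.mp4"  # result of concatenating the parts
FINAL = "output.mp4"

# Extract the original audio once, without re-encoding.
subprocess.run(
    ["ffmpeg", "-i", SRC, "-vn", "-acodec", "copy", "audio.m4a"],
    check=True,
)

# Mux the untouched audio over the concatenated video, ignoring any
# per-part audio, so there are no audio seams to worry about.
subprocess.run([
    "ffmpeg", "-i", CONCAT_VIDEO, "-i", "audio.m4a",
    "-map", "0:v:0", "-map", "1:a:0", "-c", "copy", FINAL,
], check=True)
```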

I don't have access to a computer right now, so I will try that next month.
