Replies: 6 comments 1 reply
-
Hey @jjcovert, thank you for the suggestion! I agree that resuming would be a great feature to have; I'll see what I can do (I will create an issue for this discussion).
-
Hey @jjcovert, I have good news and bad news! Bad news: the resume functionality is still not implemented. Good news: I have implemented retry on 500 (and the rest of the 5xx range) errors, so you might be able to get away with v2.2.9. For a full description of the retry behaviour, see #187. This version also includes the speed and memory optimisations for export mode (details in #185). If you're going to try it, please let me know how it goes. Alternatively, of course, you can wait for v3.0.1, where I'm planning to implement the resume function, but I can't promise that it will land soon; I'm somewhat stuck on the changes for v3.0.0, as they involve a considerable rewrite of the CLI.
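For readers curious what "retry on 5xx" means in practice, here is a minimal, generic sketch of retry with exponential backoff. This is an illustration only, with hypothetical names (`ServerError`, `with_retry`); it is not slackdump's actual implementation — see #187 for the real behaviour:

```python
import time

class ServerError(Exception):
    """Hypothetical error carrying an HTTP status code."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def with_retry(call, max_attempts=5, base_delay=1.0):
    """Call `call()`, retrying on 5xx-range errors with exponential backoff.

    Non-5xx errors are re-raised immediately; the last attempt's error
    is re-raised if all retries are exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except ServerError as e:
            if not (500 <= e.status <= 599) or attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```

The key design point is that only server-side (5xx) failures are retried, since those are plausibly transient; client errors such as 404 fail fast.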
-
@rusq did the resume function make it into v3.0?
-
Incidentally, both (running on Windows 11) stopped at around 4.2 GB.
-
Hey @sm-eclipsefi, the sizes of the archives look very suspicious to me: both are coincidentally close to the unsigned 32-bit limit (2^32 bytes, about 4.29 GB). It's hard to say what it is waiting for; it shouldn't be processing a single thread for 11 hours. I suspect it could be:
Next time you run it, could I ask you to run the following command:
archive creates a directory, not a zip archive, and it can later be converted to "slack export" format with

If it hangs and does not complete, press Ctrl+C; it should then terminate gracefully, closing files and finishing the trace. After that, could you encrypt it with my GPG key (built-in):

and attach it to this issue so I can have a look, please?
-
Re the resume functionality, it's planned for v3.1: #174
-
After 17 GB of export, my slackdump process died with a 500 response from Slack. I'd love to be able to start again where it left off. Since it was a 500 error, hopefully it was resolved on their end (and not caused by bad data, etc.).
Being able to resume where it died would be huge for someone trying to get a complete workspace export. And it's probably as easy as enumerating everything into a file on process start and removing lines from the file as tasks get completed. On startup, it can check whether this file already exists and start with the next line as the first task.
Thoughts?
edit: Thinking about it a bit more: an "if this file already exists, skip it" check before exporting a channel/thread/file might help speed things up dramatically and accomplish almost the same thing. Hmm.