-
Great tool! Sorry if this is a noob question or something wrong with my system, but I'm not sure how to navigate this. I was able to successfully dump a few channels, but I'm having issues running the workspace export itself. I used this command:

```
./slackdump -export Workspace_export.zip -export-type standard -download
```

It seems to run successfully, but after about 15 minutes each time I get this error:

```
2023/01/28 23:20:34 messages fetch complete, total: 500365
goroutine 1 [running]:
goroutine 6 [syscall, 14 minutes]:
goroutine 7 [select, 14 minutes]:
goroutine 51 [chan receive, 14 minutes]:
goroutine 15 [IO wait, 1 minutes]:
goroutine 20 [select, 14 minutes]:
goroutine 21 [select, 14 minutes]:
goroutine 22 [select, 14 minutes]:
goroutine 23 [select, 14 minutes]:
goroutine 52 [chan receive, 14 minutes]:
goroutine 53 [chan receive, 14 minutes]:
goroutine 39 [chan receive, 14 minutes]:
```
-
Hey @DAJA48, thanks for raising this. This is not a user error; it's a programming assumption error :) It runs out of memory because, when fetching channel messages, slackdump caches them all in memory before writing to disk, and one of your channels must be too big to fit in the memory available to the program. It might take some time to implement a proper fix, though there are possible ways to work around this in the meantime.
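To make the failure mode concrete, here is a rough Go sketch of the two approaches. None of this is the actual slackdump code (the `Message` type and function names are made up for illustration): the first function buffers the whole channel in memory, which is what blows up on a very large channel, while the second streams each message to disk as it arrives, keeping memory use flat:

```go
// Hypothetical sketch, not slackdump's actual internals.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type Message struct {
	Timestamp string `json:"ts"`
	Text      string `json:"text"`
}

// buffered holds every message in RAM until the fetch completes.
// A single 500k-message channel can exhaust available memory here.
func buffered(msgs <-chan Message) []Message {
	var all []Message
	for m := range msgs {
		all = append(all, m)
	}
	return all
}

// streamed encodes each message to disk as it arrives, so memory
// use stays roughly constant regardless of channel size.
func streamed(msgs <-chan Message, f *os.File) error {
	enc := json.NewEncoder(f)
	for m := range msgs {
		if err := enc.Encode(m); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	msgs := make(chan Message)
	go func() {
		for i := 0; i < 3; i++ {
			msgs <- Message{Timestamp: fmt.Sprintf("%d.000000", i), Text: "hi"}
		}
		close(msgs)
	}()
	f, err := os.Create("channel.jsonl")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	defer f.Close()
	if err := streamed(msgs, f); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```

Streaming trades away the ability to post-process the full message set in memory, but bounded memory use is the right trade-off for arbitrarily large channels.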
I'll create an issue based on this discussion and will update here when this is resolved. Sorry for the inconvenience! Thank you for posting the full call stack; that's very useful!
-
Issue: #184
-
Hi @DAJA48,

I think I have fixed the issue (see #185); slackdump should now use at least 19 times less memory for the particular operation that was failing. You can read the details in the pull request, but to give you an idea: for a (synthetic, generated) message history of 1 million messages, it was using an estimated 5.7 GB of memory for sorting. After the optimisation, it uses only 170 MB.

I have prepared a test build, but I'm not sure whether you're running on Linux or macOS, so I built both. Could you please rerun the export with this build and see if it works for you?

slackdump-darwin.zip
slackdump-linux.zip

If this works, I'll release 2.2.9.
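In case you're wondering how the sorting memory could drop that much: the general idea is to avoid shuffling full message values around during the sort. Below is a purely illustrative Go sketch (hypothetical `Message` type; the actual change in #185 may work differently) that sorts a slice of small int indices by timestamp instead of the messages themselves:

```go
// Purely illustrative; the real optimisation in #185 may differ.
package main

import (
	"fmt"
	"sort"
)

type Message struct {
	Timestamp string
	Text      string
	// a real exported message carries many more fields,
	// which is what makes each value expensive to shuffle
}

// sortedIndex returns message indices in timestamp order; the only
// extra allocation is one int per message, not copies of messages.
func sortedIndex(msgs []Message) []int {
	idx := make([]int, len(msgs))
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(a, b int) bool {
		return msgs[idx[a]].Timestamp < msgs[idx[b]].Timestamp
	})
	return idx
}

func main() {
	msgs := []Message{
		{Timestamp: "1674951634.000200", Text: "second"},
		{Timestamp: "1674951633.000100", Text: "first"},
	}
	for _, i := range sortedIndex(msgs) {
		fmt.Println(msgs[i].Timestamp, msgs[i].Text)
	}
}
```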