Drop chunks of a specific space partition #545
Is it possible to drop chunks of a specific space partition, e.g. to remove chunks belonging to "partition id 1, older than 1 day"?
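For context, `drop_chunks` currently filters only on the time dimension; there is no space-partition argument. A minimal sketch of what is expressible today, assuming a hypothetical hypertable named `conditions` (TimescaleDB 2.x signatures):

```sql
-- List, then drop, all chunks whose time range is entirely older than 1 day.
-- This cuts across ALL space partitions; there is no per-partition filter.
SELECT show_chunks('conditions', older_than => INTERVAL '1 day');
SELECT drop_chunks('conditions', older_than => INTERVAL '1 day');
```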
Just a word of warning, be careful when doing this. We use hash partitioning for space dimensions, so it is usually not the case that each partition corresponds 1-to-1 with a value in the dimension. For example, if you have 1000 device ids and only 10 partitions, on average you'll have 100 different device ids per partition, so by dropping a partition you may be deleting more information than you intend.
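A quick sketch of why that happens, assuming an illustrative hypertable `conditions` hash-partitioned on `device_id` (the table and column names are hypothetical, not from this thread):

```sql
-- Create a hypertable with 10 hash partitions on device_id.
SELECT create_hypertable('conditions', 'time',
                         partitioning_column => 'device_id',
                         number_partitions   => 10);

-- Count how many distinct devices land in each chunk; with 1000 devices
-- and 10 partitions, expect roughly 100 devices per space partition,
-- so one chunk holds rows for many devices, not just one.
SELECT tableoid::regclass AS chunk, count(DISTINCT device_id) AS devices
FROM conditions
GROUP BY tableoid
ORDER BY chunk;
```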
Given that similar functionality has been discussed under issue #563 since 2018 with no movement, it looks like "manual chunk drop" may be a stop-gap measure, but it raises a much bigger question: is it SAFE to tinker with chunks? In our use case, we are totally in control of our space partitioning, so we can clearly see which chunks are to be dropped, but would it create some kind of metadata out-of-sync condition for Timescale?
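One way to sanity-check this is to compare TimescaleDB's own catalog view before and after a manual drop. As far as I understand, a plain `DROP TABLE` on a chunk is intercepted by TimescaleDB's DDL event trigger, which removes the matching catalog entries; a sketch using the 2.x information view, with `conditions` again a hypothetical hypertable and the chunk name purely illustrative:

```sql
-- Chunk metadata as TimescaleDB sees it.
SELECT chunk_schema, chunk_name, range_start, range_end
FROM timescaledb_information.chunks
WHERE hypertable_name = 'conditions';

-- Dropping a chunk table directly; TimescaleDB's event trigger should
-- clean up the corresponding catalog rows. Verify by re-running the
-- query above afterwards.
-- DROP TABLE _timescaledb_internal._hyper_1_2_chunk;
```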
You can run a query that groups the hypertable's rows by chunk, where the `WHERE` clause actually goes before the `GROUP BY`. Or you can write the query directly:
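A minimal sketch of that query (the original comment's code did not survive, so the table and column names here are assumptions, again using `conditions` partitioned on `device_id`):

```sql
-- Resolve which chunks physically store rows for one partitioning value;
-- tableoid is the OID of the chunk table each row lives in.
SELECT tableoid::regclass AS chunk
FROM conditions
WHERE device_id = 1        -- the WHERE clause goes before GROUP BY
GROUP BY tableoid;

-- The returned names are internal chunk tables (e.g. something like
-- _timescaledb_internal._hyper_1_2_chunk) that could then be dropped
-- individually, subject to the hash-partitioning caveat above.
```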
Shouldn't this issue be reopened?
Agreed. This feature, alongside other space-partitioning features like #563, should be kept open if the space-partitioning implementation is planned to be expanded in Timescale in the future.