
Workaround for affected versions: 2.17.0, 2.17.1, 2.17.2 and 2.18.0 #7643

Closed
philkra opened this issue Feb 1, 2025 · 1 comment
philkra commented Feb 1, 2025

Description

Executing a DELETE statement on a hypertable with compression enabled deletes all rows in the affected compressed chunk — including rows not matched by the WHERE clause — when a compress_segmentby column appears in the WHERE clause together with the <> operator.

Example

ALTER TABLE public.my_hyper_table SET (
  timescaledb.compress,
  timescaledb.compress_segmentby = 'my_segment_by'
);

-- This statement will delete all records from the affected chunks
DELETE FROM my_hyper_table WHERE my_segment_by <> 'Some value';

The = operator is not affected by this issue.

Workaround

To prevent this, set the following GUC:

SET timescaledb.enable_compressed_direct_batch_delete TO false;
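Note that a plain SET applies only to the current session. As a sketch using standard PostgreSQL configuration mechanisms (the database name my_database is a placeholder; this assumes the GUC is settable at these levels, which is not stated in the original report), the setting can also be persisted:

```sql
-- Persist for all new connections to one database
-- (standard PostgreSQL mechanism; my_database is a placeholder name)
ALTER DATABASE my_database SET timescaledb.enable_compressed_direct_batch_delete TO false;

-- Or instance-wide; requires superuser and a configuration reload
ALTER SYSTEM SET timescaledb.enable_compressed_direct_batch_delete TO false;
SELECT pg_reload_conf();
```

Existing sessions keep their current value until they reconnect or run SET themselves.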

We apologize for the inconvenience and are working on a bug fix, which will be rolled out in the upcoming 2.18.1 release. Updates will be posted on this ticket.

TimescaleDB version affected

  • 2.17.0
  • 2.17.1
  • 2.17.2
  • 2.18.0
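To check whether an installation runs one of the affected versions, the installed extension version can be queried from the standard PostgreSQL catalog (this query is not part of the original report, just a convenience):

```sql
-- Report the installed TimescaleDB extension version
SELECT extversion FROM pg_extension WHERE extname = 'timescaledb';
```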
@philkra philkra added the bug label Feb 1, 2025

philkra commented Feb 1, 2025

closing as duplicate of #7644

@philkra philkra closed this as completed Feb 1, 2025