chore: release branch for TimescaleDB v2.18.0. #3764

Draft: wants to merge 16 commits into base branch `latest`

89 changes: 89 additions & 0 deletions _partials/_cloud_self_configuration.md
@@ -0,0 +1,89 @@
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

## Policies

### `timescaledb.max_background_workers (int)`

The maximum number of background worker processes allocated to TimescaleDB. Set this to at least 1 +
the number of databases loaded with the TimescaleDB extension in a PostgreSQL
instance. The default value is 16.
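
On self-hosted PostgreSQL you typically change this setting in `postgresql.conf` or with `ALTER SYSTEM`; on $CLOUD_LONG it is managed for you. A minimal sketch, assuming a single TimescaleDB database plus headroom for background jobs:

``` sql
-- Sketch for self-hosted PostgreSQL; a server restart is typically
-- required for this setting to take effect.
ALTER SYSTEM SET timescaledb.max_background_workers = 16;
```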

### `timescaledb.enable_tiered_reads (bool)`

Enable [tiered reads][enabling-data-tiering] so that you can query your data normally when it's distributed across different storage tiers.
Your hypertable is spread across the tiers, so queries and `JOIN`s work and fetch the same data as usual.

By default, tiered data is not accessed by queries. Querying tiered data may slow down query performance
as the data is not stored locally on Timescale's high-performance storage tier.
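
For example, a minimal sketch that enables tiered reads for the current session only:

``` sql
-- Include tiered data in queries for this session
SET timescaledb.enable_tiered_reads = true;

-- Revert to the default behavior
RESET timescaledb.enable_tiered_reads;
```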

## Hypercore features

### `timescaledb.default_hypercore_use_access_method (bool)`

The default value for `hypercore_use_access_method` for functions that have this parameter. This function is in `user` context, meaning that any user can set it for the session. The default value is `false`.

<EarlyAccess />
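
Because the parameter is in `user` context, a session-level `SET` is enough. A minimal sketch:

``` sql
-- Make functions that take hypercore_use_access_method default to the
-- hypercore table access method, for the current session only
SET timescaledb.default_hypercore_use_access_method = true;
```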

## $SERVICE_LONG tuning

### `timescaledb.disable_load (bool)`

Disable loading of the TimescaleDB extension

### `timescaledb.enable_cagg_reorder_groupby (bool)`
Enable group by reordering

### `timescaledb.enable_chunk_append (bool)`
Enable chunk append node

### `timescaledb.enable_constraint_aware_append (bool)`
Enable constraint-aware append scans

### `timescaledb.enable_constraint_exclusion (bool)`
Enable constraint exclusion

### `timescaledb.enable_job_execution_logging (bool)`
Enable job execution logging

### `timescaledb.enable_optimizations (bool)`
Enable TimescaleDB query optimizations

### `timescaledb.enable_ordered_append (bool)`
Enable ordered append scans

### `timescaledb.enable_parallel_chunk_append (bool)`
Enable parallel chunk append node

### `timescaledb.enable_runtime_exclusion (bool)`
Enable runtime chunk exclusion

### `timescaledb.enable_tiered_reads (bool)`

Enable [tiered reads][enabling-data-tiering] so that you can query your data normally when it's distributed across different storage tiers.
Your hypertable is spread across the tiers, so queries and `JOIN`s work and fetch the same data as usual.

By default, tiered data is not accessed by queries. Querying tiered data may slow down query performance
as the data is not stored locally on Timescale's high-performance storage tier.


### `timescaledb.enable_transparent_decompression (bool)`
Enable transparent decompression


### `timescaledb.restoring (bool)`
Stop any background workers that could be performing tasks. This is especially useful when you
migrate data to your [$SERVICE_LONG][pg-dump-and-restore] or [self-hosted database][migrate-entire].
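
For example, a restore session might look like this sketch (the surrounding restore commands are placeholders, not part of the setting itself):

``` sql
-- Pause background workers while restoring
SET timescaledb.restoring = 'on';

-- Restore your data here, for example with psql or pg_restore

-- Resume normal background job processing
SET timescaledb.restoring = 'off';
```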

### `timescaledb.max_cached_chunks_per_hypertable (int)`
Maximum cached chunks

### `timescaledb.max_open_chunks_per_insert (int)`
Maximum open chunks per insert

### `timescaledb.max_tuples_decompressed_per_dml_transaction (int)`

The maximum number of tuples that can be decompressed during an `INSERT`, `UPDATE`, or `DELETE`.
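
A session-level sketch; the exact values shown, and the meaning of `0` as "no limit", are assumptions to illustrate usage:

``` sql
-- Raise the per-transaction decompression limit for a large backfill
SET timescaledb.max_tuples_decompressed_per_dml_transaction = 500000;

-- Or remove the limit entirely (assumed meaning of 0)
SET timescaledb.max_tuples_decompressed_per_dml_transaction = 0;
```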

[enabling-data-tiering]: /use-timescale/:currentVersion:/data-tiering/enabling-data-tiering/
[pg-dump-and-restore]: /migrate/:currentVersion:/pg-dump-and-restore/
[migrate-entire]: /self-hosted/:currentVersion:/migration/entire-database/
1 change: 1 addition & 0 deletions _partials/_deprecated_2_18_0.md
@@ -0,0 +1 @@
<Tag variant="hollow">Old API from [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)</Tag>
6 changes: 1 addition & 5 deletions _partials/_early_access.md
@@ -1,5 +1 @@
<Highlight type="important">
This feature is early access. Early access features might be subject to billing
changes in the future. If you have feedback, reach out to your customer success
manager, or [contact us](https://www.timescale.com/contact/).
</Highlight>
<Tag variant="hollow">Early access: TimescaleDB v2.18.0</Tag>
21 changes: 21 additions & 0 deletions _partials/_hypercore-conversion-overview.md
@@ -0,0 +1,21 @@
When you convert chunks from the rowstore to the columnstore, multiple records are grouped into a single row.
The columns of this row hold an array-like structure that stores all the data. For example, data in the following
rowstore chunk:

| Timestamp | Device ID | Device Type | CPU |Disk IO|
|---|---|---|---|---|
|12:00:01|A|SSD|70.11|13.4|
|12:00:01|B|HDD|69.70|20.5|
|12:00:02|A|SSD|70.12|13.2|
|12:00:02|B|HDD|69.69|23.4|
|12:00:03|A|SSD|70.14|13.0|
|12:00:03|B|HDD|69.70|25.2|

Is converted and compressed into arrays in a row in the columnstore:

|Timestamp|Device ID|Device Type|CPU|Disk IO|
|-|-|-|-|-|
|[12:00:01, 12:00:01, 12:00:02, 12:00:02, 12:00:03, 12:00:03]|[A, B, A, B, A, B]|[SSD, HDD, SSD, HDD, SSD, HDD]|[70.11, 69.70, 70.12, 69.69, 70.14, 69.70]|[13.4, 20.5, 13.2, 23.4, 13.0, 25.2]|

Because a single row takes up less disk space, you can reduce your chunk size by more than 90% and also
speed up your queries. This saves on storage costs and keeps your queries operating at lightning speed.
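
To see the effect on an existing hypertable, you can compare sizes before and after conversion. This sketch uses the long-standing compression stats function; the `metrics` table name is an example:

``` sql
SELECT pg_size_pretty(before_compression_total_bytes) AS before,
       pg_size_pretty(after_compression_total_bytes)  AS after
FROM hypertable_compression_stats('metrics');
```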
44 changes: 44 additions & 0 deletions _partials/_hypercore_manual_workflow.md
@@ -0,0 +1,44 @@
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

1. **Stop the jobs that are automatically adding chunks to the columnstore**

Retrieve the list of jobs from the [timescaledb_information.jobs][informational-views] view
to find the job you need to [alter_job][alter_job].

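For example, a query along these lines lists candidate jobs (filtering on `proc_name = 'policy_compression'` is an assumption about how your columnstore policy job is named):

``` sql
SELECT job_id, proc_name, hypertable_name
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression';
```

Use the `job_id` you find as `JOB_ID` below:
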
``` sql
SELECT alter_job(JOB_ID, scheduled => false);
```

1. **Convert the chunk you want to update back to the rowstore**

``` sql
CALL convert_to_rowstore('_timescaledb_internal._hyper_2_2_chunk');
```

1. **Update the data in the chunk you moved to the rowstore**

Best practice is to structure your [INSERT][insert] statement to include appropriate
partition key values, such as the timestamp. TimescaleDB adds the data to the correct chunk:

``` sql
INSERT INTO metrics (time, value)
VALUES ('2025-01-01T00:00:00', 42);
```

1. **Convert the updated chunks back to the columnstore**

``` sql
CALL convert_to_columnstore('_timescaledb_internal._hyper_1_2_chunk');
```

1. **Restart the jobs that are automatically converting chunks to the columnstore**

``` sql
SELECT alter_job(JOB_ID, scheduled => true);
```

[alter_job]: /api/:currentVersion:/actions/alter_job/
[informational-views]: /api/:currentVersion:/informational-views/jobs/
[insert]: /use-timescale/:currentVersion:/write-data/insert/
[setup-hypercore]: /use-timescale/:currentVersion:/hypercore/real-time-analytics-in-hypercore/
[compression_alter-table]: /api/:currentVersion:/hypercore/alter_table/
96 changes: 96 additions & 0 deletions _partials/_hypercore_policy_workflow.md
@@ -0,0 +1,96 @@
import EarlyAccess from "versionContent/_partials/_early_access.mdx";

1. **Connect to your $SERVICE_LONG**

In [$CONSOLE][services-portal], open an [SQL editor][in-console-editors]. You can also connect to your service using [psql][connect-using-psql].

1. **Enable columnstore on a hypertable**

Create a [job][job] that automatically moves chunks in a hypertable to the columnstore at a specific time interval.
By default, your table is ordered by the time column. For efficient queries on columnstore data, set
`timescaledb.segmentby` to the column you will use most often to filter your data:

* [Use `ALTER TABLE` for a hypertable][alter_table_hypercore]
```sql
ALTER TABLE stocks_real_time SET (
timescaledb.enable_columnstore = true,
timescaledb.segmentby = 'symbol');
```
* [Use `ALTER MATERIALIZED VIEW` for a continuous aggregate][compression_continuous-aggregate]
```sql
ALTER MATERIALIZED VIEW stock_candlestick_daily SET (
timescaledb.enable_columnstore = true,
timescaledb.segmentby = 'symbol' );
```
Before you say `huh`, a continuous aggregate is a specialized hypertable.

1. **Add a policy to convert chunks to the columnstore at a specific time interval**

For example, 60 days after the data was added to the table:
``` sql
CALL add_columnstore_policy('older_stock_prices', after => INTERVAL '60d');
```
See [add_columnstore_policy][add_columnstore_policy].

1. **View the policies that you set or the policies that already exist**

``` sql
SELECT * FROM timescaledb_information.jobs
WHERE proc_name='policy_compression';
```
See [timescaledb_information.jobs][informational-views].
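
To check which chunks a policy has already moved to the columnstore, you can also query the chunks view (a sketch; `is_compressed` reflects the pre-hypercore naming):

``` sql
SELECT chunk_name, is_compressed
FROM timescaledb_information.chunks
WHERE hypertable_name = 'stocks_real_time';
```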

1. **Pause a columnstore policy**

If you need to modify or add a lot of data to a chunk in the columnstore, best practice is to stop any jobs moving
chunks to the columnstore, [convert the chunk back to the rowstore][convert_to_rowstore], then modify the data.
After the update, [convert the chunk to the columnstore][convert_to_columnstore] and restart the jobs.

``` sql
SELECT * FROM timescaledb_information.jobs
WHERE proc_name = 'policy_compression' AND hypertable_name = 'stocks_real_time';

-- Select the JOB_ID from the results

SELECT alter_job(JOB_ID, scheduled => false);
```
See [alter_job][alter_job].

1. **Restart a columnstore policy**

``` sql
SELECT alter_job(JOB_ID, scheduled => true);
```
See [alter_job][alter_job].

1. **Remove a columnstore policy**

``` sql
CALL remove_columnstore_policy('older_stock_prices');
```
See [remove_columnstore_policy][remove_columnstore_policy].

1. **Disable columnstore**

If your table has chunks in the columnstore, you have to
[convert the chunks back to the rowstore][convert_to_rowstore] before you disable the columnstore.
``` sql
ALTER TABLE stocks_real_time SET (timescaledb.enable_columnstore = false);
```
See [alter_table_hypercore][alter_table_hypercore].


[job]: /api/:currentVersion:/actions/add_job/
[alter_table_hypercore]: /api/:currentVersion:/hypercore/alter_table/
[compression_continuous-aggregate]: /api/:currentVersion:/hypercore/alter_materialized_view/
[convert_to_rowstore]: /api/:currentVersion:/hypercore/convert_to_rowstore/
[convert_to_columnstore]: /api/:currentVersion:/hypercore/convert_to_columnstore/
[informational-views]: /api/:currentVersion:/informational-views/jobs/
[add_columnstore_policy]: /api/:currentVersion:/hypercore/add_columnstore_policy/
[hypercore_workflow]: /api/:currentVersion:/hypercore/#hypercore-workflow
[alter_job]: /api/:currentVersion:/actions/alter_job/
[remove_columnstore_policy]: /api/:currentVersion:/hypercore/remove_columnstore_policy/
[in-console-editors]: /getting-started/:currentVersion:/run-queries-from-console/
[services-portal]: https://console.cloud.timescale.com/dashboard/services
[connect-using-psql]: /use-timescale/:currentVersion:/integrations/query-admin/psql#connect-to-your-service
[insert]: /use-timescale/:currentVersion:/write-data/insert/
2 changes: 1 addition & 1 deletion _partials/_integration-prereqs.md
@@ -6,4 +6,4 @@ Before integrating:

[create-service]: /getting-started/:currentVersion:/services/
[enable-timescaledb]: /self-hosted/:currentVersion:/install/
[connection-info]: /use-timescale/:currentVersion:/integrations/find-connection-details/
[connection-info]: /use-timescale/:currentVersion:/integrations/find-connection-details/
3 changes: 2 additions & 1 deletion _partials/_multi-node-deprecation.md
@@ -1,9 +1,10 @@
<Highlight type="warning">

[Multi-node support is deprecated][multi-node-deprecation].
[Multi-node support is sunsetted][multi-node-deprecation].

TimescaleDB v2.13 is the last release that includes multi-node support for PostgreSQL
versions 13, 14, and 15.

</Highlight>

[multi-node-deprecation]: https://github.com/timescale/timescaledb/blob/main/docs/MultiNodeDeprecation.md
8 changes: 8 additions & 0 deletions _partials/_prereqs-cloud-and-self.md
@@ -0,0 +1,8 @@
To follow the procedure on this page, you need to:

* Create a [target $SERVICE_LONG][create-service]

This procedure also works for [self-hosted $TIMESCALE_DB][enable-timescaledb].

[create-service]: /getting-started/:currentVersion:/services/
[enable-timescaledb]: /self-hosted/:currentVersion:/install/
5 changes: 5 additions & 0 deletions _partials/_prereqs-cloud-only.md
@@ -0,0 +1,5 @@
To follow the procedure on this page, you need to:

* Create a [target $SERVICE_LONG][create-service]

[create-service]: /getting-started/:currentVersion:/services/
1 change: 1 addition & 0 deletions _partials/_since_2_18_0.md
@@ -0,0 +1 @@
<Tag variant="hollow">Since [TimescaleDB v2.18.0](https://github.com/timescale/timescaledb/releases/tag/2.18.0)</Tag>
4 changes: 2 additions & 2 deletions _partials/_usage-based-storage-intro.md
@@ -1,9 +1,9 @@
$CLOUD_LONG charges are based on the amount of storage you use. You don't pay for
fixed storage size, and you don't need to worry about scaling disk size as your
data grows; we handle it all for you. To reduce your data costs further,
use [compression][compression], a [data retention policy][data-retention], and
use [Hypercore][hypercore], a [data retention policy][data-retention], and
[tiered storage][data-tiering].

[compression]: /use-timescale/:currentVersion:/compression/about-compression
[hypercore]: /api/:currentVersion:/hypercore/
[data-retention]: /use-timescale/:currentVersion:/data-retention/
[data-tiering]: /use-timescale/:currentVersion:/data-tiering/
45 changes: 41 additions & 4 deletions about/changelog.md
@@ -6,9 +6,46 @@ keywords: [changelog, upgrades, updates, releases]

# Changelog

All the latest features and updates to Timescale products.
All the latest features and updates to Timescale products.

## 🤖 TimescaleDB v2.18 and SQL Assistant Improvements in Data Mode and PopSQL

<Label type="date">February 6, 2025</Label>

### TimescaleDB v2.18 - dense indexes in the columnstore and query vectorization improvements
Starting this week, all new services created on Timescale Cloud use [TimescaleDB v2.18](https://github.com/timescale/timescaledb/releases/tag/2.18.0). Existing services will be upgraded gradually during their maintenance window.

Highlighted features in TimescaleDB v2.18.0 include:

* The ability to add dense indexes (btree and hash) to the columnstore through the new hypercore table access method (see the sketch after this list).
* Significant performance improvements through vectorization (SIMD) for aggregations that use a `GROUP BY` on a single column and/or a `FILTER` clause when querying the columnstore.
* Hypertables support triggers for transition tables, which is one of the most upvoted community feature requests.
* Updated methods to manage Timescale's hybrid row-columnar store (hypercore). These methods highlight columnstore usage. The columnstore includes an optimized columnar format as well as compression.
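
As a rough sketch of the first two items, assuming the `hypercore` access method name from the release notes and an example `metrics` hypertable:

```sql
-- Use the hypercore table access method for the hypertable's chunks,
-- then add a btree index that is also usable on columnstore data.
ALTER TABLE metrics SET ACCESS METHOD hypercore;
CREATE INDEX ON metrics (device_id);
```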

### SQL Assistant improvements

We made a few improvements to SQL Assistant:

**Dedicated SQL Assistant threads** 🧵

Each query, notebook, and dashboard now gets its own conversation thread, keeping your chats organized.

![Dedicated threads](https://assets.timescale.com/docs/images/timescale-cloud-sql-assistant-threads.gif)

**Delete messages** ❌

Made a typo? Asked the wrong question? You can now delete individual messages from your thread to keep the conversation clean and relevant.

![Delete messages in SQL Assistant threads](https://assets.timescale.com/docs/images/timescale-cloud-sql-assistant-delete-messages.png)

**Support for OpenAI `o3-mini`** ⚡

We’ve added support for OpenAI’s latest `o3-mini` model, bringing faster response times and improved reasoning for SQL queries.

![SQL Assistant o3 mini](https://assets.timescale.com/docs/images/timescale-cloud-sql-assistant-o3-mini.png)

## 🌐 IP Allowlists in Data Mode and PopSQL

<Label type="date">January 31, 2025</Label>

For enhanced network security, you can now also create IP allowlists in the $CONSOLE data mode and PopSQL. Similar to the [ops mode IP allowlists][ops-mode-allow-list], this feature grants access to your data only to certain IP addresses. For example, you might require your employees to use a VPN and add your VPN static egress IP to the allowlist.
@@ -209,7 +246,7 @@ In the **Jobs** section of the **Explorer**, users can now see the status (compl
### Pgai Vectorizer: vector embeddings as database indexes (early access)
This early access feature enables you to automatically create, update, and maintain embeddings as your data changes. Just like an index, Timescale handles all the complexity: syncing, versioning, and cleanup happen automatically.
This means no manual tracking, zero maintenance burden, and the freedom to rapidly experiment with different embedding models and chunking strategies without building new pipelines.
Navigate to the AI tab in your service overview and follow the instructions to add your OpenAI API key and set up your first vectorizer or read our [guide to automate embedding generation with pgai Vectorizer](https://github.com/timescale/pgai/blob/main/docs/vectorizer.md) for more details.
Navigate to the AI tab in your service overview and follow the instructions to add your OpenAI API key and set up your first vectorizer or read our [guide to automate embedding generation with pgai Vectorizer](https://github.com/timescale/pgai/blob/main/docs/vectorizer/overview.md) for more details.

![Vectorizer setup](https://s3.amazonaws.com/assets.timescale.com/docs/images/vectorizer-setup.png)

@@ -606,7 +643,7 @@ select ollama_generate
;
```

To learn more, see the [pgai Ollama documentation](https://github.com/timescale/pgai/blob/main/docs/ollama.md).
To learn more, see the [pgai Ollama documentation](https://github.com/timescale/pgai/blob/main/docs/model_calling/ollama.md).

## 🧙 Compression Wizard

@@ -710,4 +747,4 @@ To learn more, see the [postgresql-unit documentation](https://github.com/df7cb/
[ops-mode-allow-list]: /about/:currentVersion:/changelog/#-ip-allow-lists
[popsql-web]: https://app.popsql.com/login
[popsql-desktop]: https://popsql.com/download
[console]: https://console.cloud.timescale.com/dashboard/services
[console]: https://console.cloud.timescale.com/dashboard/services
2 changes: 1 addition & 1 deletion about/release-notes.md
@@ -16,7 +16,7 @@ notes about our downloadable products, see:
* [pgspot](https://github.com/timescale/pgspot/releases) - spot vulnerabilities in PostgreSQL extension scripts.
* [live-migration](https://hub.docker.com/r/timescale/live-migration/tags) - a Docker image to migrate data to a Timescale Cloud service.


This documentation is based on TimescaleDB v2.18.0 and compatible products.

<Highlight type="note">
