From 5be44966d8ebde285e74737a297a806d93b8a448 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Carlos=20Sans=C3=B3n?=
Date: Wed, 19 Jun 2024 14:33:12 +0200
Subject: [PATCH] Documentation on how to materialize on S3 buckets

---
 docs/mint.json                                |   6 +
 .../queries/advanced/materialized-queries.mdx |   2 -
 docs/sources/advanced/s3-storage.mdx          | 136 ++++++++++++++++++
 docs/sources/duckdb.mdx                       |  15 ++
 4 files changed, 157 insertions(+), 2 deletions(-)
 create mode 100644 docs/sources/advanced/s3-storage.mdx

diff --git a/docs/mint.json b/docs/mint.json
index b3cdeacfd..d7299e830 100644
--- a/docs/mint.json
+++ b/docs/mint.json
@@ -116,6 +116,12 @@
         "sources/trino"
       ]
     },
+    {
+      "group": "Advanced",
+      "pages": [
+        "sources/advanced/s3-storage"
+      ]
+    },
     {
       "group": "Basics",
       "pages": [
diff --git a/docs/queries/advanced/materialized-queries.mdx b/docs/queries/advanced/materialized-queries.mdx
index a82a1f421..cee913489 100644
--- a/docs/queries/advanced/materialized-queries.mdx
+++ b/docs/queries/advanced/materialized-queries.mdx
@@ -12,8 +12,6 @@ Materializing queries can be useful in a variety of scenarios:
 - **Scale your queries**. Store queries that are too large or expensive to run on your database, and use them as a source for other queries.
 - **Share data between sources**. Store tables from different sources, and use them together in a single query, even if they are in different databases!
 
-At the moment materializing only work with [Potsgresql connector](/sources/postgresql).
-
 ## Materializing a query
 
 Almost any query can be materialized.
diff --git a/docs/sources/advanced/s3-storage.mdx b/docs/sources/advanced/s3-storage.mdx
new file mode 100644
index 000000000..364621128
--- /dev/null
+++ b/docs/sources/advanced/s3-storage.mdx
@@ -0,0 +1,136 @@
---
title: 'Materialize on S3'
description: 'Configure your project to store materialized files on an S3 bucket'
---

## Introduction

Query materializations are typically stored locally on the Latitude server.
However, for large datasets or projects that require frequent updates, storing materialized queries on an S3 bucket can be a better option.

This guide will walk you through configuring your project to store materialized files on an S3 bucket.

## Step 0: Create an S3 bucket

If you already have an available S3 bucket, you can skip this step.

To create a new S3 bucket:

1. Go to the [AWS Management Console](https://console.aws.amazon.com/) and log in to your account.
2. In the navigation pane, click on **S3** under the **Storage** section.
3. Click on **Create bucket**.
4. Enter a name for your bucket and choose the region where you want to store your data.
5. Click on **Create bucket**.

## Step 1: Create an IAM user for the Latitude server

Latitude needs an IAM user with read and write permissions to your S3 bucket.

The best way to manage these permissions is to first create a policy for the Latitude server, and then assign it to a new IAM user. This ensures that the Latitude server has the permissions it needs to access your S3 bucket, and that you can manage and even revoke these permissions as needed.

1. Go to the [AWS Management Console](https://console.aws.amazon.com/) and log in to your account.
2. In the navigation pane, click on **IAM** in the **Security** section.
3. Click on **Policies** under the **Access management** section.
4. Click on **Create policy**.
5. In the **Policy editor** section, select **JSON** and paste the following policy document:

   ```json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Sid": "LatitudeBucketListing",
         "Effect": "Allow",
         "Action": [ "s3:ListAllMyBuckets" ],
         "Resource": [ "*" ]
       },
       {
         "Sid": "LatitudeBucketOperations",
         "Action": [
           "s3:GetObject",
           "s3:PutObject",
           "s3:DeleteObject",
           "s3:ListBucket"
         ],
         "Resource": [
           "arn:aws:s3:::<bucket-name>",
           "arn:aws:s3:::<bucket-name>/*"
         ],
         "Effect": "Allow"
       }
     ]
   }
   ```

   Replace `<bucket-name>` with the name of your S3 bucket.
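If you script your AWS setup, you can generate the policy document above with the bucket name already filled in. The following is an illustrative sketch (the `latitude_s3_policy` helper is hypothetical, not part of Latitude or AWS):

```python
import json


def latitude_s3_policy(bucket_name: str) -> str:
    """Render the IAM policy for a specific bucket (illustrative helper)."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LatitudeBucketListing",
                "Effect": "Allow",
                "Action": ["s3:ListAllMyBuckets"],
                "Resource": ["*"],
            },
            {
                "Sid": "LatitudeBucketOperations",
                "Action": [
                    "s3:GetObject",
                    "s3:PutObject",
                    "s3:DeleteObject",
                    "s3:ListBucket",
                ],
                # Both ARNs are needed: the bucket itself (for s3:ListBucket)
                # and its objects (for s3:GetObject/PutObject/DeleteObject).
                "Resource": [
                    f"arn:aws:s3:::{bucket_name}",
                    f"arn:aws:s3:::{bucket_name}/*",
                ],
                "Effect": "Allow",
            },
        ],
    }
    return json.dumps(policy, indent=2)


print(latitude_s3_policy("my-latitude-bucket"))
```

You can paste the printed document directly into the AWS policy editor.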
   This policy allows the Latitude server to read, write, and list all files in the specified S3 bucket. Latitude requires these permissions to check, store, and retrieve materialized queries.

6. Click **Next**, add a name and description for your policy, and click **Create policy**.

Next, create an IAM user and attach the policy:

1. In the AWS Management Console, go to **IAM** and click on **Users** under the **Access management** section.
2. Click **Create user**.
3. Enter a user name (e.g., "LatitudeServer") and click **Next**.
4. In the **Permissions options** section, select **Attach policies directly**.
5. Search for the policy you created and check the box next to it.
6. Click **Next**, review the information, and click **Create user**.

Finally, create access keys for the user:

1. Navigate to the **Users** section in IAM and find the user you just created.
2. In the **Summary** section, click **Create access key**.
3. Select **Other** as your use case, add a description tag, and click **Next**.
4. Click **Create access key**.
5. Copy the **Access key** and **Secret access key** values, or download the .csv file containing them.

If you forget the access key and secret access key, you can always create new keys for the same user, and even revoke the old ones for security purposes.

## Step 2: Configure Your Latitude Project

Add your IAM user credentials to your Latitude project by modifying the `latitude.json` file:

```json
{
  "name": <project-name>,
  "version": <version>,
  "storage": {
    "type": "s3",
    "bucket": <bucket-name>,
    "region": <region>,
    "accessKeyId": <access-key-id>,
    "secretAccessKey": <secret-access-key>
  }
}
```

We recommend storing credentials as secrets instead of saving them directly in the `latitude.json` file. To learn more about how to do this, see the [Source credentials](/sources/credentials) documentation.

This configuration will allow the `latitude materialize` command to store your materialized queries in your S3 bucket.
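Before running `latitude materialize`, it can help to sanity-check the `storage` section of your `latitude.json`. Below is a minimal illustrative check; the `check_storage_config` helper is hypothetical and not part of the Latitude CLI:

```python
import json

# Keys the storage section needs for S3 materialization, per the
# configuration shown above.
REQUIRED_STORAGE_KEYS = {"type", "bucket", "region", "accessKeyId", "secretAccessKey"}


def check_storage_config(raw: str) -> list:
    """Return a list of problems found in the `storage` section of a
    latitude.json document. An empty list means the section looks complete."""
    config = json.loads(raw)
    storage = config.get("storage")
    if storage is None:
        return ["missing `storage` section"]
    problems = []
    if storage.get("type") != "s3":
        problems.append('`storage.type` must be "s3" for S3 materialization')
    for key in sorted(REQUIRED_STORAGE_KEYS - storage.keys()):
        problems.append(f"missing `storage.{key}`")
    return problems
```

For example, a config missing `accessKeyId` would be reported before you spend time on a failed materialization run.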
## Step 3: Configure Your DuckDB Source

Although your Latitude project is configured to materialize on S3, you will still need to configure your DuckDB source to read materialized queries from S3. This is done by adding your S3 credentials to the DuckDB source configuration:

```yaml
type: duckdb
details:
  s3:
    accessKeyId: <access-key-id>
    secretAccessKey: <secret-access-key>
    region: <region>
```

You can use different credentials for writing and reading materialized queries. This can be useful for separating development and production environments.

For more information about this configuration, see the [DuckDB source configuration](/sources/duckdb) documentation.
\ No newline at end of file
diff --git a/docs/sources/duckdb.mdx b/docs/sources/duckdb.mdx
index 9b2e303fe..3cdf5ad5e 100644
--- a/docs/sources/duckdb.mdx
+++ b/docs/sources/duckdb.mdx
@@ -42,6 +42,10 @@ To configure the connection with your DuckDB database follow these steps:
   type: duckdb
   details:
     url: <path-to-database>
+    s3:
+      accessKeyId: <access-key-id>
+      secretAccessKey: <secret-access-key>
+      region: <region>
   ```
@@ -51,6 +55,17 @@ To configure the connection with your DuckDB database follow these steps:
 
 - **URL** → (Optional) This refers to the location of the DuckDB database file. DuckDB is an embedded database, so instead of connecting over a network, you specify the path to the database file on your local system or a shared filesystem. This path tells your application exactly where to find the DuckDB database you want to work with. In its absence, duckDB starts a new database in memory that will not get persisted once the connection is closed.
 
+- **S3** → (Optional) You can configure the connector to access remote files on S3 buckets. To add access to an S3 bucket, you need all of the following credentials from an authorized IAM user:
+  - **accessKeyId** → The access key ID of the IAM user.
+  - **secretAccessKey** → The secret access key of the IAM user.
+  - **region** → The region of the S3 bucket.
+
+  This option will configure a temporary secret with the IAM user's credentials on the DuckDB connection.
+
+  For more information about DuckDB's S3 support, see the [official documentation](https://duckdb.org/docs/extensions/httpfs/s3api.html).
+
 ## Testing the connection
 
 To test the connection you can: