Documentation on how to materialize on S3 buckets
csansoon committed Jun 19, 2024
1 parent e907c53 commit 511726f
Showing 4 changed files with 157 additions and 2 deletions.
6 changes: 6 additions & 0 deletions docs/mint.json
@@ -116,6 +116,12 @@
"sources/trino"
]
},
{
"group": "Advanced",
"pages": [
"sources/advanced/s3-storage"
]
},
{
"group": "Basics",
"pages": [
2 changes: 0 additions & 2 deletions docs/queries/advanced/materialized-queries.mdx
@@ -12,8 +12,6 @@ Materializing queries can be useful in a variety of scenarios:
- **Scale your queries**. Store queries that are too large or expensive to run on your database, and use them as a source for other queries.
- **Share data between sources**. Store tables from different sources, and use them together in a single query, even if they are in different databases!

<Warning>At the moment, materializing only works with the [PostgreSQL connector](/sources/postgresql).</Warning>

## Materializing a query

Almost any query can be materialized.
136 changes: 136 additions & 0 deletions docs/sources/advanced/s3-storage.mdx
@@ -0,0 +1,136 @@
---
title: 'Materialize on S3'
description: 'Configure your project to store materialized files on an S3 bucket'
---

## Introduction

Query materializations are typically stored locally on the Latitude server. However, for large datasets or projects that require frequent updates, storing materialized queries on an S3 bucket can be a better option.

This guide will walk you through configuring your project to store materialized files on an S3 bucket.

## Step 0: Create an S3 bucket

If you already have an available S3 bucket, you can skip this step.

To create a new S3 bucket:

1. Go to the [AWS Management Console](https://console.aws.amazon.com/) and log in to your account.
2. In the navigation pane, click on **S3** under the **Storage** section.
3. Click on **Create bucket**.
4. Enter a name for your bucket and choose the region where you want to store your data.
5. Click on **Create bucket**.
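
If you prefer to script this step, the same bucket can be created with boto3. This is a minimal sketch, assuming Python with boto3 installed and AWS credentials already configured locally; the bucket name and region are placeholders:

```python
import boto3

# Assumes AWS credentials are already configured (e.g., via `aws configure`)
s3 = boto3.client("s3", region_name="eu-west-1")

# For any region other than us-east-1, the region must also be passed
# explicitly as a LocationConstraint
s3.create_bucket(
    Bucket="my-latitude-materializations",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```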

## Step 1: Create an IAM user for the Latitude server

Latitude needs an IAM user with read and write permissions to your S3 bucket.

The best way to manage these permissions is to first create a policy for the Latitude server, and then assign it to a new IAM user. This will ensure that the Latitude server has the necessary permissions to access your S3 bucket, and that you can manage and even revoke these permissions as needed.

<Steps>
<Step title="Create a Policy for the Latitude Server">
1. Go to the [AWS Management Console](https://console.aws.amazon.com/) and log in to your account.
2. In the navigation pane, click on **IAM** in the **Security** section.
3. Click on **Policies** under the **Access management** section.
4. Click on **Create policy**.
5. In the **Policy editor** section, select **JSON** and paste the following policy document:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "LatitudeBucketListing",
      "Effect": "Allow",
      "Action": ["s3:ListAllMyBuckets"],
      "Resource": ["*"]
    },
    {
      "Sid": "LatitudeBucketOperations",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket_name>",
        "arn:aws:s3:::<bucket_name>/*"
      ]
    }
  ]
}
```

Replace `<bucket_name>` with the name of your S3 bucket.

<Note>
This policy allows the Latitude server to list your buckets and to read, write, and delete files in the specified S3 bucket. Latitude requires these permissions to check, store, and retrieve materialized queries.
</Note>

6. Click **Next**, add a name and description for your policy, and click **Create policy**.
</Step>
<Step title="Create an IAM User abd attach the Policy">
1. In the AWS Management Console, go to **IAM** and click on **Users** under the **Access management** section.
2. Click **Create user**.
3. Enter a user name (e.g., "LatitudeServer") and click **Next**.
4. In the **Permissions options** section, select **Attach policies directly**.
5. Search for the policy you created and check the box next to it.
6. Click **Next**, review the information, and click **Create user**.
</Step>
<Step title="Obtain Access Credentials for the IAM User">
1. Navigate to the **Users** section in IAM and find the user you just created.
2. In the **Summary** section, click **Create access key**.
3. Select **Other** as your use case, add a description tag, and click **Next**.
4. Click **Create access key**.
5. Copy the **Access key** and **Secret access key** values, or download the .csv file containing them.

<Note>
If you lose the access key or secret access key, you can always create new keys for the same user and revoke the old ones for security purposes.
</Note>
</Step>
</Steps>
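
The console steps above can also be automated. Below is a minimal boto3 sketch that creates the policy, creates the user, attaches the policy, and issues access keys; the policy document is the same one shown earlier, and the policy and user names are illustrative placeholders:

```python
import json

import boto3

iam = boto3.client("iam")

# The same policy document shown above; replace <bucket_name> with your bucket
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LatitudeBucketListing",
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],
            "Resource": ["*"],
        },
        {
            "Sid": "LatitudeBucketOperations",
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject",
                "s3:ListBucket",
            ],
            "Resource": [
                "arn:aws:s3:::<bucket_name>",
                "arn:aws:s3:::<bucket_name>/*",
            ],
        },
    ],
}

# Create the policy and the user, then attach one to the other
policy = iam.create_policy(
    PolicyName="LatitudeServerPolicy",
    PolicyDocument=json.dumps(policy_document),
)
iam.create_user(UserName="LatitudeServer")
iam.attach_user_policy(
    UserName="LatitudeServer",
    PolicyArn=policy["Policy"]["Arn"],
)

# Issue the access keys that Latitude will use
keys = iam.create_access_key(UserName="LatitudeServer")["AccessKey"]
print(keys["AccessKeyId"], keys["SecretAccessKey"])
```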

## Step 2: Configure Your Latitude Project

Add your IAM user credentials to your Latitude project by modifying the `latitude.json` file:

```json
{
  "name": "<latitude-project-name>",
  "version": "<latitude-project-version>",
  "storage": {
    "type": "s3",
    "bucket": "<your-s3-bucket-name>",
    "region": "<your-s3-bucket-region>",
    "accessKeyId": "<your-access-key-id>",
    "secretAccessKey": "<your-secret-access-key>"
  }
}
```

This configuration will allow the `latitude materialize` command to store your materialized queries in your S3 bucket.

<Note>
You can store your credentials as secrets instead of directly saving them in the `latitude.json` file. To learn more about storing credentials as secrets, see the [Source credentials](/sources/credentials) documentation.
</Note>
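
Before running `latitude materialize`, you may want to verify that the credentials in your configuration can actually write to and read from the bucket. Here is a quick round-trip sketch with boto3; the test object key is arbitrary:

```python
import boto3

s3 = boto3.client(
    "s3",
    region_name="<your-s3-bucket-region>",
    aws_access_key_id="<your-access-key-id>",
    aws_secret_access_key="<your-secret-access-key>",
)

# Write, read back, and delete a small test object
bucket = "<your-s3-bucket-name>"
s3.put_object(Bucket=bucket, Key="latitude-test.txt", Body=b"ok")
body = s3.get_object(Bucket=bucket, Key="latitude-test.txt")["Body"].read()
assert body == b"ok"
s3.delete_object(Bucket=bucket, Key="latitude-test.txt")
print("Bucket credentials look good")
```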

## Step 3: Configure Your DuckDB Source

Although your Latitude project is configured to materialize on S3, you will still need to configure your DuckDB source to read materialized queries from S3. This can be done by adding your S3 credentials to the DuckDB source configuration.

```yaml
type: duckdb
details:
  s3:
    accessKeyId: <your-access-key-id>
    secretAccessKey: <your-secret-access-key>
    region: <your-region>
```
<Note>
You can use different credentials for writing and reading materialized queries. This can be useful for separating development and production environments.
</Note>
For more information about this configuration, see the [DuckDB source configuration](/sources/duckdb) documentation.
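
If you want to check the DuckDB side manually, here is a sketch using the `duckdb` Python package, assuming a recent DuckDB version with the secrets API; the `CREATE SECRET` statement mirrors what the connector sets up for you, and the Parquet path is a hypothetical example, since the layout of materialized files depends on your project:

```python
import duckdb

con = duckdb.connect()
# httpfs provides the s3:// filesystem; INSTALL is a no-op if already present
con.execute("INSTALL httpfs;")
con.execute("LOAD httpfs;")

# Register S3 credentials for this connection, as the connector does
con.execute("""
    CREATE SECRET latitude_s3 (
        TYPE S3,
        KEY_ID '<your-access-key-id>',
        SECRET '<your-secret-access-key>',
        REGION '<your-region>'
    );
""")

# Query a materialized file directly from the bucket
# (the object key below is illustrative)
rows = con.execute(
    "SELECT * FROM read_parquet('s3://<bucket_name>/example.parquet') LIMIT 5"
).fetchall()
print(rows)
```
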
15 changes: 15 additions & 0 deletions docs/sources/duckdb.mdx
@@ -42,6 +42,10 @@ To configure the connection with your DuckDB database follow these steps:
type: duckdb
details:
  url: <your-duckdb-url>
  s3:
    accessKeyId: <your-access-key-id>
    secretAccessKey: <your-secret-access-key>
    region: <your-region>
```
</Step>
<Step title='Done!'></Step>
@@ -51,6 +55,17 @@ To configure the connection with your DuckDB database follow these steps:
- **URL** → (Optional) The location of the DuckDB database file. DuckDB is an embedded database, so instead of connecting over a network, you specify the path to the database file on your local system or a shared filesystem. If omitted, DuckDB starts a new in-memory database that is not persisted once the connection is closed.
- **S3** → (Optional) You can configure the connector to access remote files on S3 buckets. To add access to an S3 bucket, you need all of the following credentials from an authorized IAM user:
  - **accessKeyId** → The access key ID of the IAM user.
  - **secretAccessKey** → The secret access key of the IAM user.
  - **region** → The AWS region of the S3 bucket.
<Note>
This option configures a temporary secret with the IAM user's credentials on the DuckDB connection.
For more information about DuckDB's S3 support, see the [official documentation](https://duckdb.org/docs/extensions/httpfs/s3api.html).
</Note>
## Testing the connection
To test the connection you can:
