Update README.md
Y-- committed Dec 4, 2024
1 parent acc4126 commit 50789a7
Showing 1 changed file with 15 additions and 3 deletions.
18 changes: 15 additions & 3 deletions README.md
@@ -17,7 +17,7 @@ See our [official documentation][docs] for further details.
- `SELECT` queries executed by the DuckDB engine can directly read Postgres tables. (If you only query Postgres tables, you need to run `SET duckdb.force_execution TO true`; see the **IMPORTANT** section above for details.)
- Able to read [data types](https://www.postgresql.org/docs/current/datatype.html) that exist in both Postgres and DuckDB. The following data types are supported: numeric, character, binary, date/time, boolean, uuid, json, and arrays.
- If DuckDB cannot support the query for any reason, execution falls back to Postgres.
- Read and Write support for object storage (AWS S3, Cloudflare R2, or Google GCS):
- Read and Write support for object storage (AWS S3, Azure, Cloudflare R2, or Google GCS):
- Read parquet, CSV and JSON files:
- `SELECT n FROM read_parquet('s3://bucket/file.parquet') AS (n int)`
- `SELECT n FROM read_csv('s3://bucket/file.csv') AS (n int)`
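    Write support works in the other direction with `COPY ... TO` targeting an object-storage URL. A minimal sketch (the bucket path is hypothetical, and credentials are assumed to already be configured via `duckdb.secrets`):

    ```sql
    -- Hypothetical bucket/path; assumes an S3 secret has already been stored
    COPY (SELECT 1 AS n) TO 's3://my-bucket/output.parquet';
    ```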
@@ -124,9 +124,9 @@ CREATE EXTENSION pg_duckdb;

See our [official documentation][docs] for more usage information.

pg_duckdb relies on DuckDB's vectorized execution engine to read and write data to an object storage bucket (AWS S3, Cloudflare R2, or Google GCS) and/or MotherDuck. The following two sections describe how to get started with these destinations.
pg_duckdb relies on DuckDB's vectorized execution engine to read and write data to an object storage bucket (AWS S3, Azure, Cloudflare R2, or Google GCS) and/or MotherDuck. The following two sections describe how to get started with these destinations.

### Object storage bucket (AWS S3, Cloudflare R2, or Google GCS)
### Object storage bucket (AWS S3, Azure, Cloudflare R2, or Google GCS)

Querying data stored in Parquet, CSV, JSON, Iceberg and Delta format can be done with `read_parquet`, `read_csv`, `read_json`, `iceberg_scan` and `delta_scan` respectively.

@@ -157,6 +157,18 @@ Querying data stored in Parquet, CSV, JSON, Iceberg and Delta format can be done
LIMIT 100;
```
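For the table formats, the scan functions take the table's root path. A minimal sketch (the bucket paths are hypothetical, and the corresponding DuckDB extensions may need to be installed first, in the same way the Azure extension is installed below):

```sql
-- Hypothetical paths; install the relevant DuckDB extension first, e.g.:
-- SELECT duckdb.install_extension('delta');
SELECT count(*) FROM delta_scan('s3://my-bucket/delta_table');
SELECT count(*) FROM iceberg_scan('s3://my-bucket/iceberg_table');
```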

Note: for Azure, you will first need to install the Azure extension:
```sql
SELECT duckdb.install_extension('azure');
```

You may then store a secret using the `connection_string` parameter, as follows:
```sql
INSERT INTO duckdb.secrets
(type, connection_string)
VALUES ('Azure', '<your connection string>');
```
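
Once the secret is stored, Azure paths can be queried like any other object-storage location. A sketch (the container and file names are hypothetical, and `azure://` is the URL scheme used by DuckDB's Azure extension):

```sql
-- Hypothetical container/path, read using the stored Azure secret
SELECT n FROM read_parquet('azure://my-container/file.parquet') AS (n int);
```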

### Connect with MotherDuck

pg_duckdb also integrates with [MotherDuck][md]. To enable this support you first need to [generate an access token][md-access-token] and then add the following line to your `postgresql.conf` file:
