From 50789a75e9a5a27e87836bd1dd2fdea381cec978 Mon Sep 17 00:00:00 2001
From: Yves
Date: Wed, 4 Dec 2024 15:21:55 +0100
Subject: [PATCH] Update README.md

---
 README.md | 18 +++++++++++++++---
 1 file changed, 15 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index 6ccd2ae3..b09526b9 100644
--- a/README.md
+++ b/README.md
@@ -17,7 +17,7 @@ See our [official documentation][docs] for further details.
 - `SELECT` queries executed by the DuckDB engine can directly read Postgres tables. (If you only query Postgres tables you need to run `SET duckdb.force_execution TO true`, see the **IMPORTANT** section above for details)
 - Able to read [data types](https://www.postgresql.org/docs/current/datatype.html) that exist in both Postgres and DuckDB. The following data types are supported: numeric, character, binary, date/time, boolean, uuid, json, and arrays.
 - If DuckDB cannot support the query for any reason, execution falls back to Postgres.
-- Read and Write support for object storage (AWS S3, Cloudflare R2, or Google GCS):
+- Read and Write support for object storage (AWS S3, Azure, Cloudflare R2, or Google GCS):
   - Read parquet, CSV and JSON files:
     - `SELECT n FROM read_parquet('s3://bucket/file.parquet') AS (n int)`
     - `SELECT n FROM read_csv('s3://bucket/file.csv') AS (n int)`
@@ -124,9 +124,9 @@ CREATE EXTENSION pg_duckdb;
 
 See our [official documentation][docs] for more usage information.
 
-pg_duckdb relies on DuckDB's vectorized execution engine to read and write data to object storage bucket (AWS S3, Cloudflare R2, or Google GCS) and/or MotherDuck. The follow two sections describe how to get started with these destinations.
+pg_duckdb relies on DuckDB's vectorized execution engine to read and write data to object storage buckets (AWS S3, Azure, Cloudflare R2, or Google GCS) and/or MotherDuck. The following two sections describe how to get started with these destinations.
 
-### Object storage bucket (AWS S3, Cloudflare R2, or Google GCS)
+### Object storage bucket (AWS S3, Azure, Cloudflare R2, or Google GCS)
 
 Querying data stored in Parquet, CSV, JSON, Iceberg and Delta format can be done with `read_parquet`, `read_csv`, `read_json`, `iceberg_scan` and `delta_scan` respectively.
 
@@ -157,6 +157,18 @@ Querying data stored in Parquet, CSV, JSON, Iceberg and Delta format can be done
     LIMIT 100;
 ```
 
+Note: for Azure, you will first need to install the Azure extension:
+```sql
+SELECT duckdb.install_extension('azure');
+```
+
+You can then store a secret using the `connection_string` parameter as follows:
+```sql
+INSERT INTO duckdb.secrets
+(type, connection_string)
+VALUES ('Azure', '');
+```
+
 ### Connect with MotherDuck
 
 pg_duckdb also integrates with [MotherDuck][md]. To enable this support you first need to [generate an access token][md-access-token] and then add the following line to your `postgresql.conf` file:
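With the `azure` extension installed and an Azure secret stored as the patch describes, a read against Azure blob storage might look like the following sketch. This is an assumption for illustration, not part of the patch: the container and file names are hypothetical, and the `az://` URL scheme is the one used by DuckDB's Azure extension.

```sql
-- Hypothetical container and path; assumes the 'azure' extension is
-- installed and an Azure secret has been stored in duckdb.secrets.
SELECT n
FROM read_parquet('az://my-container/data/file.parquet') AS (n int)
LIMIT 100;
```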