Summary | This document contains [DuckDB's official documentation and guides](https://duckdb.org/) in a single-file easy-to-search form.
If you find any issues, please report them [as a GitHub issue](https://github.com/duckdb/duckdb-web/issues).
Contributions are very welcome in the form of [pull requests](https://github.com/duckdb/duckdb-web/pulls).
If you are considering submitting a contribution to the documentation, please consult our [contributor guide](https://github.com/duckdb/duckdb-web/blob/main/CONTRIBUTING.md).
Code repositories:
* DuckDB source code: [github.com/duckdb/duckdb](https://github.com/duckdb/duckdb)
* DuckDB documentation source code: [github.com/duckdb/duckdb-web](https://github.com/duckdb/duckdb-web) | 184 |
|||
Connect | Connect | Connect or Create a Database | To use DuckDB, you must first create a connection to a database. The exact syntax varies between the [client APIs](#docs:api:overview) but it typically involves passing an argument to configure persistence. | 43 |
|
Connect | Connect | Persistence | DuckDB can operate in both persistent mode, where the data is saved to disk, and in in-memory mode, where the entire data set is stored in the main memory.
> **Tip.** Both persistent and in-memory databases use spilling to disk to facilitate larger-than-memory workloads (i.e., out-of-core processing).
#### Persistent Database {#docs:connect:overview::persistent-database}
To create or open a persistent database, set the path of the database file, e.g., `my_database.duckdb`, when creating the connection.
This path can point to an existing database or to a file that does not yet exist and DuckDB will open or create a database at that location as needed.
The file may have an arbitrary extension, but `.db` and `.duckdb` are two common choices, with `.ddb` also used occasionally.
Starting with v0.10, DuckDB's storage format is [backwards-compatible](#docs:internals:storage::backward-compatibility), i.e., DuckDB is able to read database files produced by older versions of DuckDB.
For example, DuckDB v0.10 can read and operate on files created by the previous DuckDB version, v0.9.
For more details on DuckDB's storage format, see the [storage page](#docs:internals:storage).
#### In-Memory Database {#docs:connect:overview::in-memory-database}
DuckDB can operate in in-memory mode. In most clients, this can be activated by passing the special value `:memory:` as the database file or omitting the database file argument. In in-memory mode, no data is persisted to disk; therefore, all data is lost when the process finishes. | 362 |
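As an illustrative SQL sketch (the file name `my_database.duckdb` is hypothetical), a persistent database can also be attached from within an in-memory session, so both modes can be combined:
```sql
-- Attach a persistent database file to the current (in-memory) session.
ATTACH 'my_database.duckdb' AS my_db;
-- Create a table in the persistent database and make it the default database.
CREATE TABLE my_db.people (id INTEGER, name VARCHAR);
USE my_db;
```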
|
Connect | Concurrency | Handling Concurrency | DuckDB has two configurable options for concurrency:
1. One process can both read and write to the database.
2. Multiple processes can read from the database, but no processes can write ([`access_mode = 'READ_ONLY'`](#docs:configuration:overview::configuration-reference)).
When using option 1, DuckDB supports multiple writer threads using a combination of [MVCC (Multi-Version Concurrency Control)](https://en.wikipedia.org/wiki/Multiversion_concurrency_control) and optimistic concurrency control (see [Concurrency within a Single Process](#::concurrency-within-a-single-process)), but all within that single writer process. The reason for this concurrency model is to allow for the caching of data in RAM for faster analytical queries, rather than going back and forth to disk during each query. It also allows the caching of function pointers, the database catalog, and other items so that subsequent queries on the same connection are faster.
> DuckDB is optimized for bulk operations, so executing many small transactions is not a primary design goal. | 214 |
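For illustration, option 2 can be expressed in SQL via `ATTACH` with the `READ_ONLY` option (a sketch; `my_database.duckdb` and `my_table` are hypothetical names):
```sql
-- Open an existing database file such that this process can only read from it.
ATTACH 'my_database.duckdb' AS db (READ_ONLY);
SELECT count(*) FROM db.my_table;
```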
|
Connect | Concurrency | Concurrency within a Single Process | DuckDB supports concurrency within a single process according to the following rules. As long as there are no write conflicts, multiple concurrent writes will succeed. Appends will never conflict, even on the same table. Multiple threads can also simultaneously update separate tables or separate subsets of the same table. Optimistic concurrency control comes into play when two threads attempt to edit (update or delete) the same row at the same time. In that situation, the second thread to attempt the edit will fail with a conflict error. | 102 |
|
Connect | Concurrency | Writing to DuckDB from Multiple Processes | Writing to DuckDB from multiple processes is not supported automatically and is not a primary design goal (see [Handling Concurrency](#::handling-concurrency)).
If multiple processes must write to the same file, several design patterns are possible, but would need to be implemented in application logic. For example, each process could acquire a cross-process mutex lock, then open the database in read/write mode and close it when the query is complete. Instead of using a mutex lock, each process could instead retry the connection if another process is already connected to the database (being sure to close the connection upon query completion). Another alternative would be to do multi-process transactions on a MySQL, PostgreSQL, or SQLite database, and use DuckDB's [MySQL](#docs:extensions:mysql), [PostgreSQL](#docs:extensions:postgres), or [SQLite](#docs:extensions:sqlite) extensions to execute analytical queries on that data periodically.
Additional options include writing data to Parquet files and using DuckDB's ability to [read multiple Parquet files](#docs:data:parquet:overview), taking a similar approach with [CSV files](#docs:data:csv:overview), or creating a web server to receive requests and manage reads and writes to DuckDB. | 254 |
|
Connect | Concurrency | Optimistic Concurrency Control | DuckDB uses [optimistic concurrency control](https://en.wikipedia.org/wiki/Optimistic_concurrency_control), an approach generally considered to be the best fit for read-intensive analytical database systems as it speeds up read query processing. As a result, any transactions that modify the same rows at the same time will cause a transaction conflict error:
```console
Transaction conflict: cannot update a table that has been altered!
```
> **Tip.** A common workaround when a transaction conflict is encountered is to rerun the transaction. | 108 |
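A minimal sketch of this pattern with an explicit transaction (the `tbl` table and its columns are hypothetical); if the transaction fails with a conflict, it is simply executed again:
```sql
BEGIN TRANSACTION;
-- If a concurrent transaction already modified this row, this statement (or the
-- subsequent COMMIT) raises a transaction conflict error; rerun the whole block.
UPDATE tbl SET counter = counter + 1 WHERE id = 42;
COMMIT;
```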
|
Data Import | Importing Data | The first step to using a database system is to insert data into that system.
DuckDB can directly connect to [many popular data sources](#docs:data:data_sources) and offers several data ingestion methods that allow you to easily and efficiently fill up the database.
On this page, we provide an overview of these methods so you can select which one is best suited for your use case. | 79 |
||
Data Import | Importing Data | `INSERT` Statements | `INSERT` statements are the standard way of loading data into a database system. They are suitable for quick prototyping, but should be avoided for bulk loading as they have significant per-row overhead.
```sql
INSERT INTO people VALUES (1, 'Mark');
```
For a more detailed description, see the [page on the `INSERT` statement](#docs:data:insert). | 77 |
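If `INSERT` statements must be used for slightly larger loads, batching several rows into a single statement reduces the per-statement overhead. A sketch, reusing the `people` table from above (the extra values are illustrative):
```sql
INSERT INTO people VALUES
    (1, 'Mark'),
    (2, 'Hannes'),
    (3, 'Pedro');
```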
|
Data Import | Importing Data | CSV Loading | Data can be efficiently loaded from CSV files using several methods. The simplest is to use the CSV file's name:
```sql
SELECT * FROM 'test.csv';
```
Alternatively, use the [`read_csv` function](#docs:data:csv:overview) to pass along options:
```sql
SELECT * FROM read_csv('test.csv', header = false);
```
Or use the [`COPY` statement](#docs:sql:statements:copy::copy--from):
```sql
COPY tbl FROM 'test.csv' (HEADER false);
```
It is also possible to read data directly from **compressed CSV files** (e.g., compressed with [gzip](https://www.gzip.org/)):
```sql
SELECT * FROM 'test.csv.gz';
```
DuckDB can create a table from the loaded data using the [`CREATE TABLE ... AS SELECT` statement](#docs:sql:statements:create_table::create-table--as-select-ctas):
```sql
CREATE TABLE test AS
SELECT * FROM 'test.csv';
```
For more details, see the [page on CSV loading](#docs:data:csv:overview). | 239 |
|
Data Import | Importing Data | Parquet Loading | Parquet files can be efficiently loaded and queried using their filename:
```sql
SELECT * FROM 'test.parquet';
```
Alternatively, use the [`read_parquet` function](#docs:data:parquet:overview):
```sql
SELECT * FROM read_parquet('test.parquet');
```
Or use the [`COPY` statement](#docs:sql:statements:copy::copy--from):
```sql
COPY tbl FROM 'test.parquet';
```
For more details, see the [page on Parquet loading](#docs:data:parquet:overview). | 121 |
|
Data Import | Importing Data | JSON Loading | JSON files can be efficiently loaded and queried using their filename:
```sql
SELECT * FROM 'test.json';
```
Alternatively, use the [`read_json_auto` function](#docs:data:json:overview):
```sql
SELECT * FROM read_json_auto('test.json');
```
Or use the [`COPY` statement](#docs:sql:statements:copy::copy--from):
```sql
COPY tbl FROM 'test.json';
```
For more details, see the [page on JSON loading](#docs:data:json:overview). | 114 |
|
Data Import | Importing Data | Appender | In several APIs (C, C++, Go, Java, and Rust), the [Appender](#docs:data:appender) can be used as an alternative for bulk data loading.
This class can be used to efficiently add rows to the database system without using SQL statements. | 56 |
|
Data Import | Data Sources | DuckDB can read from a variety of data sources, including file formats, network protocols, and database systems:
* [AWS S3 buckets and storage with S3-compatible API](#docs:extensions:httpfs:s3api)
* [Azure Blob Storage](#docs:extensions:azure)
* [Cloudflare R2](#docs:guides:network_cloud_storage:cloudflare_r2_import)
* [CSV](#docs:data:csv:overview)
* [Delta Lake](#docs:extensions:delta)
* Excel (via the [`spatial` extension](#docs:extensions:spatial:overview)): see the [Excel Import](#docs:guides:file_formats:excel_import) and [Excel Export](#docs:guides:file_formats:excel_export) guides
* [httpfs](#docs:extensions:httpfs:https)
* [Iceberg](#docs:extensions:iceberg)
* [JSON](#docs:data:json:overview)
* [MySQL](#docs:extensions:mysql)
* [Parquet](#docs:data:parquet:overview)
* [PostgreSQL](#docs:extensions:postgres)
* [SQLite](#docs:extensions:sqlite) | 246 |
||
Data Import | CSV Files | Examples | The following examples use the [`flights.csv`](https://duckdb.org/data/flights.csv) file.
Read a CSV file from disk, auto-infer options:
```sql
SELECT * FROM 'flights.csv';
```
Use the `read_csv` function with custom options:
```sql
SELECT *
FROM read_csv('flights.csv',
delim = '|',
header = true,
columns = {
'FlightDate': 'DATE',
'UniqueCarrier': 'VARCHAR',
'OriginCityName': 'VARCHAR',
'DestCityName': 'VARCHAR'
});
```
Read a CSV from stdin, auto-infer options:
```bash
cat flights.csv | duckdb -c "SELECT * FROM read_csv('/dev/stdin')"
```
Read a CSV file into a table:
```sql
CREATE TABLE ontime (
FlightDate DATE,
UniqueCarrier VARCHAR,
OriginCityName VARCHAR,
DestCityName VARCHAR
);
COPY ontime FROM 'flights.csv';
```
Alternatively, create a table without specifying the schema manually using a [`CREATE TABLE .. AS SELECT` statement](#docs:sql:statements:create_table::create-table--as-select-ctas):
```sql
CREATE TABLE ontime AS
SELECT * FROM 'flights.csv';
```
We can use the [`FROM`-first syntax](#docs:sql:query_syntax:from::from-first-syntax) to omit `SELECT *`.
```sql
CREATE TABLE ontime AS
FROM 'flights.csv';
```
Write the result of a query to a CSV file.
```sql
COPY (SELECT * FROM ontime) TO 'flights.csv' WITH (HEADER, DELIMITER '|');
```
If we serialize the entire table, we can simply refer to it with its name.
```sql
COPY ontime TO 'flights.csv' WITH (HEADER, DELIMITER '|');
``` | 388 |
|
Data Import | CSV Files | CSV Loading | CSV loading, i.e., importing CSV files to the database, is a very common, and yet surprisingly tricky, task. While CSVs seem simple on the surface, there are a lot of inconsistencies found within CSV files that can make loading them a challenge. CSV files come in many different varieties, are often corrupt, and do not have a schema. The CSV reader needs to cope with all of these different situations.
The DuckDB CSV reader can automatically infer which configuration flags to use by analyzing the CSV file using the [CSV sniffer](https://duckdb.org/2023/10/27/csv-sniffer). This will work correctly in most situations and should be the first option attempted. In rare situations where the CSV reader cannot figure out the correct configuration, it is possible to manually configure the CSV reader to correctly parse the CSV file. See the [auto detection page](#docs:data:csv:auto_detection) for more information. | 190 |
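A sketch of such a manual configuration, assuming a pipe-delimited file with a header such as the `flights.csv` example used on this page; only the explicitly set options bypass the sniffer:
```sql
SELECT *
FROM read_csv('flights.csv',
    delim = '|',
    header = true);
```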
|
Data Import | CSV Files | Parameters | Below are parameters that can be passed to the CSV reader. All of these parameters are accepted by the [`read_csv` function](#::csv-functions), but not all of them are accepted by the [`COPY` statement](#docs:sql:statements:copy::copy-to).
| Name | Description | Type | Default |
|:--|:-----|:-|:-|
| `all_varchar` | Option to skip type detection for CSV parsing and assume all columns to be of type `VARCHAR`. This option is only supported by the `read_csv` function. | `BOOL` | `false` |
| `allow_quoted_nulls` | Option to allow the conversion of quoted values to `NULL` values | `BOOL` | `true` |
| `auto_detect` | Enables [auto detection of CSV parameters](#docs:data:csv:auto_detection). | `BOOL` | `true` |
| `auto_type_candidates` | This option allows you to specify the types that the sniffer will use when detecting CSV column types. The `VARCHAR` type is always included in the detected types (as a fallback option). See [example](#::auto_type_candidates-details). | `TYPE[]` | [default types](#::auto_type_candidates-details) |
| `columns` | A struct that specifies the column names and column types contained within the CSV file (e.g., `{'col1': 'INTEGER', 'col2': 'VARCHAR'}`). Using this option implies that auto detection is not used. | `STRUCT` | (empty) |
| `compression` | The compression type for the file. By default this will be detected automatically from the file extension (e.g., `t.csv.gz` will use gzip, `t.csv` will use `none`). Options are `none`, `gzip`, `zstd`. | `VARCHAR` | `auto` |
| `dateformat` | Specifies the date format to use when parsing dates. See [Date Format](#docs:sql:functions:dateformat). | `VARCHAR` | (empty) |
| `decimal_separator` | The decimal separator of numbers. | `VARCHAR` | `.` |
| `delimiter` | Specifies the delimiter character that separates columns within each row (line) of the file. Alias for `sep`. This option is only available in the `COPY` statement. | `VARCHAR` | `,` |
| `delim` | Specifies the delimiter character that separates columns within each row (line) of the file. Alias for `sep`. | `VARCHAR` | `,` |
| `escape` | Specifies the string that should appear before a data character sequence that matches the `quote` value. | `VARCHAR` | `"` |
| `filename` | Whether or not an extra `filename` column should be included in the result. | `BOOL` | `false` |
| `force_not_null` | Do not match the specified columns' values against the NULL string. In the default case where the `NULL` string is empty, this means that empty values will be read as zero-length strings rather than `NULL`s. | `VARCHAR[]` | `[]` |
| `header` | Specifies that the file contains a header line with the names of each column in the file. | `BOOL` | `false` |
| `hive_partitioning` | Whether or not to interpret the path as a [Hive partitioned path](#docs:data:partitioning:hive_partitioning). | `BOOL` | `false` |
| `ignore_errors` | Option to ignore any parsing errors encountered and instead skip the rows with errors. | `BOOL` | `false` |
| `max_line_size` | The maximum line size in bytes. | `BIGINT` | 2097152 |
| `names` | The column names as a list, see [example](#docs:data:csv:tips::provide-names-if-the-file-does-not-contain-a-header). | `VARCHAR[]` | (empty) |
| `new_line` | Set the new line character(s) in the file. Options are `'\r'`, `'\n'`, or `'\r\n'`. Note that the CSV parser only distinguishes between single-character and double-character line delimiters. Therefore, it does not differentiate between `'\r'` and `'\n'`. | `VARCHAR` | (empty) |
| `normalize_names` | Boolean value that specifies whether or not column names should be normalized, removing any non-alphanumeric characters from them. | `BOOL` | `false` |
| `null_padding` | If this option is enabled, when a row lacks columns, it will pad the remaining columns on the right with null values. | `BOOL` | `false` |
| `nullstr` | Specifies the string that represents a `NULL` value or (since v0.10.2) a list of strings that represent a `NULL` value. | `VARCHAR` or `VARCHAR[]` | (empty) |
| `parallel` | Whether or not the parallel CSV reader is used. | `BOOL` | `true` |
| `quote` | Specifies the quoting string to be used when a data value is quoted. | `VARCHAR` | `"` |
| `sample_size` | The number of sample rows for [auto detection of parameters](#docs:data:csv:auto_detection). | `BIGINT` | 20480 |
| `sep` | Specifies the delimiter character that separates columns within each row (line) of the file. Alias for `delim`. | `VARCHAR` | `,` |
| `skip` | The number of lines at the top of the file to skip. | `BIGINT` | 0 |
| `timestampformat` | Specifies the date format to use when parsing timestamps. See [Date Format](#docs:sql:functions:dateformat). | `VARCHAR` | (empty) |
| `types` or `dtypes` | The column types as either a list (by position) or a struct (by name). [Example here](#docs:data:csv:tips::override-the-types-of-specific-columns). | `VARCHAR[]` or `STRUCT` | (empty) |
| `union_by_name` | Whether the columns of multiple schemas should be [unified by name](#docs:data:multiple_files:combining_schemas::union-by-name), rather than by position. Note that using this option increases memory consumption. | `BOOL` | `false` |
#### `auto_type_candidates` Details {#docs:data:csv:overview::auto_type_candidates-details}
The `auto_type_candidates` option lets you specify the data types that should be considered by the CSV reader for [column data type detection](#docs:data:csv:auto_detection::type-detection).
Usage example:
```sql
SELECT * FROM read_csv('csv_file.csv', auto_type_candidates = ['BIGINT', 'DATE']);
```
The default value for the `auto_type_candidates` option is `['SQLNULL', 'BOOLEAN', 'BIGINT', 'DOUBLE', 'TIME', 'DATE', 'TIMESTAMP', 'VARCHAR']`. | 1,480 |
|
Data Import | CSV Files | CSV Functions | The `read_csv` function automatically attempts to figure out the correct configuration of the CSV reader using the [CSV sniffer](https://duckdb.org/2023/10/27/csv-sniffer). It also automatically deduces the types of columns. If the CSV file has a header, it will use the names found in that header to name the columns. Otherwise, the columns will be named `column0, column1, column2, ...`. An example with the [`flights.csv`](https://duckdb.org/data/flights.csv) file:
```sql
SELECT * FROM read_csv('flights.csv');
```
<div class="narrow_table"></div>
| FlightDate | UniqueCarrier | OriginCityName | DestCityName |
|------------|---------------|----------------|-----------------|
| 1988-01-01 | AA | New York, NY | Los Angeles, CA |
| 1988-01-02 | AA | New York, NY | Los Angeles, CA |
| 1988-01-03 | AA | New York, NY | Los Angeles, CA |
The path can either be a relative path (relative to the current working directory) or an absolute path.
We can use `read_csv` to create a persistent table as well:
```sql
CREATE TABLE ontime AS
SELECT * FROM read_csv('flights.csv');
DESCRIBE ontime;
```
<div class="narrow_table"></div>
| column_name | column_type | null | key | default | extra |
|----------------|-------------|------|------|---------|-------|
| FlightDate | DATE | YES | NULL | NULL | NULL |
| UniqueCarrier | VARCHAR | YES | NULL | NULL | NULL |
| OriginCityName | VARCHAR | YES | NULL | NULL | NULL |
| DestCityName | VARCHAR | YES | NULL | NULL | NULL |
The number of rows sampled for the automatic detection can be configured via the `sample_size` parameter:
```sql
SELECT * FROM read_csv('flights.csv', sample_size = 20_000);
```
If we set `delim`/`sep`, `quote`, `escape`, or `header` explicitly, we can bypass the automatic detection of this particular parameter:
```sql
SELECT * FROM read_csv('flights.csv', header = true);
```
Multiple files can be read at once by providing a glob or a list of files. Refer to the [multiple files section](#docs:data:multiple_files:overview) for more information. | 533 |
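For illustration, both forms as a sketch (the file names are hypothetical):
```sql
-- Read all files matching a glob pattern.
SELECT * FROM read_csv('flights_*.csv');
-- Read an explicit list of files.
SELECT * FROM read_csv(['flights_1988.csv', 'flights_1989.csv']);
```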
|
Data Import | CSV Files | Writing Using the `COPY` Statement | The [`COPY` statement](#docs:sql:statements:copy::copy-to) can be used to load data from a CSV file into a table. This statement has the same syntax as the one used in PostgreSQL. To load the data using the `COPY` statement, we must first create a table with the correct schema (which matches the order of the columns in the CSV file and uses types that fit the values in the CSV file). `COPY` detects the CSV's configuration options automatically.
```sql
CREATE TABLE ontime (
flightdate DATE,
uniquecarrier VARCHAR,
origincityname VARCHAR,
destcityname VARCHAR
);
COPY ontime FROM 'flights.csv';
SELECT * FROM ontime;
```
<div class="narrow_table"></div>
| flightdate | uniquecarrier | origincityname | destcityname |
|------------|---------------|----------------|-----------------|
| 1988-01-01 | AA | New York, NY | Los Angeles, CA |
| 1988-01-02 | AA | New York, NY | Los Angeles, CA |
| 1988-01-03 | AA | New York, NY | Los Angeles, CA |
If we want to manually specify the CSV format, we can do so using the configuration options of `COPY`.
```sql
CREATE TABLE ontime (flightdate DATE, uniquecarrier VARCHAR, origincityname VARCHAR, destcityname VARCHAR);
COPY ontime FROM 'flights.csv' (DELIMITER '|', HEADER);
SELECT * FROM ontime;
``` | 328 |
|
Data Import | CSV Files | Reading Faulty CSV Files | DuckDB supports reading erroneous CSV files. For details, see the [Reading Faulty CSV Files page](#docs:data:csv:reading_faulty_csv_files). | 34 |
|
Data Import | CSV Files | Limitations | The CSV reader only supports input files using UTF-8 character encoding. For CSV files using different encodings, use e.g., the [`iconv` command-line tool](https://linux.die.net/man/1/iconv) to convert them to UTF-8. For example:
```bash
iconv -f ISO-8859-2 -t UTF-8 input.csv > input-utf-8.csv
``` | 88 |
|
Data Import | CSV Files | Order Preservation | The CSV reader respects the `preserve_insertion_order` [configuration option](#docs:configuration:overview).
When `true` (the default), the order of the rows in the resultset returned by the CSV reader is the same as the order of the corresponding lines read from the file(s).
When `false`, there is no guarantee that the order is preserved. | 74 |
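A sketch of disabling order preservation for the current session, which relaxes this guarantee:
```sql
SET preserve_insertion_order = false;
SELECT * FROM 'flights.csv';
```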
|
Data Import | CSV Files | CSV Auto Detection | When using `read_csv`, the system tries to automatically infer how to read the CSV file using the [CSV sniffer](https://duckdb.org/2023/10/27/csv-sniffer).
This step is necessary because CSV files are not self-describing and come in many different dialects. The auto-detection works roughly as follows:
* Detect the dialect of the CSV file (delimiter, quoting rule, escape)
* Detect the types of each of the columns
* Detect whether or not the file has a header row
By default the system will try to auto-detect all options. However, options can be individually overridden by the user. This can be useful in case the system makes a mistake. For example, if the delimiter is chosen incorrectly, we can override it by calling the `read_csv` with an explicit delimiter (e.g., `read_csv('file.csv', delim = '|')`).
The detection works by operating on a sample of the file. The size of the sample can be modified by setting the `sample_size` parameter. The default sample size is `20480` rows. Setting the `sample_size` parameter to `-1` means the entire file is read for sampling. The way sampling is performed depends on the type of file. If we are reading from a regular file on disk, we will jump into the file and try to sample from different locations in the file. If we are reading from a file in which we cannot jump, such as a `.gz` compressed CSV file or `stdin`, samples are taken only from the beginning of the file. | 325 |
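For example, to make the sniffer consider the entire file (at the cost of a slower detection phase):
```sql
SELECT * FROM read_csv('flights.csv', sample_size = -1);
```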
|
Data Import | CSV Files | `sniff_csv` Function | It is possible to run the CSV sniffer as a separate step using the `sniff_csv(filename)` function, which returns the detected CSV properties as a table with a single row.
The `sniff_csv` function accepts an optional `sample_size` parameter to configure the number of rows sampled.
```sql
FROM sniff_csv('my_file.csv');
FROM sniff_csv('my_file.csv', sample_size = 1000);
```
| Column name | Description | Example |
|----|-----|-------|
| `Delimiter` | delimiter | `,` |
| `Quote` | quote character | `"` |
| `Escape` | escape | `\` |
| `NewLineDelimiter` | new-line delimiter | `\r\n` |
| `SkipRow` | number of rows skipped | 1 |
| `HasHeader` | whether the CSV has a header | `true` |
| `Columns` | column types encoded as a `LIST` of `STRUCT`s | `({'name': 'VARCHAR', 'age': 'BIGINT'})` |
| `DateFormat` | date format | `%d/%m/%Y` |
| `TimestampFormat` | timestamp format | `%Y-%m-%dT%H:%M:%S.%f` |
| `UserArguments` | arguments used to invoke `sniff_csv` | `sample_size = 1000` |
| `Prompt` | prompt ready to be used to read the CSV | `FROM read_csv('my_file.csv', auto_detect=false, delim=',', ...)` |
#### Prompt {#docs:data:csv:auto_detection::prompt}
The `Prompt` column contains a SQL command with the configurations detected by the sniffer.
```sql
-- use line mode in CLI to get the full command
.mode line
SELECT Prompt FROM sniff_csv('my_file.csv');
```
```text
Prompt = FROM read_csv('my_file.csv', auto_detect=false, delim=',', quote='"', escape='"', new_line='\n', skip=0, header=true, columns={...});
``` | 422 |
|
Data Import | CSV Files | Detection Steps | #### Dialect Detection {#docs:data:csv:auto_detection::dialect-detection}
Dialect detection works by attempting to parse the samples using the set of considered values. The detected dialect is the dialect that has (1) a consistent number of columns for each row, and (2) the highest number of columns for each row.
The following dialects are considered for automatic dialect detection.
<div class="narrow_table"></div>
<!-- markdownlint-disable MD056 -->
| Parameters | Considered values |
|------------|-----------------------|
| `delim` | `,` `|` `;` `\t` |
| `quote` | `"` `'` (empty) |
| `escape` | `"` `'` `\` (empty) |
<!-- markdownlint-enable MD056 -->
Consider the example file [`flights.csv`](https://duckdb.org/data/flights.csv):
```csv
FlightDate|UniqueCarrier|OriginCityName|DestCityName
1988-01-01|AA|New York, NY|Los Angeles, CA
1988-01-02|AA|New York, NY|Los Angeles, CA
1988-01-03|AA|New York, NY|Los Angeles, CA
```
In this file, the dialect detection works as follows:
* If we split by a `|` every row is split into `4` columns
* If we split by a `,` rows 2-4 are split into `3` columns, while the first row is split into `1` column
* If we split by `;`, every row is split into `1` column
* If we split by `\t`, every row is split into `1` column
In this example, the system selects the `|` as the delimiter. All rows are split into the same number of columns, and there is more than one column per row, meaning the delimiter was actually found in the CSV file.
#### Type Detection {#docs:data:csv:auto_detection::type-detection}
After detecting the dialect, the system will attempt to figure out the types of each of the columns. Note that this step is only performed if we are calling `read_csv`. In case of the `COPY` statement the types of the table that we are copying into will be used instead.
The type detection works by attempting to convert the values in each column to the candidate types. If the conversion is unsuccessful, the candidate type is removed from the set of candidate types for that column. After all samples have been handled, the remaining candidate type with the highest priority is chosen. The default set of candidate types is given below, in order of priority:
<div class="narrow_table monospace_table"></div>
| Types |
|-----------|
| BOOLEAN |
| BIGINT |
| DOUBLE |
| TIME |
| DATE |
| TIMESTAMP |
| VARCHAR |
Note that everything can be cast to `VARCHAR`. This type has the lowest priority, i.e., columns are converted to `VARCHAR` if they cannot be cast to anything else. In [`flights.csv`](https://duckdb.org/data/flights.csv) the `FlightDate` column will be cast to a `DATE`, while the other columns will be cast to `VARCHAR`.
The set of candidate types that should be considered by the CSV reader can be explicitly specified using the [`auto_type_candidates`](#docs:data:csv:overview::auto_type_candidates-details) option.
In addition to the default set of candidate types, other types that may be specified using the `auto_type_candidates` option are:
<div class="narrow_table monospace_table"></div>
| Types |
|-----------|
| DECIMAL |
| FLOAT |
| INTEGER |
| SMALLINT |
| TINYINT |
Even though the set of data types that can be automatically detected may appear quite limited, the CSV reader can be configured to read arbitrarily complex types by using the `types` option described in the next section.
Type detection can be entirely disabled by using the `all_varchar` option. If this is set all columns will remain as `VARCHAR` (as they originally occur in the CSV file). | 887 |
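A sketch of disabling type detection entirely, keeping every column as `VARCHAR`:
```sql
SELECT * FROM read_csv('flights.csv', all_varchar = true);
```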
|
Data Import | CSV Files | Detection Steps | Overriding Type Detection | The detected types can be individually overridden using the `types` option. This option takes one of two forms:
* A list of type definitions (e.g., `types = ['INTEGER', 'VARCHAR', 'DATE']`). This overrides the types of the columns in-order of occurrence in the CSV file.
* Alternatively, `types` takes a `name` → `type` map which overrides the types of individual columns (e.g., `types = {'quarter': 'INTEGER'}`).
The set of column types that may be specified using the `types` option is not as limited as the types available for the `auto_type_candidates` option: any valid type definition is acceptable to the `types` option. (To get a valid type definition, use the [`typeof()`](#docs:sql:functions:utility::typeofexpression) function, or use the `column_type` column of the [`DESCRIBE`](#docs:guides:meta:describe) result.)
The `sniff_csv()` function's `Columns` field returns a struct with column names and types that can be used as a basis for overriding types. | 233 |
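Both forms as a sketch on the `flights.csv` example: the list overrides all four columns in order of occurrence, while the struct overrides only `FlightDate`:
```sql
SELECT * FROM read_csv('flights.csv',
    types = ['DATE', 'VARCHAR', 'VARCHAR', 'VARCHAR']);
SELECT * FROM read_csv('flights.csv',
    types = {'FlightDate': 'DATE'});
```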
Data Import | CSV Files | Header Detection | Header detection works by checking if the candidate header row deviates from the other rows in the file in terms of types. For example, in [`flights.csv`](https://duckdb.org/data/flights.csv), we can see that the header row consists of only `VARCHAR` columns, whereas the values contain a `DATE` value for the `FlightDate` column. As such, the system defines the first row as the header row and extracts the column names from the header row.
In files that do not have a header row, the column names are generated as `column0`, `column1`, etc.
Note that headers cannot be detected correctly if all columns are of type `VARCHAR`, as in this case the system cannot distinguish the header row from the other rows in the file. In this case, the system assumes the file has a header. This can be overridden by setting the `header` option to `false`.
#### Dates and Timestamps {#docs:data:csv:auto_detection::dates-and-timestamps}
DuckDB supports the [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601) by default for timestamps, dates and times. Unfortunately, not all dates and times are formatted using this standard. For that reason, the CSV reader also supports the `dateformat` and `timestampformat` options. Using these options, the user can specify a [format string](#docs:sql:functions:dateformat) that defines how the date or timestamp should be read.
As part of the auto-detection, the system tries to figure out if dates and times are stored in a different representation. This is not always possible, as there are ambiguities in the representation. For example, the date `01-02-2000` can be parsed as either January 2nd or February 1st. Often these ambiguities can be resolved. For example, if we later encounter the date `21-02-2000` then we know that the format must have been `DD-MM-YYYY`. `MM-DD-YYYY` is no longer possible as there is no 21st month.
If the ambiguities cannot be resolved by looking at the data, the system has a list of preferences for which date format to use. If the system chooses incorrectly, the user can specify the `dateformat` and `timestampformat` options manually.
The system considers the following formats for dates (`dateformat`). Higher entries are chosen over lower entries in case of ambiguities (i.e., ISO 8601 is preferred over `MM-DD-YYYY`).
<div class="narrow_table monospace_table"></div>
| dateformat |
|------------|
| ISO 8601 |
| %y-%m-%d |
| %Y-%m-%d |
| %d-%m-%y |
| %d-%m-%Y |
| %m-%d-%y |
| %m-%d-%Y |
The system considers the following formats for timestamps (`timestampformat`). Higher entries are chosen over lower entries in case of ambiguities.
<div class="narrow_table monospace_table"></div>
| timestampformat |
|----------------------|
| ISO 8601 |
| %y-%m-%d %H:%M:%S |
| %Y-%m-%d %H:%M:%S |
| %d-%m-%y %H:%M:%S |
| %d-%m-%Y %H:%M:%S |
| %m-%d-%y %I:%M:%S %p |
| %m-%d-%Y %I:%M:%S %p |
| %Y-%m-%d %H:%M:%S.%f | | 779 |
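If the auto-detection picks the wrong representation, the format can be forced manually; a sketch assuming a hypothetical file whose dates look like `01/02/2000` and should be read day-first:
```sql
SELECT * FROM read_csv('ambiguous_dates.csv', dateformat = '%d/%m/%Y');
```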
|
Data Import | CSV Files | Reading Faulty CSV Files | CSV files can come in all shapes and forms, with some presenting many errors that make the process of cleanly reading them inherently difficult. To help users read these files, DuckDB supports detailed error messages, the ability to skip faulty lines, and the possibility of storing faulty lines in a temporary table to assist users with a data cleaning step. | 67 |
|
Data Import | CSV Files | Structural Errors | DuckDB supports the detection and skipping of several different structural errors. In this section, we will go over each error with an example.
For the examples, consider the following table:
```sql
CREATE TABLE people (name VARCHAR, birth_date DATE);
```
DuckDB detects the following error types:
* `CAST`: Casting errors occur when a column in the CSV file cannot be cast to the expected schema value. For example, the line `Pedro,The 90s` would cause an error since the string `The 90s` cannot be cast to a date.
* `MISSING COLUMNS`: This error occurs if a line in the CSV file has fewer columns than expected. In our example, we expect two columns; therefore, a row with just one value, e.g., `Pedro`, would cause this error.
* `TOO MANY COLUMNS`: This error occurs if a line in the CSV has more columns than expected. In our example, any line with more than two columns would cause this error, e.g., `Pedro,01-01-1992,pdet`.
* `UNQUOTED VALUE`: Quoted values in CSV lines must always be unquoted at the end; if a quoted value remains quoted throughout, it will cause an error. For example, assuming our scanner uses `quote='"'`, the line `"pedro"holanda, 01-01-1992` would present an unquoted value error.
* `LINE SIZE OVER MAXIMUM`: DuckDB has a parameter that sets the maximum line size a CSV file can have, which by default is set to `2,097,152` bytes. Assuming our scanner is set to `max_line_size = 25`, the line `Pedro Holanda, 01-01-1992` would produce an error, as it exceeds 25 bytes.
* `INVALID UNICODE`: DuckDB only supports UTF-8 strings; thus, lines containing non-UTF-8 characters will produce an error. For example, the line `pedro\xff\xff, 01-01-1992` would be problematic.
#### Anatomy of a CSV Error {#docs:data:csv:reading_faulty_csv_files::anatomy-of-a-csv-error}
By default, when performing a CSV read, if any structural errors are encountered, the scanner will immediately stop the scanning process and throw the error to the user.
These errors are designed to provide as much information as possible to allow users to evaluate them directly in their CSV file.
This is an example for a full error message:
```console
Conversion Error: CSV Error on Line: 5648
Original Line: Pedro,The 90s
Error when converting column "birth_date". date field value out of range: "The 90s", expected format is (DD-MM-YYYY)
Column date is being converted as type DATE
This type was auto-detected from the CSV file.
Possible solutions:
* Override the type for this column manually by setting the type explicitly, e.g. types={'birth_date': 'VARCHAR'}
* Set the sample size to a larger value to enable the auto-detection to scan more values, e.g. sample_size=-1
* Use a COPY statement to automatically derive types from an existing table.
file= people.csv
delimiter = , (Auto-Detected)
quote = " (Auto-Detected)
escape = " (Auto-Detected)
new_line = \r\n (Auto-Detected)
header = true (Auto-Detected)
skip_rows = 0 (Auto-Detected)
date_format = (DD-MM-YYYY) (Auto-Detected)
timestamp_format = (Auto-Detected)
null_padding=0
sample_size=20480
ignore_errors=false
all_varchar=0
```
The first block provides us with information regarding where the error occurred, including the line number, the original CSV line, and which field was problematic:
```console
Conversion Error: CSV Error on Line: 5648
Original Line: Pedro,The 90s
Error when converting column "birth_date". date field value out of range: "The 90s", expected format is (DD-MM-YYYY)
```
The second block provides us with potential solutions:
```console
Column date is being converted as type DATE
This type was auto-detected from the CSV file.
Possible solutions:
* Override the type for this column manually by setting the type explicitly, e.g. types={'birth_date': 'VARCHAR'}
* Set the sample size to a larger value to enable the auto-detection to scan more values, e.g. sample_size=-1
* Use a COPY statement to automatically derive types from an existing table.
```
Since the type of this field was auto-detected, it suggests defining the field as a `VARCHAR` or fully utilizing the dataset for type detection.
Finally, the last block presents some of the options used in the scanner that can cause errors, indicating whether they were auto-detected or manually set by the user. | 1,046 |
|
Data Import | CSV Files | Using the `ignore_errors` Option | There are cases where CSV files may have multiple structural errors, and users simply wish to skip these and read the correct data. Reading erroneous CSV files is possible by utilizing the `ignore_errors` option. With this option set, rows containing data that would otherwise cause the CSV parser to generate an error will be ignored. In our example, we will demonstrate a CAST error, but note that any of the errors described in our Structural Error section would cause the faulty line to be skipped.
For example, consider the following CSV file, [`faulty.csv`](https://duckdb.org/data/faulty.csv):
```csv
Pedro,31
Oogie Boogie, three
```
If you read the CSV file, specifying that the first column is a `VARCHAR` and the second column is an `INTEGER`, loading the file would fail, as the string `three` cannot be converted to an `INTEGER`.
For example, the following query will throw a casting error.
```sql
FROM read_csv('faulty.csv', columns = {'name': 'VARCHAR', 'age': 'INTEGER'});
```
However, with `ignore_errors` set, the second row of the file is skipped, outputting only the complete first row. For example:
```sql
FROM read_csv(
'faulty.csv',
columns = {'name': 'VARCHAR', 'age': 'INTEGER'},
ignore_errors = true
);
```
Outputs:
<div class="narrow_table"></div>
| name | age |
|-------|-----|
| Pedro | 31 |
One should note that the CSV Parser is affected by the projection pushdown optimization. Hence, if we were to select only the name column, both rows would be considered valid, as the casting error on the age would never occur. For example:
```sql
SELECT name
FROM read_csv('faulty.csv', columns = {'name': 'VARCHAR', 'age': 'INTEGER'});
```
Outputs:
<div class="narrow_table"></div>
| name |
|--------------|
| Pedro |
| Oogie Boogie | | 435 |
|
Data Import | CSV Files | Retrieving Faulty CSV Lines | Being able to read faulty CSV files is important, but for many data cleaning operations, it is also necessary to know exactly which lines are corrupted and what errors the parser discovered on them. For scenarios like these, it is possible to use DuckDB's CSV Rejects Table feature.
By default, this feature creates two temporary tables.
1. `reject_scans`: Stores information regarding the parameters of the CSV scanner.
2. `reject_errors`: Stores information regarding each faulty CSV line and in which CSV scanner it happened.
Note that any of the errors described in our Structural Error section will be stored in the rejects tables. Also, if a line has multiple errors, multiple entries will be stored for the same line, one for each error.
#### Reject Scans {#docs:data:csv:reading_faulty_csv_files::reject-scans}
The CSV Reject Scans Table returns the following information:
<div class="narrow_table"></div>
| Column name | Description | Type |
|:--|:-----|:-|
| `scan_id` | The internal ID used in DuckDB to represent that scanner | `UBIGINT` |
| `file_id` | A scanner might run over multiple files, so the file_id represents a unique file in a scanner | `UBIGINT` |
| `file_path` | The file path | `VARCHAR` |
| `delimiter` | The delimiter used e.g., ; | `VARCHAR` |
| `quote` | The quote used e.g., " | `VARCHAR` |
| `escape` | The escape used e.g., " | `VARCHAR` |
| `newline_delimiter` | The newline delimiter used e.g., \r\n | `VARCHAR` |
| `skip_rows` | If any rows were skipped from the top of the file | `UINTEGER` |
| `has_header` | If the file has a header | `BOOLEAN` |
| `columns` | The schema of the file (i.e., all column names and types) | `VARCHAR` |
| `date_format` | The format used for date types | `VARCHAR` |
| `timestamp_format` | The format used for timestamp types| `VARCHAR` |
| `user_arguments` | Any extra scanner parameters manually set by the user | `VARCHAR` |
#### Reject Errors {#docs:data:csv:reading_faulty_csv_files::reject-errors}
The CSV Reject Errors Table returns the following information:
<div class="narrow_table"></div>
| Column name | Description | Type |
|:--|:-----|:-|
| `scan_id` | The internal ID used in DuckDB to represent that scanner, used to join with reject scans tables | `UBIGINT` |
| `file_id` | The file_id represents a unique file in a scanner, used to join with reject scans tables | `UBIGINT` |
| `line` | Line number, from the CSV File, where the error occurred. | `UBIGINT` |
| `line_byte_position` | Byte Position of the start of the line, where the error occurred. | `UBIGINT` |
| `byte_position` | Byte Position where the error occurred. | `UBIGINT` |
| `column_idx` | If the error happens in a specific column, the index of the column. | `UBIGINT` |
| `column_name` | If the error happens in a specific column, the name of the column. | `VARCHAR` |
| `error_type` | The type of the error that happened. | `ENUM` |
| `csv_line` | The original CSV line. | `VARCHAR` |
| `error_message` | The error message produced by DuckDB. | `VARCHAR` | | 777 |
|
Data Import | CSV Files | Parameters | <div class="narrow_table"></div>
The parameters listed below are used in the `read_csv` function to configure the CSV Rejects Table.
| Name | Description | Type | Default |
|:--|:-----|:-|:-|
| `store_rejects` | If set to true, any errors in the file will be skipped and stored in the default rejects temporary tables. | `BOOLEAN` | `false` |
| `rejects_scan` | Name of a temporary table where information about the scans of faulty CSV files is stored. | `VARCHAR` | reject_scans |
| `rejects_table` | Name of a temporary table where information about the faulty lines of a CSV file is stored. | `VARCHAR` | reject_errors |
| `rejects_limit` | Upper limit on the number of faulty records from a CSV file that will be recorded in the rejects table. 0 is used when no limit should be applied. | `BIGINT` | 0 |
To store the information of the faulty CSV lines in a rejects table, the user must simply set the `store_rejects` option to true. For example:
```sql
FROM read_csv(
'faulty.csv',
columns = {'name': 'VARCHAR', 'age': 'INTEGER'},
store_rejects = true
);
```
You can then query both the `reject_scans` and `reject_errors` tables, to retrieve information about the rejected tuples. For example:
```sql
FROM reject_scans;
```
Outputs:
<div class="narrow_table monospace_table"></div>
| scan_id | file_id | file_path | delimiter | quote | escape | newline_delimiter | skip_rows | has_header | columns | date_format | timestamp_format | user_arguments |
|---------|---------|-----------------------------------|-----------|-------|--------|-------------------|-----------|-----------:|--------------------------------------|-------------|------------------|--------------------|
| 5 | 0 | faulty.csv | , | " | " | \n | 0 | false | {'name': 'VARCHAR','age': 'INTEGER'} | | | store_rejects=true |
```sql
FROM reject_errors;
```
Outputs:
<div class="narrow_table monospace_table"></div>
| scan_id | file_id | line | line_byte_position | byte_position | column_idx | column_name | error_type | csv_line | error_message |
|---------|---------|------|--------------------|---------------|------------|-------------|------------|---------------------|------------------------------------------------------------------------------------|
| 5 | 0 | 2 | 10 | 23 | 2 | age | CAST | Oogie Boogie, three | Error when converting column "age". Could not convert string " three" to 'INTEGER' | | 602 |
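Since both tables carry `scan_id` and `file_id`, they can be joined to see each faulty line together with the scanner configuration that produced it. A sketch:
```sql
SELECT s.file_path, e.line, e.column_name, e.error_type, e.csv_line, e.error_message
FROM reject_errors AS e
JOIN reject_scans AS s USING (scan_id, file_id);
```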
|
Data Import | CSV Files | CSV Import Tips | Below is a collection of tips to help when attempting to import complex CSV files. In the examples, we use the [`flights.csv`](https://duckdb.org/data/flights.csv) file. | 40 |
|
Data Import | CSV Files | Override the Header Flag if the Header Is Not Correctly Detected | If a file contains only string columns, the `header` auto-detection might fail. Provide the `header` option to override this behavior.
```sql
SELECT * FROM read_csv('flights.csv', header = true);
``` | 47 |
|
Data Import | CSV Files | Provide Names if the File Does Not Contain a Header | If the file does not contain a header, names will be auto-generated by default. You can provide your own names with the `names` option.
```sql
SELECT * FROM read_csv('flights.csv', names = ['DateOfFlight', 'CarrierName']);
``` | 56 |
|
Data Import | CSV Files | Override the Types of Specific Columns | The `types` flag can be used to override types of only certain columns by providing a struct of `name` β `type` mappings.
```sql
SELECT * FROM read_csv('flights.csv', types = {'FlightDate': 'DATE'});
``` | 53 |
|
Data Import | CSV Files | Use `COPY` When Loading Data into a Table | The [`COPY` statement](#docs:sql:statements:copy) copies data directly into a table. The CSV reader uses the schema of the table instead of auto-detecting types from the file. This speeds up the auto-detection, and prevents mistakes from being made during auto-detection.
```sql
COPY tbl FROM 'test.csv';
``` | 73 |
|
Data Import | CSV Files | Use `union_by_name` When Loading Files with Different Schemas | The `union_by_name` option can be used to unify the schema of files that have different or missing columns. For files that do not have certain columns, `NULL` values are filled in.
```sql
SELECT * FROM read_csv('flights*.csv', union_by_name = true);
``` | 62 |
|
Data Import | JSON Files | JSON Overview | DuckDB supports SQL functions that are useful for reading values from existing JSON and creating new JSON data.
JSON is supported with the `json` extension, which is shipped with most DuckDB distributions and is auto-loaded on first use.
If you would like to install or load it manually, please consult the ["Installing and Loading" page](#docs:data:json:installing_and_loading). | 79 |
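For reference, manual installation and loading looks as follows:
```sql
INSTALL json;
LOAD json;
```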
|
Data Import | JSON Files | About JSON | JSON is an open standard file format and data interchange format that uses human-readable text to store and transmit data objects consisting of attributeβvalue pairs and arrays (or other serializable values).
While it is not a very efficient format for tabular data, it is very commonly used, especially as a data interchange format. | 63 |
|
Data Import | JSON Files | Indexing | > **Warning.** Following [PostgreSQL's conventions](#docs:sql:dialect:postgresql_compatibility), DuckDB uses 1-based indexing for its [`ARRAY`](#docs:sql:data_types:array) and [`LIST`](#docs:sql:data_types:list) data types but [0-based indexing for the JSON data type](https://www.postgresql.org/docs/17/functions-json.html#FUNCTIONS-JSON-PROCESSING). | 92 |
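A small sketch contrasting the two conventions (the literals are illustrative); both queries return the first element, `10`:
```sql
-- LIST values use 1-based indexing.
SELECT ([10, 20, 30])[1] AS first_list_element;
-- The JSON type uses 0-based indexing.
SELECT json_extract('[10, 20, 30]', '$[0]') AS first_json_element;
```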
|
Data Import | JSON Files | Examples | #### Loading JSON {#docs:data:json:overview::loading-json}
Read a JSON file from disk, auto-infer options:
```sql
SELECT * FROM 'todos.json';
```
Use the `read_json` function with custom options:
```sql
SELECT *
FROM read_json('todos.json',
format = 'array',
columns = {userId: 'UBIGINT',
id: 'UBIGINT',
title: 'VARCHAR',
completed: 'BOOLEAN'});
```
Read a JSON file from stdin, auto-infer options:
```bash
cat data/json/todos.json | duckdb -c "SELECT * FROM read_json('/dev/stdin')"
```
Read a JSON file into a table:
```sql
CREATE TABLE todos (userId UBIGINT, id UBIGINT, title VARCHAR, completed BOOLEAN);
COPY todos FROM 'todos.json';
```
Alternatively, create a table without specifying the schema manually with a [`CREATE TABLE ... AS SELECT` clause](#docs:sql:statements:create_table::create-table--as-select-ctas):
```sql
CREATE TABLE todos AS
SELECT * FROM 'todos.json';
```
#### Writing JSON {#docs:data:json:overview::writing-json}
Write the result of a query to a JSON file:
```sql
COPY (SELECT * FROM todos) TO 'todos.json';
```
#### JSON Data Type {#docs:data:json:overview::json-data-type}
Create a table with a column for storing JSON data and insert data into it:
```sql
CREATE TABLE example (j JSON);
INSERT INTO example VALUES
('{ "family": "anatidae", "species": [ "duck", "goose", "swan", null ] }');
```
#### Retrieving JSON Data {#docs:data:json:overview::retrieving-json-data}
Retrieve the family key's value:
```sql
SELECT j.family FROM example;
```
```text
"anatidae"
```
Extract the family key's value with a [JSONPath](https://goessner.net/articles/JsonPath/) expression:
```sql
SELECT j->'$.family' FROM example;
```
```text
"anatidae"
```
Extract the family key's value with a [JSONPath](https://goessner.net/articles/JsonPath/) expression as a `VARCHAR`:
```sql
SELECT j->>'$.family' FROM example;
```
```text
anatidae
``` | 520 |
|
Data Import | JSON Files | JSON Creation Functions | The following functions are used to create JSON.
<div class="narrow_table"></div>
| Function | Description |
|:--|:----|
| `to_json(any)` | Create `JSON` from a value of `any` type. Our `LIST` is converted to a JSON array, and our `STRUCT` and `MAP` are converted to a JSON object. |
| `json_quote(any)` | Alias for `to_json`. |
| `array_to_json(list)` | Alias for `to_json` that only accepts `LIST`. |
| `row_to_json(list)` | Alias for `to_json` that only accepts `STRUCT`. |
| `json_array([any, ...])` | Create a JSON array from `any` number of values. |
| `json_object([key, value, ...])` | Create a JSON object from any number of `key`, `value` pairs. |
| `json_merge_patch(json, json)` | Merge two JSON documents together. |
Examples:
```sql
SELECT to_json('duck');
```
```text
"duck"
```
```sql
SELECT to_json([1, 2, 3]);
```
```text
[1,2,3]
```
```sql
SELECT to_json({duck : 42});
```
```text
{"duck":42}
```
```sql
SELECT to_json(map(['duck'],[42]));
```
```text
{"duck":42}
```
```sql
SELECT json_array(42, 'duck', NULL);
```
```text
[42,"duck",null]
```
```sql
SELECT json_object('duck', 42);
```
```text
{"duck":42}
```
```sql
SELECT json_merge_patch('{"duck": 42}', '{"goose": 123}');
```
```text
{"goose":123,"duck":42}
``` | 397 |
|
Data Import | JSON Files | Loading JSON | The DuckDB JSON reader can automatically infer which configuration flags to use by analyzing the JSON file. This will work correctly in most situations, and should be the first option attempted. In rare situations where the JSON reader cannot figure out the correct configuration, it is possible to manually configure the JSON reader to correctly parse the JSON file. | 65 |
|
Data Import | JSON Files | JSON Read Functions | The following table functions are used to read JSON:
| Function | Description |
|:---|:---|
| `read_json_objects(filename)` | Read a JSON object from `filename`, where `filename` can also be a list of files or a glob pattern. |
| `read_ndjson_objects(filename)` | Alias for `read_json_objects` with parameter `format` set to `'newline_delimited'`. |
| `read_json_objects_auto(filename)` | Alias for `read_json_objects` with parameter `format` set to `'auto'`. |
These functions have the following parameters:
| Name | Description | Type | Default |
|:--|:-----|:-|:-|
| `compression` | The compression type for the file. By default this will be detected automatically from the file extension (e.g., `t.json.gz` will use gzip, `t.json` will use none). Options are `'none'`, `'gzip'`, `'zstd'`, and `'auto'`. | `VARCHAR` | `'auto'` |
| `filename` | Whether or not an extra `filename` column should be included in the result. | `BOOL` | `false` |
| `format` | Can be one of `['auto', 'unstructured', 'newline_delimited', 'array']`. | `VARCHAR` | `'array'` |
| `hive_partitioning` | Whether or not to interpret the path as a [Hive partitioned path](#docs:data:partitioning:hive_partitioning). | `BOOL` | `false` |
| `ignore_errors` | Whether to ignore parse errors (only possible when `format` is `'newline_delimited'`). | `BOOL` | `false` |
| `maximum_sample_files` | The maximum number of JSON files sampled for auto-detection. | `BIGINT` | `32` |
| `maximum_object_size` | The maximum size of a JSON object (in bytes). | `UINTEGER` | `16777216` |
The `format` parameter specifies how to read the JSON from a file.
With `'unstructured'`, the top-level JSON is read, e.g.:
```json
{
"duck": 42
}
{
"goose": [1, 2, 3]
}
```
will result in two objects being read.
With `'newline_delimited'`, [NDJSON](http://ndjson.org) is read, where each JSON is separated by a newline (`\n`), e.g.:
```json
{"duck": 42}
{"goose": [1, 2, 3]}
```
will also result in two objects being read.
With `'array'`, each array element is read, e.g.:
```json
[
{
"duck": 42
},
{
"goose": [1, 2, 3]
}
]
```
Again, this will result in two objects being read.
Example usage:
```sql
SELECT * FROM read_json_objects('my_file1.json');
```
```text
{"duck":42,"goose":[1,2,3]}
```
```sql
SELECT * FROM read_json_objects(['my_file1.json', 'my_file2.json']);
```
```text
{"duck":42,"goose":[1,2,3]}
{"duck":43,"goose":[4,5,6],"swan":3.3}
```
```sql
SELECT * FROM read_ndjson_objects('*.json.gz');
```
```text
{"duck":42,"goose":[1,2,3]}
{"duck":43,"goose":[4,5,6],"swan":3.3}
```
DuckDB also supports reading JSON as a table, using the following functions:
| Function | Description |
|:----|:-------|
| `read_json(filename)` | Read JSON from `filename`, where `filename` can also be a list of files, or a glob pattern. |
| `read_json_auto(filename)` | Alias for `read_json` with all auto-detection enabled. |
| `read_ndjson(filename)` | Alias for `read_json` with parameter `format` set to `'newline_delimited'`. |
| `read_ndjson_auto(filename)` | Alias for `read_json_auto` with parameter `format` set to `'newline_delimited'`. |
Besides `maximum_object_size`, `format`, `ignore_errors`, and `compression`, these functions have additional parameters:
| Name | Description | Type | Default |
|:--|:------|:-|:-|
| `auto_detect` | Whether to auto-detect the names of the keys and the data types of the values | `BOOL` | `false` |
| `columns` | A struct that specifies the key names and value types contained within the JSON file (e.g., `{key1: 'INTEGER', key2: 'VARCHAR'}`). If `auto_detect` is enabled these will be inferred | `STRUCT` | `(empty)` |
| `dateformat` | Specifies the date format to use when parsing dates. See [Date Format](#docs:sql:functions:dateformat) | `VARCHAR` | `'iso'` |
| `maximum_depth` | Maximum nesting depth to which the automatic schema detection detects types. Set to -1 to fully detect nested JSON types | `BIGINT` | `-1` |
| `records` | Can be one of `['auto', 'true', 'false']` | `VARCHAR` | `'auto'` |
| `sample_size` | Option to define number of sample objects for automatic JSON type detection. Set to -1 to scan the entire input file | `UBIGINT` | `20480` |
| `timestampformat` | Specifies the date format to use when parsing timestamps. See [Date Format](#docs:sql:functions:dateformat) | `VARCHAR` | `'iso'`|
| `union_by_name` | Whether the schemas of multiple JSON files should be [unified](#docs:data:multiple_files:combining_schemas) | `BOOL` | `false` |
| `map_inference_threshold` | Controls the threshold for number of columns whose schema will be auto-detected; if JSON schema auto-detection would infer a `STRUCT` type for a field that has _more_ than this threshold number of subfields, it infers a `MAP` type instead. Set to -1 to disable `MAP` inference. | `BIGINT` | `24`
Example usage:
```sql
SELECT * FROM read_json('my_file1.json', columns = {duck: 'INTEGER'});
```
<div class="narrow_table monospace_table"></div>
| duck |
|:---|
| 42 |
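Reader parameters can be combined as needed. For instance, here is a minimal sketch (the filename `events.json` and its keys are hypothetical) that reads newline-delimited JSON containing US-style dates:
```sql
SELECT *
FROM read_json('events.json',
    format = 'newline_delimited',
    columns = {event: 'VARCHAR', event_date: 'DATE'},
    dateformat = '%m/%d/%Y');
```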
DuckDB can convert JSON arrays directly to its internal `LIST` type, and missing keys become `NULL`:
```sql
SELECT *
FROM read_json(
['my_file1.json', 'my_file2.json'],
columns = {duck: 'INTEGER', goose: 'INTEGER[]', swan: 'DOUBLE'}
);
```
<div class="narrow_table monospace_table"></div>
| duck | goose | swan |
|:---|:---|:---|
| 42 | [1, 2, 3] | NULL |
| 43 | [4, 5, 6] | 3.3 |
DuckDB can automatically detect the types like so:
```sql
SELECT goose, duck FROM read_json('*.json.gz');
SELECT goose, duck FROM '*.json.gz'; -- equivalent
```
<div class="narrow_table monospace_table"></div>
| goose | duck |
|:---|:---|
| [1, 2, 3] | 42 |
| [4, 5, 6] | 43 |
DuckDB can read (and auto-detect) a variety of formats, specified with the `format` parameter.
Querying a JSON file that contains an `'array'`, e.g.:
```json
[
{
"duck": 42,
"goose": 4.2
},
{
"duck": 43,
"goose": 4.3
}
]
```
Can be queried exactly the same as a JSON file that contains `'unstructured'` JSON, e.g.:
```json
{
"duck": 42,
"goose": 4.2
}
{
"duck": 43,
"goose": 4.3
}
```
Both can be read as the table:
<div class="narrow_table monospace_table"></div>
| duck | goose |
|:---|:---|
| 42 | 4.2 |
| 43 | 4.3 |
If your JSON file does not contain 'records', i.e., it contains any type of JSON other than objects, DuckDB can still read it.
This is specified with the `records` parameter.
The `records` parameter specifies whether the JSON contains records that should be unpacked into individual columns. For example, reading the following file with `records` enabled:
```json
{"duck": 42, "goose": [1, 2, 3]}
{"duck": 43, "goose": [4, 5, 6]}
```
Results in two columns:
<div class="narrow_table monospace_table"></div>
| duck | goose |
|:---|:---|
| 42 | [1,2,3] |
| 43 | [4,5,6] |
You can read the same file with `records` set to `'false'` to get a single column containing a `STRUCT` with the data.
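For example, assuming the file above is saved as `birds.ndjson` (a hypothetical name):
```sql
SELECT *
FROM read_json('birds.ndjson', records = false);
```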
<div class="narrow_table monospace_table"></div>
| json |
|:---|
| {'duck': 42, 'goose': [1,2,3]} |
| {'duck': 43, 'goose': [4,5,6]} |
For additional examples reading more complex data, please see the [βShredding Deeply Nested JSON, One Vector at a Timeβ blog post](https://duckdb.org/2023/03/03/json). | 2,145 |
|
Data Import | JSON Files | `FORMAT JSON` | When the `json` extension is installed, `FORMAT JSON` is supported for `COPY FROM`, `COPY TO`, `EXPORT DATABASE` and `IMPORT DATABASE`. See the [`COPY` statement](#docs:sql:statements:copy) and the [`IMPORT` / `EXPORT` clauses](#docs:sql:statements:export).
By default, `COPY` expects newline-delimited JSON. If you prefer copying data to/from a JSON array, you can specify `ARRAY true`, e.g.,
```sql
COPY (SELECT * FROM range(5)) TO 'my.json' (ARRAY true);
```
will create the following file:
```json
[
{"range":0},
{"range":1},
{"range":2},
{"range":3},
{"range":4}
]
```
This can be read like so:
```sql
CREATE TABLE test (range BIGINT);
COPY test FROM 'my.json' (ARRAY true);
```
The format can also be detected automatically, like so:
```sql
COPY test FROM 'my.json' (AUTO_DETECT true);
``` | 226 |
|
Data Import | JSON Files | `COPY` Statement | The `COPY` statement can be used to load data from a JSON file into a table. For the `COPY` statement, we must first create a table with the correct schema to load the data into. We then specify the JSON file to load from plus any configuration options separately.
```sql
CREATE TABLE todos (userId UBIGINT, id UBIGINT, title VARCHAR, completed BOOLEAN);
COPY todos FROM 'todos.json';
SELECT * FROM todos LIMIT 5;
```
| userId | id | title | completed |
|--------|----|-----------------------------------------------------------------|-----------|
| 1 | 1 | delectus aut autem | false |
| 1 | 2 | quis ut nam facilis et officia qui | false |
| 1 | 3 | fugiat veniam minus | false |
| 1 | 4 | et porro tempora | true |
| 1 | 5 | laboriosam mollitia et enim quasi adipisci quia provident illum | false |
For more details, see the [page on the `COPY` statement](#docs:sql:statements:copy). | 252 |
|
Data Import | JSON Files | Parameters | | Name | Description | Type | Default |
|:--|:-----|:-|:-|
| `auto_detect` | Whether to automatically detect the names of the keys and the data types of the values | `BOOL` | `false` |
| `columns` | A struct that specifies the key names and value types contained within the JSON file (e.g., `{key1: 'INTEGER', key2: 'VARCHAR'}`). If `auto_detect` is enabled these will be inferred | `STRUCT` | `(empty)` |
| `compression` | The compression type for the file. By default this will be detected automatically from the file extension (e.g., `t.json.gz` will use gzip, `t.json` will use none). Options are `'uncompressed'`, `'gzip'`, `'zstd'`, and `'auto_detect'`. | `VARCHAR` | `'auto_detect'` |
| `convert_strings_to_integers` | Whether strings representing integer values should be converted to a numerical type. | `BOOL` | `false` |
| `dateformat` | Specifies the date format to use when parsing dates. See [Date Format](#docs:sql:functions:dateformat) | `VARCHAR` | `'iso'` |
| `filename` | Whether or not an extra `filename` column should be included in the result. | `BOOL` | `false` |
| `format` | Can be one of `['auto', 'unstructured', 'newline_delimited', 'array']` | `VARCHAR` | `'array'` |
| `hive_partitioning` | Whether or not to interpret the path as a [Hive partitioned path](#docs:data:partitioning:hive_partitioning). | `BOOL` | `false` |
| `ignore_errors` | Whether to ignore parse errors (only possible when `format` is `'newline_delimited'`) | `BOOL` | `false` |
| `maximum_depth` | Maximum nesting depth to which the automatic schema detection detects types. Set to -1 to fully detect nested JSON types | `BIGINT` | `-1` |
| `maximum_object_size` | The maximum size of a JSON object (in bytes) | `UINTEGER` | `16777216` |
| `records` | Can be one of `['auto', 'true', 'false']` | `VARCHAR` | `'auto'` |
| `sample_size` | Option to define number of sample objects for automatic JSON type detection. Set to -1 to scan the entire input file | `UBIGINT` | `20480` |
| `timestampformat` | Specifies the date format to use when parsing timestamps. See [Date Format](#docs:sql:functions:dateformat) | `VARCHAR` | `'iso'`|
| `union_by_name` | Whether the schemas of multiple JSON files should be [unified](#docs:data:multiple_files:combining_schemas). | `BOOL` | `false` |
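A minimal sketch of passing some of these parameters in the parenthesized option list of `COPY`, assuming they map onto `COPY` options the same way as `AUTO_DETECT` above (the table and file names are hypothetical):
```sql
CREATE TABLE events (event VARCHAR, event_date DATE);
COPY events FROM 'events.json' (FORMAT json, dateformat '%m/%d/%Y');
```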
|
Data Import | JSON Files | The `read_json` Function | The `read_json` function is the simplest method of loading JSON files: it automatically attempts to figure out the correct configuration of the JSON reader and deduces the types of the columns.
```sql
SELECT *
FROM read_json('todos.json')
LIMIT 5;
```
| userId | id | title | completed |
|-------:|---:|-----------------------------------------------------------------|-----------|
| 1 | 1 | delectus aut autem | false |
| 1 | 2 | quis ut nam facilis et officia qui | false |
| 1 | 3 | fugiat veniam minus | false |
| 1 | 4 | et porro tempora | true |
| 1 | 5 | laboriosam mollitia et enim quasi adipisci quia provident illum | false |
The path can either be a relative path (relative to the current working directory) or an absolute path.
We can use `read_json` to create a persistent table as well:
```sql
CREATE TABLE todos AS
SELECT *
FROM read_json('todos.json');
DESCRIBE todos;
```
<div class="narrow_table monospace_table"></div>
| column_name | column_type | null | key | default | extra |
|-------------|-------------|------|-----|---------|-------|
| userId | UBIGINT | YES | | | |
| id | UBIGINT | YES | | | |
| title | VARCHAR | YES | | | |
| completed | BOOLEAN | YES | | | |
If we specify the columns, we can bypass the automatic detection. Note that not all columns need to be specified:
```sql
SELECT *
FROM read_json('todos.json',
columns = {userId: 'UBIGINT',
completed: 'BOOLEAN'});
```
Multiple files can be read at once by providing a glob or a list of files. Refer to the [multiple files section](#docs:data:multiple_files:overview) for more information. | 451 |
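For example (the additional file and directory names are hypothetical):
```sql
SELECT * FROM read_json(['todos.json', 'more_todos.json']);
SELECT * FROM read_json('archive/*.json');
```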
|
Data Import | JSON Files | Writing JSON | The contents of tables or the result of queries can be written directly to a JSON file using the `COPY` statement. See the [`COPY` statement](#docs:sql:statements:copy::copy-to) for more information. | 47 |
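For example, a minimal sketch that writes the `todos` table used above (the output filenames are hypothetical):
```sql
-- Newline-delimited JSON (the default):
COPY todos TO 'todos-export.json' (FORMAT json);

-- A single JSON array instead:
COPY (SELECT * FROM todos) TO 'todos-array.json' (FORMAT json, ARRAY true);
```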
|
Data Import | JSON Files | JSON Type | DuckDB supports `json` via the `JSON` logical type.
The `JSON` logical type is interpreted as JSON, i.e., parsed, in JSON functions rather than interpreted as `VARCHAR`, i.e., a regular string (modulo the equality-comparison caveat at the bottom of this page).
All JSON creation functions return values of this type.
Any of DuckDB's types can be cast to JSON, and JSON can be cast back to any of DuckDB's types. For example, to cast `JSON` to DuckDB's `STRUCT` type, run:
```sql
SELECT '{"duck": 42}'::JSON::STRUCT(duck INTEGER);
```
```text
{'duck': 42}
```
And back:
```sql
SELECT {duck: 42}::JSON;
```
```text
{"duck":42}
```
This works for our nested types as shown in the example, but also for non-nested types:
```sql
SELECT '2023-05-12'::DATE::JSON;
```
```text
"2023-05-12"
```
The only exception to this behavior is the cast from `VARCHAR` to `JSON`, which does not alter the data, but instead parses and validates the contents of the `VARCHAR` as JSON. | 275 |
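For example, casting a well-formed string succeeds, while casting a malformed string raises an error:
```sql
SELECT '{"duck": 42}'::JSON; -- parsed and validated; the text is kept as-is
SELECT '{"duck": 42'::JSON;  -- invalid JSON: the cast fails with an error
```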
|
Data Import | JSON Files | JSON Extraction Functions | There are several extraction functions, two of which have corresponding operators. The operators can only be used if the string is stored as the `JSON` logical type.
These functions support the same two location notations as [JSON Scalar functions](#::json-scalar-functions).
| Function | Alias | Operator | Description |
|:---|:---|:-|:----|
| `json_exists(json, path)` | | | Returns `true` if the supplied path exists in the `json`, and `false` otherwise. |
| `json_extract(json, path)` | `json_extract_path` | `->` | Extracts `JSON` from `json` at the given `path`. If `path` is a `LIST`, the result will be a `LIST` of `JSON`. |
| `json_extract_string(json, path)` | `json_extract_path_text` | `->>` | Extracts `VARCHAR` from `json` at the given `path`. If `path` is a `LIST`, the result will be a `LIST` of `VARCHAR`. |
| `json_value(json, path)` | | | Extracts `JSON` from `json` at the given `path`. If the `json` at the supplied path is not a scalar value, it will return `NULL`. |
Note that the equality comparison operator (`=`) has a higher precedence than the `->` JSON extract operator. Therefore, surround the uses of the `->` operator with parentheses when making equality comparisons. For example:
```sql
SELECT ((JSON '{"field": 42}')->'field') = 42;
```
> **Warning. ** DuckDB's JSON data type uses [0-based indexing](#::indexing).
Examples:
```sql
CREATE TABLE example (j JSON);
INSERT INTO example VALUES
('{ "family": "anatidae", "species": [ "duck", "goose", "swan", null ] }');
```
```sql
SELECT json_extract(j, '$.family') FROM example;
```
```text
"anatidae"
```
```sql
SELECT j->'$.family' FROM example;
```
```text
"anatidae"
```
```sql
SELECT j->'$.species[0]' FROM example;
```
```text
"duck"
```
```sql
SELECT j->'$.species[*]' FROM example;
```
```text
["duck", "goose", "swan", null]
```
```sql
SELECT j->>'$.species[*]' FROM example;
```
```text
[duck, goose, swan, null]
```
```sql
SELECT j->'$.species'->0 FROM example;
```
```text
"duck"
```
```sql
SELECT j->'species'->['0','1'] FROM example;
```
```text
["duck", "goose"]
```
```sql
SELECT json_extract_string(j, '$.family') FROM example;
```
```text
anatidae
```
```sql
SELECT j->>'$.family' FROM example;
```
```text
anatidae
```
```sql
SELECT j->>'$.species[0]' FROM example;
```
```text
duck
```
```sql
SELECT j->'species'->>0 FROM example;
```
```text
duck
```
```sql
SELECT j->'species'->>['0','1'] FROM example;
```
```text
[duck, goose]
```
Note that DuckDB's JSON data type uses [0-based indexing](#::indexing).
If multiple values need to be extracted from the same JSON, it is more efficient to extract a list of paths.
For example, the following query will cause the JSON to be parsed twice, resulting in a slower query that uses more memory:
```sql
SELECT
json_extract(j, 'family') AS family,
json_extract(j, 'species') AS species
FROM example;
```
<div class="narrow_table monospace_table"></div>
| family | species |
|------------|------------------------------|
| "anatidae" | ["duck","goose","swan",null] |
The following produces the same result but is faster and more memory-efficient:
```sql
WITH extracted AS (
SELECT json_extract(j, ['family', 'species']) AS extracted_list
FROM example
)
SELECT
extracted_list[1] AS family,
extracted_list[2] AS species
FROM extracted;
``` | 950 |
|
Data Import | JSON Files | JSON Scalar Functions | The following scalar JSON functions can be used to gain information about the stored JSON values.
With the exception of `json_valid(json)`, all JSON functions produce an error when invalid JSON is supplied.
We support two kinds of notations to describe locations within JSON: [JSON Pointer](https://datatracker.ietf.org/doc/html/rfc6901) and JSONPath.
| Function | Description |
|:---|:----|
| `json_array_length(json[, path])` | Return the number of elements in the JSON array `json`, or `0` if it is not a JSON array. If `path` is specified, return the number of elements in the JSON array at the given `path`. If `path` is a `LIST`, the result will be `LIST` of array lengths. |
| `json_contains(json_haystack, json_needle)` | Returns `true` if `json_needle` is contained in `json_haystack`. Both parameters are of JSON type, but `json_needle` can also be a numeric value or a string; however, the string must be wrapped in double quotes. |
| `json_keys(json[, path])` | Returns the keys of `json` as a `LIST` of `VARCHAR`, if `json` is a JSON object. If `path` is specified, return the keys of the JSON object at the given `path`. If `path` is a `LIST`, the result will be `LIST` of `LIST` of `VARCHAR`. |
| `json_structure(json)` | Return the structure of `json`. Defaults to `JSON` if the structure is inconsistent (e.g., incompatible types in an array). |
| `json_type(json[, path])` | Return the type of the supplied `json`, which is one of `ARRAY`, `BIGINT`, `BOOLEAN`, `DOUBLE`, `OBJECT`, `UBIGINT`, `VARCHAR`, and `NULL`. If `path` is specified, return the type of the element at the given `path`. If `path` is a `LIST`, the result will be `LIST` of types. |
| `json_valid(json)` | Return whether `json` is valid JSON. |
| `json(json)` | Parse and minify `json`. |
The JSON Pointer syntax separates each field with a `/`.
For example, to extract the first element of the array with key `duck`, you can do:
```sql
SELECT json_extract('{"duck": [1, 2, 3]}', '/duck/0');
```
```text
1
```
The JSONPath syntax separates fields with a `.`, accesses array elements with `[i]`, and always starts with `$`. Using the same example, we can do the following:
```sql
SELECT json_extract('{"duck": [1, 2, 3]}', '$.duck[0]');
```
```text
1
```
Note that DuckDB's JSON data type uses [0-based indexing](#::indexing).
JSONPath is more expressive, and can also access from the back of lists:
```sql
SELECT json_extract('{"duck": [1, 2, 3]}', '$.duck[#-1]');
```
```text
3
```
JSONPath also allows escaping syntax tokens, using double quotes:
```sql
SELECT json_extract('{"duck.goose": [1, 2, 3]}', '$."duck.goose"[1]');
```
```text
2
```
Examples using the [anatidae biological family](https://en.wikipedia.org/wiki/Anatidae):
```sql
CREATE TABLE example (j JSON);
INSERT INTO example VALUES
('{ "family": "anatidae", "species": [ "duck", "goose", "swan", null ] }');
```
```sql
SELECT json(j) FROM example;
```
```text
{"family":"anatidae","species":["duck","goose","swan",null]}
```
```sql
SELECT j.family FROM example;
```
```text
"anatidae"
```
```sql
SELECT j.species[0] FROM example;
```
```text
"duck"
```
```sql
SELECT json_valid(j) FROM example;
```
```text
true
```
```sql
SELECT json_valid('{');
```
```text
false
```
```sql
SELECT json_array_length('["duck", "goose", "swan", null]');
```
```text
4
```
```sql
SELECT json_array_length(j, 'species') FROM example;
```
```text
4
```
```sql
SELECT json_array_length(j, '/species') FROM example;
```
```text
4
```
```sql
SELECT json_array_length(j, '$.species') FROM example;
```
```text
4
```
```sql
SELECT json_array_length(j, ['$.species']) FROM example;
```
```text
[4]
```
```sql
SELECT json_type(j) FROM example;
```
```text
OBJECT
```
```sql
SELECT json_keys(j) FROM example;
```
```text
[family, species]
```
```sql
SELECT json_structure(j) FROM example;
```
```text
{"family":"VARCHAR","species":["VARCHAR"]}
```
```sql
SELECT json_structure('["duck", {"family": "anatidae"}]');
```
```text
["JSON"]
```
```sql
SELECT json_contains('{"key": "value"}', '"value"');
```
```text
true
```
```sql
SELECT json_contains('{"key": 1}', '1');
```
```text
true
```
```sql
SELECT json_contains('{"top_key": {"key": "value"}}', '{"key": "value"}');
```
```text
true
``` | 1,235 |
|
Data Import | JSON Files | JSON Aggregate Functions | There are three JSON aggregate functions.
<div class="narrow_table"></div>
| Function | Description |
|:---|:----|
| `json_group_array(any)` | Return a JSON array with all values of `any` in the aggregation. |
| `json_group_object(key, value)` | Return a JSON object with all `key`, `value` pairs in the aggregation. |
| `json_group_structure(json)` | Return the combined `json_structure` of all `json` in the aggregation. |
Examples:
```sql
CREATE TABLE example1 (k VARCHAR, v INTEGER);
INSERT INTO example1 VALUES ('duck', 42), ('goose', 7);
```
```sql
SELECT json_group_array(v) FROM example1;
```
```text
[42, 7]
```
```sql
SELECT json_group_object(k, v) FROM example1;
```
```text
{"duck":42,"goose":7}
```
```sql
CREATE TABLE example2 (j JSON);
INSERT INTO example2 VALUES
('{"family": "anatidae", "species": ["duck", "goose"], "coolness": 42.42}'),
('{"family": "canidae", "species": ["labrador", "bulldog"], "hair": true}');
```
```sql
SELECT json_group_structure(j) FROM example2;
```
```text
{"family":"VARCHAR","species":["VARCHAR"],"coolness":"DOUBLE","hair":"BOOLEAN"}
``` | 315 |
|
Data Import | JSON Files | Transforming JSON to Nested Types | In many cases, it is inefficient to extract values from JSON one-by-one.
Instead, we can βextractβ all values at once, transforming JSON to the nested types `LIST` and `STRUCT`.
<div class="narrow_table"></div>
| Function | Description |
|:---|:---|
| `json_transform(json, structure)` | Transform `json` according to the specified `structure`. |
| `from_json(json, structure)` | Alias for `json_transform`. |
| `json_transform_strict(json, structure)` | Same as `json_transform`, but throws an error when type casting fails. |
| `from_json_strict(json, structure)` | Alias for `json_transform_strict`. |
The `structure` argument is JSON of the same form as returned by `json_structure`.
The `structure` argument can be modified to transform the JSON into the desired structure and types.
It is possible to extract fewer key/value pairs than are present in the JSON, and it is also possible to extract more: missing keys become `NULL`.
Examples:
```sql
CREATE TABLE example (j JSON);
INSERT INTO example VALUES
('{"family": "anatidae", "species": ["duck", "goose"], "coolness": 42.42}'),
('{"family": "canidae", "species": ["labrador", "bulldog"], "hair": true}');
```
```sql
SELECT json_transform(j, '{"family": "VARCHAR", "coolness": "DOUBLE"}') FROM example;
```
```text
{'family': anatidae, 'coolness': 42.420000}
{'family': canidae, 'coolness': NULL}
```
```sql
SELECT json_transform(j, '{"family": "TINYINT", "coolness": "DECIMAL(4, 2)"}') FROM example;
```
```text
{'family': NULL, 'coolness': 42.42}
{'family': NULL, 'coolness': NULL}
```
```sql
SELECT json_transform_strict(j, '{"family": "TINYINT", "coolness": "DOUBLE"}') FROM example;
```
```console
Invalid Input Error: Failed to cast value: "anatidae"
``` | 473 |
|
Data Import | JSON Files | JSON Format Settings | The JSON extension can attempt to determine the format of a JSON file when setting `format` to `auto`.
Here are some example JSON files and the corresponding `format` settings that should be used.
In each of the below cases, the `format` setting was not needed, as DuckDB was able to infer it correctly, but it is included for illustrative purposes.
A query of this shape would work in each case:
```sql
SELECT *
FROM filename.json;
``` | 98 |
|
Data Import | JSON Files | JSON Format Settings | Format: `newline_delimited` | With `format = 'newline_delimited'` newline-delimited JSON can be parsed.
Each line contains a single JSON value.
We use the example file [`records.json`](https://duckdb.org/data/records.json) with the following content:
```json
{"key1":"value1", "key2": "value1"}
{"key1":"value2", "key2": "value2"}
{"key1":"value3", "key2": "value3"}
```
```sql
SELECT *
FROM read_json('records.json', format = 'newline_delimited');
```
<div class="narrow_table monospace_table"></div>
| key1 | key2 |
|--------|--------|
| value1 | value1 |
| value2 | value2 |
| value3 | value3 | | 171 |
Data Import | JSON Files | JSON Format Settings | Format: `array` | If the JSON file contains a JSON array of objects (pretty-printed or not), `format = 'array'` may be used.
To demonstrate its use, we use the example file [`records-in-array.json`](https://duckdb.org/data/records-in-array.json):
```json
[
{"key1":"value1", "key2": "value1"},
{"key1":"value2", "key2": "value2"},
{"key1":"value3", "key2": "value3"}
]
```
```sql
SELECT *
FROM read_json('records-in-array.json', format = 'array');
```
<div class="narrow_table monospace_table"></div>
| key1 | key2 |
|--------|--------|
| value1 | value1 |
| value2 | value2 |
| value3 | value3 | | 178 |
Data Import | JSON Files | JSON Format Settings | Format: `unstructured` | If the JSON file contains JSON that is not newline-delimited or an array, `unstructured` may be used.
To demonstrate its use, we use the example file [`unstructured.json`](https://duckdb.org/data/unstructured.json):
```json
{
"key1":"value1",
"key2":"value1"
}
{
"key1":"value2",
"key2":"value2"
}
{
"key1":"value3",
"key2":"value3"
}
```
```sql
SELECT *
FROM read_json('unstructured.json', format = 'unstructured');
```
<div class="narrow_table monospace_table"></div>
| key1 | key2 |
|--------|--------|
| value1 | value1 |
| value2 | value2 |
| value3 | value3 |
#### Records Settings {#docs:data:json:format_settings::records-settings}
The JSON extension can attempt to determine whether a JSON file contains records when setting `records = auto`.
When `records = true`, the JSON extension expects JSON objects, and will unpack the fields of JSON objects into individual columns.
Continuing with the same example file, [`records.json`](https://duckdb.org/data/records.json):
```json
{"key1":"value1", "key2": "value1"}
{"key1":"value2", "key2": "value2"}
{"key1":"value3", "key2": "value3"}
```
```sql
SELECT *
FROM read_json('records.json', records = true);
```
<div class="narrow_table monospace_table"></div>
| key1 | key2 |
|--------|--------|
| value1 | value1 |
| value2 | value2 |
| value3 | value3 |
When `records = false`, the JSON extension will not unpack the top-level objects, and create `STRUCT`s instead:
```sql
SELECT *
FROM read_json('records.json', records = false);
```
<div class="narrow_table monospace_table"></div>
| json |
|----------------------------------|
| {'key1': value1, 'key2': value1} |
| {'key1': value2, 'key2': value2} |
| {'key1': value3, 'key2': value3} |
This is especially useful if we have non-object JSON, for example, [`arrays.json`](https://duckdb.org/data/arrays.json):
```json
[1, 2, 3]
[4, 5, 6]
[7, 8, 9]
```
```sql
SELECT *
FROM read_json('arrays.json', records = false);
```
<div class="narrow_table monospace_table"></div>
| json |
|-----------|
| [1, 2, 3] |
| [4, 5, 6] |
| [7, 8, 9] | | 627 |
Data Import | JSON Files | Installing and Loading the JSON extension | The `json` extension is shipped by default in DuckDB builds; otherwise, it will be transparently [autoloaded](#docs:extensions:overview::autoloading-extensions) on first use. If you would like to install and load it manually, run:
```sql
INSTALL json;
LOAD json;
``` | 66 |
|
Data Import | JSON Files | SQL to/from JSON | The `json` extension also provides functions to serialize and deserialize `SELECT` statements between SQL and JSON, as well as executing JSON serialized statements.
| Function | Type | Description |
|:------|:-|:---------|
| `json_deserialize_sql(json)` | Scalar | Deserialize one or many `json` serialized statements back to an equivalent SQL string. |
| `json_execute_serialized_sql(varchar)` | Table | Execute `json` serialized statements and return the resulting rows. Only one statement at a time is supported for now. |
| `json_serialize_sql(varchar, skip_empty := boolean, skip_null := boolean, format := boolean)` | Scalar | Serialize a set of semicolon-separated (`;`) `SELECT` statements to an equivalent list of `json` serialized statements. |
| `PRAGMA json_execute_serialized_sql(varchar)` | Pragma | Pragma version of the `json_execute_serialized_sql` function. |
The `json_serialize_sql(varchar)` function takes three optional parameters, `skip_empty`, `skip_null`, and `format` that can be used to control the output of the serialized statements.
If you run the `json_execute_serialized_sql(varchar)` table function inside of a transaction, the serialized statements will not be able to see any transaction-local changes. This is because the statements are executed in a separate query context. You can use the `PRAGMA json_execute_serialized_sql(varchar)` pragma version to execute the statements in the same query context as the pragma, although with the limitation that the serialized JSON must be provided as a constant string, i.e., you cannot do `PRAGMA json_execute_serialized_sql(json_serialize_sql(...))`.
Note that these functions do not preserve syntactic sugar such as `FROM * SELECT ...`, so a statement round-tripped through `json_deserialize_sql(json_serialize_sql(...))` may not be identical to the original statement, but should always be semantically equivalent and produce the same output.
#### Examples {#docs:data:json:sql_to_and_from_json::examples}
Simple example:
```sql
SELECT json_serialize_sql('SELECT 2');
```
```text
'{"error":false,"statements":[{"node":{"type":"SELECT_NODE","modifiers":[],"cte_map":{"map":[]},"select_list":[{"class":"CONSTANT","type":"VALUE_CONSTANT","alias":"","value":{"type":{"id":"INTEGER","type_info":null},"is_null":false,"value":2}}],"from_table":{"type":"EMPTY","alias":"","sample":null},"where_clause":null,"group_expressions":[],"group_sets":[],"aggregate_handling":"STANDARD_HANDLING","having":null,"sample":null,"qualify":null}}]}'
```
Example with multiple statements and skip options:
```sql
SELECT json_serialize_sql('SELECT 1 + 2; SELECT a + b FROM tbl1', skip_empty := true, skip_null := true);
```
```text
'{"error":false,"statements":[{"node":{"type":"SELECT_NODE","select_list":[{"class":"FUNCTION","type":"FUNCTION","function_name":"+","children":[{"class":"CONSTANT","type":"VALUE_CONSTANT","value":{"type":{"id":"INTEGER"},"is_null":false,"value":1}},{"class":"CONSTANT","type":"VALUE_CONSTANT","value":{"type":{"id":"INTEGER"},"is_null":false,"value":2}}],"order_bys":{"type":"ORDER_MODIFIER"},"distinct":false,"is_operator":true,"export_state":false}],"from_table":{"type":"EMPTY"},"aggregate_handling":"STANDARD_HANDLING"}},{"node":{"type":"SELECT_NODE","select_list":[{"class":"FUNCTION","type":"FUNCTION","function_name":"+","children":[{"class":"COLUMN_REF","type":"COLUMN_REF","column_names":["a"]},{"class":"COLUMN_REF","type":"COLUMN_REF","column_names":["b"]}],"order_bys":{"type":"ORDER_MODIFIER"},"distinct":false,"is_operator":true,"export_state":false}],"from_table":{"type":"BASE_TABLE","table_name":"tbl1"},"aggregate_handling":"STANDARD_HANDLING"}}]}'
```
Example with a syntax error:
```sql
SELECT json_serialize_sql('TOTALLY NOT VALID SQL');
```
```text
'{"error":true,"error_type":"parser","error_message":"syntax error at or near \"TOTALLY\"\nLINE 1: TOTALLY NOT VALID SQL\n ^"}'
```
Example with deserialize:
```sql
SELECT json_deserialize_sql(json_serialize_sql('SELECT 1 + 2'));
```
```text
'SELECT (1 + 2)'
```
Example with deserialize and syntax sugar:
```sql
SELECT json_deserialize_sql(json_serialize_sql('FROM x SELECT 1 + 2'));
```
```text
'SELECT (1 + 2) FROM x'
```
Example with execute:
```sql
SELECT * FROM json_execute_serialized_sql(json_serialize_sql('SELECT 1 + 2'));
```
```text
3
```
Example with error:
```sql
SELECT * FROM json_execute_serialized_sql(json_serialize_sql('TOTALLY NOT VALID SQL'));
```
```console
Error: Parser Error: Error parsing json: parser: syntax error at or near "TOTALLY"
``` | 1,083 |
|
Data Import | JSON Files | Equality Comparison | > **Warning. ** Currently, equality comparison of JSON values can differ based on the context. In some cases, it is based on raw text comparison, while in other cases, it uses logical content comparison.
The following query returns true for all fields:
```sql
SELECT
a != b, -- Space is part of physical JSON content. Despite equal logical content, values are treated as not equal.
c != d, -- Same.
c[0] = d[0], -- Equality because space was removed from physical content of fields:
a = c[0], -- Indeed, field is equal to empty list without space...
b != c[0], -- ... but different from empty list with space.
FROM (
SELECT
'[]'::JSON AS a,
'[ ]'::JSON AS b,
'[[]]'::JSON AS c,
'[[ ]]'::JSON AS d
);
```
<div class="narrow_table monospace_table"></div>
| (a != b) | (c != d) | (c[0] = d[0]) | (a = c[0]) | (b != c[0]) |
|----------|----------|---------------|------------|-------------|
| true | true | true | true | true | | 266 |
|
Data Import | Multiple Files | Reading Multiple Files | DuckDB can read multiple files of different types (CSV, Parquet, JSON files) at the same time using either the glob syntax, or by providing a list of files to read.
See the [combining schemas](#docs:data:multiple_files:combining_schemas) page for tips on reading files with different schemas. | 68 |
|
Data Import | Multiple Files | CSV | Read all files with a name ending in `.csv` in the folder `dir`:
```sql
SELECT *
FROM 'dir/*.csv';
```
Read all files with a name ending in `.csv`, two directories deep:
```sql
SELECT *
FROM '*/*/*.csv';
```
Read all files with a name ending in `.csv`, at any depth in the folder `dir`:
```sql
SELECT *
FROM 'dir/**/*.csv';
```
Read the CSV files `flights1.csv` and `flights2.csv`:
```sql
SELECT *
FROM read_csv(['flights1.csv', 'flights2.csv']);
```
Read the CSV files `flights1.csv` and `flights2.csv`, unifying schemas by name and outputting a `filename` column:
```sql
SELECT *
FROM read_csv(['flights1.csv', 'flights2.csv'], union_by_name = true, filename = true);
``` | 197 |
|
Data Import | Multiple Files | Parquet | Read all files that match the glob pattern:
```sql
SELECT *
FROM 'test/*.parquet';
```
Read three Parquet files and treat them as a single table:
```sql
SELECT *
FROM read_parquet(['file1.parquet', 'file2.parquet', 'file3.parquet']);
```
Read all Parquet files from two specific folders:
```sql
SELECT *
FROM read_parquet(['folder1/*.parquet', 'folder2/*.parquet']);
```
Read all Parquet files that match the glob pattern at any depth:
```sql
SELECT *
FROM read_parquet('dir/**/*.parquet');
``` | 134 |
|
Data Import | Multiple Files | Multi-File Reads and Globs | DuckDB can also read a series of Parquet files and treat them as if they were a single table. Note that this only works if the Parquet files have the same schema. You can specify which Parquet files you want to read using a list parameter, glob pattern matching syntax, or a combination of both.
#### List Parameter {#docs:data:multiple_files:overview::list-parameter}
The `read_parquet` function can accept a list of filenames as the input parameter.
Read three Parquet files and treat them as a single table:
```sql
SELECT *
FROM read_parquet(['file1.parquet', 'file2.parquet', 'file3.parquet']);
```
#### Glob Syntax {#docs:data:multiple_files:overview::glob-syntax}
Any file name input to the `read_parquet` function can either be an exact filename, or use a glob syntax to read multiple files that match a pattern.
<div class="narrow_table"></div>
| Wildcard | Description |
|------------|-----------------------------------------------------------|
| `*` | matches any number of any characters (including none) |
| `**` | matches any number of subdirectories (including none) |
| `?` | matches any single character |
| `[abc]` | matches one character given in the bracket |
| `[a-z]` | matches one character from the range given in the bracket |
Note that the `?` wildcard in globs is not supported for reads over S3 due to HTTP encoding issues.
Here is an example that reads all the files that end with `.parquet` located in the `test` folder:
Read all files that match the glob pattern:
```sql
SELECT *
FROM read_parquet('test/*.parquet');
```
#### List of Globs {#docs:data:multiple_files:overview::list-of-globs}
The glob syntax and the list input parameter can be combined to scan files that meet one of multiple patterns.
Read all Parquet files from two specific folders:
```sql
SELECT *
FROM read_parquet(['folder1/*.parquet', 'folder2/*.parquet']);
```
DuckDB can read multiple CSV files at the same time using either the glob syntax, or by providing a list of files to read. | 491 |
|
Data Import | Multiple Files | Filename | The `filename` argument can be used to add an extra `filename` column to the result that indicates which row came from which file. For example:
```sql
SELECT *
FROM read_csv(['flights1.csv', 'flights2.csv'], union_by_name = true, filename = true);
```
<div class="narrow_table"></div>
| FlightDate | OriginCityName | DestCityName | UniqueCarrier | filename |
|------------|----------------|-----------------|---------------|--------------|
| 1988-01-01 | New York, NY | Los Angeles, CA | NULL | flights1.csv |
| 1988-01-02 | New York, NY | Los Angeles, CA | NULL | flights1.csv |
| 1988-01-03 | New York, NY | Los Angeles, CA | AA | flights2.csv | | 188 |
|
Data Import | Multiple Files | Glob Function to Find Filenames | The glob pattern matching syntax can also be used to search for filenames using the `glob` table function.
It accepts one parameter: the path to search (which may include glob patterns).
Search the current directory for all files.
```sql
SELECT *
FROM glob('*');
```
<div class="narrow_table"></div>
| file |
|---------------|
| test.csv |
| test.json |
| test.parquet |
| test2.csv |
| test2.parquet |
| todos.json | | 109 |
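The result of `glob` is a regular table with a single `file` column, so it can be filtered or aggregated like any other table. For example, to count the Parquet files under a directory (the path is hypothetical):
```sql
SELECT count(*) AS num_parquet_files
FROM glob('dir/**/*.parquet');
```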
|
Data Import | Multiple Files | Combining Schemas | <!-- markdownlint-disable MD036 --> | 7 |
|
Data Import | Multiple Files | Examples | Read a set of CSV files combining columns by position:
```sql
SELECT * FROM read_csv('flights*.csv');
```
Read a set of CSV files combining columns by name:
```sql
SELECT * FROM read_csv('flights*.csv', union_by_name = true);
``` | 61 |
|
Data Import | Multiple Files | Combining Schemas | When reading from multiple files, we have to **combine schemas** from those files. That is because each file has its own schema that can differ from the other files. DuckDB offers two ways of unifying schemas of multiple files: **by column position** and **by column name**.
By default, DuckDB reads the schema of the first file provided, and then unifies columns in subsequent files by column position. This works correctly as long as all files have the same schema. If the schema of the files differs, you might want to use the `union_by_name` option to allow DuckDB to construct the schema by reading all of the names instead.
Below is an example of how both methods work. | 147 |
|
Data Import | Multiple Files | Union by Position | By default, DuckDB unifies the columns of these different files **by position**. This means that the first column in each file is combined together, as well as the second column in each file, etc. For example, consider the following two files.
[`flights1.csv`](https://duckdb.org/data/flights1.csv):
```csv
FlightDate|UniqueCarrier|OriginCityName|DestCityName
1988-01-01|AA|New York, NY|Los Angeles, CA
1988-01-02|AA|New York, NY|Los Angeles, CA
```
[`flights2.csv`](https://duckdb.org/data/flights2.csv):
```csv
FlightDate|UniqueCarrier|OriginCityName|DestCityName
1988-01-03|AA|New York, NY|Los Angeles, CA
```
Reading the two files at the same time will produce the following result set:
<div class="narrow_table"></div>
| FlightDate | UniqueCarrier | OriginCityName | DestCityName |
|------------|---------------|----------------|-----------------|
| 1988-01-01 | AA | New York, NY | Los Angeles, CA |
| 1988-01-02 | AA | New York, NY | Los Angeles, CA |
| 1988-01-03 | AA | New York, NY | Los Angeles, CA |
This is equivalent to the SQL construct [`UNION ALL`](#docs:sql:query_syntax:setops::union-all). | 332 |
|
Data Import | Multiple Files | Union by Name | If you are processing multiple files that have different schemas, perhaps because columns have been added or renamed, it might be desirable to unify the columns of different files **by name** instead. This can be done by providing the `union_by_name` option. For example, consider the following two files, where `flights4.csv` has an extra column (`UniqueCarrier`).
[`flights3.csv`](https://duckdb.org/data/flights3.csv):
```csv
FlightDate|OriginCityName|DestCityName
1988-01-01|New York, NY|Los Angeles, CA
1988-01-02|New York, NY|Los Angeles, CA
```
[`flights4.csv`](https://duckdb.org/data/flights4.csv):
```csv
FlightDate|UniqueCarrier|OriginCityName|DestCityName
1988-01-03|AA|New York, NY|Los Angeles, CA
```
Reading these when unifying column names **by position** results in an error, as the two files have a different number of columns. When specifying the `union_by_name` option, the columns are correctly unified, and any missing values are set to `NULL`.
```sql
SELECT * FROM read_csv(['flights3.csv', 'flights4.csv'], union_by_name = true);
```
<div class="narrow_table"></div>
| FlightDate | OriginCityName | DestCityName | UniqueCarrier |
|------------|----------------|-----------------|---------------|
| 1988-01-01 | New York, NY | Los Angeles, CA | NULL |
| 1988-01-02 | New York, NY | Los Angeles, CA | NULL |
| 1988-01-03 | New York, NY | Los Angeles, CA | AA |
This is equivalent to the SQL construct [`UNION ALL BY NAME`](#docs:sql:query_syntax:setops::union-all-by-name).
> Using the `union_by_name` option increases memory consumption. | 432 |
|
Data Import | Parquet Files | Examples | Read a single Parquet file:
```sql
SELECT * FROM 'test.parquet';
```
Figure out which columns/types are in a Parquet file:
```sql
DESCRIBE SELECT * FROM 'test.parquet';
```
Create a table from a Parquet file:
```sql
CREATE TABLE test AS
SELECT * FROM 'test.parquet';
```
If the file does not end in `.parquet`, use the `read_parquet` function:
```sql
SELECT *
FROM read_parquet('test.parq');
```
Use list parameter to read three Parquet files and treat them as a single table:
```sql
SELECT *
FROM read_parquet(['file1.parquet', 'file2.parquet', 'file3.parquet']);
```
Read all files that match the glob pattern:
```sql
SELECT *
FROM 'test/*.parquet';
```
Read all files that match the glob pattern, and include a `filename` column that specifies which file each row came from:
```sql
SELECT *
FROM read_parquet('test/*.parquet', filename = true);
```
Use a list of globs to read all Parquet files from two specific folders:
```sql
SELECT *
FROM read_parquet(['folder1/*.parquet', 'folder2/*.parquet']);
```
Read over HTTPS:
```sql
SELECT *
FROM read_parquet('https://some.url/some_file.parquet');
```
Query the [metadata of a Parquet file](#docs:data:parquet:metadata::parquet-metadata):
```sql
SELECT *
FROM parquet_metadata('test.parquet');
```
Query the [file metadata of a Parquet file](#docs:data:parquet:metadata::parquet-file-metadata):
```sql
SELECT *
FROM parquet_file_metadata('test.parquet');
```
Query the [key-value metadata of a Parquet file](#docs:data:parquet:metadata::parquet-key-value-metadata):
```sql
SELECT *
FROM parquet_kv_metadata('test.parquet');
```
Query the [schema of a Parquet file](#docs:data:parquet:metadata::parquet-schema):
```sql
SELECT *
FROM parquet_schema('test.parquet');
```
Write the results of a query to a Parquet file using the default compression (Snappy):
```sql
COPY
(SELECT * FROM tbl)
TO 'result-snappy.parquet'
(FORMAT 'parquet');
```
Write the results from a query to a Parquet file with specific compression and row group size:
```sql
COPY
(FROM generate_series(100_000))
TO 'test.parquet'
(FORMAT 'parquet', COMPRESSION 'zstd', ROW_GROUP_SIZE 100_000);
```
Export the table contents of the entire database as Parquet:
```sql
EXPORT DATABASE 'target_directory' (FORMAT PARQUET);
``` | 610 |
|
Data Import | Parquet Files | Parquet Files | Parquet files are compressed columnar files that are efficient to load and process. DuckDB provides support for both reading and writing Parquet files in an efficient manner, as well as support for pushing filters and projections into the Parquet file scans.
> Parquet data sets differ based on the number of files, the size of individual files, the compression algorithm used, the row group size, etc. These have a significant effect on performance. Please consult the [Performance Guide](#docs:guides:performance:file_formats) for details.
|
Data Import | Parquet Files | `read_parquet` Function | | Function | Description | Example |
|:--|:--|:-----|
| `read_parquet(path_or_list_of_paths)` | Read Parquet file(s) | `SELECT * FROM read_parquet('test.parquet');` |
| `parquet_scan(path_or_list_of_paths)` | Alias for `read_parquet` | `SELECT * FROM parquet_scan('test.parquet');` |
If your file ends in `.parquet`, the function syntax is optional. The system will automatically infer that you are reading a Parquet file:
```sql
SELECT * FROM 'test.parquet';
```
Multiple files can be read at once by providing a glob or a list of files. Refer to the [multiple files section](#docs:data:multiple_files:overview) for more information.
#### Parameters {#docs:data:parquet:overview::parameters}
There are a number of options exposed that can be passed to the `read_parquet` function or the [`COPY` statement](#docs:sql:statements:copy).
| Name | Description | Type | Default |
|:--|:-----|:-|:-|
| `binary_as_string` | Parquet files generated by legacy writers do not correctly set the `UTF8` flag for strings, causing string columns to be loaded as `BLOB` instead. Set this to true to load binary columns as strings. | `BOOL` | `false` |
| `encryption_config` | Configuration for [Parquet encryption](#docs:data:parquet:encryption). | `STRUCT` | - |
| `filename` | Whether or not an extra `filename` column should be included in the result. | `BOOL` | `false` |
| `file_row_number` | Whether or not to include the `file_row_number` column. | `BOOL` | `false` |
| `hive_partitioning` | Whether or not to interpret the path as a [Hive partitioned path](#docs:data:partitioning:hive_partitioning). | `BOOL` | `true` |
| `union_by_name` | Whether the columns of multiple schemas should be [unified by name](#docs:data:multiple_files:combining_schemas), rather than by position. | `BOOL` | `false` | | 477 |
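For example, a sketch combining several of these parameters (the path is hypothetical):
```sql
SELECT *
FROM read_parquet(
    'legacy/*.parquet',
    binary_as_string = true,  -- load legacy binary columns as strings
    file_row_number = true,   -- add the row number within each file
    filename = true           -- add the originating filename
);
```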
|
Data Import | Parquet Files | Partial Reading | DuckDB supports projection pushdown into the Parquet file itself. That is to say, when querying a Parquet file, only the columns required for the query are read. This allows you to read only the part of the Parquet file that you are interested in. This will be done automatically by DuckDB.
DuckDB also supports filter pushdown into the Parquet reader. When you apply a filter to a column that is scanned from a Parquet file, the filter will be pushed down into the scan, and can even be used to skip parts of the file using the built-in zonemaps. Note that this will depend on whether or not your Parquet file contains zonemaps.
Filter and projection pushdown provide significant performance benefits. See [our blog post βQuerying Parquet with Precision Using DuckDBβ](https://duckdb.org/2021/06/25/querying-parquet) for more information. | 194 |
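As an illustration (the column names are hypothetical), DuckDB applies both optimizations automatically to a query such as the following: only `col_a` and `col_b` are read from the file, and row groups whose min/max statistics exclude `col_b > 42` can be skipped.
```sql
SELECT col_a
FROM read_parquet('test.parquet')
WHERE col_b > 42;
```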
|
Data Import | Parquet Files | Inserts and Views | You can also insert the data into a table or create a table from the Parquet file directly. This will load the data from the Parquet file and insert it into the database:
Insert the data from the Parquet file in the table:
```sql
INSERT INTO people
SELECT * FROM read_parquet('test.parquet');
```
Create a table directly from a Parquet file:
```sql
CREATE TABLE people AS
SELECT * FROM read_parquet('test.parquet');
```
If you wish to keep the data stored inside the Parquet file, but want to query the Parquet file directly, you can create a view over the `read_parquet` function. You can then query the Parquet file as if it were a built-in table:
Create a view over the Parquet file:
```sql
CREATE VIEW people AS
SELECT * FROM read_parquet('test.parquet');
```
Query the Parquet file:
```sql
SELECT * FROM people;
``` | 206 |
|
Data Import | Parquet Files | Writing to Parquet Files | DuckDB also has support for writing to Parquet files using the `COPY` statement syntax. See the [`COPY` Statement page](#docs:sql:statements:copy) for details, including all possible parameters for the `COPY` statement.
Write a query to a snappy compressed Parquet file:
```sql
COPY
(SELECT * FROM tbl)
TO 'result-snappy.parquet'
(FORMAT 'parquet');
```
Write `tbl` to a zstd-compressed Parquet file:
```sql
COPY tbl
TO 'result-zstd.parquet'
(FORMAT 'parquet', CODEC 'zstd');
```
Write `tbl` to a zstd-compressed Parquet file with the lowest compression level yielding the fastest compression:
```sql
COPY tbl
TO 'result-zstd.parquet'
(FORMAT 'parquet', CODEC 'zstd', COMPRESSION_LEVEL 1);
```
Write to Parquet file with [key-value metadata](#docs:data:parquet:metadata):
```sql
COPY (
SELECT
42 AS number,
true AS is_even
) TO 'kv_metadata.parquet' (
FORMAT PARQUET,
KV_METADATA {
number: 'Answer to life, universe, and everything',
is_even: 'not ''odd''' -- single quotes in values must be escaped
}
);
```
Write a CSV file to an uncompressed Parquet file:
```sql
COPY
'test.csv'
TO 'result-uncompressed.parquet'
(FORMAT 'parquet', CODEC 'uncompressed');
```
Write a query to a Parquet file with zstd-compression (same as `CODEC`) and row group size:
```sql
COPY
(FROM generate_series(100_000))
TO 'row-groups-zstd.parquet'
(FORMAT PARQUET, COMPRESSION ZSTD, ROW_GROUP_SIZE 100_000);
```
> LZ4 compression is currently only available in the nightly and source builds:
Write a query result to an `LZ4_RAW`-compressed Parquet file:
```sql
COPY
(FROM generate_series(100_000))
TO 'result-lz4.parquet'
(FORMAT PARQUET, COMPRESSION LZ4);
```
Or:
```sql
COPY
(FROM generate_series(100_000))
TO 'result-lz4.parquet'
(FORMAT PARQUET, COMPRESSION LZ4_RAW);
```
DuckDB's `EXPORT` command can be used to export an entire database to a series of Parquet files. See the [Export statement documentation](#docs:sql:statements:export) for more details:
Export the table contents of the entire database as Parquet:
```sql
EXPORT DATABASE 'target_directory' (FORMAT PARQUET);
``` | 577 |
|
Data Import | Parquet Files | Encryption | DuckDB supports reading and writing [encrypted Parquet files](#docs:data:parquet:encryption). | 22 |
|
Data Import | Parquet Files | Installing and Loading the Parquet Extension | The support for Parquet files is enabled via extension. The `parquet` extension is bundled with almost all clients. However, if your client does not bundle the `parquet` extension, the extension must be installed separately:
```sql
INSTALL parquet;
``` | 55 |
|
Data Import | Parquet Files | Parquet Metadata | The `parquet_metadata` function can be used to query the metadata contained within a Parquet file, which reveals various internal details of the Parquet file such as the statistics of the different columns. This can be useful for figuring out what kind of skipping is possible in Parquet files, or even to obtain a quick overview of what the different columns contain:
```sql
SELECT *
FROM parquet_metadata('test.parquet');
```
Below is a table of the columns returned by `parquet_metadata`.
<div class="narrow_table monospace_table"></div>
| Field | Type |
| ----------------------- | --------------- |
| file_name | VARCHAR |
| row_group_id | BIGINT |
| row_group_num_rows | BIGINT |
| row_group_num_columns | BIGINT |
| row_group_bytes | BIGINT |
| column_id | BIGINT |
| file_offset | BIGINT |
| num_values | BIGINT |
| path_in_schema | VARCHAR |
| type | VARCHAR |
| stats_min | VARCHAR |
| stats_max | VARCHAR |
| stats_null_count | BIGINT |
| stats_distinct_count | BIGINT |
| stats_min_value | VARCHAR |
| stats_max_value | VARCHAR |
| compression | VARCHAR |
| encodings | VARCHAR |
| index_page_offset | BIGINT |
| dictionary_page_offset | BIGINT |
| data_page_offset | BIGINT |
| total_compressed_size | BIGINT |
| total_uncompressed_size | BIGINT |
| key_value_metadata | MAP(BLOB, BLOB) | | 359 |
|
Data Import | Parquet Files | Parquet Schema | The `parquet_schema` function can be used to query the internal schema contained within a Parquet file. Note that this is the schema as it is contained within the metadata of the Parquet file. If you want to figure out the column names and types contained within a Parquet file it is easier to use `DESCRIBE`.
Fetch the column names and column types:
```sql
DESCRIBE SELECT * FROM 'test.parquet';
```
Fetch the internal schema of a Parquet file:
```sql
SELECT *
FROM parquet_schema('test.parquet');
```
Below is a table of the columns returned by `parquet_schema`.
<div class="narrow_table monospace_table"></div>
| Field | Type |
| --------------- | ------- |
| file_name | VARCHAR |
| name | VARCHAR |
| type | VARCHAR |
| type_length | VARCHAR |
| repetition_type | VARCHAR |
| num_children | BIGINT |
| converted_type | VARCHAR |
| scale | BIGINT |
| precision | BIGINT |
| field_id | BIGINT |
| logical_type | VARCHAR | | 242 |
|
Data Import | Parquet Files | Parquet File Metadata | The `parquet_file_metadata` function can be used to query file-level metadata such as the format version and the encryption algorithm used:
```sql
SELECT *
FROM parquet_file_metadata('test.parquet');
```
Below is a table of the columns returned by `parquet_file_metadata`.
<div class="narrow_table monospace_table"></div>
| Field | Type |
| ----------------------------| ------- |
| file_name | VARCHAR |
| created_by | VARCHAR |
| num_rows | BIGINT |
| num_row_groups | BIGINT |
| format_version | BIGINT |
| encryption_algorithm | VARCHAR |
| footer_signing_key_metadata | VARCHAR | | 145 |
|
Data Import | Parquet Files | Parquet Key-Value Metadata | The `parquet_kv_metadata` function can be used to query custom metadata defined as key-value pairs:
```sql
SELECT *
FROM parquet_kv_metadata('test.parquet');
```
Below is a table of the columns returned by `parquet_kv_metadata`.
<div class="narrow_table monospace_table"></div>
| Field | Type |
| --------- | ------- |
| file_name | VARCHAR |
| key | BLOB |
| value | BLOB | | 102 |
|
Data Import | Parquet Files | Parquet Encryption | Starting with version 0.10.0, DuckDB supports reading and writing encrypted Parquet files.
DuckDB broadly follows the [Parquet Modular Encryption specification](https://github.com/apache/parquet-format/blob/master/Encryption.md) with some [limitations](#::limitations). | 58 |
|
Data Import | Parquet Files | Reading and Writing Encrypted Files | Using the `PRAGMA add_parquet_key` function, named encryption keys of 128, 192, or 256 bits can be added to a session. These keys are stored in-memory:
```sql
PRAGMA add_parquet_key('key128', '0123456789112345');
PRAGMA add_parquet_key('key192', '012345678911234501234567');
PRAGMA add_parquet_key('key256', '01234567891123450123456789112345');
```
#### Writing Encrypted Parquet Files {#docs:data:parquet:encryption::writing-encrypted-parquet-files}
After specifying the key (e.g., `key256`), files can be encrypted as follows:
```sql
COPY tbl TO 'tbl.parquet' (ENCRYPTION_CONFIG {footer_key: 'key256'});
```
#### Reading Encrypted Parquet Files {#docs:data:parquet:encryption::reading-encrypted-parquet-files}
An encrypted Parquet file using a specific key (e.g., `key256`), can then be read as follows:
```sql
COPY tbl FROM 'tbl.parquet' (ENCRYPTION_CONFIG {footer_key: 'key256'});
```
Or:
```sql
SELECT *
FROM read_parquet('tbl.parquet', encryption_config = {footer_key: 'key256'});
``` | 285 |
|
Data Import | Parquet Files | Limitations | DuckDB's Parquet encryption currently has the following limitations.
1. It is not compatible with the Parquet encryption of other implementations (e.g., PyArrow) until the missing details are implemented.
2. DuckDB encrypts the footer and all columns using the `footer_key`. The Parquet specification allows encryption of individual columns with different keys, e.g.:
```sql
COPY tbl TO 'tbl.parquet'
(ENCRYPTION_CONFIG {
footer_key: 'key256',
column_keys: {key256: ['col0', 'col1']}
});
```
However, this is unsupported at the moment and will cause an error to be thrown (for now):
```console
Not implemented Error: Parquet encryption_config column_keys not yet implemented
``` | 154 |
|
Data Import | Parquet Files | Performance Implications | Note that encryption has some performance implications.
Without encryption, reading/writing the `lineitem` table from [`TPC-H`](#docs:extensions:tpch) at SF1, which is 6M rows and 15 columns, from/to a Parquet file takes 0.26 and 0.99 seconds, respectively.
With encryption, this takes 0.64 and 2.21 seconds, both approximately 2.5× slower than the unencrypted version.
|
Data Import | Parquet Files | Parquet Tips | Below is a collection of tips to help when dealing with Parquet files. | 15 |
|
Data Import | Parquet Files | Tips for Reading Parquet Files | #### Use `union_by_name` When Loading Files with Different Schemas {#docs:data:parquet:tips::use-union_by_name-when-loading-files-with-different-schemas}
The `union_by_name` option can be used to unify the schema of files that have different or missing columns. For files that do not have certain columns, `NULL` values are filled in:
```sql
SELECT *
FROM read_parquet('flights*.parquet', union_by_name = true);
``` | 104 |
|
Data Import | Parquet Files | Tips for Writing Parquet Files | Using a [glob pattern](#docs:data:multiple_files:overview::glob-syntax) upon read or a [Hive partitioning](#docs:data:partitioning:hive_partitioning) structure are good ways to transparently handle multiple files.
#### Enabling `PER_THREAD_OUTPUT` {#docs:data:parquet:tips::enabling-per_thread_output}
If the final number of Parquet files is not important, writing one file per thread can significantly improve performance:
```sql
COPY
(FROM generate_series(10_000_000))
TO 'test.parquet'
(FORMAT PARQUET, PER_THREAD_OUTPUT);
```
#### Selecting a `ROW_GROUP_SIZE` {#docs:data:parquet:tips::selecting-a-row_group_size}
The `ROW_GROUP_SIZE` parameter specifies the minimum number of rows in a Parquet row group, with a minimum value equal to DuckDB's vector size, 2,048, and a default of 122,880.
A Parquet row group is a partition of rows, consisting of a column chunk for each column in the dataset.
Compression algorithms are only applied per row group, so the larger the row group size, the more opportunities to compress the data.
DuckDB can read Parquet row groups in parallel even within the same file and uses predicate pushdown to only scan the row groups whose metadata ranges match the `WHERE` clause of the query.
However, there is some overhead associated with reading the metadata in each row group.
A good approach would be to ensure that within each file, the total number of row groups is at least as large as the number of CPU threads used to query that file.
More row groups beyond the thread count would improve the speed of highly selective queries, but slow down queries that must scan the whole file like aggregations.
To write a query to a Parquet file with a different row group size, run:
```sql
COPY
(FROM generate_series(100_000))
TO 'row-groups.parquet'
(FORMAT PARQUET, ROW_GROUP_SIZE 100_000);
```
#### The `ROW_GROUPS_PER_FILE` Option {#docs:data:parquet:tips::the-row_groups_per_file-option}
The `ROW_GROUPS_PER_FILE` parameter starts a new Parquet file once the current file has reached the specified number of row groups.
```sql
COPY
(FROM generate_series(100_000))
TO 'output-directory'
(FORMAT PARQUET, ROW_GROUP_SIZE 20_000, ROW_GROUPS_PER_FILE 2);
```
> If multiple threads are active, the number of row groups in a file may slightly exceed the specified number of row groups to limit the amount of locking, similarly to the behaviour of [`FILE_SIZE_BYTES`](#docs:sql:statements:copy::copy--to-options).
> However, if `PER_THREAD_OUTPUT` is set, only one thread writes to each file, and it becomes accurate again.
See the [Performance Guide on βFile Formatsβ](#docs:guides:performance:file_formats::parquet-file-sizes) for more tips. | 643 |
|
Data Import | Partitioning | Examples | Read data from a Hive partitioned data set:
```sql
SELECT *
FROM read_parquet('orders/*/*/*.parquet', hive_partitioning = true);
```
Write a table to a Hive partitioned data set:
```sql
COPY orders
TO 'orders' (FORMAT PARQUET, PARTITION_BY (year, month));
```
Note that the `PARTITION_BY` options cannot use expressions. You can produce columns on the fly using the following syntax:
```sql
COPY (SELECT *, year(timestamp) AS year, month(timestamp) AS month FROM services)
TO 'test' (PARTITION_BY (year, month));
```
When reading, the partition columns are read from the directory structure and
can be included or excluded depending on the `hive_partitioning` parameter.
```sql
FROM read_parquet('test/*/*/*.parquet', hive_partitioning = true); -- will include year, month partition columns
FROM read_parquet('test/*/*/*.parquet', hive_partitioning = false); -- will not include year, month columns
``` | 227 |
|
Data Import | Partitioning | Hive Partitioning | Hive partitioning is a [partitioning strategy](https://en.wikipedia.org/wiki/Partition_(database)) that is used to split a table into multiple files based on **partition keys**. The files are organized into folders. Within each folder, the **partition key** has a value that is determined by the name of the folder.
Below is an example of a Hive partitioned file hierarchy. The files are partitioned on two keys (`year` and `month`).
```text
orders
├── year=2021
│   ├── month=1
│   │   ├── file1.parquet
│   │   └── file2.parquet
│   └── month=2
│       └── file3.parquet
└── year=2022
    ├── month=11
    │   ├── file4.parquet
    │   └── file5.parquet
    └── month=12
        └── file6.parquet
```
Files stored in this hierarchy can be read using the `hive_partitioning` flag.
```sql
SELECT *
FROM read_parquet('orders/*/*/*.parquet', hive_partitioning = true);
```
When we specify the `hive_partitioning` flag, the values of the columns will be read from the directories.
#### Filter Pushdown {#docs:data:partitioning:hive_partitioning::filter-pushdown}
Filters on the partition keys are automatically pushed down into the files. This way the system skips reading files that are not necessary to answer a query. For example, consider the following query on the above dataset:
```sql
SELECT *
FROM read_parquet('orders/*/*/*.parquet', hive_partitioning = true)
WHERE year = 2022
AND month = 11;
```
When executing this query, only the following files will be read:
```text
orders
└── year=2022
    └── month=11
        ├── file4.parquet
        └── file5.parquet
```
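If you want to confirm that partition filters are applied on your own data, inspecting the query plan is one option; a sketch using `EXPLAIN` (the plan shows the scan's filters, though the exact output varies between DuckDB versions):
```sql
EXPLAIN
SELECT *
FROM read_parquet('orders/*/*/*.parquet', hive_partitioning = true)
WHERE year = 2022
  AND month = 11;
```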
#### Autodetection {#docs:data:partitioning:hive_partitioning::autodetection}
By default, the system tries to infer whether the provided files are in a Hive partitioned hierarchy and, if so, enables the `hive_partitioning` flag automatically. The autodetection looks at the names of the folders and searches for a `'key' = 'value'` pattern. This behavior can be overridden by using the `hive_partitioning` configuration option:
```sql
SET hive_partitioning = false;
```
#### Hive Types {#docs:data:partitioning:hive_partitioning::hive-types}
`hive_types` is a way to specify the logical types of the hive partitions in a struct:
```sql
SELECT *
FROM read_parquet(
'dir/**/*.parquet',
hive_partitioning = true,
hive_types = {'release': DATE, 'orders': BIGINT}
);
```
`hive_types` will be autodetected for the following types: `DATE`, `TIMESTAMP` and `BIGINT`. To switch off the autodetection, the flag `hive_types_autocast = 0` can be set.
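For example, a minimal sketch that keeps the Hive partitioning but leaves the partition values as plain strings, assuming `hive_types_autocast` is accepted as a `read_parquet` parameter:
```sql
SELECT *
FROM read_parquet(
    'dir/**/*.parquet',
    hive_partitioning = true,
    hive_types_autocast = 0
);
```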
#### Writing Partitioned Files {#docs:data:partitioning:hive_partitioning::writing-partitioned-files}
See the [Partitioned Writes](#docs:data:partitioning:partitioned_writes) section. | 714 |
|
Data Import | Partitioning | Examples | Write a table to a Hive partitioned data set of Parquet files:
```sql
COPY orders TO 'orders' (FORMAT PARQUET, PARTITION_BY (year, month));
```
Write a table to a Hive partitioned data set of CSV files, allowing overwrites:
```sql
COPY orders TO 'orders' (FORMAT CSV, PARTITION_BY (year, month), OVERWRITE_OR_IGNORE);
```
Write a table to a Hive partitioned data set of GZIP-compressed CSV files, setting an explicit extension for the data files:
```sql
COPY orders TO 'orders' (FORMAT CSV, PARTITION_BY (year, month), COMPRESSION GZIP, FILE_EXTENSION 'csv.gz');
``` | 147 |
|
Data Import | Partitioning | Partitioned Writes | When the `PARTITION_BY` clause is specified for the [`COPY` statement](#docs:sql:statements:copy), the files are written in a [Hive partitioned](#docs:data:partitioning:hive_partitioning) folder hierarchy. The target is the name of the root directory (in the example above: `orders`). The files are written in-order in the file hierarchy. Currently, one file is written per thread to each directory.
```text
orders
├── year=2021
│   ├── month=1
│   │   ├── data_1.parquet
│   │   └── data_2.parquet
│   └── month=2
│       └── data_1.parquet
└── year=2022
    ├── month=11
    │   ├── data_1.parquet
    │   └── data_2.parquet
    └── month=12
        └── data_1.parquet
```
The values of the partitions are automatically extracted from the data. Note that it can be very expensive to write many partitions, since many files will be created. The ideal partition count depends on how large your data set is.
> **Best practice. ** Writing data into many small partitions is expensive. It is generally recommended to have at least `100 MB` of data per partition.
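After a partitioned write, one way to sanity-check the granularity is to look at per-partition row counts; a sketch (row counts are only a rough proxy for on-disk size):
```sql
SELECT year, month, count(*) AS row_count
FROM read_parquet('orders/*/*/*.parquet', hive_partitioning = true)
GROUP BY ALL
ORDER BY ALL;
```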
#### Overwriting {#docs:data:partitioning:partitioned_writes::overwriting}
By default the partitioned write will not allow overwriting existing directories. Use the `OVERWRITE_OR_IGNORE` option to allow overwriting an existing directory.
#### Filename Pattern {#docs:data:partitioning:partitioned_writes::filename-pattern}
By default, files will be named `data_0.parquet` or `data_0.csv`. With the flag `FILENAME_PATTERN` a pattern with `{i}` or `{uuid}` can be defined to create specific filenames:
* `{i}` will be replaced by an index
* `{uuid}` will be replaced by a 128-bit UUID
Write a table to a Hive partitioned data set of .parquet files, with an index in the filename:
```sql
COPY orders TO 'orders'
(FORMAT PARQUET, PARTITION_BY (year, month), OVERWRITE_OR_IGNORE, FILENAME_PATTERN 'orders_{i}');
```
Write a table to a Hive partitioned data set of .parquet files, with unique filenames:
```sql
COPY orders TO 'orders'
(FORMAT PARQUET, PARTITION_BY (year, month), OVERWRITE_OR_IGNORE, FILENAME_PATTERN 'file_{uuid}');
``` | 554 |
|
Data Import | Appender | The Appender can be used to load bulk data into a DuckDB database. It is currently available in the [C, C++, Go, Java, and Rust APIs](#::appender-support-in-other-clients). The Appender is tied to a connection, and will use the transaction context of that connection when appending. An Appender always appends to a single table in the database file.
In the [C++ API](#docs:api:cpp), the Appender works as follows:
```cpp
DuckDB db;
Connection con(db);
// create the table
con.Query("CREATE TABLE people (id INTEGER, name VARCHAR)");
// initialize the appender
Appender appender(con, "people");
```
The `AppendRow` function is the easiest way of appending data. It uses recursive templates to allow you to put all the values of a single row within one function call, as follows:
```cpp
appender.AppendRow(1, "Mark");
```
Rows can also be individually constructed using the `BeginRow`, `EndRow` and `Append` methods. This is done internally by `AppendRow`, and hence has the same performance characteristics.
```cpp
appender.BeginRow();
appender.Append<int32_t>(2);
appender.Append<string>("Hannes");
appender.EndRow();
```
Any values added to the Appender are cached prior to being inserted into the database system
for performance reasons. That means that, while appending, the rows might not be immediately visible in the system. The cache is automatically flushed when the Appender goes out of scope or when `appender.Close()` is called. The cache can also be manually flushed using the `appender.Flush()` method. After either `Flush` or `Close` is called, all the data has been written to the database system. | 379 |
||
Data Import | Appender | Date, Time and Timestamps | While numbers and strings are rather self-explanatory, dates, times and timestamps require some explanation. They can be directly appended using the methods provided by `duckdb::Date`, `duckdb::Time` or `duckdb::Timestamp`. They can also be appended using the internal `duckdb::Value` type; however, this adds some additional overhead and should be avoided if possible.
Below is a short example:
```cpp
con.Query("CREATE TABLE dates (d DATE, t TIME, ts TIMESTAMP)");
Appender appender(con, "dates");
// construct the values using the Date/Time/Timestamp types
// (this is the most efficient approach)
appender.AppendRow(
Date::FromDate(1992, 1, 1),
Time::FromTime(1, 1, 1, 0),
Timestamp::FromDatetime(Date::FromDate(1992, 1, 1), Time::FromTime(1, 1, 1, 0))
);
// construct duckdb::Value objects
appender.AppendRow(
Value::DATE(1992, 1, 1),
Value::TIME(1, 1, 1, 0),
Value::TIMESTAMP(1992, 1, 1, 1, 1, 1, 0)
);
``` | 275 |
|
Data Import | Appender | Commit Frequency | By default, the appender performs a commit every 204,800 rows.
You can change this by explicitly using [transactions](#docs:sql:statements:transactions) and surrounding your batches of `AppendRow` calls by `BEGIN TRANSACTION` and `COMMIT` statements. | 57 |
|
Data Import | Appender | Handling Constraint Violations | If the Appender encounters a `PRIMARY KEY` conflict or a `UNIQUE` constraint violation, it fails and returns the following error:
```console
Constraint Error: PRIMARY KEY or UNIQUE constraint violated: duplicate key "..."
```
In this case, the entire append operation fails and no rows are inserted. | 63 |