h1 | h2 | h3 | h5 | content | tokens
---|---|---|---|---|---|
Client APIs | C | API Reference Overview | `duckdb_query_arrow_array` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Fetch an internal arrow struct array from the arrow result. Remember to call release on the respective
ArrowArray object.
This function can be called multiple times to fetch subsequent chunks; each call frees the previous out_array,
so consume the out_array before calling this function again.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_state duckdb_query_arrow_array(
duckdb_arrow result,
duckdb_arrow_array *out_array
);
```
###### Parameters {#docs:api:c:api::parameters}
* `result`: The result to fetch the array from.
* `out_array`: The output array.
###### Return Value {#docs:api:c:api::return-value}
`DuckDBSuccess` on success or `DuckDBError` on failure.
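A minimal usage sketch of this deprecated interface, assuming `con` is an open `duckdb_connection` and an `integers` table exists; the `consume_arrow_array` helper is hypothetical and stands in for whatever Arrow consumer the chunk is handed to. The sketch fetches only two chunks for brevity; a real consumer would keep fetching until the Arrow stream is exhausted.
```c
#include <stdio.h>
#include "duckdb.h"

/* Hypothetical stand-in for handing the chunk to an Arrow consumer. */
static void consume_arrow_array(duckdb_arrow_array array) {
    (void)array; /* consume the wrapped ArrowArray here */
}

void fetch_arrow_chunks(duckdb_connection con) {
    duckdb_arrow arrow_result = NULL;
    if (duckdb_query_arrow(con, "SELECT * FROM integers", &arrow_result) == DuckDBError) {
        fprintf(stderr, "%s\n", duckdb_query_arrow_error(arrow_result));
        duckdb_destroy_arrow(&arrow_result);
        return;
    }
    duckdb_arrow_array chunk = NULL;
    /* Each call returns the next chunk and frees the previously returned one,
       so the chunk must be fully consumed before fetching the next. */
    if (duckdb_query_arrow_array(arrow_result, &chunk) == DuckDBSuccess) {
        consume_arrow_array(chunk);
    }
    if (duckdb_query_arrow_array(arrow_result, &chunk) == DuckDBSuccess) {
        consume_arrow_array(chunk);
    }
    duckdb_destroy_arrow(&arrow_result);
}
```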
<br> | 194 |
Client APIs | C | API Reference Overview | `duckdb_arrow_column_count` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Returns the number of columns present in the arrow result object.
###### Syntax {#docs:api:c:api::syntax}
```c
idx_t duckdb_arrow_column_count(
duckdb_arrow result
);
```
###### Parameters {#docs:api:c:api::parameters}
* `result`: The result object.
###### Return Value {#docs:api:c:api::return-value}
The number of columns present in the result object.
<br> | 121 |
Client APIs | C | API Reference Overview | `duckdb_arrow_row_count` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Returns the number of rows present in the arrow result object.
###### Syntax {#docs:api:c:api::syntax}
```c
idx_t duckdb_arrow_row_count(
duckdb_arrow result
);
```
###### Parameters {#docs:api:c:api::parameters}
* `result`: The result object.
###### Return Value {#docs:api:c:api::return-value}
The number of rows present in the result object.
<br> | 121 |
Client APIs | C | API Reference Overview | `duckdb_arrow_rows_changed` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Returns the number of rows changed by the query stored in the arrow result. This is relevant only for
INSERT/UPDATE/DELETE queries. For other queries the rows_changed will be 0.
###### Syntax {#docs:api:c:api::syntax}
```c
idx_t duckdb_arrow_rows_changed(
duckdb_arrow result
);
```
###### Parameters {#docs:api:c:api::parameters}
* `result`: The result object.
###### Return Value {#docs:api:c:api::return-value}
The number of rows changed.
<br> | 144 |
Client APIs | C | API Reference Overview | `duckdb_query_arrow_error` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Returns the error message contained within the result. The error is only set if `duckdb_query_arrow` returns
`DuckDBError`.
The error message should not be freed. It will be de-allocated when `duckdb_destroy_arrow` is called.
###### Syntax {#docs:api:c:api::syntax}
```c
const char *duckdb_query_arrow_error(
duckdb_arrow result
);
```
###### Parameters {#docs:api:c:api::parameters}
* `result`: The result object to fetch the error from.
###### Return Value {#docs:api:c:api::return-value}
The error of the result.
<br> | 165 |
Client APIs | C | API Reference Overview | `duckdb_destroy_arrow` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Closes the result and de-allocates all memory allocated for the arrow result.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_destroy_arrow(
duckdb_arrow *result
);
```
###### Parameters {#docs:api:c:api::parameters}
* `result`: The result to destroy.
<br> | 99 |
Client APIs | C | API Reference Overview | `duckdb_destroy_arrow_stream` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Releases the arrow array stream and de-allocates its memory.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_destroy_arrow_stream(
duckdb_arrow_stream *stream_p
);
```
###### Parameters {#docs:api:c:api::parameters}
* `stream_p`: The arrow array stream to destroy.
<br> | 102 |
Client APIs | C | API Reference Overview | `duckdb_execute_prepared_arrow` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Executes the prepared statement with the given bound parameters, and returns an arrow query result.
Note that after running `duckdb_execute_prepared_arrow`, `duckdb_destroy_arrow` must be called on the result object.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_state duckdb_execute_prepared_arrow(
duckdb_prepared_statement prepared_statement,
duckdb_arrow *out_result
);
```
###### Parameters {#docs:api:c:api::parameters}
* `prepared_statement`: The prepared statement to execute.
* `out_result`: The query result.
###### Return Value {#docs:api:c:api::return-value}
`DuckDBSuccess` on success or `DuckDBError` on failure.
<br> | 185 |
Client APIs | C | API Reference Overview | `duckdb_arrow_scan` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Scans the Arrow stream and creates a view with the given name.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_state duckdb_arrow_scan(
duckdb_connection connection,
const char *table_name,
duckdb_arrow_stream arrow
);
```
###### Parameters {#docs:api:c:api::parameters}
* `connection`: The connection on which to execute the scan.
* `table_name`: Name of the temporary view to create.
* `arrow`: Arrow stream wrapper.
###### Return Value {#docs:api:c:api::return-value}
`DuckDBSuccess` on success or `DuckDBError` on failure.
<br> | 169 |
Client APIs | C | API Reference Overview | `duckdb_arrow_array_scan` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Scans the Arrow array and creates a view with the given name.
Note that after running `duckdb_arrow_array_scan`, `duckdb_destroy_arrow_stream` must be called on the out stream.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_state duckdb_arrow_array_scan(
duckdb_connection connection,
const char *table_name,
duckdb_arrow_schema arrow_schema,
duckdb_arrow_array arrow_array,
duckdb_arrow_stream *out_stream
);
```
###### Parameters {#docs:api:c:api::parameters}
* `connection`: The connection on which to execute the scan.
* `table_name`: Name of the temporary view to create.
* `arrow_schema`: Arrow schema wrapper.
* `arrow_array`: Arrow array wrapper.
* `out_stream`: Output array stream that wraps around the passed schema, for releasing/deleting once done.
###### Return Value {#docs:api:c:api::return-value}
`DuckDBSuccess` on success or `DuckDBError` on failure.
<br> | 244 |
Client APIs | C | API Reference Overview | `duckdb_execute_tasks` | Execute DuckDB tasks on this thread.
Will return after `max_tasks` have been executed, or if there are no more tasks present.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_execute_tasks(
duckdb_database database,
idx_t max_tasks
);
```
###### Parameters {#docs:api:c:api::parameters}
* `database`: The database object to execute tasks for
* `max_tasks`: The maximum amount of tasks to execute
<br> | 107 |
Client APIs | C | API Reference Overview | `duckdb_create_task_state` | Creates a task state that can be used with `duckdb_execute_tasks_state` to execute tasks until
`duckdb_finish_execution` is called on the state.
`duckdb_destroy_task_state` must be called on the result.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_task_state duckdb_create_task_state(
duckdb_database database
);
```
###### Parameters {#docs:api:c:api::parameters}
* `database`: The database object to create the task state for
###### Return Value {#docs:api:c:api::return-value}
The task state that can be used with duckdb_execute_tasks_state.
<br> | 142 |
Client APIs | C | API Reference Overview | `duckdb_execute_tasks_state` | Execute DuckDB tasks on this thread.
The thread will keep on executing tasks forever, until duckdb_finish_execution is called on the state.
Multiple threads can share the same duckdb_task_state.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_execute_tasks_state(
duckdb_task_state state
);
```
###### Parameters {#docs:api:c:api::parameters}
* `state`: The task state of the executor
<br> | 101 |
Client APIs | C | API Reference Overview | `duckdb_execute_n_tasks_state` | Execute DuckDB tasks on this thread.
The thread will keep on executing tasks until either duckdb_finish_execution is called on the state,
max_tasks tasks have been executed or there are no more tasks to be executed.
Multiple threads can share the same duckdb_task_state.
###### Syntax {#docs:api:c:api::syntax}
```c
idx_t duckdb_execute_n_tasks_state(
duckdb_task_state state,
idx_t max_tasks
);
```
###### Parameters {#docs:api:c:api::parameters}
* `state`: The task state of the executor
* `max_tasks`: The maximum amount of tasks to execute
###### Return Value {#docs:api:c:api::return-value}
The amount of tasks that have actually been executed
<br> | 163 |
Client APIs | C | API Reference Overview | `duckdb_finish_execution` | Finish execution on a specific task state.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_finish_execution(
duckdb_task_state state
);
```
###### Parameters {#docs:api:c:api::parameters}
* `state`: The task state to finish execution
<br> | 67 |
Client APIs | C | API Reference Overview | `duckdb_task_state_is_finished` | Check if the provided duckdb_task_state has finished execution
###### Syntax {#docs:api:c:api::syntax}
```c
bool duckdb_task_state_is_finished(
duckdb_task_state state
);
```
###### Parameters {#docs:api:c:api::parameters}
* `state`: The task state to inspect
###### Return Value {#docs:api:c:api::return-value}
Whether or not duckdb_finish_execution has been called on the task state
<br> | 103 |
Client APIs | C | API Reference Overview | `duckdb_destroy_task_state` | Destroys the task state returned from duckdb_create_task_state.
Note that this should not be called while there is an active duckdb_execute_tasks_state running
on the task state.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_destroy_task_state(
duckdb_task_state state
);
```
###### Parameters {#docs:api:c:api::parameters}
* `state`: The task state to clean up
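A minimal sketch tying together the task-state functions above. It assumes `db` is an open `duckdb_database`, that some other connection is concurrently producing tasks, and that whichever thread decides execution is done calls `duckdb_finish_execution(state)` on the shared state.
```c
#include "duckdb.h"

// Worker loop: executes tasks in batches on the given shared task state until
// duckdb_finish_execution is called on it from another thread.
void run_worker(duckdb_task_state state) {
    while (!duckdb_task_state_is_finished(state)) {
        // Execute up to 100 tasks; returns earlier if no tasks are available.
        idx_t executed = duckdb_execute_n_tasks_state(state, 100);
        (void)executed;
        // A real worker would yield or sleep when `executed` is 0 instead of
        // busy-waiting.
    }
}

// Typical lifecycle (sketch of the calls involved):
//   duckdb_task_state state = duckdb_create_task_state(db);
//   /* hand `state` to one or more worker threads running run_worker(state) */
//   duckdb_finish_execution(state);     // once the work is done
//   duckdb_destroy_task_state(state);   // after all workers have returned
```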
<br> | 99 |
Client APIs | C | API Reference Overview | `duckdb_execution_is_finished` | Returns true if the execution of the current query is finished.
###### Syntax {#docs:api:c:api::syntax}
```c
bool duckdb_execution_is_finished(
duckdb_connection con
);
```
###### Parameters {#docs:api:c:api::parameters}
* `con`: The connection on which to check
<br> | 72 |
Client APIs | C | API Reference Overview | `duckdb_stream_fetch_chunk` | > **Warning.** Deprecation notice. This method is scheduled for removal in a future release.
Fetches a data chunk from the (streaming) duckdb_result. This function should be called repeatedly until the result is
exhausted.
The result must be destroyed with `duckdb_destroy_data_chunk`.
This function can only be used on duckdb_results created with `duckdb_pending_prepared_streaming`.
If this function is used, none of the other result functions can be used and vice versa (i.e., this function cannot be
mixed with the legacy result functions or the materialized result functions).
It is not known beforehand how many chunks will be returned by this result.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_data_chunk duckdb_stream_fetch_chunk(
duckdb_result result
);
```
###### Parameters {#docs:api:c:api::parameters}
* `result`: The result object to fetch the data chunk from.
###### Return Value {#docs:api:c:api::return-value}
The resulting data chunk. Returns `NULL` if the result has an error.
<br> | 246 |
Client APIs | C | API Reference Overview | `duckdb_fetch_chunk` | Fetches a data chunk from a duckdb_result. This function should be called repeatedly until the result is exhausted.
The result must be destroyed with `duckdb_destroy_data_chunk`.
It is not known beforehand how many chunks will be returned by this result.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_data_chunk duckdb_fetch_chunk(
duckdb_result result
);
```
###### Parameters {#docs:api:c:api::parameters}
* `result`: The result object to fetch the data chunk from.
###### Return Value {#docs:api:c:api::return-value}
The resulting data chunk. Returns `NULL` if the result has an error.
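A minimal sketch of draining a result with this function, assuming `con` is an open `duckdb_connection`; the query and the printed chunk sizes are purely illustrative.
```c
#include <stdio.h>
#include "duckdb.h"

void print_chunk_sizes(duckdb_connection con) {
    duckdb_result res;
    if (duckdb_query(con, "SELECT * FROM range(10000)", &res) == DuckDBError) {
        fprintf(stderr, "%s\n", duckdb_result_error(&res));
        duckdb_destroy_result(&res);
        return;
    }
    while (1) {
        duckdb_data_chunk chunk = duckdb_fetch_chunk(res);
        if (!chunk) {
            break; // result exhausted (or an error occurred)
        }
        printf("chunk with %llu rows\n", (unsigned long long)duckdb_data_chunk_get_size(chunk));
        duckdb_destroy_data_chunk(&chunk);
    }
    duckdb_destroy_result(&res);
}
```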
<br> | 152 |
Client APIs | C | API Reference Overview | `duckdb_create_cast_function` | Creates a new cast function object.
###### Return Value {#docs:api:c:api::return-value}
The cast function object.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_cast_function duckdb_create_cast_function(
);
```
<br> | 62 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_set_source_type` | Sets the source type of the cast function.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_cast_function_set_source_type(
duckdb_cast_function cast_function,
duckdb_logical_type source_type
);
```
###### Parameters {#docs:api:c:api::parameters}
* `cast_function`: The cast function object.
* `source_type`: The source type to set.
<br> | 91 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_set_target_type` | Sets the target type of the cast function.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_cast_function_set_target_type(
duckdb_cast_function cast_function,
duckdb_logical_type target_type
);
```
###### Parameters {#docs:api:c:api::parameters}
* `cast_function`: The cast function object.
* `target_type`: The target type to set.
<br> | 91 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_set_implicit_cast_cost` | Sets the "cost" of implicitly casting the source type to the target type using this function.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_cast_function_set_implicit_cast_cost(
duckdb_cast_function cast_function,
int64_t cost
);
```
###### Parameters {#docs:api:c:api::parameters}
* `cast_function`: The cast function object.
* `cost`: The cost to set.
<br> | 99 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_set_function` | Sets the actual cast function to use.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_cast_function_set_function(
duckdb_cast_function cast_function,
duckdb_cast_function_t function
);
```
###### Parameters {#docs:api:c:api::parameters}
* `cast_function`: The cast function object.
* `function`: The function to set.
<br> | 87 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_set_extra_info` | Assigns extra information to the cast function that can be fetched during execution, etc.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_cast_function_set_extra_info(
duckdb_cast_function cast_function,
void *extra_info,
duckdb_delete_callback_t destroy
);
```
###### Parameters {#docs:api:c:api::parameters}
* `extra_info`: The extra information
* `destroy`: The callback that will be called to destroy the extra information (if any)
<br> | 111 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_get_extra_info` | Retrieves the extra info of the function as set in `duckdb_cast_function_set_extra_info`.
###### Syntax {#docs:api:c:api::syntax}
```c
void *duckdb_cast_function_get_extra_info(
duckdb_function_info info
);
```
###### Parameters {#docs:api:c:api::parameters}
* `info`: The info object.
###### Return Value {#docs:api:c:api::return-value}
The extra info.
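A minimal sketch of the extra-info mechanism; the `my_cast_info` struct, its field, and the helper names are illustrative, not part of the DuckDB API.
```c
#include <stdlib.h>
#include "duckdb.h"

// Illustrative caller-defined state attached to a cast function.
typedef struct {
    int some_setting;
} my_cast_info;

// Matches duckdb_delete_callback_t; invoked by DuckDB to clean up the extra info.
static void destroy_my_cast_info(void *ptr) {
    free(ptr);
}

void attach_extra_info(duckdb_cast_function cast_function) {
    my_cast_info *data = (my_cast_info *)malloc(sizeof(my_cast_info));
    data->some_setting = 42;
    duckdb_cast_function_set_extra_info(cast_function, data, destroy_my_cast_info);
}

// Inside the cast callback, the same pointer is retrieved via:
//   my_cast_info *data = (my_cast_info *)duckdb_cast_function_get_extra_info(info);
```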
<br> | 104 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_get_cast_mode` | Get the cast execution mode from the given function info.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_cast_mode duckdb_cast_function_get_cast_mode(
duckdb_function_info info
);
```
###### Parameters {#docs:api:c:api::parameters}
* `info`: The info object.
###### Return Value {#docs:api:c:api::return-value}
The cast mode.
<br> | 96 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_set_error` | Report that an error has occurred while executing the cast function.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_cast_function_set_error(
duckdb_function_info info,
const char *error
);
```
###### Parameters {#docs:api:c:api::parameters}
* `info`: The info object.
* `error`: The error message.
<br> | 85 |
Client APIs | C | API Reference Overview | `duckdb_cast_function_set_row_error` | Report that an error has occurred while executing the cast function, setting the corresponding output row to NULL.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_cast_function_set_row_error(
duckdb_function_info info,
const char *error,
idx_t row,
duckdb_vector output
);
```
###### Parameters {#docs:api:c:api::parameters}
* `info`: The info object.
* `error`: The error message.
* `row`: The index of the row within the output vector to set to NULL.
* `output`: The output vector.
<br> | 129 |
Client APIs | C | API Reference Overview | `duckdb_register_cast_function` | Registers a cast function within the given connection.
###### Syntax {#docs:api:c:api::syntax}
```c
duckdb_state duckdb_register_cast_function(
duckdb_connection con,
duckdb_cast_function cast_function
);
```
###### Parameters {#docs:api:c:api::parameters}
* `con`: The connection to use.
* `cast_function`: The cast function to register.
###### Return Value {#docs:api:c:api::return-value}
Whether or not the registration was successful.
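A minimal end-to-end sketch of the registration flow described above, assuming `con` is an open `duckdb_connection` and that the callback signature follows the `duckdb_cast_function_t` typedef from `duckdb.h`. The cast itself (doubling `INTEGER` values while converting them to `BIGINT`) is purely illustrative; real custom casts usually target types without a built-in cast, handle NULLs via the vectors' validity masks, and report failures with `duckdb_cast_function_set_error` or `duckdb_cast_function_set_row_error`.
```c
#include "duckdb.h"

// Illustrative cast callback: reads INTEGER input, writes BIGINT output.
static bool my_cast(duckdb_function_info info, idx_t count, duckdb_vector input, duckdb_vector output) {
    (void)info; // would be used for duckdb_cast_function_get_extra_info / set_error
    int32_t *in_data = (int32_t *)duckdb_vector_get_data(input);
    int64_t *out_data = (int64_t *)duckdb_vector_get_data(output);
    for (idx_t row = 0; row < count; row++) {
        out_data[row] = 2 * (int64_t)in_data[row];
    }
    return true; // return false after reporting an error to signal failure
}

duckdb_state register_my_cast(duckdb_connection con) {
    duckdb_cast_function f = duckdb_create_cast_function();
    duckdb_logical_type source = duckdb_create_logical_type(DUCKDB_TYPE_INTEGER);
    duckdb_logical_type target = duckdb_create_logical_type(DUCKDB_TYPE_BIGINT);
    duckdb_cast_function_set_source_type(f, source);
    duckdb_cast_function_set_target_type(f, target);
    duckdb_cast_function_set_implicit_cast_cost(f, 100); // arbitrary illustrative cost
    duckdb_cast_function_set_function(f, my_cast);
    duckdb_state status = duckdb_register_cast_function(con, f);
    duckdb_destroy_logical_type(&source);
    duckdb_destroy_logical_type(&target);
    duckdb_destroy_cast_function(&f);
    return status;
}
```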
<br> | 113 |
Client APIs | C | API Reference Overview | `duckdb_destroy_cast_function` | Destroys the cast function object.
###### Syntax {#docs:api:c:api::syntax}
```c
void duckdb_destroy_cast_function(
duckdb_cast_function *cast_function
);
```
###### Parameters {#docs:api:c:api::parameters}
* `cast_function`: The cast function object.
<br> | 70 |
Client APIs | C++ API | > **Warning.** DuckDB's C++ API is internal.
> It is not guaranteed to be stable and can change without notice.
> If you would like to build an application on DuckDB, we recommend using the [C API](#docs:api:c:overview). | 58 |
||
Client APIs | C++ API | Installation | The DuckDB C++ API can be installed as part of the `libduckdb` packages. Please see the [installation page](https://duckdb.org/docs/installation/) for details. | 39 |
|
Client APIs | C++ API | Basic API Usage | DuckDB implements a custom C++ API. This is built around the abstractions of a database instance (`DuckDB` class), multiple `Connection`s to the database instance, and `QueryResult` instances as the result of queries. The header file for the C++ API is `duckdb.hpp`.
#### Startup & Shutdown {#docs:api:cpp::startup--shutdown}
To use DuckDB, you must first initialize a `DuckDB` instance using its constructor. `DuckDB()` takes as parameter the database file to read and write from. The special value `nullptr` can be used to create an **in-memory database**. Note that for an in-memory database no data is persisted to disk (i.e., all data is lost when you exit the process). The second parameter to the `DuckDB` constructor is an optional `DBConfig` object. In `DBConfig`, you can set various database parameters, for example the read/write mode or memory limits. The `DuckDB` constructor may throw exceptions, for example if the database file is not usable.
With the `DuckDB` instance, you can create one or many `Connection` instances using the `Connection()` constructor. While connections should be thread-safe, they will be locked during querying. It is therefore recommended that each thread uses its own connection if you are in a multithreaded environment.
```cpp
DuckDB db(nullptr);
Connection con(db);
```
#### Querying {#docs:api:cpp::querying}
Connections expose the `Query()` method to send a SQL query string to DuckDB from C++. `Query()` fully materializes the query result as a `MaterializedQueryResult` in memory before returning, at which point the query result can be consumed. There is also a streaming API for queries; see further below.
```cpp
// create a table
con.Query("CREATE TABLE integers (i INTEGER, j INTEGER)");
// insert three rows into the table
con.Query("INSERT INTO integers VALUES (3, 4), (5, 6), (7, NULL)");
auto result = con.Query("SELECT * FROM integers");
if (result->HasError()) {
cerr << result->GetError() << endl;
} else {
cout << result->ToString() << endl;
}
```
The `MaterializedQueryResult` instance first contains two fields that indicate whether the query was successful. `Query` will not throw exceptions under normal circumstances. Instead, invalid queries or other issues will lead to the `success` Boolean field in the query result instance being set to `false`. In this case, an error message may be available in `error` as a string. If successful, other fields are set: the type of statement that was just executed (e.g., `StatementType::INSERT_STATEMENT`) is contained in `statement_type`. The high-level (“Logical type”/“SQL type”) types of the result set columns are in `types`. The names of the result columns are in the `names` string vector. In case multiple result sets are returned, for example because the query contained multiple statements, the result set can be chained using the `next` field.
DuckDB also supports prepared statements in the C++ API with the `Prepare()` method. This returns an instance of `PreparedStatement`. This instance can be used to execute the prepared statement with parameters. Below is an example:
```cpp
std::unique_ptr<PreparedStatement> prepare = con.Prepare("SELECT count(*) FROM a WHERE i = $1");
std::unique_ptr<QueryResult> result = prepare->Execute(12);
```
> **Warning.** Do **not** use prepared statements to insert large amounts of data into DuckDB. See [the data import documentation](#docs:data:overview) for better options.
#### UDF API {#docs:api:cpp::udf-api}
The UDF API allows the definition of user-defined functions. It is exposed in `duckdb::Connection` through the methods `CreateScalarFunction()`, `CreateVectorizedFunction()`, and their variants.
These methods create UDFs in the temporary schema (`TEMP_SCHEMA`) of the owner connection, which is the only connection allowed to use and change them. | 881 |
|
Client APIs | C++ API | Basic API Usage | CreateScalarFunction | The user can code an ordinary scalar function and invoke `CreateScalarFunction()` to register it; afterward, the UDF can be used in a `SELECT` statement, for instance:
```cpp
bool bigger_than_four(int value) {
return value > 4;
}
connection.CreateScalarFunction<bool, int>("bigger_than_four", &bigger_than_four);
connection.Query("SELECT bigger_than_four(i) FROM (VALUES(3), (5)) tbl(i)")->Print();
```
The `CreateScalarFunction()` methods automatically create vectorized scalar UDFs, so they are as efficient as built-in functions. There are two variants of this method interface:
**1.**
```cpp
template<typename TR, typename... Args>
void CreateScalarFunction(string name, TR (*udf_func)(Args...))
```
- template parameters:
    - **TR** is the return type of the UDF function;
    - **Args** are the types of the UDF function's arguments, up to 3 (this method only supports up to ternary functions);
- **name**: is the name to register the UDF function;
- **udf_func**: is a pointer to the UDF function.
This method automatically discovers from the template typenames the corresponding LogicalTypes:
- `bool` → `LogicalType::BOOLEAN`
- `int8_t` → `LogicalType::TINYINT`
- `int16_t` → `LogicalType::SMALLINT`
- `int32_t` → `LogicalType::INTEGER`
- `int64_t` → `LogicalType::BIGINT`
- `float` → `LogicalType::FLOAT`
- `double` → `LogicalType::DOUBLE`
- `string_t` → `LogicalType::VARCHAR`
In DuckDB, some primitive types, e.g., `int32_t`, are mapped to multiple `LogicalType`s (`INTEGER`, `TIME`, and `DATE`), so for disambiguation users can use the following overloaded method.
**2.**
```cpp
template<typename TR, typename... Args>
void CreateScalarFunction(string name, vector<LogicalType> args, LogicalType ret_type, TR (*udf_func)(Args...))
```
An example of use would be:
```cpp
int32_t udf_date(int32_t a) {
return a;
}
con.Query("CREATE TABLE dates (d DATE)");
con.Query("INSERT INTO dates VALUES ('1992-01-01')");
con.CreateScalarFunction<int32_t, int32_t>("udf_date", {LogicalType::DATE}, LogicalType::DATE, &udf_date);
con.Query("SELECT udf_date(d) FROM dates")->Print();
```
- template parameters:
    - **TR** is the return type of the UDF function;
    - **Args** are the types of the UDF function's arguments, up to 3 (this method only supports up to ternary functions);
- **name**: is the name to register the UDF function;
- **args**: are the LogicalType arguments that the function uses, which should match the template Args types;
- **ret_type**: is the LogicalType returned by the function, which should match the template TR type;
- **udf_func**: is a pointer to the UDF function.
This function checks the template types against the LogicalTypes passed as arguments, and they must match as follows:
- `LogicalTypeId::BOOLEAN` → `bool`
- `LogicalTypeId::TINYINT` → `int8_t`
- `LogicalTypeId::SMALLINT` → `int16_t`
- `LogicalTypeId::DATE`, `LogicalTypeId::TIME`, `LogicalTypeId::INTEGER` → `int32_t`
- `LogicalTypeId::BIGINT`, `LogicalTypeId::TIMESTAMP` → `int64_t`
- `LogicalTypeId::FLOAT`, `LogicalTypeId::DOUBLE`, `LogicalTypeId::DECIMAL` → `double`
- `LogicalTypeId::VARCHAR`, `LogicalTypeId::CHAR`, `LogicalTypeId::BLOB` → `string_t`
- `LogicalTypeId::VARBINARY` → `blob_t` | 819 |
Client APIs | C++ API | Basic API Usage | CreateVectorizedFunction | The `CreateVectorizedFunction()` methods register a vectorized UDF such as:
```cpp
/*
* This vectorized function copies the input values to the result vector
*/
template<typename TYPE>
static void udf_vectorized(DataChunk &args, ExpressionState &state, Vector &result) {
// set the result vector type
result.vector_type = VectorType::FLAT_VECTOR;
// get a raw array from the result
auto result_data = FlatVector::GetData<TYPE>(result);
// get the solely input vector
auto &input = args.data[0];
// now get an orrified vector
VectorData vdata;
input.Orrify(args.size(), vdata);
// get a raw array from the orrified input
auto input_data = (TYPE *)vdata.data;
// handling the data
for (idx_t i = 0; i < args.size(); i++) {
auto idx = vdata.sel->get_index(i);
if ((*vdata.nullmask)[idx]) {
continue;
}
result_data[i] = input_data[idx];
}
}
con.Query("CREATE TABLE integers (i INTEGER)");
con.Query("INSERT INTO integers VALUES (1), (2), (3), (999)");
con.CreateVectorizedFunction<int, int>("udf_vectorized_int", &udf_vectorized<int>);
con.Query("SELECT udf_vectorized_int(i) FROM integers")->Print();
```
The vectorized UDF is a function of type _scalar_function_t_:
```cpp
typedef std::function<void(DataChunk &args, ExpressionState &expr, Vector &result)> scalar_function_t;
```
- **args** is a [DataChunk](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/common/types/data_chunk.hpp) that holds a set of input vectors for the UDF that all have the same length;
- **expr** is an [ExpressionState](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/execution/expression_executor_state.hpp) that provides information to the query's expression state;
- **result**: is a [Vector](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/common/types/vector.hpp) to store the result values.
There are different vector types to handle in a Vectorized UDF:
- ConstantVector;
- DictionaryVector;
- FlatVector;
- ListVector;
- StringVector;
- StructVector;
- SequenceVector.
The general API of the `CreateVectorizedFunction()` method is as follows:
**1.**
```cpp
template<typename TR, typename... Args>
void CreateVectorizedFunction(string name, scalar_function_t udf_func, LogicalType varargs = LogicalType::INVALID)
```
- template parameters:
    - **TR** is the return type of the UDF function;
    - **Args** are the types of the UDF function's arguments, up to 3.
- **name** is the name to register the UDF function;
- **udf_func** is a _vectorized_ UDF function;
- **varargs** is the type of varargs to support, or `LogicalType::INVALID` (the default value) if the function does not accept variable-length arguments.
This method automatically discovers from the template typenames the corresponding LogicalTypes:
- `bool` → `LogicalType::BOOLEAN`
- `int8_t` → `LogicalType::TINYINT`
- `int16_t` → `LogicalType::SMALLINT`
- `int32_t` → `LogicalType::INTEGER`
- `int64_t` → `LogicalType::BIGINT`
- `float` → `LogicalType::FLOAT`
- `double` → `LogicalType::DOUBLE`
- `string_t` → `LogicalType::VARCHAR`
**2.**
```cpp
template<typename TR, typename... Args>
void CreateVectorizedFunction(string name, vector<LogicalType> args, LogicalType ret_type, scalar_function_t udf_func, LogicalType varargs = LogicalType::INVALID)
``` | 820 |
Client APIs | CLI | Installation | The DuckDB CLI (Command Line Interface) is a single, dependency-free executable. It is precompiled for Windows, Mac, and Linux for both the stable version and for nightly builds produced by GitHub Actions. Please see the [installation page](https://duckdb.org/docs/installation/) under the CLI tab for download links.
The DuckDB CLI is based on the SQLite command line shell, so CLI-client-specific functionality is similar to what is described in the [SQLite documentation](https://www.sqlite.org/cli.html) (although DuckDB's SQL syntax follows PostgreSQL conventions with a [few exceptions](#docs:sql:dialect:postgresql_compatibility)).
> DuckDB has a [tldr page](https://tldr.inbrowser.app/pages/common/duckdb), which summarizes the most common uses of the CLI client.
> If you have [tldr](https://github.com/tldr-pages/tldr) installed, you can display it by running `tldr duckdb`. | 200 |
|
Client APIs | CLI | Getting Started | Once the CLI executable has been downloaded, unzip it and save it to any directory.
Navigate to that directory in a terminal and enter the command `duckdb` to run the executable.
If in a PowerShell or POSIX shell environment, use the command `./duckdb` instead. | 56 |
|
Client APIs | CLI | Usage | The typical usage of the `duckdb` command is the following:
```bash
duckdb [OPTIONS] [FILENAME]
```
#### Options {#docs:api:cli:overview::options}
The `[OPTIONS]` part encodes [arguments for the CLI client](#docs:api:cli:arguments). Common options include:
* `-csv`: sets the output mode to CSV
* `-json`: sets the output mode to JSON
* `-readonly`: open the database in read-only mode (see [concurrency in DuckDB](#docs:connect:concurrency::handling-concurrency))
For a full list of options, see the [command line arguments page](#docs:api:cli:arguments).
#### In-Memory vs. Persistent Database {#docs:api:cli:overview::in-memory-vs-persistent-database}
When no `[FILENAME]` argument is provided, the DuckDB CLI will open a temporary [in-memory database](#docs:connect:overview::in-memory-database).
You will see DuckDB's version number, information on the connection, and a prompt starting with `D`.
```bash
duckdb
```
```text
v{{ site.currentduckdbversion }} {{ site.currentduckdbhash }}
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
D
```
To open or create a [persistent database](#docs:connect:overview::persistent-database), simply include a path as a command line argument:
```bash
duckdb my_database.duckdb
```
#### Running SQL Statements in the CLI {#docs:api:cli:overview::running-sql-statements-in-the-cli}
Once the CLI has been opened, enter a SQL statement followed by a semicolon, then hit enter and it will be executed. Results will be displayed in a table in the terminal. If a semicolon is omitted, hitting enter will allow for multi-line SQL statements to be entered.
```sql
SELECT 'quack' AS my_column;
```
| my_column |
|-----------|
| quack |
The CLI supports all of DuckDB's rich [SQL syntax](#docs:sql:introduction) including `SELECT`, `CREATE`, and `ALTER` statements.
#### Editor Features {#docs:api:cli:overview::editor-features}
The CLI supports [autocompletion](#docs:api:cli:autocomplete), and has sophisticated [editor features](#docs:api:cli:editing) and [syntax highlighting](#docs:api:cli:syntax_highlighting) on certain platforms.
#### Exiting the CLI {#docs:api:cli:overview::exiting-the-cli}
To exit the CLI, press `Ctrl`+`D` if your platform supports it. Otherwise, press `Ctrl`+`C` or use the `.exit` command. If a persistent database was used, DuckDB will automatically checkpoint (save the latest edits to disk) and close. This will remove the `.wal` file (the [write-ahead log](https://en.wikipedia.org/wiki/Write-ahead_logging)) and consolidate all of your data into the single-file database.
#### Dot Commands {#docs:api:cli:overview::dot-commands}
In addition to SQL syntax, special [dot commands](#docs:api:cli:dot_commands) may be entered into the CLI client. To use one of these commands, begin the line with a period (`.`) immediately followed by the name of the command you wish to execute. Additional arguments to the command are entered, space separated, after the command. If an argument must contain a space, either single or double quotes may be used to wrap that parameter. Dot commands must be entered on a single line, and no whitespace may occur before the period. No semicolon is required at the end of the line.
Frequently-used configurations can be stored in the file `~/.duckdbrc`, which will be loaded when starting the CLI client. See the [Configuring the CLI](#::configuring-the-cli) section below for further information on these options.
Below, we summarize a few important dot commands. To see all available commands, see the [dot commands page](#docs:api:cli:dot_commands) or use the `.help` command. | 914 |
|
Client APIs | CLI | Usage | Opening Database Files | In addition to connecting to a database when opening the CLI, a new database connection can be made by using the `.open` command. If no additional parameters are supplied, a new in-memory database connection is created. This database will not be persisted when the CLI connection is closed.
```text
.open
```
The `.open` command optionally accepts several options, but the final parameter can be used to indicate a path to a persistent database (or where one should be created). The special string `:memory:` can also be used to open a temporary in-memory database.
```text
.open persistent.duckdb
```
> **Warning.** `.open` closes the current database.
> To keep the current database, while adding a new database, use the [`ATTACH` statement](#docs:sql:statements:attach).
One important option accepted by `.open` is the `--readonly` flag. This disallows any editing of the database. To open in read-only mode, the database must already exist. This also means that a new in-memory database can't be opened in read-only mode since in-memory databases are created upon connection.
```text
.open --readonly preexisting.duckdb
``` | 251 |
Client APIs | CLI | Usage | Output Formats | The `.mode` [dot command](#docs:api:cli:dot_commands::mode) may be used to change the appearance of the tables returned in the terminal output.
These include the default `duckbox` mode, `csv` and `json` mode for ingestion by other tools, `markdown` and `latex` for documents, and `insert` mode for generating SQL statements. | 80 |
Client APIs | CLI | Usage | Writing Results to a File | By default, the DuckDB CLI sends results to the terminal's standard output. However, this can be modified using either the `.output` or `.once` commands.
For details, see the documentation for the [output dot command](#docs:api:cli:dot_commands::output-writing-results-to-a-file). | 64 |
Client APIs | CLI | Usage | Reading SQL from a File | The DuckDB CLI can read both SQL commands and dot commands from an external file instead of the terminal using the `.read` command. This allows for a number of commands to be run in sequence and allows command sequences to be saved and reused.
The `.read` command requires only one argument: the path to the file containing the SQL and/or commands to execute. After running the commands in the file, control will revert back to the terminal. Output from the execution of that file is governed by the same `.output` and `.once` commands that have been discussed previously. This allows the output to be displayed back to the terminal, as in the first example below, or out to another file, as in the second example.
In this example, the file `select_example.sql` is located in the same directory as duckdb.exe and contains the following SQL statement:
```sql
SELECT *
FROM generate_series(5);
```
To execute it from the CLI, the `.read` command is used.
```text
.read select_example.sql
```
The output below is returned to the terminal by default. The formatting of the table can be adjusted using the `.output` or `.once` commands.
```text
| generate_series |
|----------------:|
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
```
Multiple commands, including both SQL and dot commands, can also be run in a single `.read` command. In this example, the file `write_markdown_to_file.sql` is located in the same directory as duckdb.exe and contains the following commands:
```sql
.mode markdown
.output series.md
SELECT *
FROM generate_series(5);
```
To execute it from the CLI, the `.read` command is used as before.
```text
.read write_markdown_to_file.sql
```
In this case, no output is returned to the terminal. Instead, the file `series.md` is created (or replaced if it already existed) with the markdown-formatted results shown here:
```text
| generate_series |
|----------------:|
| 0 |
| 1 |
| 2 |
| 3 |
| 4 |
| 5 |
```
<!-- The edit function does not appear to work --> | 487 |
Client APIs | CLI | Configuring the CLI | Several dot commands can be used to configure the CLI.
On startup, the CLI reads and executes all commands in the file `~/.duckdbrc`, including dot commands and SQL statements.
This allows you to store the configuration state of the CLI.
You may also point to a different initialization file using the `-init` flag.
#### Setting a Custom Prompt {#docs:api:cli:overview::setting-a-custom-prompt}
As an example, a file in the same directory as the DuckDB CLI named `prompt.sql` will change the DuckDB prompt to be a duck head and run a SQL statement.
Note that the duck head is built with Unicode characters and does not work in all terminal environments (e.g., in Windows, unless running with WSL and using the Windows Terminal).
```text
.prompt '⚫◗ '
```
To invoke that file on initialization, use this command:
```bash
duckdb -init prompt.sql
```
This outputs:
```text
-- Loading resources from prompt.sql
v⟨version⟩ ⟨git hash⟩
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
⚫◗
``` | 258 |
|
Client APIs | CLI | Non-Interactive Usage | To read/process a file and exit immediately, pipe the file contents in to `duckdb`:
```bash
duckdb < select_example.sql
```
To execute a command with SQL text passed in directly from the command line, call `duckdb` with two arguments: the database location (or `:memory:`), and a string with the SQL statement to execute.
```bash
duckdb :memory: "SELECT 42 AS the_answer"
``` | 94 |
|
Client APIs | CLI | Loading Extensions | To load extensions, use DuckDB's SQL `INSTALL` and `LOAD` commands as you would other SQL statements.
```sql
INSTALL fts;
LOAD fts;
```
For details, see the [Extension docs](#docs:extensions:overview). | 54 |
|
Client APIs | CLI | Reading from stdin and Writing to stdout | When in a Unix environment, it can be useful to pipe data between multiple commands.
DuckDB is able to read data from stdin as well as write to stdout using the file location of stdin (`/dev/stdin`) and stdout (`/dev/stdout`) within SQL commands, as pipes act very similarly to file handles.
This command will create an example CSV:
```sql
COPY (SELECT 42 AS woot UNION ALL SELECT 43 AS woot) TO 'test.csv' (HEADER);
```
First, read a file and pipe it to the `duckdb` CLI executable. As arguments to the DuckDB CLI, pass in the location of the database to open, in this case, an in-memory database, and a SQL command that utilizes `/dev/stdin` as a file location.
```bash
cat test.csv | duckdb -c "SELECT * FROM read_csv('/dev/stdin')"
```
| woot |
|-----:|
| 42 |
| 43 |
To write back to stdout, the copy command can be used with the `/dev/stdout` file location.
```bash
cat test.csv | \
duckdb -c "COPY (SELECT * FROM read_csv('/dev/stdin')) TO '/dev/stdout' WITH (FORMAT 'csv', HEADER)"
```
```csv
woot
42
43
``` | 285 |
|
Client APIs | CLI | Reading Environment Variables | The `getenv` function can read environment variables.
#### Examples {#docs:api:cli:overview::examples}
To retrieve the home directory's path from the `HOME` environment variable, use:
```sql
SELECT getenv('HOME') AS home;
```
| home |
|------------------|
| /Users/user_name |
The output of the `getenv` function can be used to set [configuration options](#docs:configuration:overview). For example, to set the `NULL` order based on the environment variable `DEFAULT_NULL_ORDER`, use:
```sql
SET default_null_order = getenv('DEFAULT_NULL_ORDER');
```
#### Restrictions for Reading Environment Variables {#docs:api:cli:overview::restrictions-for-reading-environment-variables}
The `getenv` function can only be run when the [`enable_external_access`](#docs:configuration:overview::configuration-reference) is set to `true` (the default setting).
It is only available in the CLI client and is not supported in other DuckDB clients. | 221 |
|
Client APIs | CLI | Prepared Statements | The DuckDB CLI supports executing [prepared statements](#docs:sql:query_syntax:prepared_statements) in addition to regular `SELECT` statements.
To create and execute a prepared statement in the CLI client, use the `PREPARE` clause and the `EXECUTE` statement. | 59 |
|
Client APIs | CLI | Command Line Arguments | The table below summarizes DuckDB's command line options.
To list all command line options, use the command:
```bash
duckdb -help
```
For a list of dot commands available in the CLI shell, see the [Dot Commands page](#docs:api:cli:dot_commands).
<div class="narrow_table"></div>
<!-- markdownlint-disable MD056 -->
| Argument | Description |
|---|-------|
| `-append` | Append the database to the end of the file |
| `-ascii` | Set [output mode](#docs:api:cli:output_formats) to `ascii` |
| `-bail` | Stop after hitting an error |
| `-batch` | Force batch I/O |
| `-box` | Set [output mode](#docs:api:cli:output_formats) to `box` |
| `-column` | Set [output mode](#docs:api:cli:output_formats) to `column` |
| `-cmd COMMAND` | Run `COMMAND` before reading `stdin` |
| `-c COMMAND` | Run `COMMAND` and exit |
| `-csv` | Set [output mode](#docs:api:cli:output_formats) to `csv` |
| `-echo` | Print commands before execution |
| `-init FILENAME` | Run the script in `FILENAME` upon startup (instead of `~/.duckdbrc`) |
| `-header` | Turn headers on |
| `-help` | Show this message |
| `-html` | Set [output mode](#docs:api:cli:output_formats) to HTML |
| `-interactive` | Force interactive I/O |
| `-json` | Set [output mode](#docs:api:cli:output_formats) to `json` |
| `-line` | Set [output mode](#docs:api:cli:output_formats) to `line` |
| `-list` | Set [output mode](#docs:api:cli:output_formats) to `list` |
| `-markdown` | Set [output mode](#docs:api:cli:output_formats) to `markdown` |
| `-newline SEP` | Set output row separator. Default: `\n` |
| `-nofollow` | Refuse to open symbolic links to database files |
| `-noheader` | Turn headers off |
| `-no-stdin` | Exit after processing options instead of reading stdin |
| `-nullvalue TEXT` | Set text string for `NULL` values. Default: empty string |
| `-quote` | Set [output mode](#docs:api:cli:output_formats) to `quote` |
| `-readonly` | Open the database read-only |
| `-s COMMAND` | Run `COMMAND` and exit |
| `-separator SEP` | Set output column separator to `SEP`. Default: `|` |
| `-stats` | Print memory stats before each finalize |
| `-table` | Set [output mode](#docs:api:cli:output_formats) to `table` |
| `-unsigned` | Allow loading of [unsigned extensions](#docs:extensions:overview::unsigned-extensions) |
| `-version` | Show DuckDB version |
<!-- markdownlint-enable MD056 --> | 734 |
|
Client APIs | CLI | Dot Commands | Dot commands are available in the DuckDB CLI client. To use one of these commands, begin the line with a period (`.`) immediately followed by the name of the command you wish to execute. Additional arguments to the command are entered, space separated, after the command. If an argument must contain a space, either single or double quotes may be used to wrap that parameter. Dot commands must be entered on a single line, and no whitespace may occur before the period. No semicolon is required at the end of the line. To see available commands, use the `.help` command. | 119 |
|
Client APIs | CLI | Dot Commands | <div class="narrow_table"></div>
<!-- markdownlint-disable MD056 -->
| Command | Description |
|---|------|
| `.bail on|off` | Stop after hitting an error. Default: `off` |
| `.binary on|off` | Turn binary output on or off. Default: `off` |
| `.cd DIRECTORY` | Change the working directory to `DIRECTORY` |
| `.changes on|off` | Show number of rows changed by SQL |
| `.check GLOB` | Fail if output since .testcase does not match |
| `.columns` | Column-wise rendering of query results |
| `.constant ?COLOR?` | Sets the syntax highlighting color used for constant values |
| `.constantcode ?CODE?` | Sets the syntax highlighting terminal code used for constant values |
| `.databases` | List names and files of attached databases |
| `.echo on|off` | Turn command echo on or `off` |
| `.excel` | Display the output of next command in spreadsheet |
| `.exit ?CODE?` | Exit this program with return-code `CODE` |
| `.explain ?on|off|auto?` | Change the `EXPLAIN` formatting mode. Default: `auto` |
| `.fullschema ?--indent?` | Show schema and the content of `sqlite_stat` tables |
| `.headers on|off` | Turn display of headers on or `off` |
| `.help ?-all? ?PATTERN?` | Show help text for `PATTERN` |
| `.highlight [on|off]` | Toggle syntax highlighting in the shell `on`/`off` |
| `.import FILE TABLE` | Import data from `FILE` into `TABLE` |
| `.indexes ?TABLE?` | Show names of indexes |
| `.keyword ?COLOR?` | Sets the syntax highlighting color used for keywords |
| `.keywordcode ?CODE?` | Sets the syntax highlighting terminal code used for keywords |
| `.lint OPTIONS` | Report potential schema issues. |
| `.log FILE|off` | Turn logging on or off. `FILE` can be `stderr`/`stdout` |
| `.maxrows COUNT` | Sets the maximum number of rows for display. Only for [duckbox mode](#docs:api:cli:output_formats) |
| `.maxwidth COUNT` | Sets the maximum width in characters. 0 defaults to terminal width. Only for [duckbox mode](#docs:api:cli:output_formats) |
| `.mode MODE ?TABLE?` | Set [output mode](#docs:api:cli:output_formats) |
| `.multiline` | Set multi-line mode (default) |
| `.nullvalue STRING` | Use `STRING` in place of `NULL` values |
| `.once ?OPTIONS? ?FILE?` | Output for the next SQL command only to `FILE` |
| `.open ?OPTIONS? ?FILE?` | Close existing database and reopen `FILE` |
| `.output ?FILE?` | Send output to `FILE` or `stdout` if `FILE` is omitted |
| `.parameter CMD ...` | Manage SQL parameter bindings |
| `.print STRING...` | Print literal `STRING` |
| `.prompt MAIN CONTINUE` | Replace the standard prompts |
| `.quit` | Exit this program |
| `.read FILE` | Read input from `FILE` |
| `.rows` | Row-wise rendering of query results (default) |
| `.schema ?PATTERN?` | Show the `CREATE` statements matching `PATTERN` |
| `.separator COL ?ROW?` | Change the column and row separators |
| `.sha3sum ...` | Compute a SHA3 hash of database content |
| `.shell CMD ARGS...` | Run `CMD ARGS...` in a system shell |
| `.show` | Show the current values for various settings |
| `.singleline` | Set single-line mode |
| `.system CMD ARGS...` | Run `CMD ARGS...` in a system shell |
| `.tables ?TABLE?` | List names of tables [matching LIKE pattern](#docs:sql:functions:pattern_matching) `TABLE` |
| `.testcase NAME` | Begin redirecting output to `NAME` |
| `.timer on|off` | Turn SQL timer on or off |
| `.width NUM1 NUM2 ...` | Set minimum column widths for columnar output | | 1,020 |
|
Client APIs | CLI | Using the `.help` Command | The `.help` text may be filtered by passing in a text string as the second argument.
```text
.help m
```
```text
.maxrows COUNT Sets the maximum number of rows for display (default: 40). Only for duckbox mode.
.maxwidth COUNT Sets the maximum width in characters. 0 defaults to terminal width. Only for duckbox mode.
.mode MODE ?TABLE? Set output mode
```
#### `.output`: Writing Results to a File {#docs:api:cli:dot_commands::output-writing-results-to-a-file}
By default, the DuckDB CLI sends results to the terminal's standard output. However, this can be modified using either the `.output` or `.once` commands. Pass in the desired output file location as a parameter. The `.once` command will only output the next set of results and then revert to standard out, but `.output` will redirect all subsequent output to that file location. Note that each result will overwrite the entire file at that destination. To revert back to standard output, enter `.output` with no file parameter.
In this example, the output format is changed to `markdown`, the destination is identified as a Markdown file, and then DuckDB will write the output of the SQL statement to that file. Output is then reverted to standard output using `.output` with no parameter.
```sql
.mode markdown
.output my_results.md
SELECT 'taking flight' AS output_column;
.output
SELECT 'back to the terminal' AS displayed_column;
```
The file `my_results.md` will then contain:
```text
| output_column |
|---------------|
| taking flight |
```
The terminal will then display:
```text
| displayed_column |
|----------------------|
| back to the terminal |
```
A common output format is CSV, or comma separated values. DuckDB supports [SQL syntax to export data as CSV or Parquet](#docs:sql:statements:copy::copy-to), but the CLI-specific commands may be used to write a CSV instead if desired.
```sql
.mode csv
.once my_output_file.csv
SELECT 1 AS col_1, 2 AS col_2
UNION ALL
SELECT 10 AS col1, 20 AS col_2;
```
The file `my_output_file.csv` will then contain:
```csv
col_1,col_2
1,2
10,20
```
By passing special options (flags) to the `.once` command, query results can also be sent to a temporary file and automatically opened in the user's default program. Use either the `-e` flag for a text file (opened in the default text editor), or the `-x` flag for a CSV file (opened in the default spreadsheet editor). This is useful for more detailed inspection of query results, especially if there is a relatively large result set. The `.excel` command is equivalent to `.once -x`.
```sql
.once -e
SELECT 'quack' AS hello;
```
The results then open in the default text file editor of the system, for example:
<img src="/images/cli_docs_output_to_text_editor.jpg" alt="cli_docs_output_to_text_editor" title="Output to text editor" style="width:293px;"/> | 686 |
|
Client APIs | CLI | Querying the Database Schema | All DuckDB clients support [querying the database schema with SQL](#docs:sql:meta:information_schema), but the CLI has additional [dot commands](#docs:api:cli:dot_commands) that can make it easier to understand the contents of a database.
The `.tables` command will return a list of tables in the database. It has an optional argument that will filter the results according to a [`LIKE` pattern](#docs:sql:functions:pattern_matching::like).
```sql
CREATE TABLE swimmers AS SELECT 'duck' AS animal;
CREATE TABLE fliers AS SELECT 'duck' AS animal;
CREATE TABLE walkers AS SELECT 'duck' AS animal;
.tables
```
```text
fliers swimmers walkers
```
For example, to filter to only tables that contain an `l`, use the `LIKE` pattern `%l%`.
```sql
.tables %l%
```
```text
fliers walkers
```
The `.schema` command will show all of the SQL statements used to define the schema of the database.
```text
.schema
```
```sql
CREATE TABLE fliers (animal VARCHAR);
CREATE TABLE swimmers (animal VARCHAR);
CREATE TABLE walkers (animal VARCHAR);
``` | 257 |
|
Client APIs | CLI | Configuring the Syntax Highlighter | By default the shell includes support for syntax highlighting.
The CLI's syntax highlighter can be configured using the following commands.
To turn off the highlighter:
```text
.highlight off
```
To turn on the highlighter:
```text
.highlight on
```
To configure the color used to highlight constants:
```text
.constant [red|green|yellow|blue|magenta|cyan|white|brightblack|brightred|brightgreen|brightyellow|brightblue|brightmagenta|brightcyan|brightwhite]
```
```text
.constantcode [terminal_code]
```
To configure the color used to highlight keywords:
```text
.keyword [red|green|yellow|blue|magenta|cyan|white|brightblack|brightred|brightgreen|brightyellow|brightblue|brightmagenta|brightcyan|brightwhite]
```
```text
.keywordcode [terminal_code]
``` | 190 |
|
Client APIs | CLI | Importing Data from CSV | > **Deprecated.** This feature is only included for compatibility reasons and may be removed in the future.
> Use the [`read_csv` function or the `COPY` statement](#docs:data:csv:overview) to load CSV files.
DuckDB supports [SQL syntax to directly query or import CSV files](#docs:data:csv:overview), but the CLI-specific commands may be used to import a CSV instead if desired. The `.import` command takes two arguments and also supports several options. The first argument is the path to the CSV file, and the second is the name of the DuckDB table to create. Since DuckDB requires stricter typing than SQLite (upon which the DuckDB CLI is based), the destination table must be created before using the `.import` command. To automatically detect the schema and create a table from a CSV, see the [`read_csv` examples in the import docs](#docs:data:csv:overview).
In this example, a CSV file is generated by changing to CSV mode and setting an output file location:
```sql
.mode csv
.output import_example.csv
SELECT 1 AS col_1, 2 AS col_2 UNION ALL SELECT 10 AS col1, 20 AS col_2;
```
Now that the CSV has been written, a table can be created with the desired schema and the CSV can be imported. The output is reset to the terminal to avoid continuing to edit the output file specified above. The `--skip N` option is used to ignore the first row of data since it is a header row and the table has already been created with the correct column names.
```text
.mode csv
.output
CREATE TABLE test_table (col_1 INTEGER, col_2 INTEGER);
.import import_example.csv test_table --skip 1
```
Note that the `.import` command utilizes the current `.mode` and `.separator` settings when identifying the structure of the data to import. The `--csv` option can be used to override that behavior.
```text
.import import_example.csv test_table --skip 1 --csv
``` | 434 |
|
Client APIs | CLI | Output Formats | The `.mode` [dot command](#docs:api:cli:dot_commands) may be used to change the appearance of the tables returned in the terminal output. In addition to customizing the appearance, these modes have additional benefits. This can be useful for presenting DuckDB output elsewhere by redirecting the terminal [output to a file](#docs:api:cli:dot_commands::output-writing-results-to-a-file). Using the `insert` mode will build a series of SQL statements that can be used to insert the data at a later point.
The `markdown` mode is particularly useful for building documentation and the `latex` mode is useful for writing academic papers.
<div class="narrow_table"></div>
| Mode | Description |
|--------------|----------------------------------------------|
| `ascii` | Columns/rows delimited by 0x1F and 0x1E |
| `box` | Tables using unicode box-drawing characters |
| `csv` | Comma-separated values |
| `column` | Output in columns. (See .width) |
| `duckbox` | Tables with extensive features (default) |
| `html` | HTML `<table>` code |
| `insert` | SQL insert statements for TABLE |
| `json` | Results in a JSON array |
| `jsonlines` | Results in NDJSON (newline-delimited JSON) |
| `latex` | LaTeX tabular environment code |
| `line` | One value per line |
| `list` | Values delimited by "\|" |
| `markdown` | Markdown table format |
| `quote` | Escape answers as for SQL |
| `table` | ASCII-art table |
| `tabs` | Tab-separated values |
| `tcl` | TCL list elements |
| `trash` | No output |
Use `.mode` directly to query the appearance currently in use.
```sql
.mode
```
```text
current output mode: duckbox
```
```sql
.mode markdown
SELECT 'quacking intensifies' AS incoming_ducks;
```
```text
| incoming_ducks |
|----------------------|
| quacking intensifies |
```
The output appearance can also be adjusted with the `.separator` command. If using an export mode that relies on a separator (`csv` or `tabs`, for example), the separator will be reset when the mode is changed. For example, `.mode csv` will set the separator to a comma (`,`). Using `.separator "|"` will then convert the output to be pipe-separated.
```sql
.mode csv
SELECT 1 AS col_1, 2 AS col_2
UNION ALL
SELECT 10 AS col_1, 20 AS col_2;
```
```csv
col_1,col_2
1,2
10,20
```
```sql
.separator "|"
SELECT 1 AS col_1, 2 AS col_2
UNION ALL
SELECT 10 AS col_1, 20 AS col_2;
```
```csv
col_1|col_2
1|2
10|20
``` | 682 |
|
Client APIs | CLI | Editing | > The linenoise-based CLI editor is currently only available for macOS and Linux.
DuckDB's CLI uses a line-editing library based on [linenoise](https://github.com/antirez/linenoise), which has shortcuts that are based on [Emacs mode of readline](https://readline.kablamo.org/emacs.html). Below is a list of available commands. | 79 |
|
Client APIs | CLI | Moving | <div class="narrow_table"></div>
| Key | Action |
|-----------------|------------------------------------------------------------------------|
| `Left` | Move back a character |
| `Right` | Move forward a character |
| `Up` | Move up a line. When on the first line, move to previous history entry |
| `Down` | Move down a line. When on last line, move to next history entry |
| `Home` | Move to beginning of buffer |
| `End` | Move to end of buffer |
| `Ctrl`+`Left` | Move back a word |
| `Ctrl`+`Right` | Move forward a word |
| `Ctrl`+`A` | Move to beginning of buffer |
| `Ctrl`+`B` | Move back a character |
| `Ctrl`+`E` | Move to end of buffer |
| `Ctrl`+`F` | Move forward a character |
| `Alt`+`Left` | Move back a word |
| `Alt`+`Right` | Move forward a word | | 252 |
|
Client APIs | CLI | History | <div class="narrow_table"></div>
| Key | Action |
|------------|--------------------------------|
| `Ctrl`+`P` | Move to previous history entry |
| `Ctrl`+`N` | Move to next history entry |
| `Ctrl`+`R` | Search the history |
| `Ctrl`+`S` | Search the history |
| `Alt`+`<` | Move to first history entry |
| `Alt`+`>` | Move to last history entry |
| `Alt`+`N` | Search the history |
| `Alt`+`P` | Search the history | | 146 |
|
Client APIs | CLI | Changing Text | <div class="narrow_table"></div>
| Key | Action |
|-------------------|----------------------------------------------------------|
| `Backspace` | Delete previous character |
| `Delete` | Delete next character |
| `Ctrl`+`D` | Delete next character. When buffer is empty, end editing |
| `Ctrl`+`H` | Delete previous character |
| `Ctrl`+`K` | Delete everything after the cursor |
| `Ctrl`+`T` | Swap current and next character |
| `Ctrl`+`U` | Delete all text |
| `Ctrl`+`W` | Delete previous word |
| `Alt`+`C` | Convert next word to titlecase |
| `Alt`+`D` | Delete next word |
| `Alt`+`L` | Convert next word to lowercase |
| `Alt`+`R` | Delete all text |
| `Alt`+`T` | Swap current and next word |
| `Alt`+`U` | Convert next word to uppercase |
| `Alt`+`Backspace` | Delete previous word |
| `Alt`+`\` | Delete spaces around cursor | | 278 |
|
Client APIs | CLI | Completing | <div class="narrow_table"></div>
| Key | Action |
|---------------|--------------------------------------------------------|
| `Tab` | Autocomplete. When autocompleting, cycle to next entry |
| `Shift`+`Tab` | When autocompleting, cycle to previous entry |
| `Esc`+`Esc` | When autocompleting, revert autocompletion | | 88 |
|
Client APIs | CLI | Miscellaneous | <div class="narrow_table"></div>
| Key | Action |
|------------|------------------------------------------------------------------------------------|
| `Enter` | Execute query. If query is not complete, insert a newline at the end of the buffer |
| `Ctrl`+`J` | Execute query. If query is not complete, insert a newline at the end of the buffer |
| `Ctrl`+`C` | Cancel editing of current query |
| `Ctrl`+`G` | Cancel editing of current query |
| `Ctrl`+`L` | Clear screen |
| `Ctrl`+`O` | Cancel editing of current query |
| `Ctrl`+`X` | Insert a newline after the cursor |
| `Ctrl`+`Z` | Suspend CLI and return to shell, use `fg` to re-open | | 182 |
|
Client APIs | CLI | Using Read-Line | If you prefer, you can use [`rlwrap`](https://github.com/hanslub42/rlwrap) to use read-line directly with the shell. Then, use `Shift`+`Enter` to insert a newline and `Enter` to execute the query:
```bash
rlwrap --substitute-prompt="D " duckdb -batch
``` | 76 |
|
Client APIs | CLI | Autocomplete | The shell offers context-aware autocomplete of SQL queries through the [`autocomplete` extension](#docs:extensions:autocomplete). Autocompletion is triggered by pressing `Tab`.
Multiple autocomplete suggestions can be present. You can cycle forwards through the suggestions by repeatedly pressing `Tab`, or backwards with `Shift`+`Tab`. Autocompletion can be reverted by pressing `Esc` twice.
The shell autocompletes four different groups:
* Keywords
* Table names and table functions
* Column names and scalar functions
* File names
The shell looks at the position in the SQL statement to determine which of these autocompletions to trigger. For example:
```sql
SELECT s
```
```text
student_id
```
```sql
SELECT student_id F
```
```text
FROM
```
```sql
SELECT student_id FROM g
```
```text
grades
```
```sql
SELECT student_id FROM 'd
```
```text
'data/
```
```sql
SELECT student_id FROM 'data/
```
```text
'data/grades.csv
``` | 227 |
|
Client APIs | CLI | Syntax Highlighting | > Syntax highlighting in the CLI is currently only available for macOS and Linux.
SQL queries that are written in the shell are automatically highlighted using syntax highlighting.
![Image showing syntax highlighting in the shell](../images/syntax_highlighting_screenshot.png)
There are several components of a query that are highlighted in different colors. The colors can be configured using [dot commands](#docs:api:cli:dot_commands).
Syntax highlighting can also be disabled entirely using the `.highlight off` command.
Below is a list of components that can be configured.
<div class="narrow_table"></div>
| Type | Command | Default Color |
|-------------------------|-------------|-----------------|
| Keywords | `.keyword` | `green` |
| Constants and literals  | `.constant` | `yellow`        |
| Comments | `.comment` | `brightblack` |
| Errors | `.error` | `red` |
| Continuation | `.cont` | `brightblack` |
| Continuation (Selected) | `.cont_sel` | `green` |
The components can be configured using either a supported color name (e.g., `.keyword red`), or by directly providing a terminal code to use for rendering (e.g., `.keywordcode \033[31m`). Below is a list of supported color names and their corresponding terminal codes.
<div class="narrow_table"></div>
| Color | Terminal Code |
|---------------|---------------|
| red | `\033[31m` |
| green | `\033[32m` |
| yellow | `\033[33m` |
| blue | `\033[34m` |
| magenta | `\033[35m` |
| cyan | `\033[36m` |
| white | `\033[37m` |
| brightblack | `\033[90m` |
| brightred | `\033[91m` |
| brightgreen | `\033[92m` |
| brightyellow | `\033[93m` |
| brightblue | `\033[94m` |
| brightmagenta | `\033[95m` |
| brightcyan | `\033[96m` |
| brightwhite | `\033[97m` |
For example, here is an alternative set of syntax highlighting colors:
```text
.keyword brightred
.constant brightwhite
.comment cyan
.error yellow
.cont blue
.cont_sel brightblue
```
If you wish to start up the CLI with a different set of colors every time, you can place these commands in the `~/.duckdbrc` file that is loaded on start-up of the CLI. | 594 |
|
Client APIs | CLI | Error Highlighting | The shell has support for highlighting certain errors. In particular, mismatched brackets and unclosed quotes are highlighted in red (or another color if specified). This highlighting is automatically disabled for large queries. In addition, it can be disabled manually using the `.render_errors off` command. | 56 |
|
Client APIs | Dart API | DuckDB.Dart is the native Dart API for [DuckDB](https://duckdb.org/). | 23 |
||
Client APIs | Dart API | Installation | DuckDB.Dart can be installed from [pub.dev](https://pub.dev/packages/dart_duckdb). Please see the [API Reference](https://pub.dev/documentation/dart_duckdb/latest/) for details.
#### Use This Package as a Library {#docs:api:dart::use-this-package-as-a-library} | 69 |
|
Client APIs | Dart API | Installation | Depend on It | Run this command:
With Flutter:
```bash
flutter pub add dart_duckdb
```
This will add a line like this to your package's `pubspec.yaml` (and run an implicit `flutter pub get`):
```yaml
dependencies:
dart_duckdb: ^1.1.3
```
Alternatively, your editor might support `flutter pub get`. Check the docs for your editor to learn more.
Client APIs | Dart API | Installation | Import It | Now in your Dart code, you can import it:
```dart
import 'package:dart_duckdb/dart_duckdb.dart';
``` | 31 |
Client APIs | Dart API | Usage Examples | See the example projects in the [`duckdb-dart` repository](https://github.com/TigerEyeLabs/duckdb-dart/):
* [`cli`](https://github.com/TigerEyeLabs/duckdb-dart/tree/main/examples/cli): command-line application
* [`duckdbexplorer`](https://github.com/TigerEyeLabs/duckdb-dart/tree/main/examples/duckdbexplorer): GUI application which builds for desktop operating systems as well as Android and iOS.
Here are some common code snippets for DuckDB.Dart:
#### Querying an In-Memory Database {#docs:api:dart::querying-an-in-memory-database}
```dart
import 'package:dart_duckdb/dart_duckdb.dart';
void main() {
  final db = duckdb.open(":memory:");
  final connection = duckdb.connect(db);
  connection.execute('''
    CREATE TABLE users (id INTEGER, name VARCHAR, age INTEGER);
    INSERT INTO users VALUES (1, 'Alice', 30), (2, 'Bob', 25);
  ''');
  final result = connection.query("SELECT * FROM users WHERE age > 28").fetchAll();
  for (final row in result) {
    print(row);
  }
  connection.dispose();
  db.dispose();
}
```
#### Queries on Background Isolates {#docs:api:dart::queries-on-background-isolates}
```dart
import 'dart:isolate';
import 'package:dart_duckdb/dart_duckdb.dart';

void main() async {
  final db = duckdb.open(":memory:");
  final connection = duckdb.connect(db);

  await Isolate.spawn(backgroundTask, db.transferrable);

  connection.dispose();
  db.dispose();
}

void backgroundTask(TransferableDatabase transferableDb) {
  final connection = duckdb.connectWithTransferred(transferableDb);
  // Access database ...
  // fetch is needed to send the data back to the main isolate
}
``` | 389 |
|
Client APIs | Go | The DuckDB Go driver, `go-duckdb`, allows using DuckDB via the `database/sql` interface.
For examples on how to use this interface, see the [official documentation](https://pkg.go.dev/database/sql) and [tutorial](https://go.dev/doc/tutorial/database-access).
> The Go client is a third-party library and its repository is hosted at <https://github.com/marcboeker/go-duckdb>.
||
Client APIs | Go | Installation | To install the `go-duckdb` client, run:
```bash
go get github.com/marcboeker/go-duckdb
``` | 31 |
|
Client APIs | Go | Importing | To import the DuckDB Go package, add the following entries to your imports:
```go
import (
"database/sql"
_ "github.com/marcboeker/go-duckdb"
)
``` | 41 |
|
Client APIs | Go | Appender | The DuckDB Go client supports the [DuckDB Appender API](#docs:data:appender) for bulk inserts. You can obtain a new Appender by supplying a DuckDB connection to `NewAppenderFromConn()`. For example:
```go
connector, err := duckdb.NewConnector("test.db", nil)
if err != nil {
    ...
}
conn, err := connector.Connect(context.Background())
if err != nil {
    ...
}
defer conn.Close()

// Retrieve appender from connection (note that you have to create the table 'test' beforehand).
appender, err := duckdb.NewAppenderFromConn(conn, "", "test")
if err != nil {
    ...
}
defer appender.Close()

err = appender.AppendRow(...)
if err != nil {
    ...
}

// Optional, if you want to access the appended rows immediately.
err = appender.Flush()
if err != nil {
    ...
}
``` | 185 |
|
Client APIs | Go | Examples | #### Simple Example {#docs:api:go::simple-example}
An example for using the Go API is as follows:
```go
package main
import (
    "database/sql"
    "errors"
    "fmt"
    "log"

    _ "github.com/marcboeker/go-duckdb"
)

func main() {
    db, err := sql.Open("duckdb", "")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    _, err = db.Exec(`CREATE TABLE people (id INTEGER, name VARCHAR)`)
    if err != nil {
        log.Fatal(err)
    }
    _, err = db.Exec(`INSERT INTO people VALUES (42, 'John')`)
    if err != nil {
        log.Fatal(err)
    }

    var (
        id   int
        name string
    )
    row := db.QueryRow(`SELECT id, name FROM people`)
    err = row.Scan(&id, &name)
    if errors.Is(err, sql.ErrNoRows) {
        log.Println("no rows")
    } else if err != nil {
        log.Fatal(err)
    }

    fmt.Printf("id: %d, name: %s\n", id, name)
}
```
#### More Examples {#docs:api:go::more-examples}
For more examples, see the [examples in the `go-duckdb` repository](https://github.com/marcboeker/go-duckdb/tree/master/examples).
|
Client APIs | Java JDBC API | Installation | The DuckDB Java JDBC API can be installed from [Maven Central](https://search.maven.org/artifact/org.duckdb/duckdb_jdbc). Please see the [installation page](https://duckdb.org/docs/installation/) for details. | 51 |
|
Client APIs | Java JDBC API | Basic API Usage | DuckDB's JDBC API implements the main parts of the standard Java Database Connectivity (JDBC) API, version 4.1. Describing JDBC is beyond the scope of this page, see the [official documentation](https://docs.oracle.com/javase/tutorial/jdbc/basics/index.html) for details. Below we focus on the DuckDB-specific parts.
Refer to the externally hosted [API Reference](https://javadoc.io/doc/org.duckdb/duckdb_jdbc) for more information about our extensions to the JDBC specification, or the below [Arrow Methods](#::arrow-methods).
#### Startup & Shutdown {#docs:api:java::startup--shutdown}
In JDBC, database connections are created through the standard `java.sql.DriverManager` class.
The driver should auto-register in the `DriverManager`. If that does not work for some reason, you can enforce registration using the following statement:
```java
Class.forName("org.duckdb.DuckDBDriver");
```
To create a DuckDB connection, call `DriverManager` with the `jdbc:duckdb:` JDBC URL prefix, like so:
```java
import java.sql.Connection;
import java.sql.DriverManager;
Connection conn = DriverManager.getConnection("jdbc:duckdb:");
```
To use DuckDB-specific features such as the [Appender](#::appender), cast the object to a `DuckDBConnection`:
```java
import java.sql.DriverManager;
import org.duckdb.DuckDBConnection;
DuckDBConnection conn = (DuckDBConnection) DriverManager.getConnection("jdbc:duckdb:");
```
When using the `jdbc:duckdb:` URL alone, an **in-memory database** is created. Note that for an in-memory database no data is persisted to disk (i.e., all data is lost when you exit the Java program). If you would like to access or create a persistent database, append its file name to the `jdbc:duckdb:` prefix. For example, if your database is stored in `/tmp/my_database`, use the JDBC URL `jdbc:duckdb:/tmp/my_database` to create a connection to it.
It is possible to open a DuckDB database file in **read-only** mode. This is for example useful if multiple Java processes want to read the same database file at the same time. To open an existing database file in read-only mode, set the connection property `duckdb.read_only` like so:
```java
Properties readOnlyProperty = new Properties();
readOnlyProperty.setProperty("duckdb.read_only", "true");
Connection conn = DriverManager.getConnection("jdbc:duckdb:/tmp/my_database", readOnlyProperty);
```
Additional connections can be created using the `DriverManager`. A more efficient mechanism is to call the `DuckDBConnection#duplicate()` method:
```java
Connection conn2 = ((DuckDBConnection) conn).duplicate();
```
Multiple connections are allowed, but mixing read-write and read-only connections is unsupported.
#### Configuring Connections {#docs:api:java::configuring-connections}
Configuration options can be provided to change different settings of the database system. Note that many of these
settings can be changed later on using [`PRAGMA` statements](#docs:configuration:pragmas) as well.
```java
Properties connectionProperties = new Properties();
connectionProperties.setProperty("temp_directory", "/path/to/temp/dir/");
Connection conn = DriverManager.getConnection("jdbc:duckdb:/tmp/my_database", connectionProperties);
```
#### Querying {#docs:api:java::querying}
DuckDB supports the standard JDBC methods to send queries and retrieve result sets. First, a `Statement` object has to be created from the `Connection`; this object can then be used to send queries using `execute` and `executeQuery`. `execute()` is meant for statements where no results are expected, such as `CREATE TABLE` or `UPDATE`, while `executeQuery()` is meant for queries that produce results (e.g., `SELECT`). Two examples are shown below. See also the JDBC [`Statement`](https://docs.oracle.com/javase/7/docs/api/java/sql/Statement.html) and [`ResultSet`](https://docs.oracle.com/javase/7/docs/api/java/sql/ResultSet.html) documentation.
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
Connection conn = DriverManager.getConnection("jdbc:duckdb:");
// create a table
Statement stmt = conn.createStatement();
stmt.execute("CREATE TABLE items (item VARCHAR, value DECIMAL(10, 2), count INTEGER)");
// insert two items into the table
stmt.execute("INSERT INTO items VALUES ('jeans', 20.0, 1), ('hammer', 42.2, 2)");
try (ResultSet rs = stmt.executeQuery("SELECT * FROM items")) {
    while (rs.next()) {
        System.out.println(rs.getString(1));
        System.out.println(rs.getInt(3));
    }
}
stmt.close();
```
```text
jeans
1
hammer
2
```
DuckDB also supports prepared statements as per the JDBC API:
```java
import java.sql.PreparedStatement;
try (PreparedStatement stmt = conn.prepareStatement("INSERT INTO items VALUES (?, ?, ?);")) {
    stmt.setString(1, "chainsaw");
    stmt.setDouble(2, 500.0);
    stmt.setInt(3, 42);
    stmt.execute();
    // more calls to execute() possible
}
```
> **Warning. ** Do *not* use prepared statements to insert large amounts of data into DuckDB. See [the data import documentation](#docs:data:overview) for better options.
#### Arrow Methods {#docs:api:java::arrow-methods}
Refer to the [API Reference](https://javadoc.io/doc/org.duckdb/duckdb_jdbc/latest/org/duckdb/DuckDBResultSet.html#arrowExportStream(java.lang.Object,long)) for type signatures | 1,225 |
|
Client APIs | Java JDBC API | Basic API Usage | Arrow Export | The following example demonstrates exporting an Arrow stream and consuming it using the Java Arrow bindings.
```java
import java.sql.DriverManager;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.ipc.ArrowReader;
import org.duckdb.DuckDBResultSet;

try (var conn = DriverManager.getConnection("jdbc:duckdb:");
     var stmt = conn.prepareStatement("SELECT * FROM generate_series(2000)");
     var resultset = (DuckDBResultSet) stmt.executeQuery();
     var allocator = new RootAllocator()) {
    try (var reader = (ArrowReader) resultset.arrowExportStream(allocator, 256)) {
        while (reader.loadNextBatch()) {
            System.out.println(reader.getVectorSchemaRoot().getVector("generate_series"));
        }
    }
    stmt.close();
}
``` | 148 |
Client APIs | Java JDBC API | Basic API Usage | Arrow Import | The following demonstrates consuming an Arrow stream from the Java Arrow bindings.
```java
import java.sql.DriverManager;
import org.apache.arrow.c.ArrowArrayStream;
import org.apache.arrow.c.Data;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.ipc.ArrowStreamReader;
import org.duckdb.DuckDBConnection;
import org.duckdb.DuckDBResultSet;

// Arrow binding
try (var allocator = new RootAllocator();
     ArrowStreamReader reader = null; // should not be null of course
     var arrow_array_stream = ArrowArrayStream.allocateNew(allocator)) {
    Data.exportArrayStream(allocator, reader, arrow_array_stream);
    // DuckDB setup
    try (var conn = (DuckDBConnection) DriverManager.getConnection("jdbc:duckdb:")) {
        conn.registerArrowStream("asdf", arrow_array_stream);
        // run a query
        try (var stmt = conn.createStatement();
             var rs = (DuckDBResultSet) stmt.executeQuery("SELECT count(*) FROM asdf")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));
            }
        }
    }
}
```
#### Streaming Results {#docs:api:java::streaming-results}
Result streaming is opt-in in the JDBC driver – by setting the `jdbc_stream_results` config to `true` before running a query. The easiest way to do that is to pass it in the `Properties` object.
```java
Properties props = new Properties();
props.setProperty(DuckDBDriver.JDBC_STREAM_RESULTS, String.valueOf(true));
Connection conn = DriverManager.getConnection("jdbc:duckdb:", props);
```
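Once the property is set, result rows are fetched incrementally as the `ResultSet` is iterated instead of being materialized up front. A minimal sketch, assuming a large generated result (the query itself is illustrative):
```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Properties;
import org.duckdb.DuckDBDriver;

Properties props = new Properties();
props.setProperty(DuckDBDriver.JDBC_STREAM_RESULTS, String.valueOf(true));

// with streaming enabled, rows are fetched incrementally while iterating
// instead of the whole result being buffered in memory first
try (Connection conn = DriverManager.getConnection("jdbc:duckdb:", props);
     Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT * FROM range(1000000)")) {
    while (rs.next()) {
        // process rs.getLong(1) ...
    }
}
```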
#### Appender {#docs:api:java::appender}
The [Appender](#docs:data:appender) is available in the DuckDB JDBC driver via the `org.duckdb.DuckDBAppender` class.
The constructor of the class requires the schema name and the table name it is applied to.
The Appender is flushed when the `close()` method is called.
Example:
```java
import java.sql.DriverManager;
import java.sql.Statement;
import org.duckdb.DuckDBConnection;
DuckDBConnection conn = (DuckDBConnection) DriverManager.getConnection("jdbc:duckdb:");
try (var stmt = conn.createStatement()) {
    stmt.execute("CREATE TABLE tbl (x BIGINT, y FLOAT, s VARCHAR)");
    // using try-with-resources to automatically close the appender at the end of the scope
    try (var appender = conn.createAppender(DuckDBConnection.DEFAULT_SCHEMA, "tbl")) {
        appender.beginRow();
        appender.append(10);
        appender.append(3.2);
        appender.append("hello");
        appender.endRow();
        appender.beginRow();
        appender.append(20);
        appender.append(-8.1);
        appender.append("world");
        appender.endRow();
    }
}
```
#### Batch Writer {#docs:api:java::batch-writer}
The DuckDB JDBC driver offers batch write functionality.
The batch writer supports prepared statements to mitigate the overhead of query parsing.
> The preferred method for bulk inserts is to use the [Appender](#::appender) due to its higher performance.
> However, when using the Appender is not possible, the batch writer is available as an alternative.
Client APIs | Java JDBC API | Basic API Usage | Batch Writer with Prepared Statements | ```java
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import org.duckdb.DuckDBConnection;
DuckDBConnection conn = (DuckDBConnection) DriverManager.getConnection("jdbc:duckdb:");
PreparedStatement stmt = conn.prepareStatement("INSERT INTO test (x, y, z) VALUES (?, ?, ?);");
stmt.setObject(1, 1);
stmt.setObject(2, 2);
stmt.setObject(3, 3);
stmt.addBatch();
stmt.setObject(1, 4);
stmt.setObject(2, 5);
stmt.setObject(3, 6);
stmt.addBatch();
stmt.executeBatch();
stmt.close();
``` | 135 |
Client APIs | Java JDBC API | Basic API Usage | Batch Writer with Vanilla Statements | The batch writer also supports vanilla SQL statements:
```java
import java.sql.DriverManager;
import java.sql.Statement;
import org.duckdb.DuckDBConnection;
DuckDBConnection conn = (DuckDBConnection) DriverManager.getConnection("jdbc:duckdb:");
Statement stmt = conn.createStatement();
stmt.execute("CREATE TABLE test (x INTEGER, y INTEGER, z INTEGER)");
stmt.addBatch("INSERT INTO test (x, y, z) VALUES (1, 2, 3);");
stmt.addBatch("INSERT INTO test (x, y, z) VALUES (4, 5, 6);");
stmt.executeBatch();
stmt.close();
``` | 133 |
Client APIs | Java JDBC API | Troubleshooting | #### Driver Class Not Found {#docs:api:java::driver-class-not-found}
If the Java application is unable to find the DuckDB JDBC driver, it may throw the following error:
```console
Exception in thread "main" java.sql.SQLException: No suitable driver found for jdbc:duckdb:
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:706)
at java.sql/java.sql.DriverManager.getConnection(DriverManager.java:252)
...
```
And when trying to load the class manually, it may result in this error:
```console
Exception in thread "main" java.lang.ClassNotFoundException: org.duckdb.DuckDBDriver
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:641)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:188)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:520)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:375)
...
```
These errors stem from the DuckDB Maven/Gradle dependency not being detected. To ensure that it is detected, force refresh the Maven configuration in your IDE. | 248 |
|
Client APIs | Julia Package | The DuckDB Julia package provides a high-performance front-end for DuckDB. Much like SQLite, DuckDB runs in-process within the Julia client, and provides a DBInterface front-end.
The package also supports multi-threaded execution, using Julia threads/tasks for this purpose. If you wish to run queries in parallel, you must launch Julia with multi-threading support (e.g., by setting the `JULIA_NUM_THREADS` environment variable).
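For example, to start Julia with four threads (the thread count here is only an illustration), either of the following works:
```bash
# via the environment variable ...
JULIA_NUM_THREADS=4 julia
# ... or via the command-line flag
julia --threads=4
```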
||
Client APIs | Julia Package | Installation | Install DuckDB as follows:
```julia
using Pkg
Pkg.add("DuckDB")
```
Alternatively, enter the package manager using the `]` key, and issue the following command:
```julia
pkg> add DuckDB
``` | 56 |
|
Client APIs | Julia Package | Basics | ```julia
using DuckDB
# create a new in-memory database
con = DBInterface.connect(DuckDB.DB, ":memory:")
# create a table
DBInterface.execute(con, "CREATE TABLE integers (i INTEGER)")
# insert data by executing a prepared statement
stmt = DBInterface.prepare(con, "INSERT INTO integers VALUES(?)")
DBInterface.execute(stmt, [42])
# query the database
results = DBInterface.execute(con, "SELECT 42 a")
print(results)
```
Some SQL statements, such as `PIVOT` and `IMPORT DATABASE`, are executed as multiple prepared statements and will error when run with `DuckDB.execute()`. Run them with `DuckDB.query()` instead; this always returns a materialized result.
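A minimal sketch of running such a statement via `DuckDB.query()` (the table and column names are illustrative):
```julia
using DuckDB
con = DBInterface.connect(DuckDB.DB, ":memory:")
DBInterface.execute(con, "CREATE TABLE cities (country VARCHAR, year INTEGER, population INTEGER)")
DBInterface.execute(con, "INSERT INTO cities VALUES ('NL', 2020, 42), ('NL', 2021, 43)")
# PIVOT expands into multiple prepared statements internally, so run it via DuckDB.query
results = DuckDB.query(con, "PIVOT cities ON year USING sum(population)")
print(results)
```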
|
Client APIs | Julia Package | Scanning DataFrames | The DuckDB Julia package also provides support for querying Julia DataFrames. Note that the DataFrames are directly read by DuckDB - they are not inserted or copied into the database itself.
If you wish to load data from a DataFrame into a DuckDB table you can run a `CREATE TABLE ... AS` or `INSERT INTO` query.
```julia
using DuckDB
using DataFrames
# create a new in-memory database
con = DBInterface.connect(DuckDB.DB)
# create a DataFrame
df = DataFrame(a = [1, 2, 3], b = [42, 84, 42])
# register it as a view in the database
DuckDB.register_data_frame(con, df, "my_df")
# run a SQL query over the DataFrame
results = DBInterface.execute(con, "SELECT * FROM my_df")
print(results)
``` | 182 |
|
Client APIs | Julia Package | Appender API | The DuckDB Julia package also supports the [Appender API](#docs:data:appender), which is much faster than using prepared statements or individual `INSERT INTO` statements. Appends are made in row-wise format. For every column, an `append()` call should be made, after which the row should be finished by calling `end_row()`. After all rows have been appended, `close()` should be used to finalize the Appender and clean up the resulting memory.
```julia
using DuckDB, DataFrames, Dates
db = DuckDB.DB()
# create a table
DBInterface.execute(db,
"CREATE OR REPLACE TABLE data(id INTEGER PRIMARY KEY, value FLOAT, timestamp TIMESTAMP, date DATE)")
# create data to insert
len = 100
df = DataFrames.DataFrame(
id = collect(1:len),
value = rand(len),
timestamp = Dates.now() + Dates.Second.(1:len),
date = Dates.today() + Dates.Day.(1:len)
)
# append data by row
appender = DuckDB.Appender(db, "data")
for i in eachrow(df)
for j in i
DuckDB.append(appender, j)
end
DuckDB.end_row(appender)
end
# close the appender after all rows
DuckDB.close(appender)
``` | 268 |
|
Client APIs | Julia Package | Concurrency | Within a Julia process, tasks are able to concurrently read and write to the database, as long as each task maintains its own connection to the database. In the example below, a single task is spawned to periodically read the database and many tasks are spawned to write to the database using both [`INSERT` statements](#docs:sql:statements:insert) as well as the [Appender API](#docs:data:appender).
```julia
using Dates, DataFrames, DuckDB
db = DuckDB.DB()
DBInterface.connect(db)
DBInterface.execute(db, "CREATE OR REPLACE TABLE data (date TIMESTAMP, id INTEGER)")
function run_reader(db)
# create a DuckDB connection specifically for this task
conn = DBInterface.connect(db)
while true
println(DBInterface.execute(conn,
"SELECT id, count(date) AS count, max(date) AS max_date
FROM data GROUP BY id ORDER BY id") |> DataFrames.DataFrame)
Threads.sleep(1)
end
DBInterface.close(conn)
end
# spawn one reader task
Threads.@spawn run_reader(db)
function run_inserter(db, id)
# create a DuckDB connection specifically for this task
conn = DBInterface.connect(db)
for i in 1:1000
Threads.sleep(0.01)
DuckDB.execute(conn, "INSERT INTO data VALUES (current_timestamp, ?)"; id);
end
DBInterface.close(conn)
end
# spawn many insert tasks
for i in 1:100
Threads.@spawn run_inserter(db, 1)
end
function run_appender(db, id)
# create a DuckDB connection specifically for this task
appender = DuckDB.Appender(db, "data")
for i in 1:1000
Threads.sleep(0.01)
row = (Dates.now(Dates.UTC), id)
for j in row
DuckDB.append(appender, j);
end
DuckDB.end_row(appender);
end
DuckDB.close(appender);
end
# spawn many appender tasks
for i in 1:100
Threads.@spawn run_appender(db, 2)
end
``` | 441 |
|
Client APIs | Julia Package | Original Julia Connector | Credits to kimmolinna for the [original DuckDB Julia connector](https://github.com/kimmolinna/DuckDB.jl). | 29 |
|
Client APIs | Node.js | Node.js API | This package provides a Node.js API for DuckDB.
The API for this client is somewhat compatible with the SQLite Node.js client to make the transition easier.
For TypeScript wrappers, see the [duckdb-async project](https://www.npmjs.com/package/duckdb-async). | 57 |
|
Client APIs | Node.js | Initializing | Load the package and create a database object:
```js
const duckdb = require('duckdb');
const db = new duckdb.Database(':memory:'); // or a file name for a persistent DB
```
All options described in the [database configuration reference](#docs:configuration:overview::configuration-reference) can (optionally) be supplied to the `Database` constructor as the second argument. A callback can optionally be supplied as the third argument to get feedback on the given options.
```js
const db = new duckdb.Database(':memory:', {
"access_mode": "READ_WRITE",
"max_memory": "512MB",
"threads": "4"
}, (err) => {
if (err) {
console.error(err);
}
});
``` | 151 |
|
Client APIs | Node.js | Running a Query | The following code snippet runs a simple query using the `Database.all()` method.
```js
db.all('SELECT 42 AS fortytwo', function(err, res) {
if (err) {
console.warn(err);
return;
}
console.log(res[0].fortytwo)
});
```
Other available methods are `each`, where the callback is invoked for each row; `run`, which executes a single statement without returning results; and `exec`, which can execute several SQL commands at once but also does not return results. All of these methods can work with prepared statements, taking the values for the parameters as additional arguments. For example:
```js
db.all('SELECT ?::INTEGER AS fortytwo, ?::VARCHAR AS hello', 42, 'Hello, World', function(err, res) {
if (err) {
console.warn(err);
return;
}
console.log(res[0].fortytwo)
console.log(res[0].hello)
});
``` | 197 |
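A minimal sketch of `exec` and `each` (the table name is illustrative):
```js
db.exec("CREATE TABLE pets (name VARCHAR); INSERT INTO pets VALUES ('Wilbur'), ('Quackers');", function(err) {
  if (err) {
    console.warn(err);
    return;
  }
  // `each` invokes the callback once per result row
  db.each('SELECT name FROM pets', function(err, row) {
    if (err) {
      console.warn(err);
      return;
    }
    console.log(row.name);
  });
});
```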
|
Client APIs | Node.js | Connections | A database can have multiple `Connection`s, those are created using `db.connect()`.
```js
const con = db.connect();
```
You can create multiple connections, each with their own transaction context.
`Connection` objects also contain shorthands to directly call `run()`, `all()` and `each()` with parameters and callbacks, respectively, for example:
```js
con.all('SELECT 42 AS fortytwo', function(err, res) {
if (err) {
console.warn(err);
return;
}
console.log(res[0].fortytwo)
});
``` | 121 |
|
Client APIs | Node.js | Prepared Statements | From connections, you can create prepared statements (and only that) using `con.prepare()`:
```js
const stmt = con.prepare('SELECT ?::INTEGER AS fortytwo');
```
To execute this statement, you can call for example `all()` on the `stmt` object:
```js
stmt.all(42, function(err, res) {
if (err) {
console.warn(err);
} else {
console.log(res[0].fortytwo)
}
});
```
You can also execute the prepared statement multiple times. This is useful, for example, to fill a table with data:
```js
con.run('CREATE TABLE a (i INTEGER)');
const stmt = con.prepare('INSERT INTO a VALUES (?)');
for (let i = 0; i < 10; i++) {
stmt.run(i);
}
stmt.finalize();
con.all('SELECT * FROM a', function(err, res) {
if (err) {
console.warn(err);
} else {
console.log(res)
}
});
```
`prepare()` can also take a callback which gets the prepared statement as an argument:
```js
const stmt = con.prepare('SELECT ?::INTEGER AS fortytwo', function(err, stmt) {
stmt.all(42, function(err, res) {
if (err) {
console.warn(err);
} else {
console.log(res[0].fortytwo)
}
});
});
``` | 285 |
|
Client APIs | Node.js | Inserting Data via Arrow | [Apache Arrow](#docs:guides:python:sql_on_arrow) can be used to insert data into DuckDB without making a copy:
```js
const arrow = require('apache-arrow');
const db = new duckdb.Database(':memory:');
const jsonData = [
{"userId":1,"id":1,"title":"delectus aut autem","completed":false},
{"userId":1,"id":2,"title":"quis ut nam facilis et officia qui","completed":false}
];
// note: this doesn't work on Windows yet
db.exec(`INSTALL arrow; LOAD arrow;`, (err) => {
if (err) {
console.warn(err);
return;
}
const arrowTable = arrow.tableFromJSON(jsonData);
db.register_buffer("jsonDataTable", [arrow.tableToIPC(arrowTable)], true, (err, res) => {
if (err) {
console.warn(err);
return;
}
// `SELECT * FROM jsonDataTable` would return the entries in `jsonData`
});
});
``` | 209 |
|
Client APIs | Node.js | Loading Unsigned Extensions | To load [unsigned extensions](#docs:extensions:overview::ensuring-the-integrity-of-extensions), instantiate the database as follows:
```js
db = new duckdb.Database(':memory:', {"allow_unsigned_extensions": "true"});
``` | 50 |
|
Client APIs | Node.js | Modules | <dl>
<dt><a href="#module_duckdb">duckdb</a></dt>
<dd></dd>
</dl> | 28 |