Client APIs | Node.js | Typedefs | <dl>
<dt><a href="#ColumnInfo">ColumnInfo</a> : <code>object</code></dt>
<dd></dd>
<dt><a href="#TypeInfo">TypeInfo</a> : <code>object</code></dt>
<dd></dd>
<dt><a href="#DuckDbError">DuckDbError</a> : <code>object</code></dt>
<dd></dd>
<dt><a href="#HTTPError">HTTPError</a> : <code>object</code></dt>
<dd></dd>
</dl>
<a name="module_duckdb"></a> | 129 |
|
Client APIs | Node.js | duckdb | **Summary**: DuckDB is an embeddable SQL OLAP Database Management System
* [duckdb](#::module_duckdb)
* [~Connection](#::module_duckdb..Connection)
* [.run(sql, ...params, callback)](#::module_duckdb..Connection+run) β <code>void</code>
* [.all(sql, ...params, callback)](#::module_duckdb..Connection+all) β <code>void</code>
* [.arrowIPCAll(sql, ...params, callback)](#::module_duckdb..Connection+arrowIPCAll) β <code>void</code>
* [.arrowIPCStream(sql, ...params, callback)](#::module_duckdb..Connection+arrowIPCStream) β
* [.each(sql, ...params, callback)](#::module_duckdb..Connection+each) β <code>void</code>
* [.stream(sql, ...params)](#::module_duckdb..Connection+stream)
* [.register_udf(name, return_type, fun)](#::module_duckdb..Connection+register_udf) β <code>void</code>
* [.prepare(sql, ...params, callback)](#::module_duckdb..Connection+prepare) β <code>Statement</code>
* [.exec(sql, ...params, callback)](#::module_duckdb..Connection+exec) β <code>void</code>
* [.register_udf_bulk(name, return_type, callback)](#::module_duckdb..Connection+register_udf_bulk) β <code>void</code>
* [.unregister_udf(name, return_type, callback)](#::module_duckdb..Connection+unregister_udf) β <code>void</code>
* [.register_buffer(name, array, force, callback)](#::module_duckdb..Connection+register_buffer) β <code>void</code>
* [.unregister_buffer(name, callback)](#::module_duckdb..Connection+unregister_buffer) β <code>void</code>
* [.close(callback)](#::module_duckdb..Connection+close) β <code>void</code>
* [~Statement](#::module_duckdb..Statement)
* [.sql](#::module_duckdb..Statement+sql) β
* [.get()](#::module_duckdb..Statement+get)
* [.run(sql, ...params, callback)](#::module_duckdb..Statement+run) β <code>void</code>
* [.all(sql, ...params, callback)](#::module_duckdb..Statement+all) β <code>void</code>
* [.arrowIPCAll(sql, ...params, callback)](#::module_duckdb..Statement+arrowIPCAll) β <code>void</code>
* [.each(sql, ...params, callback)](#::module_duckdb..Statement+each) β <code>void</code>
* [.finalize(sql, ...params, callback)](#::module_duckdb..Statement+finalize) β <code>void</code>
* [.stream(sql, ...params)](#::module_duckdb..Statement+stream)
* [.columns()](#::module_duckdb..Statement+columns) β [<code>Array.<ColumnInfo></code>](#::ColumnInfo)
* [~QueryResult](#::module_duckdb..QueryResult)
* [.nextChunk()](#::module_duckdb..QueryResult+nextChunk) β
* [.nextIpcBuffer()](#::module_duckdb..QueryResult+nextIpcBuffer) β
* [.asyncIterator()](#::module_duckdb..QueryResult+asyncIterator)
* [~Database](#::module_duckdb..Database)
* [.close(callback)](#::module_duckdb..Database+close) β <code>void</code>
* [.close_internal(callback)](#::module_duckdb..Database+close_internal) β <code>void</code>
* [.wait(callback)](#::module_duckdb..Database+wait) β <code>void</code>
* [.serialize(callback)](#::module_duckdb..Database+serialize) β <code>void</code>
* [.parallelize(callback)](#::module_duckdb..Database+parallelize) β <code>void</code>
* [.connect(path)](#::module_duckdb..Database+connect) β <code>Connection</code>
* [.interrupt(callback)](#::module_duckdb..Database+interrupt) β <code>void</code>
* [.prepare(sql)](#::module_duckdb..Database+prepare) β <code>Statement</code>
* [.run(sql, ...params, callback)](#::module_duckdb..Database+run) β <code>void</code>
* [.scanArrowIpc(sql, ...params, callback)](#::module_duckdb..Database+scanArrowIpc) β <code>void</code>
* [.each(sql, ...params, callback)](#::module_duckdb..Database+each) β <code>void</code>
* [.stream(sql, ...params)](#::module_duckdb..Database+stream)
* [.all(sql, ...params, callback)](#::module_duckdb..Database+all) β <code>void</code>
* [.arrowIPCAll(sql, ...params, callback)](#::module_duckdb..Database+arrowIPCAll) β <code>void</code>
* [.arrowIPCStream(sql, ...params, callback)](#::module_duckdb..Database+arrowIPCStream) β <code>void</code>
* [.exec(sql, ...params, callback)](#::module_duckdb..Database+exec) β <code>void</code>
* [.register_udf(name, return_type, fun)](#::module_duckdb..Database+register_udf) β <code>this</code>
* [.register_buffer(name)](#::module_duckdb..Database+register_buffer) β <code>this</code>
* [.unregister_buffer(name)](#::module_duckdb..Database+unregister_buffer) β <code>this</code>
* [.unregister_udf(name)](#::module_duckdb..Database+unregister_udf) β <code>this</code>
* [.registerReplacementScan(fun)](#::module_duckdb..Database+registerReplacementScan) β <code>this</code>
* [.tokenize(text)](#::module_duckdb..Database+tokenize) β <code>ScriptTokens</code>
* [.get()](#::module_duckdb..Database+get)
* [~TokenType](#::module_duckdb..TokenType)
* [~ERROR](#::module_duckdb..ERROR) : <code>number</code>
* [~OPEN_READONLY](#::module_duckdb..OPEN_READONLY) : <code>number</code>
* [~OPEN_READWRITE](#::module_duckdb..OPEN_READWRITE) : <code>number</code>
* [~OPEN_CREATE](#::module_duckdb..OPEN_CREATE) : <code>number</code>
* [~OPEN_FULLMUTEX](#::module_duckdb..OPEN_FULLMUTEX) : <code>number</code>
* [~OPEN_SHAREDCACHE](#::module_duckdb..OPEN_SHAREDCACHE) : <code>number</code>
* [~OPEN_PRIVATECACHE](#::module_duckdb..OPEN_PRIVATECACHE) : <code>number</code>
<a name="module_duckdb..Connection"></a>
#### duckdb~Connection {#docs:api:nodejs:reference::duckdbconnection}
**Kind**: inner class of [<code>duckdb</code>](#::module_duckdb)
* [~Connection](#::module_duckdb..Connection)
* [.run(sql, ...params, callback)](#::module_duckdb..Connection+run) β <code>void</code>
* [.all(sql, ...params, callback)](#::module_duckdb..Connection+all) β <code>void</code>
* [.arrowIPCAll(sql, ...params, callback)](#::module_duckdb..Connection+arrowIPCAll) β <code>void</code>
* [.arrowIPCStream(sql, ...params, callback)](#::module_duckdb..Connection+arrowIPCStream) β
* [.each(sql, ...params, callback)](#::module_duckdb..Connection+each) β <code>void</code>
* [.stream(sql, ...params)](#::module_duckdb..Connection+stream)
* [.register_udf(name, return_type, fun)](#::module_duckdb..Connection+register_udf) β <code>void</code>
* [.prepare(sql, ...params, callback)](#::module_duckdb..Connection+prepare) β <code>Statement</code>
* [.exec(sql, ...params, callback)](#::module_duckdb..Connection+exec) β <code>void</code>
* [.register_udf_bulk(name, return_type, callback)](#::module_duckdb..Connection+register_udf_bulk) β <code>void</code>
* [.unregister_udf(name, return_type, callback)](#::module_duckdb..Connection+unregister_udf) β <code>void</code>
* [.register_buffer(name, array, force, callback)](#::module_duckdb..Connection+register_buffer) β <code>void</code>
* [.unregister_buffer(name, callback)](#::module_duckdb..Connection+unregister_buffer) β <code>void</code>
* [.close(callback)](#::module_duckdb..Connection+close) β <code>void</code>
<a name="module_duckdb..Connection+run"></a> | 2,109 |
|
Client APIs | Node.js | duckdb | connection.run(sql, ...params, callback) β <code>void</code> | Run a SQL statement and trigger a callback when done
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Connection+all"></a> | 81 |
Client APIs | Node.js | duckdb | connection.all(sql, ...params, callback) ⇒ <code>void</code> | Run a SQL query and trigger the callback once with all result rows
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Connection+arrowIPCAll"></a> | 86 |
Client APIs | Node.js | duckdb | connection.arrowIPCAll(sql, ...params, callback) β <code>void</code> | Run a SQL query and serialize the result into the Apache Arrow IPC format (requires arrow extension to be loaded)
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Connection+arrowIPCStream"></a> | 95 |
Client APIs | Node.js | duckdb | connection.arrowIPCStream(sql, ...params, callback) ⇒ | Run a SQL query and return an IpcResultStreamIterator that allows streaming the result in the Apache Arrow IPC format
(requires arrow extension to be loaded)
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
**Returns**: Promise<IpcResultStreamIterator>
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Connection+each"></a> | 113 |
Client APIs | Node.js | duckdb | connection.each(sql, ...params, callback) β <code>void</code> | Runs a SQL query and triggers the callback for each result row
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Connection+stream"></a> | 83 |
Client APIs | Node.js | duckdb | connection.stream(sql, ...params) | **Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
<a name="module_duckdb..Connection+register_udf"></a> | 67 |
Client APIs | Node.js | duckdb | connection.register\_udf(name, return_type, fun) β <code>void</code> | Register a User Defined Function
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
**Note**: this follows the wasm udfs somewhat but is simpler because we can pass data much more cleanly
| Param |
| --- |
| name |
| return_type |
| fun |
<a name="module_duckdb..Connection+prepare"></a> | 82 |
Client APIs | Node.js | duckdb | connection.prepare(sql, ...params, callback) β <code>Statement</code> | Prepare a SQL query for execution
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Connection+exec"></a> | 77 |
Client APIs | Node.js | duckdb | connection.exec(sql, ...params, callback) β <code>void</code> | Execute a SQL query
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Connection+register_udf_bulk"></a> | 78 |
Client APIs | Node.js | duckdb | connection.register\_udf\_bulk(name, return_type, callback) β <code>void</code> | Register a User Defined Function
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param |
| --- |
| name |
| return_type |
| callback |
<a name="module_duckdb..Connection+unregister_udf"></a> | 64 |
Client APIs | Node.js | duckdb | connection.unregister\_udf(name, return_type, callback) β <code>void</code> | Unregister a User Defined Function
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param |
| --- |
| name |
| return_type |
| callback |
<a name="module_duckdb..Connection+register_buffer"></a> | 63 |
Client APIs | Node.js | duckdb | connection.register\_buffer(name, array, force, callback) β <code>void</code> | Register a Buffer to be scanned using the Apache Arrow IPC scanner
(requires arrow extension to be loaded)
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param |
| --- |
| name |
| array |
| force |
| callback |
<a name="module_duckdb..Connection+unregister_buffer"></a> | 81 |
Client APIs | Node.js | duckdb | connection.unregister\_buffer(name, callback) β <code>void</code> | Unregister the Buffer
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param |
| --- |
| name |
| callback |
<a name="module_duckdb..Connection+close"></a> | 56 |
Client APIs | Node.js | duckdb | connection.close(callback) β <code>void</code> | Closes connection
**Kind**: instance method of [<code>Connection</code>](#::module_duckdb..Connection)
| Param |
| --- |
| callback |
<a name="module_duckdb..Statement"></a>
#### duckdb~Statement {#docs:api:nodejs:reference::duckdbstatement}
**Kind**: inner class of [<code>duckdb</code>](#::module_duckdb)
* [~Statement](#::module_duckdb..Statement)
* [.sql](#::module_duckdb..Statement+sql) β
* [.get()](#::module_duckdb..Statement+get)
* [.run(sql, ...params, callback)](#::module_duckdb..Statement+run) β <code>void</code>
* [.all(sql, ...params, callback)](#::module_duckdb..Statement+all) β <code>void</code>
* [.arrowIPCAll(sql, ...params, callback)](#::module_duckdb..Statement+arrowIPCAll) β <code>void</code>
* [.each(sql, ...params, callback)](#::module_duckdb..Statement+each) β <code>void</code>
* [.finalize(sql, ...params, callback)](#::module_duckdb..Statement+finalize) β <code>void</code>
* [.stream(sql, ...params)](#::module_duckdb..Statement+stream)
* [.columns()](#::module_duckdb..Statement+columns) β [<code>Array.<ColumnInfo></code>](#::ColumnInfo)
<a name="module_duckdb..Statement+sql"></a> | 359 |
Client APIs | Node.js | duckdb | statement.sql β | **Kind**: instance property of [<code>Statement</code>](#::module_duckdb..Statement)
**Returns**: sql contained in statement
**Field**:
<a name="module_duckdb..Statement+get"></a> | 49 |
Client APIs | Node.js | duckdb | statement.get() | Not implemented
**Kind**: instance method of [<code>Statement</code>](#::module_duckdb..Statement)
<a name="module_duckdb..Statement+run"></a> | 40 |
Client APIs | Node.js | duckdb | statement.run(sql, ...params, callback) β <code>void</code> | **Kind**: instance method of [<code>Statement</code>](#::module_duckdb..Statement)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Statement+all"></a> | 70 |
Client APIs | Node.js | duckdb | statement.all(sql, ...params, callback) β <code>void</code> | **Kind**: instance method of [<code>Statement</code>](#::module_duckdb..Statement)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Statement+arrowIPCAll"></a> | 72 |
Client APIs | Node.js | duckdb | statement.arrowIPCAll(sql, ...params, callback) β <code>void</code> | **Kind**: instance method of [<code>Statement</code>](#::module_duckdb..Statement)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Statement+each"></a> | 70 |
Client APIs | Node.js | duckdb | statement.each(sql, ...params, callback) β <code>void</code> | **Kind**: instance method of [<code>Statement</code>](#::module_duckdb..Statement)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Statement+finalize"></a> | 70 |
Client APIs | Node.js | duckdb | statement.finalize(sql, ...params, callback) β <code>void</code> | **Kind**: instance method of [<code>Statement</code>](#::module_duckdb..Statement)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Statement+stream"></a> | 70 |
Client APIs | Node.js | duckdb | statement.stream(sql, ...params) | **Kind**: instance method of [<code>Statement</code>](#::module_duckdb..Statement)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
<a name="module_duckdb..Statement+columns"></a> | 65 |
Client APIs | Node.js | duckdb | statement.columns() β [<code>Array.<ColumnInfo></code>](#::ColumnInfo) | **Kind**: instance method of [<code>Statement</code>](#::module_duckdb..Statement)
**Returns**: [<code>Array.<ColumnInfo></code>](#::ColumnInfo) - Array of column names and types
<a name="module_duckdb..QueryResult"></a>
#### duckdb~QueryResult {#docs:api:nodejs:reference::duckdbqueryresult}
**Kind**: inner class of [<code>duckdb</code>](#::module_duckdb)
* [~QueryResult](#::module_duckdb..QueryResult)
* [.nextChunk()](#::module_duckdb..QueryResult+nextChunk) β
* [.nextIpcBuffer()](#::module_duckdb..QueryResult+nextIpcBuffer) β
* [.asyncIterator()](#::module_duckdb..QueryResult+asyncIterator)
<a name="module_duckdb..QueryResult+nextChunk"></a> | 210 |
Client APIs | Node.js | duckdb | queryResult.nextChunk() β | **Kind**: instance method of [<code>QueryResult</code>](#::module_duckdb..QueryResult)
**Returns**: data chunk
<a name="module_duckdb..QueryResult+nextIpcBuffer"></a> | 49 |
Client APIs | Node.js | duckdb | queryResult.nextIpcBuffer() β | Function to fetch the next result blob of an Arrow IPC Stream in a zero-copy way.
(requires arrow extension to be loaded)
**Kind**: instance method of [<code>QueryResult</code>](#::module_duckdb..QueryResult)
**Returns**: data chunk
<a name="module_duckdb..QueryResult+asyncIterator"></a> | 74 |
Client APIs | Node.js | duckdb | queryResult.asyncIterator() | **Kind**: instance method of [<code>QueryResult</code>](#::module_duckdb..QueryResult)
<a name="module_duckdb..Database"></a>
#### duckdb~Database {#docs:api:nodejs:reference::duckdbdatabase}
Main database interface
**Kind**: inner property of [<code>duckdb</code>](#::module_duckdb)
| Param | Description |
| --- | --- |
| path | path to database file or :memory: for in-memory database |
| access_mode | access mode |
| config | the configuration object |
| callback | callback function |
* [~Database](#::module_duckdb..Database)
* [.close(callback)](#::module_duckdb..Database+close) β <code>void</code>
* [.close_internal(callback)](#::module_duckdb..Database+close_internal) β <code>void</code>
* [.wait(callback)](#::module_duckdb..Database+wait) β <code>void</code>
* [.serialize(callback)](#::module_duckdb..Database+serialize) β <code>void</code>
* [.parallelize(callback)](#::module_duckdb..Database+parallelize) β <code>void</code>
* [.connect(path)](#::module_duckdb..Database+connect) β <code>Connection</code>
* [.interrupt(callback)](#::module_duckdb..Database+interrupt) β <code>void</code>
* [.prepare(sql)](#::module_duckdb..Database+prepare) β <code>Statement</code>
* [.run(sql, ...params, callback)](#::module_duckdb..Database+run) β <code>void</code>
* [.scanArrowIpc(sql, ...params, callback)](#::module_duckdb..Database+scanArrowIpc) β <code>void</code>
* [.each(sql, ...params, callback)](#::module_duckdb..Database+each) β <code>void</code>
* [.stream(sql, ...params)](#::module_duckdb..Database+stream)
* [.all(sql, ...params, callback)](#::module_duckdb..Database+all) β <code>void</code>
* [.arrowIPCAll(sql, ...params, callback)](#::module_duckdb..Database+arrowIPCAll) β <code>void</code>
* [.arrowIPCStream(sql, ...params, callback)](#::module_duckdb..Database+arrowIPCStream) β <code>void</code>
* [.exec(sql, ...params, callback)](#::module_duckdb..Database+exec) β <code>void</code>
* [.register_udf(name, return_type, fun)](#::module_duckdb..Database+register_udf) β <code>this</code>
* [.register_buffer(name)](#::module_duckdb..Database+register_buffer) β <code>this</code>
* [.unregister_buffer(name)](#::module_duckdb..Database+unregister_buffer) β <code>this</code>
* [.unregister_udf(name)](#::module_duckdb..Database+unregister_udf) β <code>this</code>
* [.registerReplacementScan(fun)](#::module_duckdb..Database+registerReplacementScan) β <code>this</code>
* [.tokenize(text)](#::module_duckdb..Database+tokenize) β <code>ScriptTokens</code>
* [.get()](#::module_duckdb..Database+get)
<a name="module_duckdb..Database+close"></a> | 777 |
Client APIs | Node.js | duckdb | database.close(callback) β <code>void</code> | Closes database instance
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| callback |
<a name="module_duckdb..Database+close_internal"></a> | 54 |
Client APIs | Node.js | duckdb | database.close\_internal(callback) β <code>void</code> | Internal method. Do not use, call Connection#close instead
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| callback |
<a name="module_duckdb..Database+wait"></a> | 61 |
Client APIs | Node.js | duckdb | database.wait(callback) β <code>void</code> | Triggers callback when all scheduled database tasks have completed.
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| callback |
<a name="module_duckdb..Database+serialize"></a> | 60 |
Client APIs | Node.js | duckdb | database.serialize(callback) β <code>void</code> | Currently a no-op. Provided for SQLite compatibility
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| callback |
<a name="module_duckdb..Database+parallelize"></a> | 59 |
Client APIs | Node.js | duckdb | database.parallelize(callback) β <code>void</code> | Currently a no-op. Provided for SQLite compatibility
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| callback |
<a name="module_duckdb..Database+connect"></a> | 58 |
Client APIs | Node.js | duckdb | database.connect(path) β <code>Connection</code> | Create a new database connection
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Description |
| --- | --- |
| path | the database to connect to, either a file path, or `:memory:` |
<a name="module_duckdb..Database+interrupt"></a> | 75 |
Client APIs | Node.js | duckdb | database.interrupt(callback) ⇒ <code>void</code> | Supposedly interrupts queries, but currently does not do anything.
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| callback |
<a name="module_duckdb..Database+prepare"></a> | 62 |
Client APIs | Node.js | duckdb | database.prepare(sql) β <code>Statement</code> | Prepare a SQL query for execution
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| sql |
<a name="module_duckdb..Database+run"></a> | 55 |
Client APIs | Node.js | duckdb | database.run(sql, ...params, callback) β <code>void</code> | Convenience method for Connection#run using a built-in default connection
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Database+scanArrowIpc"></a> | 87 |
Client APIs | Node.js | duckdb | database.scanArrowIpc(sql, ...params, callback) β <code>void</code> | Convenience method for Connection#scanArrowIpc using a built-in default connection
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Database+each"></a> | 87 |
Client APIs | Node.js | duckdb | database.each(sql, ...params, callback) β <code>void</code> | **Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Database+stream"></a> | 70 |
Client APIs | Node.js | duckdb | database.stream(sql, ...params) | **Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
<a name="module_duckdb..Database+all"></a> | 65 |
Client APIs | Node.js | duckdb | database.all(sql, ...params, callback) ⇒ <code>void</code> | Convenience method for Connection#all using a built-in default connection
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Database+arrowIPCAll"></a> | 86 |
Client APIs | Node.js | duckdb | database.arrowIPCAll(sql, ...params, callback) β <code>void</code> | Convenience method for Connection#arrowIPCAll using a built-in default connection
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Database+arrowIPCStream"></a> | 88 |
Client APIs | Node.js | duckdb | database.arrowIPCStream(sql, ...params, callback) β <code>void</code> | Convenience method for Connection#arrowIPCStream using a built-in default connection
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Database+exec"></a> | 86 |
Client APIs | Node.js | duckdb | database.exec(sql, ...params, callback) β <code>void</code> | **Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Type |
| --- | --- |
| sql | |
| ...params | <code>\*</code> |
| callback | |
<a name="module_duckdb..Database+register_udf"></a> | 72 |
Client APIs | Node.js | duckdb | database.register\_udf(name, return_type, fun) β <code>this</code> | Register a User Defined Function
Convenience method for Connection#register_udf
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| name |
| return_type |
| fun |
<a name="module_duckdb..Database+register_buffer"></a> | 72 |
Client APIs | Node.js | duckdb | database.register\_buffer(name) ⇒ <code>this</code> | Register a buffer containing serialized data to be scanned from DuckDB.
Convenience method for Connection#register_buffer
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| name |
<a name="module_duckdb..Database+unregister_buffer"></a> | 74 |
Client APIs | Node.js | duckdb | database.unregister\_buffer(name) β <code>this</code> | Unregister a Buffer
Convenience method for Connection#unregister_buffer
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| name |
<a name="module_duckdb..Database+unregister_udf"></a> | 66 |
Client APIs | Node.js | duckdb | database.unregister\_udf(name) β <code>this</code> | Unregister a UDF
Convenience method for Connection#unregister_udf
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| name |
<a name="module_duckdb..Database+registerReplacementScan"></a> | 67 |
Client APIs | Node.js | duckdb | database.registerReplacementScan(fun) β <code>this</code> | Register a table replace scan function
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param | Description |
| --- | --- |
| fun | Replacement scan function |
<a name="module_duckdb..Database+tokenize"></a> | 64 |
Client APIs | Node.js | duckdb | database.tokenize(text) ⇒ <code>ScriptTokens</code> | Return positions and types of tokens in the given text
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
| Param |
| --- |
| text |
<a name="module_duckdb..Database+get"></a> | 58 |
Client APIs | Node.js | duckdb | database.get() | Not implemented
**Kind**: instance method of [<code>Database</code>](#::module_duckdb..Database)
<a name="module_duckdb..TokenType"></a>
#### duckdb~TokenType {#docs:api:nodejs:reference::duckdbtokentype}
Types of tokens returned by `tokenize`.
**Kind**: inner property of [<code>duckdb</code>](#::module_duckdb)
<a name="module_duckdb..ERROR"></a>
#### duckdb~ERROR : <code>number</code> {#docs:api:nodejs:reference::duckdberror--codenumbercode}
Check that the errno attribute equals this value to check for a DuckDB error
**Kind**: inner constant of [<code>duckdb</code>](#::module_duckdb)
<a name="module_duckdb..OPEN_READONLY"></a>
#### duckdb~OPEN\_READONLY : <code>number</code> {#docs:api:nodejs:reference::duckdbopen_readonly--codenumbercode}
Open database in readonly mode
**Kind**: inner constant of [<code>duckdb</code>](#::module_duckdb)
<a name="module_duckdb..OPEN_READWRITE"></a>
#### duckdb~OPEN\_READWRITE : <code>number</code> {#docs:api:nodejs:reference::duckdbopen_readwrite--codenumbercode}
Currently ignored
**Kind**: inner constant of [<code>duckdb</code>](#::module_duckdb)
<a name="module_duckdb..OPEN_CREATE"></a>
#### duckdb~OPEN\_CREATE : <code>number</code> {#docs:api:nodejs:reference::duckdbopen_create--codenumbercode}
Currently ignored
**Kind**: inner constant of [<code>duckdb</code>](#::module_duckdb)
<a name="module_duckdb..OPEN_FULLMUTEX"></a>
#### duckdb~OPEN\_FULLMUTEX : <code>number</code> {#docs:api:nodejs:reference::duckdbopen_fullmutex--codenumbercode}
Currently ignored
**Kind**: inner constant of [<code>duckdb</code>](#::module_duckdb)
<a name="module_duckdb..OPEN_SHAREDCACHE"></a>
#### duckdb~OPEN\_SHAREDCACHE : <code>number</code> {#docs:api:nodejs:reference::duckdbopen_sharedcache--codenumbercode}
Currently ignored
**Kind**: inner constant of [<code>duckdb</code>](#::module_duckdb)
<a name="module_duckdb..OPEN_PRIVATECACHE"></a>
#### duckdb~OPEN\_PRIVATECACHE : <code>number</code> {#docs:api:nodejs:reference::duckdbopen_privatecache--codenumbercode}
Currently ignored
**Kind**: inner constant of [<code>duckdb</code>](#::module_duckdb)
<a name="ColumnInfo"></a> | 663 |
Client APIs | Node.js | ColumnInfo : <code>object</code> | **Kind**: global typedef
**Properties**
| Name | Type | Description |
| --- | --- | --- |
| name | <code>string</code> | Column name |
| type | [<code>TypeInfo</code>](#::TypeInfo) | Column type |
<a name="TypeInfo"></a> | 65 |
|
Client APIs | Node.js | TypeInfo : <code>object</code> | **Kind**: global typedef
**Properties**
| Name | Type | Description |
| --- | --- | --- |
| id | <code>string</code> | Type ID |
| [alias] | <code>string</code> | SQL type alias |
| sql_type | <code>string</code> | SQL type name |
<a name="DuckDbError"></a> | 82 |
|
Client APIs | Node.js | DuckDbError : <code>object</code> | **Kind**: global typedef
**Properties**
| Name | Type | Description |
| --- | --- | --- |
| errno | <code>number</code> | -1 for DuckDB errors |
| message | <code>string</code> | Error message |
| code | <code>string</code> | 'DUCKDB_NODEJS_ERROR' for DuckDB errors |
| errorType | <code>string</code> | DuckDB error type code (e.g., HTTP, IO, Catalog) |
<a name="HTTPError"></a> | 116 |
|
Client APIs | Node.js | HTTPError : <code>object</code> | **Kind**: global typedef
**Extends**: [<code>DuckDbError</code>](#::DuckDbError)
**Properties**
| Name | Type | Description |
| --- | --- | --- |
| statusCode | <code>number</code> | HTTP response status code |
| reason | <code>string</code> | HTTP response reason |
| response | <code>string</code> | HTTP response body |
| headers | <code>object</code> | HTTP headers | | 105 |
|
Client APIs | Python | Installation | The DuckDB Python API can be installed using [pip](https://pip.pypa.io): `pip install duckdb`. It can also be installed with conda: `conda install python-duckdb -c conda-forge`. Please see the [installation page](https://duckdb.org/docs/installation/) for details.
**Python version:**
DuckDB requires Python 3.7 or newer. | 75 |
|
Client APIs | Python | Basic API Usage | The most straightforward way to run SQL queries using DuckDB is to use the `duckdb.sql` command.
```python
import duckdb
duckdb.sql("SELECT 42").show()
```
This will run queries using an **in-memory database** that is stored globally inside the Python module. The result of the query is returned as a **Relation**. A relation is a symbolic representation of the query. The query is not executed until the result is fetched or requested to be printed to the screen.
Relations can be referenced in subsequent queries by storing them inside variables, and using them as tables. This way queries can be constructed incrementally.
```python
import duckdb
r1 = duckdb.sql("SELECT 42 AS i")
duckdb.sql("SELECT i * 2 AS k FROM r1").show()
``` | 172 |
|
Client APIs | Python | Data Input | DuckDB can ingest data from a wide variety of formats – both on-disk and in-memory. See the [data ingestion page](#docs:api:python:data_ingestion) for more information.
```python
import duckdb
duckdb.read_csv("example.csv") # read a CSV file into a Relation
duckdb.read_parquet("example.parquet") # read a Parquet file into a Relation
duckdb.read_json("example.json") # read a JSON file into a Relation
duckdb.sql("SELECT * FROM 'example.csv'") # directly query a CSV file
duckdb.sql("SELECT * FROM 'example.parquet'") # directly query a Parquet file
duckdb.sql("SELECT * FROM 'example.json'") # directly query a JSON file
```
#### DataFrames {#docs:api:python:overview::dataframes}
DuckDB can directly query Pandas DataFrames, Polars DataFrames and Arrow tables.
Note that these are read-only, i.e., editing these tables via [`INSERT`](#docs:sql:statements:insert) or [`UPDATE` statements](#docs:sql:statements:update) is not possible. | 250 |
|
Client APIs | Python | Data Input | Pandas | To directly query a Pandas DataFrame, run:
```python
import duckdb
import pandas as pd
pandas_df = pd.DataFrame({"a": [42]})
duckdb.sql("SELECT * FROM pandas_df")
```
```text
┌───────┐
│   a   │
│ int64 │
├───────┤
│    42 │
└───────┘
``` | 92 |
Client APIs | Python | Data Input | Polars | To directly query a Polars DataFrame, run:
```python
import duckdb
import polars as pl
polars_df = pl.DataFrame({"a": [42]})
duckdb.sql("SELECT * FROM polars_df")
```
```text
┌───────┐
│   a   │
│ int64 │
├───────┤
│    42 │
└───────┘
``` | 94 |
Client APIs | Python | Data Input | PyArrow | To directly query a PyArrow table, run:
```python
import duckdb
import pyarrow as pa
arrow_table = pa.Table.from_pydict({"a": [42]})
duckdb.sql("SELECT * FROM arrow_table")
```
```text
┌───────┐
│   a   │
│ int64 │
├───────┤
│    42 │
└───────┘
``` | 95 |
Client APIs | Python | Result Conversion | DuckDB supports converting query results efficiently to a variety of formats. See the [result conversion page](#docs:api:python:conversion) for more information.
```python
import duckdb
duckdb.sql("SELECT 42").fetchall() # Python objects
duckdb.sql("SELECT 42").df() # Pandas DataFrame
duckdb.sql("SELECT 42").pl() # Polars DataFrame
duckdb.sql("SELECT 42").arrow() # Arrow Table
duckdb.sql("SELECT 42").fetchnumpy() # NumPy Arrays
``` | 124 |
|
Client APIs | Python | Writing Data to Disk | DuckDB supports writing Relation objects directly to disk in a variety of formats. The [`COPY` statement](#docs:sql:statements:copy) can be used to write data to disk using SQL as an alternative.
```python
import duckdb
duckdb.sql("SELECT 42").write_parquet("out.parquet") # Write to a Parquet file
duckdb.sql("SELECT 42").write_csv("out.csv") # Write to a CSV file
duckdb.sql("COPY (SELECT 42) TO 'out.parquet'") # Copy to a Parquet file
``` | 126 |
|
Client APIs | Python | Connection Options | Applications can open a new DuckDB connection via the `duckdb.connect()` method.
#### Using an In-Memory Database {#docs:api:python:overview::using-an-in-memory-database}
When using DuckDB through `duckdb.sql()`, it operates on an **in-memory** database, i.e., no tables are persisted on disk.
Invoking the `duckdb.connect()` method without arguments returns a connection, which also uses an in-memory database:
```python
import duckdb
con = duckdb.connect()
con.sql("SELECT 42 AS x").show()
```
#### Persistent Storage {#docs:api:python:overview::persistent-storage}
The `duckdb.connect(dbname)` creates a connection to a **persistent** database.
Any data written to that connection will be persisted, and can be reloaded by reconnecting to the same file, both from Python and from other DuckDB clients.
```python
import duckdb
# create a connection to a file called 'file.db'
con = duckdb.connect("file.db")
# create a table and load data into it
con.sql("CREATE TABLE test (i INTEGER)")
con.sql("INSERT INTO test VALUES (42)")
# query the table
con.table("test").show()
# explicitly close the connection
con.close()
# Note: connections also closed implicitly when they go out of scope
```
You can also use a context manager to ensure that the connection is closed:
```python
import duckdb
with duckdb.connect("file.db") as con:
con.sql("CREATE TABLE test (i INTEGER)")
con.sql("INSERT INTO test VALUES (42)")
con.table("test").show()
# the context manager closes the connection automatically
```
#### Configuration {#docs:api:python:overview::configuration}
The `duckdb.connect()` accepts a `config` dictionary, where [configuration options](#docs:configuration:overview::configuration-reference) can be specified. For example:
```python
import duckdb
con = duckdb.connect(config = {'threads': 1})
```
#### Connection Object and Module {#docs:api:python:overview::connection-object-and-module}
The connection object and the `duckdb` module can be used interchangeably – they support the same methods. The only difference is that when using the `duckdb` module, a global in-memory database is used.
> If you are developing a package designed for others to use, and you use DuckDB in the package, it is recommended that you create connection objects instead of using the methods on the `duckdb` module. That is because the `duckdb` module uses a shared global database – which can cause hard-to-debug issues if used from within multiple different packages.
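To make that recommendation concrete, here is a minimal, hypothetical sketch of a package that owns its own connection instead of relying on the module-level database (the class name and file path are illustrative):
```python
import duckdb

class MyPackageStore:
    """Hypothetical helper that owns a dedicated DuckDB connection."""
    def __init__(self, path: str = ":memory:"):
        # a dedicated connection avoids clashes with the shared global database
        self.con = duckdb.connect(path)

    def answer(self) -> int:
        return self.con.sql("SELECT 42 AS x").fetchone()[0]

store = MyPackageStore()
print(store.answer())  # 42
```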
#### Using Connections in Parallel Python Programs {#docs:api:python:overview::using-connections-in-parallel-python-programs}
The `DuckDBPyConnection` object is not thread-safe. If you would like to write to the same database from multiple threads, create a cursor for each thread with the [`DuckDBPyConnection.cursor()` method](#docs:api:python:reference:index::duckdb.DuckDBPyConnection.cursor). | 664 |
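A minimal sketch of this pattern, assuming a throwaway in-memory database and an `items` table created up front: each thread obtains its own cursor from the shared connection.
```python
import threading
import duckdb

con = duckdb.connect()
con.execute("CREATE TABLE items (item VARCHAR, value DECIMAL(10, 2), count INTEGER)")

def insert_row(i):
    # every thread needs its own cursor; the underlying database is shared
    cur = con.cursor()
    cur.execute("INSERT INTO items VALUES (?, ?, ?)", [f"item_{i}", 1.0, i])

threads = [threading.Thread(target=insert_row, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(con.sql("SELECT count(*) FROM items").fetchone())  # (4,)
```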
|
Client APIs | Python | Loading and Installing Extensions | DuckDB's Python API provides functions for installing and loading [extensions](#docs:extensions:overview), which perform the equivalent operations to running the `INSTALL` and `LOAD` SQL commands, respectively. An example that installs and loads the [`spatial` extension](#docs:extensions:spatial:overview) looks like follows:
```python
import duckdb
con = duckdb.connect()
con.install_extension("spatial")
con.load_extension("spatial")
```
#### Community Extensions {#docs:api:python:overview::community-extensions}
To load [community extensions](#docs:extensions:community_extensions), pass the `repository="community"` argument to the `install_extension` method.
For example, install and load the `h3` community extension as follows:
```python
import duckdb
con = duckdb.connect()
con.install_extension("h3", repository="community")
con.load_extension("h3")
```
#### Unsigned Extensions {#docs:api:python:overview::unsigned-extensions}
To load [unsigned extensions](#docs:extensions:overview::unsigned-extensions), use the `config = {"allow_unsigned_extensions": "true"}` argument to the `duckdb.connect()` method. | 256 |
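For illustration, a hedged sketch of connecting with this setting and then loading a locally built, unsigned extension (the extension file path is a placeholder):
```python
import duckdb

con = duckdb.connect(config={"allow_unsigned_extensions": "true"})
# the path below is hypothetical; point it at your locally built extension binary
con.execute("LOAD './build/my_extension.duckdb_extension'")
```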
|
Client APIs | Python | Data Ingestion | This page contains examples for data ingestion to Python using DuckDB. First, import the DuckDB module:
```python
import duckdb
```
Then, proceed with any of the following sections. | 41 |
|
Client APIs | Python | CSV Files | CSV files can be read using the `read_csv` function, called either from within Python or directly from within SQL. By default, the `read_csv` function attempts to auto-detect the CSV settings by sampling from the provided file.
Read from a file using fully auto-detected settings:
```python
duckdb.read_csv("example.csv")
```
Read multiple CSV files from a folder:
```python
duckdb.read_csv("folder/*.csv")
```
Specify options on how the CSV is formatted internally:
```python
duckdb.read_csv("example.csv", header = False, sep = ",")
```
Override types of the first two columns:
```python
duckdb.read_csv("example.csv", dtype = ["int", "varchar"])
```
Directly read a CSV file from within SQL:
```python
duckdb.sql("SELECT * FROM 'example.csv'")
```
Call `read_csv` from within SQL:
```python
duckdb.sql("SELECT * FROM read_csv('example.csv')")
```
See the [CSV Import](#docs:data:csv:overview) page for more information. | 233 |
|
Client APIs | Python | Parquet Files | Parquet files can be read using the `read_parquet` function, called either from within Python or directly from within SQL.
Read from a single Parquet file:
```python
duckdb.read_parquet("example.parquet")
```
Read multiple Parquet files from a folder:
```python
duckdb.read_parquet("folder/*.parquet")
```
Read a Parquet file over [https](#docs:extensions:httpfs:overview):
```python
duckdb.read_parquet("https://some.url/some_file.parquet")
```
Read a list of Parquet files:
```python
duckdb.read_parquet(["file1.parquet", "file2.parquet", "file3.parquet"])
```
Directly read a Parquet file from within SQL:
```python
duckdb.sql("SELECT * FROM 'example.parquet'")
```
Call `read_parquet` from within SQL:
```python
duckdb.sql("SELECT * FROM read_parquet('example.parquet')")
```
See the [Parquet Loading](#docs:data:parquet:overview) page for more information. | 235 |
|
Client APIs | Python | JSON Files | JSON files can be read using the `read_json` function, called either from within Python or directly from within SQL. By default, the `read_json` function will automatically detect if a file contains newline-delimited JSON or regular JSON, and will detect the schema of the objects stored within the JSON file.
Read from a single JSON file:
```python
duckdb.read_json("example.json")
```
Read multiple JSON files from a folder:
```python
duckdb.read_json("folder/*.json")
```
Directly read a JSON file from within SQL:
```python
duckdb.sql("SELECT * FROM 'example.json'")
```
Call `read_json` from within SQL:
```python
duckdb.sql("SELECT * FROM read_json_auto('example.json')")
``` | 162 |
|
Client APIs | Python | Directly Accessing DataFrames and Arrow Objects | DuckDB is automatically able to query certain Python variables by referring to their variable name (as if it were a table).
These types include the following: Pandas DataFrame, Polars DataFrame, Polars LazyFrame, NumPy arrays, [relations](#docs:api:python:relational_api), and Arrow objects.
Accessing these is made possible by [replacement scans](#docs:api:c:replacement_scans).
DuckDB supports querying multiple types of Apache Arrow objects including [tables](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html), [datasets](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.Dataset.html), [RecordBatchReaders](https://arrow.apache.org/docs/python/generated/pyarrow.ipc.RecordBatchStreamReader.html), and [scanners](https://arrow.apache.org/docs/python/generated/pyarrow.dataset.Scanner.html). See the Python [guides](#docs:python:overview::) for more examples.
```python
import duckdb
import pandas as pd
test_df = pd.DataFrame.from_dict({"i": [1, 2, 3, 4], "j": ["one", "two", "three", "four"]})
print(duckdb.sql("SELECT * FROM test_df").fetchall())
```
```text
[(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
```
DuckDB also supports "registering" a DataFrame or Arrow object as a virtual table, comparable to a SQL `VIEW`. This is useful when querying a DataFrame/Arrow object that is stored in another way (as a class variable, or a value in a dictionary). Below is a Pandas example:
If your Pandas DataFrame is stored in another location, here is an example of manually registering it:
```python
import duckdb
import pandas as pd
my_dictionary = {}
my_dictionary["test_df"] = pd.DataFrame.from_dict({"i": [1, 2, 3, 4], "j": ["one", "two", "three", "four"]})
duckdb.register("test_df_view", my_dictionary["test_df"])
print(duckdb.sql("SELECT * FROM test_df_view").fetchall())
```
```text
[(1, 'one'), (2, 'two'), (3, 'three'), (4, 'four')]
```
You can also create a persistent table in DuckDB from the contents of the DataFrame (or the view):
```python
# create a new table from the contents of a DataFrame
con.execute("CREATE TABLE test_df_table AS SELECT * FROM test_df")
# insert into an existing table from the contents of a DataFrame
con.execute("INSERT INTO test_df_table SELECT * FROM test_df")
```
#### Pandas DataFrames β `object` Columns {#docs:api:python:data_ingestion::pandas-dataframes--object-columns}
`pandas.DataFrame` columns of an `object` dtype require some special care, since this stores values of arbitrary type.
To convert these columns to DuckDB, we first go through an analyze phase before converting the values.
In this analyze phase a sample of all the rows of the column are analyzed to determine the target type.
This sample size is by default set to 1000.
If the type picked during the analyze step is incorrect, this will result in a "Failed to cast value:" error, in which case you will need to increase the sample size.
The sample size can be changed by setting the `pandas_analyze_sample` config option.
```python
# example setting the sample size to 100k
duckdb.execute("SET GLOBAL pandas_analyze_sample = 100_000")
```
#### Registering Objects {#docs:api:python:data_ingestion::registering-objects}
You can register Python objects as DuckDB tables using the [`DuckDBPyConnection.register()` function](#docs:api:python:reference:index::duckdb.DuckDBPyConnection.register).
The precedence of objects with the same name is as follows (a short sketch follows the list):
* Objects explicitly registered via `DuckDBPyConnection.register()`
* Native DuckDB tables and views
* [Replacement scans](#docs:api:c:replacement_scans) | 883 |
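A small sketch of the first and last items in this precedence order, assuming both a local DataFrame and an explicitly registered object use the name `df`:
```python
import duckdb
import pandas as pd

df = pd.DataFrame({"i": [1]})
other = pd.DataFrame({"i": [2]})

con = duckdb.connect()
# no explicit registration: the replacement scan resolves the local variable `df`
print(con.sql("SELECT * FROM df").fetchall())  # [(1,)]

# an explicitly registered object with the same name takes precedence
con.register("df", other)
print(con.sql("SELECT * FROM df").fetchall())  # [(2,)]
```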
|
Client APIs | Python | Conversion between DuckDB and Python | This page documents the rules for converting [Python objects to DuckDB](#::object-conversion-python-object-to-duckdb) and [DuckDB results to Python](#::result-conversion-duckdb-results-to-python). | 47 |
|
Client APIs | Python | Object Conversion: Python Object to DuckDB | This is a mapping of Python object types to DuckDB [Logical Types](#docs:sql:data_types:overview):
* `None` β `NULL`
* `bool` β `BOOLEAN`
* `datetime.timedelta` β `INTERVAL`
* `str` β `VARCHAR`
* `bytearray` β `BLOB`
* `memoryview` β `BLOB`
* `decimal.Decimal` β `DECIMAL` / `DOUBLE`
* `uuid.UUID` β `UUID`
The rest of the conversion rules are as follows.
#### `int` {#docs:api:python:conversion::int}
Since integers can be of arbitrary size in Python, a one-to-one conversion is not possible for ints.
Instead, we perform these casts in order until one succeeds:
* `BIGINT`
* `INTEGER`
* `UBIGINT`
* `UINTEGER`
* `DOUBLE`
When using the DuckDB Value class, it's possible to set a target type, which will influence the conversion.
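A hedged illustration of this cast order, using `typeof` on prepared-statement parameters (exact results may vary by DuckDB version):
```python
import duckdb

# a small int should succeed on the first cast in the list above
print(duckdb.execute("SELECT typeof(?)", [42]).fetchone())     # expected ('BIGINT',)
# a value too large for any integer type falls through to DOUBLE
print(duckdb.execute("SELECT typeof(?)", [2**70]).fetchone())  # expected ('DOUBLE',)
```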
#### `float` {#docs:api:python:conversion::float}
These casts are tried in order until one succeeds:
* `DOUBLE`
* `FLOAT`
#### `datetime.datetime` {#docs:api:python:conversion::datetimedatetime}
For `datetime` we will check `pandas.isnull` if it's available and return `NULL` if it returns `true`.
We check against `datetime.datetime.min` and `datetime.datetime.max` to convert to `-inf` and `+inf` respectively.
If the `datetime` has tzinfo, we will use `TIMESTAMPTZ`, otherwise it becomes `TIMESTAMP`.
#### `datetime.time` {#docs:api:python:conversion::datetimetime}
If the `time` has tzinfo, we will use `TIMETZ`, otherwise it becomes `TIME`.
#### `datetime.date` {#docs:api:python:conversion::datetimedate}
`date` converts to the `DATE` type.
We check against `datetime.date.min` and `datetime.date.max` to convert to `-inf` and `+inf` respectively.
#### `bytes` {#docs:api:python:conversion::bytes}
`bytes` converts to `BLOB` by default, when it's used to construct a Value object of type `BITSTRING`, it maps to `BITSTRING` instead.
#### `list` {#docs:api:python:conversion::list}
`list` becomes a `LIST` type of the "most permissive" type of its children, for example:
```python
my_list_value = [
12345,
"test"
]
```
Will become `VARCHAR[]` because 12345 can convert to `VARCHAR` but `test` cannot convert to `INTEGER`.
```sql
[12345, test]
```
#### `dict` {#docs:api:python:conversion::dict}
The `dict` object can convert to either `STRUCT(...)` or `MAP(..., ...)` depending on its structure.
If the dict has a structure similar to:
```python
my_map_dict = {
"key": [
1, 2, 3
],
"value": [
"one", "two", "three"
]
}
```
Then we'll convert it to a `MAP` of key-value pairs of the two lists zipped together.
The example above becomes a `MAP(INTEGER, VARCHAR)`:
```sql
{1=one, 2=two, 3=three}
```
> The names of the fields matter and the two lists need to have the same size.
Otherwise we'll try to convert it to a `STRUCT`.
```python
my_struct_dict = {
1: "one",
"2": 2,
"three": [1, 2, 3],
False: True
}
```
Becomes:
```sql
{'1': one, '2': 2, 'three': [1, 2, 3], 'False': true}
```
> Every `key` of the dictionary is converted to a string.
#### `tuple` {#docs:api:python:conversion::tuple}
`tuple` converts to `LIST` by default, when it's used to construct a Value object of type `STRUCT` it will convert to `STRUCT` instead.
#### `numpy.ndarray` and `numpy.datetime64` {#docs:api:python:conversion::numpyndarray-and-numpydatetime64}
`ndarray` and `datetime64` are converted by calling `tolist()` and converting the result of that. | 985 |
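As a hedged example of the `tolist()` conversion, passing a NumPy array as a prepared-statement parameter should behave like passing the equivalent Python list:
```python
import duckdb
import numpy as np

arr = np.array([1, 2, 3])
# the ndarray is converted via tolist() before binding, so it arrives as a LIST
print(duckdb.execute("SELECT ?", [arr]).fetchone())  # expected ([1, 2, 3],)
```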
|
Client APIs | Python | Result Conversion: DuckDB Results to Python | DuckDB's Python client provides multiple additional methods that can be used to efficiently retrieve data.
#### NumPy {#docs:api:python:conversion::numpy}
* `fetchnumpy()` fetches the data as a dictionary of NumPy arrays
#### Pandas {#docs:api:python:conversion::pandas}
* `df()` fetches the data as a Pandas DataFrame
* `fetchdf()` is an alias of `df()`
* `fetch_df()` is an alias of `df()`
* `fetch_df_chunk(vector_multiple)` fetches a portion of the results into a DataFrame. The number of rows returned in each chunk is the vector size (2048 by default) * vector_multiple (1 by default).
#### Apache Arrow {#docs:api:python:conversion::apache-arrow}
* `arrow()` fetches the data as an [Arrow table](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html)
* `fetch_arrow_table()` is an alias of `arrow()`
* `fetch_record_batch(chunk_size)` returns an [Arrow record batch reader](https://arrow.apache.org/docs/python/generated/pyarrow.ipc.RecordBatchStreamReader.html) with `chunk_size` rows per batch
#### Polars {#docs:api:python:conversion::polars}
* `pl()` fetches the data as a Polars DataFrame
#### Examples {#docs:api:python:conversion::examples}
Below are some examples using this functionality. See the [Python guides](#docs:python:overview::) for more examples.
Fetch as Pandas DataFrame:
```python
df = con.execute("SELECT * FROM items").fetchdf()
print(df)
```
```text
item value count
0 jeans 20.0 1
1 hammer 42.2 2
2 laptop 2000.0 1
3 chainsaw 500.0 10
4 iphone 300.0 2
```
Fetch as dictionary of NumPy arrays:
```python
arr = con.execute("SELECT * FROM items").fetchnumpy()
print(arr)
```
```text
{'item': masked_array(data=['jeans', 'hammer', 'laptop', 'chainsaw', 'iphone'],
mask=[False, False, False, False, False],
fill_value='?',
dtype=object), 'value': masked_array(data=[20.0, 42.2, 2000.0, 500.0, 300.0],
mask=[False, False, False, False, False],
fill_value=1e+20), 'count': masked_array(data=[1, 2, 1, 10, 2],
mask=[False, False, False, False, False],
fill_value=999999,
dtype=int32)}
```
Fetch as an Arrow table. Converting to Pandas afterwards just for pretty printing:
```python
tbl = con.execute("SELECT * FROM items").fetch_arrow_table()
print(tbl.to_pandas())
```
```text
item value count
0 jeans 20.00 1
1 hammer 42.20 2
2 laptop 2000.00 1
3 chainsaw 500.00 10
4 iphone 300.00 2
``` | 723 |
|
Client APIs | Python | Python DB API | The standard DuckDB Python API provides a SQL interface compliant with the [DB-API 2.0 specification described by PEP 249](https://www.python.org/dev/peps/pep-0249/) similar to the [SQLite Python API](https://docs.python.org/3.7/library/sqlite3.html). | 67 |
|
Client APIs | Python | Connection | To use the module, you must first create a `DuckDBPyConnection` object that represents a connection to a database.
This is done through the [`duckdb.connect`](#docs:api:python:reference:index::duckdb.connect) method.
The 'config' keyword argument can be used to provide a `dict` that contains key->value pairs referencing [settings](#docs:configuration:overview::configuration-reference) understood by DuckDB.
#### In-Memory Connection {#docs:api:python:dbapi::in-memory-connection}
The special value `:memory:` can be used to create an **in-memory database**. Note that for an in-memory database no data is persisted to disk (i.e., all data is lost when you exit the Python process). | 163 |
|
Client APIs | Python | Connection | Named in-memory Connections | The special value `:memory:` can also be postfixed with a name, for example: `:memory:conn3`.
When a name is provided, subsequent `duckdb.connect` calls will create a new connection to the same database, sharing the catalogs (views, tables, macros, etc.).
Using `:memory:` without a name will always create a new and separate database instance.
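A minimal sketch of this behavior (the name `conn3` is arbitrary): two connections opened with the same name see the same tables.
```python
import duckdb

con1 = duckdb.connect(":memory:conn3")
con1.execute("CREATE TABLE shared_tbl AS SELECT 42 AS i")

# a second connection to the same named in-memory database shares the catalog
con2 = duckdb.connect(":memory:conn3")
print(con2.sql("SELECT i FROM shared_tbl").fetchall())  # [(42,)]
```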
#### Default Connection {#docs:api:python:dbapi::default-connection}
By default we create an (unnamed) **in-memory database** that lives inside the `duckdb` module.
Every method of `DuckDBPyConnection` is also available on the `duckdb` module; this default connection is what those methods use.
The special value `:default:` can be used to get this default connection.
#### File-Based Connection {#docs:api:python:dbapi::file-based-connection}
If the `database` is a file path, a connection to a persistent database is established.
If the file does not exist the file will be created (the extension of the file is irrelevant and can be `.db`, `.duckdb` or anything else). | 247 |
Client APIs | Python | Connection | `read_only` Connections | If you would like to connect in read-only mode, you can set the `read_only` flag to `True`. If the file does not exist, it is **not** created when connecting in read-only mode.
Read-only mode is required if multiple Python processes want to access the same database file at the same time.
```python
import duckdb
duckdb.execute("CREATE TABLE tbl AS SELECT 42 a")
con = duckdb.connect(":default:")
con.sql("SELECT * FROM tbl")
# or
duckdb.default_connection.sql("SELECT * FROM tbl")
```
```text
βββββββββ
β a β
β int32 β
βββββββββ€
β 42 β
βββββββββ
```
```python
import duckdb
# to start an in-memory database
con = duckdb.connect(database = ":memory:")
# to use a database file (not shared between processes)
con = duckdb.connect(database = "my-db.duckdb", read_only = False)
# to use a database file (shared between processes)
con = duckdb.connect(database = "my-db.duckdb", read_only = True)
# to explicitly get the default connection
con = duckdb.connect(database = ":default:")
```
If you want to create a second connection to an existing database, you can use the `cursor()` method. This might be useful for example to allow parallel threads running queries independently. A single connection is thread-safe but is locked for the duration of the queries, effectively serializing database access in this case.
Connections are closed implicitly when they go out of scope or if they are explicitly closed using `close()`. Once the last connection to a database instance is closed, the database instance is closed as well. | 373 |
Client APIs | Python | Querying | SQL queries can be sent to DuckDB using the `execute()` method of connections. Once a query has been executed, results can be retrieved using the `fetchone` and `fetchall` methods on the connection. `fetchall` will retrieve all results and complete the transaction. `fetchone` will retrieve a single row of results each time that it is invoked until no more results are available. The transaction will only close once `fetchone` is called and there are no more results remaining (the return value will be `None`). As an example, in the case of a query only returning a single row, `fetchone` should be called once to retrieve the results and a second time to close the transaction. Below are some short examples:
```python
# create a table
con.execute("CREATE TABLE items (item VARCHAR, value DECIMAL(10, 2), count INTEGER)")
# insert two items into the table
con.execute("INSERT INTO items VALUES ('jeans', 20.0, 1), ('hammer', 42.2, 2)")
# retrieve the items again
con.execute("SELECT * FROM items")
print(con.fetchall())
# [('jeans', Decimal('20.00'), 1), ('hammer', Decimal('42.20'), 2)]
# retrieve the items one at a time
con.execute("SELECT * FROM items")
print(con.fetchone())
# ('jeans', Decimal('20.00'), 1)
print(con.fetchone())
# ('hammer', Decimal('42.20'), 2)
print(con.fetchone()) # This closes the transaction. Any subsequent calls to .fetchone will return None
# None
```
The `description` property of the connection object contains the column names as per the standard.
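For example, a small sketch reusing the connection and `items` table from above – the first element of each tuple in `description` is the column name:
```python
con.execute("SELECT item, value FROM items")
# each entry of description is a 7-tuple per DB-API; index 0 is the column name
print([col[0] for col in con.description])  # ['item', 'value']
```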
#### Prepared Statements {#docs:api:python:dbapi::prepared-statements}
DuckDB also supports [prepared statements](#docs:sql:query_syntax:prepared_statements) in the API with the `execute` and `executemany` methods. The values may be passed as an additional parameter after a query that contains `?` or `$1` (dollar symbol and a number) placeholders. Using the `?` notation adds the values in the same sequence as passed within the Python parameter. Using the `$` notation allows for values to be reused within the SQL statement based on the number and index of the value found within the Python parameter. Values are converted according to the [conversion rules](#docs:api:python:conversion::object-conversion-python-object-to-duckdb).
Here are some examples. First, insert a row using a [prepared statement](#docs:sql:query_syntax:prepared_statements):
```python
con.execute("INSERT INTO items VALUES (?, ?, ?)", ["laptop", 2000, 1])
```
Second, insert several rows using a [prepared statement](#docs:sql:query_syntax:prepared_statements):
```python
con.executemany("INSERT INTO items VALUES (?, ?, ?)", [["chainsaw", 500, 10], ["iphone", 300, 2]] )
```
Query the database using a [prepared statement](#docs:sql:query_syntax:prepared_statements):
```python
con.execute("SELECT item FROM items WHERE value > ?", [400])
print(con.fetchall())
```
```text
[('laptop',), ('chainsaw',)]
```
Query using the `$` notation for a [prepared statement](#docs:sql:query_syntax:prepared_statements) and reused values:
```python
con.execute("SELECT $1, $1, $2", ["duck", "goose"])
print(con.fetchall())
```
```text
[('duck', 'duck', 'goose')]
```
> **Warning.** Do *not* use `executemany` to insert large amounts of data into DuckDB. See the [data ingestion page](#docs:api:python:data_ingestion) for better options.
Client APIs | Python | Named Parameters | Besides the standard unnamed parameters, like `$1`, `$2` etc., it's also possible to supply named parameters, like `$my_parameter`.
When using named parameters, you have to provide a dictionary mapping of `str` to value in the `parameters` argument.
An example use is the following:
```python
import duckdb
res = duckdb.execute("""
    SELECT
        $my_param,
        $other_param,
        $also_param
    """,
    {
        "my_param": 5,
        "other_param": "DuckDB",
        "also_param": [42]
    }
).fetchall()
print(res)
```
```text
[(5, 'DuckDB', [42])]
```
Client APIs | Python | Relational API | The Relational API is an alternative API that can be used to incrementally construct queries. The API is centered around `DuckDBPyRelation` nodes. The relations can be seen as symbolic representations of SQL queries. They do not hold any data and nothing is executed until a method that triggers execution is called.
Client APIs | Python | Constructing Relations | Relations can be created from SQL queries using the `duckdb.sql` method. Alternatively, they can be created from the various data ingestion methods (`read_parquet`, `read_csv`, `read_json`).
For example, here we create a relation from a SQL query:
```python
import duckdb
rel = duckdb.sql("SELECT * FROM range(10_000_000_000) tbl(id)")
rel.show()
```
```text
┌────────────────────────┐
│           id           │
│          int64         │
├────────────────────────┤
│                      0 │
│                      1 │
│                      2 │
│                      3 │
│                      4 │
│                      5 │
│                      6 │
│                      7 │
│                      8 │
│                      9 │
│            ·           │
│            ·           │
│            ·           │
│                   9990 │
│                   9991 │
│                   9992 │
│                   9993 │
│                   9994 │
│                   9995 │
│                   9996 │
│                   9997 │
│                   9998 │
│                   9999 │
├────────────────────────┤
│         ? rows         │
│ (>9999 rows, 20 shown) │
└────────────────────────┘
```
Note how we are constructing a relation that computes an immense amount of data (10 billion rows, roughly 74 GB of data). The relation is constructed instantly, and we can even print it instantly.
When printing a relation using `show` or displaying it in the terminal, the first `10K` rows are fetched. If there are more than `10K` rows, the output window will show `>9999 rows` (as the number of rows in the relation is unknown).
Client APIs | Python | Data Ingestion | Outside of SQL queries, the following methods are provided to construct relation objects from external data.
* `from_arrow`
* `from_df`
* `read_csv`
* `read_json`
* `read_parquet`
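For example, a relation can be constructed from a Pandas DataFrame or from a file on disk (the DataFrame contents and the `example.csv` path below are illustrative):

```python
import duckdb
import pandas as pd

# from a Pandas DataFrame
df = pd.DataFrame({"id": [1, 2, 3]})
rel = duckdb.from_df(df)

# from a CSV file (assumes an 'example.csv' file exists)
rel_csv = duckdb.read_csv("example.csv")
```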
Client APIs | Python | SQL Queries | Relation objects can be queried using SQL through [replacement scans](#docs:api:c:replacement_scans). If you have a relation object stored in a variable, you can refer to that variable as if it were a SQL table (in the `FROM` clause). This allows you to incrementally build queries using relation objects.
```python
import duckdb
rel = duckdb.sql("SELECT * FROM range(1_000_000) tbl(id)")
duckdb.sql("SELECT sum(id) FROM rel").show()
```
```text
┌──────────────┐
│   sum(id)    │
│    int128    │
├──────────────┤
│ 499999500000 │
└──────────────┘
```
Client APIs | Python | Operations | There are a number of operations that can be performed on relations. These are all shorthand for running SQL queries, and they themselves return relations again.
#### `aggregate(expr, groups = {})` {#docs:api:python:relational_api::aggregateexpr-groups--}
Apply an (optionally grouped) aggregate over the relation. The system will automatically group by any columns that are not aggregates.
```python
import duckdb
rel = duckdb.sql("SELECT * FROM range(1_000_000) tbl(id)")
rel.aggregate("id % 2 AS g, sum(id), min(id), max(id)")
```
```text
┌───────┬──────────────┬─────────┬─────────┐
│   g   │   sum(id)    │ min(id) │ max(id) │
│ int64 │    int128    │  int64  │  int64  │
├───────┼──────────────┼─────────┼─────────┤
│     0 │ 249999500000 │       0 │  999998 │
│     1 │ 250000000000 │       1 │  999999 │
└───────┴──────────────┴─────────┴─────────┘
```
#### `except_(rel)` {#docs:api:python:relational_api::except_rel}
Select all rows in the first relation that do not occur in the second relation. The relations must have the same number of columns.
```python
import duckdb
r1 = duckdb.sql("SELECT * FROM range(10) tbl(id)")
r2 = duckdb.sql("SELECT * FROM range(5) tbl(id)")
r1.except_(r2).show()
```
```text
┌───────┐
│  id   │
│ int64 │
├───────┤
│     5 │
│     6 │
│     7 │
│     8 │
│     9 │
└───────┘
```
#### `filter(condition)` {#docs:api:python:relational_api::filtercondition}
Apply the given condition to the relation, filtering any rows that do not satisfy the condition.
```python
import duckdb
rel = duckdb.sql("SELECT * FROM range(1_000_000) tbl(id)")
rel.filter("id > 5").limit(3).show()
```
```text
┌───────┐
│  id   │
│ int64 │
├───────┤
│     6 │
│     7 │
│     8 │
└───────┘
```
#### `intersect(rel)` {#docs:api:python:relational_api::intersectrel}
Select the intersection of two relations, returning all rows that occur in both relations. The relations must have the same number of columns.
```python
import duckdb
r1 = duckdb.sql("SELECT * FROM range(10) tbl(id)")
r2 = duckdb.sql("SELECT * FROM range(5) tbl(id)")
r1.intersect(r2).show()
```
```text
┌───────┐
│  id   │
│ int64 │
├───────┤
│     0 │
│     1 │
│     2 │
│     3 │
│     4 │
└───────┘
```
#### `join(rel, condition, type = "inner")` {#docs:api:python:relational_api::joinrel-condition-type--inner}
Combine two relations, joining them based on the provided condition.
```python
import duckdb
r1 = duckdb.sql("SELECT * FROM range(5) tbl(id)").set_alias("r1")
r2 = duckdb.sql("SELECT * FROM range(10, 15) tbl(id)").set_alias("r2")
r1.join(r2, "r1.id + 10 = r2.id").show()
```
```text
┌───────┬───────┐
│  id   │  id   │
│ int64 │ int64 │
├───────┼───────┤
│     0 │    10 │
│     1 │    11 │
│     2 │    12 │
│     3 │    13 │
│     4 │    14 │
└───────┴───────┘
```
#### `limit(n, offset = 0)` {#docs:api:python:relational_api::limitn-offset--0}
Select the first *n* rows, optionally offset by *offset*.
```python
import duckdb
rel = duckdb.sql("SELECT * FROM range(1_000_000) tbl(id)")
rel.limit(3).show()
```
```text
┌───────┐
│  id   │
│ int64 │
├───────┤
│     0 │
│     1 │
│     2 │
└───────┘
```
#### `order(expr)` {#docs:api:python:relational_api::orderexpr}
Sort the relation by the given set of expressions.
```python
import duckdb
rel = duckdb.sql("SELECT * FROM range(1_000_000) tbl(id)")
rel.order("id DESC").limit(3).show()
```
```text
┌────────┐
│   id   │
│ int64  │
├────────┤
│ 999999 │
│ 999998 │
│ 999997 │
└────────┘
```
#### `project(expr)` {#docs:api:python:relational_api::projectexpr}
Apply the given expression to each row in the relation.
```python
import duckdb
rel = duckdb.sql("SELECT * FROM range(1_000_000) tbl(id)")
rel.project("id + 10 AS id_plus_ten").limit(3).show()
```
```text
┌─────────────┐
│ id_plus_ten │
│    int64    │
├─────────────┤
│          10 │
│          11 │
│          12 │
└─────────────┘
```
#### `union(rel)` {#docs:api:python:relational_api::unionrel}
Combine two relations, returning all rows in `r1` followed by all rows in `r2`. The relations must have the same number of columns.
```python
import duckdb
r1 = duckdb.sql("SELECT * FROM range(5) tbl(id)")
r2 = duckdb.sql("SELECT * FROM range(10, 15) tbl(id)")
r1.union(r2).show()
```
```text
┌───────┐
│  id   │
│ int64 │
├───────┤
│     0 │
│     1 │
│     2 │
│     3 │
│     4 │
│    10 │
│    11 │
│    12 │
│    13 │
│    14 │
└───────┘
```
Client APIs | Python | Result Output | The result of relations can be converted to various types of Python structures, see the [result conversion page](#docs:api:python:conversion) for more information.
The result of relations can also be written directly to files using the methods below; a short example follows the list.
* [`write_csv`](#docs:api:python:reference:index::duckdb.DuckDBPyRelation.write_csv)
* [`write_parquet`](#docs:api:python:reference:index::duckdb.DuckDBPyRelation.write_parquet)
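For example, a relation can be written straight to disk (the output file names below are illustrative):

```python
import duckdb

rel = duckdb.sql("SELECT * FROM range(10) tbl(id)")
rel.write_csv("out.csv")          # write the relation's result as CSV
rel.write_parquet("out.parquet")  # write the relation's result as Parquet
```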
Client APIs | Python | Python Function API | You can create a DuckDB user-defined function (UDF) from a Python function so it can be used in SQL queries.
Similarly to regular [functions](#docs:sql:functions:overview), they need to have a name, a return type and parameter types.
Here is an example using a Python function that calls a third-party library.
```python
import duckdb
from duckdb.typing import *
from faker import Faker
def generate_random_name():
    fake = Faker()
    return fake.name()
duckdb.create_function("random_name", generate_random_name, [], VARCHAR)
res = duckdb.sql("SELECT random_name()").fetchall()
print(res)
```
```text
[('Gerald Ashley',)]
```
Client APIs | Python | Creating Functions | To register a Python UDF, use the `create_function` method from a DuckDB connection. Here is the syntax:
```python
import duckdb
con = duckdb.connect()
con.create_function(name, function, parameters, return_type)
```
The `create_function` method takes the following parameters (a combined example follows the list):
1. **name**: A string representing the unique name of the UDF within the connection catalog.
2. **function**: The Python function you wish to register as a UDF.
3. **parameters**: Scalar functions can operate on one or more columns. This parameter takes a list of column types used as input.
4. **return_type**: Scalar functions return one element per row. This parameter specifies the return type of the function.
5. **type** (Optional): DuckDB supports both built-in Python types and PyArrow Tables. By default, built-in types are assumed, but you can specify `type = 'arrow'` to use PyArrow Tables.
6. **null_handling** (Optional): By default, null values are automatically handled as Null-In Null-Out. Users can specify a desired behavior for null values by setting `null_handling = 'special'`.
7. **exception_handling** (Optional): By default, when an exception is thrown from the Python function, it will be re-thrown in Python. Users can disable this behavior, and instead return `null`, by setting this parameter to `'return_null'`.
8. **side_effects** (Optional): By default, functions are expected to produce the same result for the same input. If the result of a function is impacted by any type of randomness, `side_effects` must be set to `True`.
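For example, here is a minimal sketch that combines the required arguments with two of the optional ones (the function and its name are illustrative):

```python
import duckdb
from duckdb.typing import BIGINT, VARCHAR

def describe_number(x):
    return f"number {x}"

con = duckdb.connect()
con.create_function(
    "describe_number",   # name in the catalog
    describe_number,     # the Python function
    [BIGINT],            # parameter types
    VARCHAR,             # return type
    type="native",       # built-in Python values (the default)
    side_effects=False,  # pure function (the default)
)
print(con.sql("SELECT describe_number(42)").fetchall())
# [('number 42',)]
```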
To unregister a UDF, you can call the `remove_function` method with the UDF name:
```python
con.remove_function(name)
```
Client APIs | Python | Type Annotation | When the function has type annotations, it is often possible to leave out all of the optional parameters.
Using `DuckDBPyType`, we can implicitly convert many known types to DuckDB's type system.
For example:
```python
import duckdb
def my_function(x: int) -> str:
    return x
duckdb.create_function("my_func", my_function)
print(duckdb.sql("SELECT my_func(42)"))
```
```text
┌─────────────┐
│ my_func(42) │
│   varchar   │
├─────────────┤
│ 42          │
└─────────────┘
```
If only the parameter list types can be inferred, you'll need to pass in `None` as `parameters` so that the return type can still be supplied explicitly.
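For example, here is a minimal sketch where the parameter types are inferred from the annotation but the return type is supplied explicitly:

```python
import duckdb
from duckdb.typing import BIGINT

def plus_one(x: int):  # parameter type annotated, return type not
    return x + 1

# pass None for parameters (they are inferred) so the return type can be given
duckdb.create_function("plus_one", plus_one, None, BIGINT)
print(duckdb.sql("SELECT plus_one(41)").fetchall())
# [(42,)]
```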
Client APIs | Python | Null Handling | By default, when a function receives a `NULL` value, it instantly returns `NULL` as part of the default `NULL` handling.
When this is not desired, you need to explicitly set the `null_handling` parameter to `"special"`.
```python
import duckdb
from duckdb.typing import *
def dont_intercept_null(x):
    return 5
duckdb.create_function("dont_intercept", dont_intercept_null, [BIGINT], BIGINT)
res = duckdb.sql("SELECT dont_intercept(NULL)").fetchall()
print(res)
```
```text
[(None,)]
```
With `null_handling="special"`:
```python
import duckdb
from duckdb.typing import *
def dont_intercept_null(x):
    return 5
duckdb.create_function("dont_intercept", dont_intercept_null, [BIGINT], BIGINT, null_handling="special")
res = duckdb.sql("SELECT dont_intercept(NULL)").fetchall()
print(res)
```
```text
[(5,)]
```
Client APIs | Python | Exception Handling | By default, when an exception is thrown from the Python function, we'll forward (re-throw) the exception.
If you want to disable this behavior and instead return `NULL`, you'll need to set the `exception_handling` parameter to `"return_null"`.
```python
import duckdb
from duckdb.typing import *
def will_throw():
    raise ValueError("ERROR")
duckdb.create_function("throws", will_throw, [], BIGINT)
try:
    res = duckdb.sql("SELECT throws()").fetchall()
except duckdb.InvalidInputException as e:
    print(e)
duckdb.create_function("doesnt_throw", will_throw, [], BIGINT, exception_handling="return_null")
res = duckdb.sql("SELECT doesnt_throw()").fetchall()
print(res)
```
```console
Invalid Input Error: Python exception occurred while executing the UDF: ValueError: ERROR
At:
...(5): will_throw
...(9): <module>
```
```text
[(None,)]
```
Client APIs | Python | Side Effects | By default, DuckDB will assume the created function is a *pure* function, meaning it will produce the same output when given the same input.
If your function does not follow that rule, for example because it makes use of randomness, then you will need to mark the function as having `side_effects`.
For example, this function will produce a new count for every invocation:
```python
def count() -> int:
    old = count.counter
    count.counter += 1
    return old

count.counter = 0
```
If we create this function without marking it as having side effects, the result will be the following:
```python
import duckdb

con = duckdb.connect()
con.create_function("my_counter", count, side_effects = False)
res = con.sql("SELECT my_counter() FROM range(10)").fetchall()
print(res)
```
```text
[(0,), (0,), (0,), (0,), (0,), (0,), (0,), (0,), (0,), (0,)]
```
This is obviously not the desired result. When we add `side_effects = True`, the result is as we would expect:
```python
con.remove_function("my_counter")
count.counter = 0
con.create_function("my_counter", count, side_effects = True)
res = con.sql("SELECT my_counter() FROM range(10)").fetchall()
print(res)
```
```text
[(0,), (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,), (9,)]
```
Client APIs | Python | Python Function Types | Currently, two function types are supported: `native` (default) and `arrow`.
#### Arrow {#docs:api:python:function::arrow}
If the function is expected to receive arrow arrays, set the `type` parameter to `'arrow'`.
This lets the system know to provide Arrow arrays of up to `STANDARD_VECTOR_SIZE` tuples to the function, and to expect an array with the same number of tuples to be returned from the function.
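For example, here is a minimal sketch of an `arrow` UDF; it assumes `pyarrow` is installed and uses its `utf8_upper` compute function:

```python
import duckdb
import pyarrow.compute as pc
from duckdb.typing import VARCHAR

def arrow_upper(texts):
    # receives an Arrow array of up to STANDARD_VECTOR_SIZE values
    # and must return an array of the same length
    return pc.utf8_upper(texts)

duckdb.create_function("arrow_upper", arrow_upper, [VARCHAR], VARCHAR, type="arrow")
print(duckdb.sql("SELECT arrow_upper('quack')").fetchall())
# [('QUACK',)]
```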
#### Native {#docs:api:python:function::native}
When the function type is set to `native`, the function will be provided with a single tuple at a time, and it is expected to return a single value.
This can be useful to interact with Python libraries that don't operate on Arrow, such as `faker`:
```python
import duckdb
from duckdb.typing import *
from faker import Faker
def random_date():
    fake = Faker()
    return fake.date_between()
duckdb.create_function("random_date", random_date, [], DATE, type="native")
res = duckdb.sql("SELECT random_date()").fetchall()
print(res)
```
```text
[(datetime.date(2019, 5, 15),)]
```
Client APIs | Python | Types API | The `DuckDBPyType` class represents a type instance of our [data types](#docs:sql:data_types:overview).
Client APIs | Python | Converting from Other Types | To make the API as easy to use as possible, we have added implicit conversions from existing type objects to a DuckDBPyType instance.
This means that wherever a DuckDBPyType object is expected, it is also possible to provide any of the options listed below.
#### Python Built-ins {#docs:api:python:types::python-built-ins}
The table below shows the mapping of Python Built-in types to DuckDB type.
<div class="narrow_table monospace_table"></div>
| Built-in types | DuckDB type |
|:---------------|:------------|
| bool | BOOLEAN |
| bytearray | BLOB |
| bytes | BLOB |
| float | DOUBLE |
| int | BIGINT |
| str | VARCHAR |
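As a quick check of the mapping above, the built-in types can be converted explicitly:

```python
import duckdb

print(duckdb.typing.DuckDBPyType(str))    # VARCHAR
print(duckdb.typing.DuckDBPyType(float))  # DOUBLE
print(duckdb.typing.DuckDBPyType(bytes))  # BLOB
```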
#### Numpy DTypes {#docs:api:python:types::numpy-dtypes}
The table below shows the mapping of Numpy DType to DuckDB type.
<div class="narrow_table monospace_table"></div>
| Type | DuckDB type |
|:------------|:------------|
| bool | BOOLEAN |
| float32 | FLOAT |
| float64 | DOUBLE |
| int16 | SMALLINT |
| int32 | INTEGER |
| int64 | BIGINT |
| int8 | TINYINT |
| uint16 | USMALLINT |
| uint32 | UINTEGER |
| uint64 | UBIGINT |
| uint8 | UTINYINT |
#### Nested Types {#docs:api:python:types::nested-types}
Client APIs | Python | Converting from Other Types | `list[child_type]` | `list` type objects map to a `LIST` type of the child type, which can also be arbitrarily nested.
```python
import duckdb
from typing import Union
print(duckdb.typing.DuckDBPyType(list[dict[Union[str, int], str]]))
```
```text
MAP(UNION(u1 VARCHAR, u2 BIGINT), VARCHAR)[]
```
Client APIs | Python | Converting from Other Types | `dict[key_type, value_type]` | `dict` type objects map to a `MAP` type of the key type and the value type.
```python
import duckdb
print(duckdb.typing.DuckDBPyType(dict[str, int]))
```
```text
MAP(VARCHAR, BIGINT)
```
Client APIs | Python | Converting from Other Types | `{'a': field_one, 'b': field_two, .., 'n': field_n}` | `dict` objects map to a `STRUCT` composed of the keys and values of the dict.
```python
import duckdb
print(duckdb.typing.DuckDBPyType({'a': str, 'b': int}))
```
```text
STRUCT(a VARCHAR, b BIGINT)
```
Client APIs | Python | Converting from Other Types | `Union[⟨type_1⟩, ... ⟨type_n⟩]` | `typing.Union` objects map to a `UNION` type of the provided types.
```python
import duckdb
from typing import Union
print(duckdb.typing.DuckDBPyType(Union[int, str, bool, bytearray]))
```
```text
UNION(u1 BIGINT, u2 VARCHAR, u3 BOOLEAN, u4 BLOB)
```
#### Creation Functions {#docs:api:python:types::creation-functions}
For the built-in types, you can use the constants defined in `duckdb.typing`:
<div class="narrow_table monospace_table"></div>
| DuckDB type |
|:---------------|
| BIGINT |
| BIT |
| BLOB |
| BOOLEAN |
| DATE |
| DOUBLE |
| FLOAT |
| HUGEINT |
| INTEGER |
| INTERVAL |
| SMALLINT |
| SQLNULL |
| TIME_TZ |
| TIME |
| TIMESTAMP_MS |
| TIMESTAMP_NS |
| TIMESTAMP_S |
| TIMESTAMP_TZ |
| TIMESTAMP |
| TINYINT |
| UBIGINT |
| UHUGEINT |
| UINTEGER |
| USMALLINT |
| UTINYINT |
| UUID |
| VARCHAR |
For the complex types, there are creation methods available on the `DuckDBPyConnection` object or the `duckdb` module.
Anywhere a `DuckDBPyType` is accepted, we will also accept one of the type objects that can implicitly convert to a `DuckDBPyType`.
Client APIs | Python | Converting from Other Types | `list_type` | `array_type` | Parameters:
* `child_type: DuckDBPyType`
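For example, a minimal sketch using the module-level helper (the same method exists on a connection object):

```python
import duckdb
from duckdb.typing import BIGINT

print(duckdb.list_type(BIGINT))
# BIGINT[]
```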
Client APIs | Python | Converting from Other Types | `struct_type` | `row_type` | Parameters:
* `fields: Union[list[DuckDBPyType], dict[str, DuckDBPyType]]`
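For example, a minimal sketch using a dict of field names to types:

```python
import duckdb
from duckdb.typing import BIGINT, VARCHAR

print(duckdb.struct_type({"a": VARCHAR, "b": BIGINT}))
# STRUCT(a VARCHAR, b BIGINT)
```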