h1 | h2 | h3 | h5 | content | tokens |
---|---|---|---|---|---|
Development | Building | Prerequisites | DuckDB needs CMake and a C++11-compliant compiler (e.g., GCC, Apple-Clang, MSVC).
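To quickly check whether the required tools are available on your system, you can query their versions (a minimal sanity check; the compiler binary differs per platform):
```bash
# verify that CMake and a C++ compiler are on the PATH
cmake --version
g++ --version   # or: clang++ --version (macOS); on Windows, check the MSVC toolchain
```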
Additionally, we recommend using the [Ninja build system](https://ninja-build.org/), which automatically parallelizes the build process. | 56 |
|
Development | Building | UNIX-Like Systems | #### macOS Packages {#docs:dev:building:build_instructions::macos-packages}
Install Xcode and [Homebrew](https://brew.sh/). Then, install the required packages with:
```bash
brew install git cmake ninja
```
#### Linux Packages {#docs:dev:building:build_instructions::linux-packages}
Install the required packages with the package manager of your distribution. | 83 |
|
Development | Building | UNIX-Like Systems | Ubuntu and Debian | ```bash
sudo apt-get update && sudo apt-get install -y git g++ cmake ninja-build libssl-dev
``` | 25 |
Development | Building | UNIX-Like Systems | Fedora, CentOS, and Red Hat | ```bash
sudo yum install -y git g++ cmake ninja-build openssl-devel
``` | 18 |
Development | Building | UNIX-Like Systems | Alpine Linux | ```bash
apk add g++ git make cmake ninja
```
Note that Alpine Linux uses musl libc as its C standard library.
There are no official DuckDB binaries distributed for musl libc, but DuckDB can be built with it manually by following the instructions on this page.
#### Cloning the Repository {#docs:dev:building:build_instructions::cloning-the-repository}
Clone the DuckDB repository:
```bash
git clone https://github.com/duckdb/duckdb
```
We recommend creating a full clone of the repository. Note that the directory uses approximately 1.3 GB of disk space.
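If disk space or bandwidth is a concern, a shallow clone also works for one-off builds (a sketch; since tags may be missing, version detection can require `OVERRIDE_GIT_DESCRIBE`, see the “Overriding Git Hash and Version” section):
```bash
# shallow clone: smaller download, but without the full history and tags
git clone --depth 1 https://github.com/duckdb/duckdb
```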
#### Building DuckDB {#docs:dev:building:build_instructions::building-duckdb}
To build DuckDB, we use a Makefile which in turn calls into CMake. We also advise using [Ninja](https://ninja-build.org/manual.html) as the generator for CMake.
```bash
GEN=ninja make
```
> **Best practice.** It is not advised to call CMake directly, as the Makefile sets certain variables that are crucial to properly building the package.
Once the build finishes successfully, you can find the `duckdb` binary in the `build` directory:
```bash
build/release/duckdb
```
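As a quick smoke test, you can run a query directly against the freshly built binary (a sketch):
```bash
build/release/duckdb -c "SELECT version();"
```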
#### Linking Extensions {#docs:dev:building:build_instructions::linking-extensions}
For testing, it can be useful to build DuckDB with statically linked core extensions. To do so, run:
```bash
CORE_EXTENSIONS='autocomplete;httpfs;icu;parquet;json' GEN=ninja make
```
This option also accepts out-of-tree extensions:
```bash
CORE_EXTENSIONS='autocomplete;httpfs;icu;parquet;json;delta' GEN=ninja make
```
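To verify that an extension was statically linked into the binary, you can query the `duckdb_extensions()` table function (a sketch; the exact column set may differ across versions):
```bash
build/release/duckdb -c "SELECT extension_name, loaded FROM duckdb_extensions() WHERE extension_name = 'httpfs';"
```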
For more details, see the [“Building Extensions” page](#docs:dev:building:building_extensions). | 404 |
Development | Building | Windows | On Windows, DuckDB requires the [Microsoft Visual C++ Redistributable package](https://learn.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist) both as a build-time and runtime dependency. Note that unlike the build process on UNIX-like systems, the Windows builds directly call CMake.
#### Visual Studio {#docs:dev:building:build_instructions::visual-studio}
To build DuckDB on Windows, we recommend using the Visual Studio compiler.
To use it, follow the instructions in the [CI workflow](https://github.com/duckdb/duckdb/blob/52b43b166091c82b3f04bf8af15f0ace18207a64/.github/workflows/Windows.yml#L73):
```batch
python scripts/windows_ci.py
cmake \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_GENERATOR_PLATFORM=x64 \
-DENABLE_EXTENSION_AUTOLOADING=1 \
-DENABLE_EXTENSION_AUTOINSTALL=1 \
-DDUCKDB_EXTENSION_CONFIGS="${GITHUB_WORKSPACE}/.github/config/bundled_extensions.cmake" \
-DDISABLE_UNITY=1 \
-DOVERRIDE_GIT_DESCRIBE="$OVERRIDE_GIT_DESCRIBE"
cmake --build . --config Release --parallel
```
#### MSYS2 and MinGW64 {#docs:dev:building:build_instructions::msys2-and-mingw64}
DuckDB on Windows can also be built with [MSYS2](https://www.msys2.org/) and [MinGW64](https://www.mingw-w64.org/).
Note that this build is only supported for compatibility reasons and should only be used if the Visual Studio build is not feasible on a given platform.
To build DuckDB with MinGW64, install the required dependencies using Pacman.
When prompted with `Enter a selection (default=all)`, select the default option by pressing `Enter`.
```batch
pacman -Syu git mingw-w64-x86_64-toolchain mingw-w64-x86_64-cmake mingw-w64-x86_64-ninja
git clone https://github.com/duckdb/duckdb
cd duckdb
cmake -G "Ninja" -DCMAKE_BUILD_TYPE=Release -DBUILD_EXTENSIONS="icu;parquet;json"
cmake --build . --config Release
```
Once the build finishes successfully, you can find the `duckdb.exe` binary in the repository's directory:
```batch
./duckdb.exe
``` | 527 |
|
Development | Building | Raspberry Pi (32-bit) | On 32-bit Raspberry Pi boards, you need to add the [`-latomic` link flag](https://github.com/duckdb/duckdb/issues/13855#issuecomment-2341539339).
As extensions are not distributed for this platform, it's recommended to also include them in the build.
For example:
```bash
mkdir build
cd build
cmake .. \
-DCORE_EXTENSIONS="httpfs;json;parquet" \
-DDUCKDB_EXTRA_LINK_FLAGS="-latomic"
make -j4
``` | 111 |
|
Development | Building | Troubleshooting | #### The Build Runs Out of Memory {#docs:dev:building:build_instructions::the-build-runs-out-of-memory}
Ninja parallelizes the build, which can cause out-of-memory issues on systems with limited resources.
These issues have also been reported to occur on Alpine Linux.
In these cases, avoid using Ninja by setting the Makefile generator to empty (`GEN=`):
```bash
GEN= make
``` | 94 |
|
Development | Building | Build Types | DuckDB can be built with many different settings; most of these correspond directly to CMake settings, but not all of them.
#### `release` {#docs:dev:building:build_configuration::release}
This build is stripped of assertions, debug symbols, and debug code, and is optimized for performance.
#### `debug` {#docs:dev:building:build_configuration::debug}
This build runs with all the debug information, including symbols, assertions and `#ifdef DEBUG` blocks.
Due to these, binaries of this build are expected to be slow.
Note: the special debug defines are not automatically set for this build.
#### `relassert` {#docs:dev:building:build_configuration::relassert}
This build does not trigger the `#ifdef DEBUG` code blocks, but it still has debug symbols that make it possible to step through the execution with line number information, and `D_ASSERT` checks are still enabled in this build.
Binaries of this build mode are significantly faster than those of the `debug` mode.
#### `reldebug` {#docs:dev:building:build_configuration::reldebug}
This build is similar to `relassert` in many ways; however, assertions are also stripped in this build.
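As a sketch of how these build types are typically invoked through the Makefile (assuming the targets carry the same names as the headings above):
```bash
GEN=ninja make release    # stripped and optimized (the default)
GEN=ninja make debug      # symbols, assertions, and #ifdef DEBUG blocks
GEN=ninja make relassert  # optimized, but D_ASSERT checks remain enabled
GEN=ninja make reldebug   # optimized with debug symbols, assertions stripped
```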
#### `benchmark` {#docs:dev:building:build_configuration::benchmark}
This build is a shorthand for `release` with `BUILD_BENCHMARK=1` set.
#### `tidy-check` {#docs:dev:building:build_configuration::tidy-check}
This creates a build and then runs [Clang-Tidy](https://clang.llvm.org/extra/clang-tidy/) to check for issues or style violations through static analysis.
The CI also runs this check and will fail if the check reports issues.
#### `format-fix` | `format-changes` | `format-main` {#docs:dev:building:build_configuration::format-fix--format-changes--format-main}
This doesn't actually create a build, but uses the following format checkers to check for style issues:
* [clang-format](https://clang.llvm.org/docs/ClangFormat.html) to fix format issues in the code.
* [cmake-format](https://cmake-format.readthedocs.io/en/latest/) to fix format issues in the `CMakeLists.txt` files.
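For example, to apply the formatters locally (a sketch, assuming the Makefile target matches the heading name):
```bash
make format-fix
```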
The CI also runs this check and will fail if the check reports issues. | 513 |
|
Development | Building | Extension Selection | [Core DuckDB extensions](#docs:extensions:core_extensions) are the ones maintained by the DuckDB team. These are hosted in the `duckdb` GitHub organization and are served by the `core` extension repository.
Core extensions can be built as part of DuckDB via the `CORE_EXTENSIONS` flag, which takes a semicolon-separated list of the extensions to be built.
```bash
CORE_EXTENSIONS='tpch;httpfs;fts;json;parquet' make
```
For more details, see the [“Building Extensions” page](#docs:dev:building:building_extensions). | 124 |
|
Development | Building | Package Flags | For every package that is maintained by core DuckDB, there exists a flag in the Makefile to enable building the package.
These can be enabled either by setting them in the current environment (e.g., through setup files like `.bashrc` or `.zshrc`) or by setting them before the call to `make`, for example:
```bash
BUILD_PYTHON=1 make debug
```
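Multiple package flags can be combined in a single invocation, together with a generator and build type; for example (a sketch, assuming the respective toolchains are installed):
```bash
BUILD_PYTHON=1 BUILD_JDBC=1 GEN=ninja make
```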
#### `BUILD_PYTHON` {#docs:dev:building:build_configuration::build_python}
When this flag is set, the [Python](#docs:api:python:overview) package is built.
#### `BUILD_SHELL` {#docs:dev:building:build_configuration::build_shell}
When this flag is set, the [CLI](#docs:api:cli:overview) is built. This is usually enabled by default.
#### `BUILD_BENCHMARK` {#docs:dev:building:build_configuration::build_benchmark}
When this flag is set, DuckDB's in-house benchmark suite is built.
More information about this can be found [here](https://github.com/duckdb/duckdb/blob/main/benchmark/README.md).
#### `BUILD_JDBC` {#docs:dev:building:build_configuration::build_jdbc}
When this flag is set, the [Java](#docs:api:java) package is built.
#### `BUILD_ODBC` {#docs:dev:building:build_configuration::build_odbc}
When this flag is set, the [ODBC](#docs:api:odbc:overview) package is built. | 336 |
|
Development | Building | Miscellaneous Flags | #### `DISABLE_UNITY` {#docs:dev:building:build_configuration::disable_unity}
To improve compilation time, we use [Unity Build](https://cmake.org/cmake/help/latest/prop_tgt/UNITY_BUILD.html) to combine translation units.
This can, however, hide missing-include bugs; this flag disables the unity build so that such errors can be detected.
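For example, to build without unity batching (a sketch):
```bash
DISABLE_UNITY=1 GEN=ninja make
```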
#### `DISABLE_SANITIZER` {#docs:dev:building:build_configuration::disable_sanitizer}
In some situations, running an executable that has been built with sanitizers enabled is not supported or can cause problems. Julia is an example of this.
With this flag enabled, the sanitizers are disabled for the build. | 144 |
|
Development | Building | Overriding Git Hash and Version | It is possible to override the Git hash and version when building from source using the `OVERRIDE_GIT_DESCRIBE` environment variable.
This is useful when building from sources that are not part of a complete Git repository (e.g., an archive file with no information on commit hashes and tags).
For example:
```bash
OVERRIDE_GIT_DESCRIBE=v0.10.0-843-g09ea97d0a9 GEN=ninja make
```
This will result in the following output when running `./build/release/duckdb`:
```text
v0.10.1-dev843 09ea97d0a9
...
``` | 136 |
|
Development | Building | Building Extensions | [Extensions](#docs:extensions:overview) can be built from source and installed from the resulting local binary. | 26 |
|
Development | Building | Building Extensions Using Build Flags | To build using extension flags, set the `CORE_EXTENSIONS` flag to the list of extensions that you want to be built.
Extensions will, in most cases, be directly linked into the resulting DuckDB executable.
For example, to build DuckDB with the [`httpfs` extension](#docs:extensions:httpfs:overview), run the following script:
```bash
GEN=ninja CORE_EXTENSIONS='httpfs' make
```
#### Special Extension Flags {#docs:dev:building:building_extensions::special-extension-flags} | 109 |
|
Development | Building | Building Extensions Using Build Flags | `BUILD_JEMALLOC` | When this flag is set, the [`jemalloc` extension](#docs:extensions:jemalloc) is built. | 24 |
Development | Building | Building Extensions Using Build Flags | `BUILD_TPCE` | When this flag is set, the [TPCE](https://www.tpc.org/tpce/) library is built. Unlike TPC-H and TPC-DS, this is not a proper extension and it is not distributed as such. Enabling this allows TPC-E enabled queries through our test suite.
#### Debug Flags {#docs:dev:building:building_extensions::debug-flags} | 83 |
Development | Building | Building Extensions Using Build Flags | `CRASH_ON_ASSERT` | `D_ASSERT(condition)` is used throughout the code; these assertions throw an `InternalException` in debug builds.
With this flag enabled, a triggered assertion will instead directly cause a crash. | 39 |
Development | Building | Building Extensions Using Build Flags | `DISABLE_STRING_INLINE` | In our execution format, `string_t` can “inline” strings that are under a certain length (12 bytes), which means they don't require a separate allocation.
When this flag is set, we disable this and don't inline small strings. | 52 |
Development | Building | Building Extensions Using Build Flags | `DISABLE_MEMORY_SAFETY` | Our data structures that are used extensively throughout the non-performance-critical code have extra checks to ensure memory safety. These checks include:
* Making sure `nullptr` is never dereferenced.
* Making sure index out of bounds accesses don't trigger a crash.
With this flag enabled, we remove these checks; this is mostly done to verify that the performance hit of these checks is negligible. | 79 |
Development | Building | Building Extensions Using Build Flags | `DESTROY_UNPINNED_BLOCKS` | With this flag enabled, previously pinned blocks in the BufferManager are destroyed instantly when they are unpinned, to make sure that there are no situations where this memory is still being used despite no longer being pinned. | 41 |
Development | Building | Building Extensions Using Build Flags | `DEBUG_STACKTRACE` | When a crash or assertion hit occurs in a test, print a stack trace.
This is useful when debugging a crash that is hard to pinpoint with a debugger attached. | 33 |
Development | Building | Using a CMake Configuration File | To build using a CMake configuration file, create an extension configuration file named `extension_config.cmake` with e.g., the following content:
```cmake
duckdb_extension_load(autocomplete)
duckdb_extension_load(fts)
duckdb_extension_load(inet)
duckdb_extension_load(icu)
duckdb_extension_load(json)
duckdb_extension_load(parquet)
```
Build DuckDB as follows:
```bash
GEN=ninja EXTENSION_CONFIGS="extension_config.cmake" make
```
Then, to install the extensions in one go, run:
```bash
# for release builds
cd build/release/extension/
# for debug builds
cd build/debug/extension/
# install extensions
for EXTENSION in *; do
../duckdb -c "INSTALL '${EXTENSION}/${EXTENSION}.duckdb_extension';"
done
``` | 176 |
|
Development | Building | Supported Platforms | DuckDB officially supports the following platforms:
| Platform name | Description |
|--------------------|--------------------------------------------|
| `linux_amd64` | Linux AMD64 |
| `linux_arm64` | Linux ARM64 |
| `osx_amd64` | macOS 12+ (Intel CPUs) |
| `osx_arm64` | macOS 12+ (Apple Silicon: M1, M2, M3 CPUs) |
| `windows_amd64` | Windows 10+ on Intel and AMD CPUs (x86_64) |
| `windows_arm64` | Windows 10+ on ARM CPUs (AArch64) | | 146 |
|
Development | Building | Other Platforms | There are several platforms with varying levels of support.
For some, DuckDB binaries and extensions (or a [subset of extensions](#docs:extensions:working_with_extensions::platforms)) are distributed.
For most platforms, DuckDB can often be [built from source](#::building-duckdb-from-source).
| Platform name | Description |
|------------------------|------------------------------------------------------------------------------------------------------|
| `freebsd_amd64` | FreeBSD AMD64 (x64_64) |
| `freebsd_arm64` | FreeBSD ARM64 |
| `linux_arm64_android` | Android ARM64 |
| `linux_amd64_gcc4`     | Linux AMD64 with GCC 4 (e.g., CentOS 7)                                                                |
| `wasm_eh` | WebAssembly Exception Handling |
| `wasm_mvp` | WebAssembly Minimum Viable Product |
| `windows_amd64_mingw` | Windows 10+ AMD64 (x86_64) with MinGW |
| `windows_amd64_rtools` | Windows 10+ AMD64 (x86_64) for [RTools](https://cran.r-project.org/bin/windows/Rtools/) (deprecated) |
| `windows_arm64_mingw`  | Windows 10+ ARM64 (AArch64) with MinGW                                                                 |
32-bit architectures are officially not supported but it is possible to build DuckDB manually for some of these platforms, e.g., for [Raspberry Pi boards](#docs:dev:building:build_instructions::raspberry-pi-32-bit). | 341 |
|
Development | Building | Building DuckDB from Source | DuckDB can be [built from source](#docs:dev:building:build_instructions) for several other platforms such as FreeBSD, Linux distributions using musl libc, and macOS 11.
For details on free and commercial support, see the [support policy blog post](https://duckdblabs.com/news/2023/10/02/support-policy#platforms). | 77 |
|
Development | Building | R Package: The Build Only Uses a Single Thread | **Problem:**
By default, R compiles packages using a single thread, which causes the build to be slow.
**Solution:**
To parallelize the compilation, create or edit the `~/.R/Makevars` file, and add a line like the following:
```ini
MAKEFLAGS = -j8
```
The above will parallelize the compilation using 8 threads. On Linux/macOS, you can add the following to use all of the machine's threads:
```ini
MAKEFLAGS = -j$(nproc)
```
However, note that the more threads are used, the higher the RAM consumption. If the system runs out of RAM while compiling, then the R session will crash. | 151 |
|
Development | Building | R Package on Linux AArch64: `too many GOT entries` Build Error | **Problem:**
Building the R package on Linux running on an ARM64 architecture (AArch64) may result in the following error message:
```console
/usr/bin/ld: /usr/include/c++/10/bits/basic_string.tcc:206:
warning: too many GOT entries for -fpic, please recompile with -fPIC
```
**Solution:**
Create or edit the `~/.R/Makevars` file. This example also contains the [flag to parallelize the build](#::r-package-the-build-only-uses-a-single-thread):
```ini
ALL_CXXFLAGS = $(PKG_CXXFLAGS) -fPIC $(SHLIB_CXXFLAGS) $(CXXFLAGS)
MAKEFLAGS = -j$(nproc)
```
When making this change, also consider [making the build parallel](#::r-package-the-build-only-uses-a-single-thread). | 186 |
|
Development | Building | Python Package: `No module named 'duckdb.duckdb'` Build Error | **Problem:**
Building the Python package succeeds but the package cannot be imported:
```bash
cd tools/pythonpkg/
python3 -m pip install .
python3 -c "import duckdb"
```
This returns the following error message:
```console
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/duckdb/tools/pythonpkg/duckdb/__init__.py", line 4, in <module>
import duckdb.functional as functional
File "/duckdb/tools/pythonpkg/duckdb/functional/__init__.py", line 1, in <module>
from duckdb.duckdb.functional import (
ModuleNotFoundError: No module named 'duckdb.duckdb'
```
**Solution:**
The problem is caused by Python trying to import from the current working directory.
To work around this, navigate to a different directory (e.g., `cd ..`) and try running Python import again. | 197 |
|
Development | Building | Python Package on macOS: Building the httpfs Extension Fails | **Problem:**
The build fails on macOS when both the [`httpfs` extension](#docs:extensions:httpfs:overview) and the Python package are included:
```bash
GEN=ninja BUILD_PYTHON=1 CORE_EXTENSIONS="httpfs" make
```
```console
ld: library not found for -lcrypto
clang: error: linker command failed with exit code 1 (use -v to see invocation)
error: command '/usr/bin/clang++' failed with exit code 1
ninja: build stopped: subcommand failed.
make: *** [release] Error 1
```
**Solution:**
In general, we recommend using the nightly builds, available under “GitHub main” on the [installation page](https://duckdb.org/docs/installation/).
If you would like to build DuckDB from source, avoid using the `BUILD_PYTHON=1` flag unless you are actively developing the Python library.
Instead, first build the `httpfs` extension (if required), then build and install the Python package separately using pip:
```bash
GEN=ninja CORE_EXTENSIONS="httpfs" make
```
If the next line complains about pybind11 being missing, or `--use-pep517` not being supported, make sure you're using a modern version of pip and setuptools.
`python3-pip` on your OS may not be modern, so you may need to run `python3 -m pip install pip -U` first.
```bash
python3 -m pip install tools/pythonpkg --use-pep517 --user
``` | 327 |
|
Development | Building | Linux: Building the httpfs Extension Fails | **Problem:**
When building the [`httpfs` extension](#docs:extensions:httpfs:overview) on Linux, the build may fail with the following error.
```console
CMake Error at /usr/share/cmake-3.22/Modules/FindPackageHandleStandardArgs.cmake:230 (message):
Could NOT find OpenSSL, try to set the path to OpenSSL root folder in the
system variable OPENSSL_ROOT_DIR (missing: OPENSSL_CRYPTO_LIBRARY
OPENSSL_INCLUDE_DIR)
```
**Solution:**
Install the `libssl-dev` library.
```bash
sudo apt-get install -y libssl-dev
```
Then, build with:
```bash
GEN=ninja CORE_EXTENSIONS="httpfs" make
``` | 154 |
|
Development | Benchmark Suite | DuckDB has an extensive benchmark suite.
When making changes that have potential performance implications, it is important to run these benchmarks to detect potential performance regressions. | 32 |
||
Development | Benchmark Suite | Getting Started | To build the benchmark suite, run the following command in the [DuckDB repository](https://github.com/duckdb/duckdb):
```bash
BUILD_BENCHMARK=1 CORE_EXTENSIONS='tpch' make
``` | 47 |
|
Development | Benchmark Suite | Listing Benchmarks | To list all available benchmarks, run:
```bash
build/release/benchmark/benchmark_runner --list
``` | 23 |
|
Development | Benchmark Suite | Running Benchmarks | #### Running a Single Benchmark {#docs:dev:benchmark::running-a-single-benchmark}
To run a single benchmark, issue the following command:
```bash
build/release/benchmark/benchmark_runner benchmark/micro/nulls/no_nulls_addition.benchmark
```
The output will be printed to `stdout` in CSV format, in the following format:
```text
name	run	timing
benchmark/micro/nulls/no_nulls_addition.benchmark	1	0.121234
benchmark/micro/nulls/no_nulls_addition.benchmark	2	0.121702
benchmark/micro/nulls/no_nulls_addition.benchmark	3	0.122948
benchmark/micro/nulls/no_nulls_addition.benchmark	4	0.122534
benchmark/micro/nulls/no_nulls_addition.benchmark	5	0.124102
```
You can also specify an output file using the `--out` flag. This will write only the timings (delimited by newlines) to that file.
```bash
build/release/benchmark/benchmark_runner benchmark/micro/nulls/no_nulls_addition.benchmark --out=timings.out
```
The output will contain the following:
```text
0.182472
0.185027
0.184163
0.185281
0.182948
```
#### Running Multiple Benchmarks Using a Regular Expression {#docs:dev:benchmark::running-multiple-benchmark-using-a-regular-expression}
You can also use a regular expression to specify which benchmarks to run.
Be careful of shell expansion of certain regex characters (e.g., `*` will likely be expanded by your shell, hence this requires proper quoting or escaping).
```bash
build/release/benchmark/benchmark_runner "benchmark/micro/nulls/.*"
``` | 368 |
|
Development | Benchmark Suite | Running Benchmarks | Running All Benchmarks | Not specifying any argument will run all benchmarks.
```bash
build/release/benchmark/benchmark_runner
``` | 22 |
Development | Benchmark Suite | Running Benchmarks | Other Options | The `--info` flag gives you some other information about the benchmark.
```bash
build/release/benchmark/benchmark_runner benchmark/micro/nulls/no_nulls_addition.benchmark --info
```
```text
display_name:NULL Addition (no nulls)
group:micro
subgroup:nulls
```
The `--query` flag will print the query that is run by the benchmark.
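For example (a sketch, using the same benchmark as above):
```bash
build/release/benchmark/benchmark_runner benchmark/micro/nulls/no_nulls_addition.benchmark --query
```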
```sql
SELECT min(i + 1) FROM integers;
```
The `--profile` flag will output a query tree. | 112 |
Internals | Overview of DuckDB Internals | On this page is a brief description of the internals of the DuckDB engine. | 16 |
||
Internals | Overview of DuckDB Internals | Parser | The parser converts a query string into the following tokens:
* [`SQLStatement`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/parser/sql_statement.hpp)
* [`QueryNode`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/parser/query_node.hpp)
* [`TableRef`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/parser/tableref.hpp)
* [`ParsedExpression`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/parser/parsed_expression.hpp)
The parser is not aware of the catalog or any other aspect of the database. It will not throw errors if tables do not exist, and will not resolve **any** types of columns yet. It only transforms a query string into a set of tokens as specified.
#### ParsedExpression {#docs:internals:overview::parsedexpression}
The ParsedExpression represents an expression within a SQL statement. This can be e.g., a reference to a column, an addition operator or a constant value. The type of the ParsedExpression indicates what it represents, e.g., a comparison is represented as a [`ComparisonExpression`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/parser/expression/comparison_expression.hpp).
ParsedExpressions do **not** have types, except for nodes with explicit types such as `CAST` statements. The types for expressions are resolved in the Binder, not in the Parser.
#### TableRef {#docs:internals:overview::tableref}
The TableRef represents any table source. This can be a reference to a base table, but it can also be a join, a table-producing function or a subquery.
#### QueryNode {#docs:internals:overview::querynode}
The QueryNode represents either (1) a `SELECT` statement, or (2) a set operation (i.e. `UNION`, `INTERSECT` or `DIFFERENCE`).
#### SQL Statement {#docs:internals:overview::sql-statement}
The SQLStatement represents a complete SQL statement. The type of the SQL Statement represents what kind of statement it is (e.g., `StatementType::SELECT` represents a `SELECT` statement). A single SQL string can be transformed into multiple SQL statements in case the original query string contains multiple queries. | 496 |
|
Internals | Overview of DuckDB Internals | Binder | The binder converts all nodes into their **bound** equivalents. In the binder phase:
* The tables and columns are resolved using the catalog
* Types are resolved
* Aggregate/window functions are extracted
The following conversions happen:
* SQLStatement → [`BoundStatement`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/planner/bound_statement.hpp)
* QueryNode → [`BoundQueryNode`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/planner/bound_query_node.hpp)
* TableRef → [`BoundTableRef`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/planner/bound_tableref.hpp)
* ParsedExpression → [`Expression`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/planner/expression.hpp) | 179 |
|
Internals | Overview of DuckDB Internals | Logical Planner | The logical planner creates [`LogicalOperator`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/planner/logical_operator.hpp) nodes from the bound statements. In this phase, the actual logical query tree is created. | 50 |
|
Internals | Overview of DuckDB Internals | Optimizer | After the logical planner has created the logical query tree, the optimizers are run over that query tree to create an optimized query plan. The following query optimizers are run:
* **Expression Rewriter**: Simplifies expressions, performs constant folding
* **Filter Pushdown**: Pushes filters down into the query plan and duplicates filters over equivalency sets. Also prunes subtrees that are guaranteed to be empty (because of filters that statically evaluate to false).
* **Join Order Optimizer**: Reorders joins using dynamic programming. Specifically, the `DPccp` algorithm from the paper [Dynamic Programming Strikes Back](https://15721.courses.cs.cmu.edu/spring2017/papers/14-optimizer1/p539-moerkotte.pdf) is used.
* **Common Sub Expressions**: Extracts common subexpressions from projection and filter nodes to prevent unnecessary duplicate execution.
* **In Clause Rewriter**: Rewrites large static IN clauses to a MARK join or INNER join. | 202 |
|
Internals | Overview of DuckDB Internals | Column Binding Resolver | The column binding resolver converts logical [`BoundColumnRefExpression`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/planner/expression/bound_columnref_expression.hpp) nodes that refer to a column of a specific table into [`BoundReferenceExpression`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/planner/expression/bound_reference_expression.hpp) nodes that refer to a specific index into the DataChunks that are passed around in the execution engine. | 105 |
|
Internals | Overview of DuckDB Internals | Physical Plan Generator | The physical plan generator converts the resulting logical operator tree into a [`PhysicalOperator`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/execution/physical_operator.hpp) tree. | 42 |
|
Internals | Overview of DuckDB Internals | Execution | In the execution phase, the physical operators are executed to produce the query result.
DuckDB uses a push-based vectorized model, where [`DataChunks`](https://github.com/duckdb/duckdb/blob/main/src/include/duckdb/common/types/data_chunk.hpp) are pushed through the operator tree.
For more information, see the talk [Push-Based Execution in DuckDB](https://www.youtube.com/watch?v=1kDrPgRUuEI). | 93 |
|
Internals | Storage Versions and Format | Compatibility | #### Backward Compatibility {#docs:internals:storage::backward-compatibility}
_Backward compatibility_ refers to the ability of a newer DuckDB version to read storage files created by an older DuckDB version. Version 0.10 is the first release of DuckDB that supports backward compatibility in the storage format. DuckDB v0.10 can read and operate on files created by the previous DuckDB version – DuckDB v0.9.
For future DuckDB versions, our goal is to ensure that any DuckDB version released **after** can read files created by previous versions, starting from this release. We want to ensure that the file format is fully backward compatible. This allows you to keep data stored in DuckDB files around and guarantees that you will be able to read the files without having to worry about which version the file was written with or having to convert files between versions.
#### Forward Compatibility {#docs:internals:storage::forward-compatibility}
_Forward compatibility_ refers to the ability of an older DuckDB version to read storage files produced by a newer DuckDB version. DuckDB v0.9 is [**partially** forward compatible with DuckDB v0.10](https://duckdb.org/2024/02/13/announcing-duckdb-0100#forward-compatibility). Certain files created by DuckDB v0.10 can be read by DuckDB v0.9.
Forward compatibility is provided on a **best effort** basis. While stability of the storage format is important – there are still many improvements and innovations that we want to make to the storage format in the future. As such, forward compatibility may be (partially) broken on occasion. | 350 |
|
Internals | Storage Versions and Format | How to Move Between Storage Formats | When you update DuckDB and open an old database file, you might encounter an error message about incompatible storage formats, pointing to this page.
To move your database(s) to a newer format, you only need the older and the newer DuckDB executables.
Open your database file with the older DuckDB and run the SQL statement `EXPORT DATABASE 'tmp'`. This allows you to save the whole state of the current database in use inside the folder `tmp`.
The content of the `tmp` folder will be overwritten, so choose an empty or not-yet-existing location. Then, start the newer DuckDB and execute `IMPORT DATABASE 'tmp'` (pointing to the previously populated folder) to load the database, which can then be saved to the file you pointed DuckDB to.
A bash one-liner (to be adapted with the file names and executable locations) is:
```bash
/older/version/duckdb mydata.db -c "EXPORT DATABASE 'tmp'" && /newer/duckdb mydata.new.db -c "IMPORT DATABASE 'tmp'"
```
After this, `mydata.db` will remain in the old format, `mydata.new.db` will contain the same data but in a format accessible by the more recent DuckDB version, and the folder `tmp` will hold the same data in a universal format as different files.
Check [`EXPORT` documentation](#docs:sql:statements:export) for more details on the syntax. | 298 |
|
Internals | Storage Versions and Format | Storage Header | DuckDB files start with a `uint64_t` which contains a checksum for the main header, followed by four magic bytes (`DUCK`), followed by the storage version number in a `uint64_t`.
```bash
hexdump -n 20 -C mydata.db
```
```text
00000000 01 d0 e2 63 9c 13 39 3e 44 55 43 4b 2b 00 00 00 |...c..9>DUCK+...|
00000010 00 00 00 00 |....|
00000014
```
A simple example of reading the storage version using Python is below.
```python
import struct
pattern = struct.Struct('<8x4sQ')
with open('test/sql/storage_version/storage_version.db', 'rb') as fh:
print(pattern.unpack(fh.read(pattern.size)))
``` | 197 |
|
Internals | Storage Versions and Format | Storage Version Table | For changes in each given release, check out the [change log](https://github.com/duckdb/duckdb/releases) on GitHub.
To see the commits that changed each storage version, see the [commit log](https://github.com/duckdb/duckdb/commits/main/src/storage/storage_info.cpp).
<div class="narrow_table"></div>
| Storage version | DuckDB version(s) |
|----------------:|---------------------------------|
| 64 | v0.9.x, v0.10.x, v1.0.0, v1.1.x |
| 51 | v0.8.x |
| 43 | v0.7.x |
| 39 | v0.6.x |
| 38 | v0.5.x |
| 33 | v0.3.3, v0.3.4, v0.4.0 |
| 31 | v0.3.2 |
| 27 | v0.3.1 |
| 25 | v0.3.0 |
| 21 | v0.2.9 |
| 18 | v0.2.8 |
| 17 | v0.2.7 |
| 15 | v0.2.6 |
| 13 | v0.2.5 |
| 11 | v0.2.4 |
| 6 | v0.2.3 |
| 4 | v0.2.2 |
| 1 | v0.2.1 and prior | | 357 |
|
Internals | Storage Versions and Format | Compression | DuckDB uses [lightweight compression](https://duckdb.org/2022/10/28/lightweight-compression).
Note that compression is only applied to persistent databases and is **not applied to in-memory instances**.
#### Compression Algorithms {#docs:internals:storage::compression-algorithms}
The compression algorithms supported by DuckDB include the following:
* [Constant Encoding](https://duckdb.org/2022/10/28/lightweight-compression#constant-encoding)
* [Run-Length Encoding (RLE)](https://duckdb.org/2022/10/28/lightweight-compression#run-length-encoding-rle)
* [Bit Packing](https://duckdb.org/2022/10/28/lightweight-compression#bit-packing)
* [Frame of Reference (FOR)](https://duckdb.org/2022/10/28/lightweight-compression#frame-of-reference)
* [Dictionary Encoding](https://duckdb.org/2022/10/28/lightweight-compression#dictionary-encoding)
* [Fast Static Symbol Table (FSST)](https://duckdb.org/2022/10/28/lightweight-compression#fsst) – [VLDB 2020 paper](https://www.vldb.org/pvldb/vol13/p2649-boncz.pdf)
* [Adaptive Lossless Floating-Point Compression (ALP)](https://duckdb.org/2024/02/13/announcing-duckdb-0100#adaptive-lossless-floating-point-compression-alp) – [SIGMOD 2024 paper](https://ir.cwi.nl/pub/33334/33334.pdf)
* [Chimp](https://duckdb.org/2022/10/28/lightweight-compression#chimp--patas) – [VLDB 2022 paper](https://www.vldb.org/pvldb/vol15/p3058-liakos.pdf)
* [Patas](https://duckdb.org/2022/11/14/announcing-duckdb-060#compression-improvements) | 435 |
|
Internals | Storage Versions and Format | Disk Usage | The disk usage of DuckDB's format depends on a number of factors, including the data type and the data distribution, the compression methods used, etc.
As a rough approximation, loading 100 GB of uncompressed CSV files into a DuckDB database file will require 25 GB of disk space, while loading 100 GB of Parquet files will require 120 GB of disk space. | 77 |
|
Internals | Storage Versions and Format | Row Groups | DuckDB's storage format stores the data in _row groups,_ i.e., horizontal partitions of the data.
This concept is equivalent to [Parquet's row groups](https://parquet.apache.org/docs/concepts/).
Several features in DuckDB, including [parallelism](#docs:guides:performance:how_to_tune_workloads) and [compression](https://duckdb.org/2022/10/28/lightweight-compression) are based on row groups. | 99 |
|
Internals | Storage Versions and Format | Troubleshooting | #### Error Message When Opening an Incompatible Database File {#docs:internals:storage::error-message-when-opening-an-incompatible-database-file}
When opening a database file that has been written by a different DuckDB version from the one you are using, the following error message may occur:
```console
Error: unable to open database "...": Serialization Error: Failed to deserialize: ...
```
The message implies that the database file was created with a newer DuckDB version and uses features that are backward incompatible with the DuckDB version used to read the file.
There are two potential workarounds:
1. Update your DuckDB version to the latest stable version.
2. Open the database with the latest version of DuckDB, export it to a standard format (e.g., Parquet), then import it using any version of DuckDB. See the [`EXPORT/IMPORT DATABASE` statements](#docs:sql:statements:export) for details. | 197 |
|
Internals | Execution Format | `Vector` is the container format used to store in-memory data during execution.
`DataChunk` is a collection of Vectors, used for instance to represent a column list in a `PhysicalProjection` operator. | 43 |
||
Internals | Execution Format | Data Flow | DuckDB uses a vectorized query execution model.
All operators in DuckDB are optimized to work on Vectors of a fixed size.
This fixed size is commonly referred to in the code as `STANDARD_VECTOR_SIZE`.
The default `STANDARD_VECTOR_SIZE` is 2048 tuples. | 60 |
|
Internals | Execution Format | Vector Format | Vectors logically represent arrays that contain data of a single type. DuckDB supports different *vector formats*, which allow the system to store the same logical data with a different *physical representation*. This allows for a more compressed representation, and potentially allows for compressed execution throughout the system. The list of supported vector formats is shown below.
#### Flat Vectors {#docs:internals:vector::flat-vectors}
Flat vectors are physically stored as a contiguous array; this is the standard uncompressed vector format.
For flat vectors the logical and physical representations are identical.
<img src="/images/internals/flat.png" alt="Flat Vector example" style="max-width:40%;width:40%;height:auto;margin:auto"/>
#### Constant Vectors {#docs:internals:vector::constant-vectors}
Constant vectors are physically stored as a single constant value.
<img src="/images/internals/constant.png" alt="Constant Vector example" style="max-width:40%;width:40%;height:auto;margin:auto"/>
Constant vectors are useful when data elements are repeated – for example, when representing the result of a constant expression in a function call, the constant vector allows us to only store the value once.
```sql
SELECT lst || 'duckdb'
FROM range(1000) tbl(lst);
```
Since `duckdb` is a string literal, the value of the literal is the same for every row. In a flat vector, we would have to duplicate the literal 'duckdb' once for every row. The constant vector allows us to only store the literal once.
Constant vectors are also emitted by the storage when decompressing from constant compression.
#### Dictionary Vectors {#docs:internals:vector::dictionary-vectors}
Dictionary vectors are physically stored as a child vector, and a selection vector that contains indexes into the child vector.
<img src="/images/internals/dictionary.png" alt="Dictionary Vector example" style="max-width:40%;width:40%;height:auto;margin:auto"/>
Just like constant vectors, dictionary vectors are also emitted by the storage.
When deserializing a dictionary-compressed column segment, we store the data in a dictionary vector so we can keep it compressed during query execution.
#### Sequence Vectors {#docs:internals:vector::sequence-vectors}
Sequence vectors are physically stored as an offset and an increment value.
<img src="/images/internals/sequence.png" alt="Sequence Vector example" style="max-width:40%;width:40%;height:auto;margin:auto"/>
Sequence vectors are useful for efficiently storing incremental sequences. They are generally emitted for row identifiers.
#### Unified Vector Format {#docs:internals:vector::unified-vector-format}
These properties of the different vector formats are useful for optimization purposes: for example, imagine a scenario where all the parameters to a function are constant; in that case, we can compute the result once and emit a constant vector.
But writing specialized code for every combination of vector types for every function is infeasible due to the combinatorial explosion of possibilities.
Instead, whenever you want to use a vector generically, regardless of its format, the `UnifiedVectorFormat` can be used.
This format essentially acts as a generic view over the contents of the Vector. Every type of Vector can convert to this format. | 695 |
|
Internals | Execution Format | Complex Types | #### String Vectors {#docs:internals:vector::string-vectors}
To efficiently store strings, we make use of our `string_t` class.
```cpp
struct string_t {
union {
struct {
uint32_t length;
char prefix[4];
char *ptr;
} pointer;
struct {
uint32_t length;
char inlined[12];
} inlined;
} value;
};
```
Short strings (`<= 12 bytes`) are inlined into the structure, while larger strings are stored with a pointer to the data in the auxiliary string buffer. The length is used throughout the functions to avoid having to call `strlen` and having to continuously check for null-pointers. The prefix is used for comparisons as an early out (when the prefix does not match, we know the strings are not equal and don't need to chase any pointers).
#### List Vectors {#docs:internals:vector::list-vectors}
List vectors are stored as a series of *list entries* together with a child Vector. The child vector contains the *values* that are present in the list, and the list entries specify how each individual list is constructed.
```cpp
struct list_entry_t {
idx_t offset;
idx_t length;
};
```
The offset refers to the start row in the child Vector; the length keeps track of the size of the list for this row.
List vectors can be stored recursively. For nested list vectors, the child of a list vector is again a list vector.
For example, consider this mock representation of a Vector of type `BIGINT[][]`:
```json
{
"type": "list",
"data": "list_entry_t",
"child": {
"type": "list",
"data": "list_entry_t",
"child": {
"type": "bigint",
"data": "int64_t"
}
}
}
```
#### Struct Vectors {#docs:internals:vector::struct-vectors}
Struct vectors store a list of child vectors. The number and types of the child vectors are defined by the schema of the struct.
#### Map Vectors {#docs:internals:vector::map-vectors}
Internally map vectors are stored as a `LIST[STRUCT(key KEY_TYPE, value VALUE_TYPE)]`.
#### Union Vectors {#docs:internals:vector::union-vectors}
Internally `UNION` utilizes the same structure as a `STRUCT`.
The first “child” is always occupied by the Tag Vector of the `UNION`, which records for each row which of the `UNION`'s types apply to that row. | 535 |
|
Internals | Pivot Internals | `PIVOT` | [Pivoting](#docs:sql:statements:pivot) is implemented as a combination of SQL query re-writing and a dedicated `PhysicalPivot` operator for higher performance.
Each `PIVOT` is implemented as a set of aggregations into lists and then the dedicated `PhysicalPivot` operator converts those lists into column names and values.
Additional pre-processing steps are required if the columns to be created when pivoting are detected dynamically (which occurs when the `IN` clause is not in use).
DuckDB, like most SQL engines, requires that all column names and types be known at the start of a query.
In order to automatically detect the columns that should be created as a result of a `PIVOT` statement, it must be translated into multiple queries.
[`ENUM` types](#docs:sql:data_types:enum) are used to find the distinct values that should become columns.
Each `ENUM` is then injected into one of the `PIVOT` statement's `IN` clauses.
After the `IN` clauses have been populated with `ENUM`s, the query is re-written again into a set of aggregations into lists.
For example:
```sql
PIVOT cities
ON year
USING sum(population);
```
is initially translated into:
```sql
CREATE TEMPORARY TYPE __pivot_enum_0_0 AS ENUM (
SELECT DISTINCT
year::VARCHAR
FROM cities
ORDER BY
year
);
PIVOT cities
ON year IN __pivot_enum_0_0
USING sum(population);
```
and finally translated into:
```sql
SELECT country, name, list(year), list(population_sum)
FROM (
SELECT country, name, year, sum(population) AS population_sum
FROM cities
GROUP BY ALL
)
GROUP BY ALL;
```
This produces the result:
<div class="narrow_table"></div>
| country | name | list("year") | list(population_sum) |
|---------|---------------|--------------------|----------------------|
| NL | Amsterdam | [2000, 2010, 2020] | [1005, 1065, 1158] |
| US | Seattle | [2000, 2010, 2020] | [564, 608, 738] |
| US | New York City | [2000, 2010, 2020] | [8015, 8175, 8772] |
The `PhysicalPivot` operator converts those lists into column names and values to return this result:
<div class="narrow_table"></div>
| country | name | 2000 | 2010 | 2020 |
|---------|---------------|-----:|-----:|-----:|
| NL | Amsterdam | 1005 | 1065 | 1158 |
| US | Seattle | 564 | 608 | 738 |
| US | New York City | 8015 | 8175 | 8772 | | 648 |
|
Internals | Pivot Internals | `UNPIVOT` | #### Internals {#docs:internals:pivot::internals}
Unpivoting is implemented entirely as rewrites into SQL queries.
Each `UNPIVOT` is implemented as a set of `unnest` functions, operating on a list of the column names and a list of the column values.
If dynamically unpivoting, the `COLUMNS` expression is evaluated first to calculate the column list.
For example:
```sql
UNPIVOT monthly_sales
ON jan, feb, mar, apr, may, jun
INTO
NAME month
VALUE sales;
```
is translated into:
```sql
SELECT
empid,
dept,
unnest(['jan', 'feb', 'mar', 'apr', 'may', 'jun']) AS month,
unnest(["jan", "feb", "mar", "apr", "may", "jun"]) AS sales
FROM monthly_sales;
```
Note the single quotes to build a list of text strings to populate `month`, and the double quotes to pull the column values for use in `sales`.
This produces the same result as the initial example:
<div class="narrow_table"></div>
| empid | dept | month | sales |
|------:|-------------|-------|------:|
| 1 | electronics | jan | 1 |
| 1 | electronics | feb | 2 |
| 1 | electronics | mar | 3 |
| 1 | electronics | apr | 4 |
| 1 | electronics | may | 5 |
| 1 | electronics | jun | 6 |
| 2 | clothes | jan | 10 |
| 2 | clothes | feb | 20 |
| 2 | clothes | mar | 30 |
| 2 | clothes | apr | 40 |
| 2 | clothes | may | 50 |
| 2 | clothes | jun | 60 |
| 3 | cars | jan | 100 |
| 3 | cars | feb | 200 |
| 3 | cars | mar | 300 |
| 3 | cars | apr | 400 |
| 3 | cars | may | 500 |
| 3 | cars | jun | 600 | | 533 |
|
Acknowledgments | This document is built with [Pandoc](https://pandoc.org/) using the [Eisvogel template](https://github.com/Wandmalfarbe/pandoc-latex-template). The scripts to build the document are available in the [DuckDB-Web repository](https://github.com/duckdb/duckdb-web/tree/main/single-file-document).
The emojis used in this document are provided by [Twemoji](https://twemoji.twitter.com/) under the [CC-BY 4.0 license](https://creativecommons.org/licenses/by/4.0/).
The syntax highlighter uses the [Bluloco Light theme](https://github.com/uloco/theme-bluloco-light) by Umut Topuzoğlu. | 160 |