# DuckDB
> Note: The DuckDB implementation is currently in beta and is derived from the PostgreSQL implementation.
This page documents the DuckDB-specific features exposed by the Unleasharp.DB Query Builder.
## Auto Increment Sequences
DuckDB does not provide a dedicated AUTO INCREMENT column attribute at table-creation time the same way some other engines do. Instead, you create and manage sequences explicitly and use the sequence in the column default via nextval(...).
To create a sequence with the Query Builder, use `Query.CreateSequence()` and then set the column default to use the sequence, for example `Default = "nextval('sequence_name')"`.
```csharp
dbConnector.QueryBuilder().Build(query => query
    .CreateSequence("seq_example_table_id")
).Execute();
```

### CreateSequence() Parameters
| Parameter | Description | Default |
|---|---|---|
| `sequenceName` | The name of the sequence to create. Used by `nextval()`/default expressions; must be unique within the database. | Required |
| `start` | The initial value of the sequence (the first value returned). | `1` |
| `increment` | Step size between successive sequence values (negative for descending sequences). | `1` |
| `maxValue` | Maximum value the sequence can produce. Use `-1` to indicate no maximum (unbounded). | `-1` (no max) |
| `cycle` | If `true`, the sequence wraps around to its start (or minimum) when reaching the maximum; if `false`, it errors when exhausted. | `false` |
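For reference, a sequence created with non-default parameters corresponds to DuckDB SQL along these lines (a hand-written sketch of standard DuckDB syntax, not output captured from the Query Builder):

```sql
-- Sequence starting at 100, stepping by 10, wrapping back to 100 after 1000
CREATE SEQUENCE seq_example_table_id START WITH 100 INCREMENT BY 10 MAXVALUE 1000 CYCLE;

-- nextval() returns the start value first, then start + increment, and so on
SELECT nextval('seq_example_table_id'); -- 100
SELECT nextval('seq_example_table_id'); -- 110
```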
### Column With Sequence
```csharp
[Table("example_table")]
[UniqueKey(typeof(ExampleTable), nameof(ExampleTable.Id))]
public class ExampleTable {
    [Column("id", ColumnDataType.UInt64, PrimaryKey = true, Unsigned = true, AutoIncrement = true, Default = "nextval('seq_example_table_id')")]
    public ulong? Id {
        get; set;
    }

    [Column("_mediumtext", ColumnDataType.Text, Length = -1)]
    public string MediumText {
        get; set;
    }

    [Column("_longtext", ColumnDataType.Text, Length = -1)]
    public string Longtext {
        get; set;
    }
}
```

## CSV
DuckDB provides first-class CSV support (functions like read_csv and COPY FROM). The Query Builder exposes helpers for common CSV workflows while preserving the full parameter surface available in DuckDB.
See the DuckDB docs for the complete set of CSV parameters: https://duckdb.org/docs/stable/data/csv/overview#parameters
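Any parameter from that list can also be written directly in SQL. A hand-written sketch (the `nullstr` and `ignore_errors` options come from the DuckDB docs, not from Query Builder output):

```sql
SELECT *
FROM read_csv('flights.csv',
    delim         = '|',
    header        = true,
    nullstr       = 'NA',   -- treat the literal string 'NA' as NULL
    ignore_errors = true    -- skip rows that fail to parse instead of aborting
);
```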
The examples below use the provided sample flights.csv and a simple Unleasharp.DB mapping class.
```csv
FlightDate|UniqueCarrier|OriginCityName|DestCityName
1988-01-01|AA|New York, NY|Los Angeles, CA
1988-01-02|AA|New York, NY|Los Angeles, CA
1988-01-03|AA|New York, NY|Los Angeles, CA
```

```csharp
[Table("flights")]
public class Flights {
    [Column("FlightDate", ColumnDataType.Date)]
    public DateOnly FlightDate { get; set; }

    [Column("UniqueCarrier", ColumnDataType.Varchar)]
    public string UniqueCarrier { get; set; }

    [Column("OriginCityName", ColumnDataType.Varchar)]
    public string OriginCityName { get; set; }

    [Column("DestCityName", ColumnDataType.Varchar)]
    public string DestCityName { get; set; }
}
```

### CSV to Table
This workflow copies rows from a CSV file into a database table (the equivalent of `COPY table_name FROM 'csv_file.csv'`).
> Note: With the current Query Builder you must create the destination table beforehand (for example with `CreateTable<T>()`).
```csharp
var readCSVFunction = new Unleasharp.DB.DuckDB.Functions.ReadCSVFunction {
    Path   = "flights.csv",
    Delim  = "|",
    Header = true,
    Columns = new Dictionary<string, string> {
        { "FlightDate",     "DATE"    },
        { "UniqueCarrier",  "VARCHAR" },
        { "OriginCityName", "VARCHAR" },
        { "DestCityName",   "VARCHAR" },
    }
};

dbConnector.QueryBuilder().Build(query => query.CreateTable<Flights>()).Execute<bool>();

int insertedFromCSV = dbConnector.QueryBuilder().Build(query => query
    .CopyIntoFromCSV<Flights>(readCSVFunction)
).Execute<int>();
```

```sql
COPY
    "flights"
FROM
    'flights.csv' (
        DELIM '|',
        HEADER
    )
;
```

If the table already exists you may also target a table by name (no class mapping required) and dump the CSV contents directly into it:
```csharp
var readCSVFunction = new Unleasharp.DB.DuckDB.Functions.ReadCSVFunction {
    Path   = "flights.csv",
    Delim  = "|",
    Header = true,
    Columns = new Dictionary<string, string> {
        { "FlightDate",     "DATE"    },
        { "UniqueCarrier",  "VARCHAR" },
        { "OriginCityName", "VARCHAR" },
        { "DestCityName",   "VARCHAR" },
    }
};

int insertedFromCSV = dbConnector.QueryBuilder().Build(query => query
    .CopyIntoFromCSV("raw_table_name", readCSVFunction)
).Execute<int>();
```

```sql
COPY
    "raw_table_name"
FROM
    'flights.csv' (
        DELIM '|',
        HEADER
    )
;
```

### CSV to Rows
DuckDB can query CSV files directly with regular SQL, no prior import required.
This method reads the contents of a CSV file as a `List<T>` of rows; it is the equivalent of `SELECT * FROM read_csv('csv_file.csv')`.
```csharp
var readCSVFunction = new Unleasharp.DB.DuckDB.Functions.ReadCSVFunction {
    Path   = "flights.csv",
    Delim  = "|",
    Header = true,
    Columns = new Dictionary<string, string> {
        { "FlightDate",     "DATE"    },
        { "UniqueCarrier",  "VARCHAR" },
        { "OriginCityName", "VARCHAR" },
        { "DestCityName",   "VARCHAR" },
    }
};

List<Flights> csvFlights = dbConnector.QueryBuilder().Build(query => query
    .Select()
    .From(readCSVFunction)
).ToList<Flights>();
```

```sql
SELECT
    *
FROM
    read_csv('flights.csv',
        columns = {
            'FlightDate': 'DATE',
            'UniqueCarrier': 'VARCHAR',
            'OriginCityName': 'VARCHAR',
            'DestCityName': 'VARCHAR'
        },
        delim = '|',
        header = true
    )
```

## JSON
DuckDB provides JSON support (functions like read_json and COPY FROM). The Query Builder exposes helpers for common JSON workflows while preserving the full parameter surface available in DuckDB.
See the DuckDB docs for the complete set of JSON parameters: https://duckdb.org/docs/stable/data/json/loading_json#parameters
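Those parameters can also be written directly in SQL. A hand-written sketch (the file name is a placeholder; `format = 'newline_delimited'` is one of the format values listed in the DuckDB docs, used for one-object-per-line files instead of a top-level array):

```sql
SELECT *
FROM read_json('todos.ndjson',
    format  = 'newline_delimited',
    columns = {'id': 'UBIGINT', 'title': 'VARCHAR'}
);
```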
The examples below use the provided sample todos.json and a simple Unleasharp.DB mapping class.
```json
[
    {
        "userId": 1,
        "id": 1,
        "title": "delectus aut autem",
        "completed": false
    },
    {
        "userId": 1,
        "id": 2,
        "title": "quis ut nam facilis et officia qui",
        "completed": false
    },
    {
        "userId": 1,
        "id": 3,
        "title": "fugiat veniam minus",
        "completed": false
    },
    ...
]
```

```csharp
[Table("todo")]
public class Todos {
    [Column("id", ColumnDataType.UInt64, Unsigned = true, PrimaryKey = true)]
    public ulong id { get; set; }

    [Column("userId", ColumnDataType.UInt64, Unsigned = true)]
    public ulong userId { get; set; }

    [Column("title", ColumnDataType.Varchar)]
    public string title { get; set; }

    [Column("completed", ColumnDataType.Boolean)]
    public bool completed { get; set; }
}
```

### JSON to Table
This workflow copies objects from a JSON file into a database table (the equivalent of `COPY table_name FROM 'json_file.json'`).
> Note: With the current Query Builder you must create the destination table beforehand (for example with `CreateTable<T>()`).
```csharp
var readJSONFunction = new Unleasharp.DB.DuckDB.Functions.ReadJSONFunction {
    Path   = "todos.json",
    Format = JSONFormat.Array,
    Columns = new Dictionary<string, string> {
        { "id",        "UBIGINT" },
        { "userId",    "UBIGINT" },
        { "title",     "VARCHAR" },
        { "completed", "BOOLEAN" },
    }
};

dbConnector.QueryBuilder().Build(query => query.CreateTable<Todos>()).Execute<bool>();

int insertedFromJSON = dbConnector.QueryBuilder().Build(query => query
    .CopyIntoFromJSON<Todos>(readJSONFunction)
).Execute<int>();
```

```sql
COPY
    "todo"
FROM
    'todos.json' (ARRAY);
```

If the table already exists you may also target a table by name (no class mapping required) and dump the JSON contents directly into it:
```csharp
var readJSONFunction = new Unleasharp.DB.DuckDB.Functions.ReadJSONFunction {
    Path   = "todos.json",
    Format = JSONFormat.Array,
    Columns = new Dictionary<string, string> {
        { "id",        "UBIGINT" },
        { "userId",    "UBIGINT" },
        { "title",     "VARCHAR" },
        { "completed", "BOOLEAN" },
    }
};

int insertedFromJSON = dbConnector.QueryBuilder().Build(query => query
    .CopyIntoFromJSON("raw_table_name", readJSONFunction)
).Execute<int>();
```

```sql
COPY
    "raw_table_name"
FROM
    'todos.json' (ARRAY);
```

### JSON to Rows
DuckDB can query JSON files directly with regular SQL, no prior import required.
This method reads the contents of a JSON file as a `List<T>` of rows; it is the equivalent of `SELECT * FROM read_json('json_file.json')`.
```csharp
var readJSONFunction = new Unleasharp.DB.DuckDB.Functions.ReadJSONFunction {
    Path   = "todos.json",
    Format = JSONFormat.Array,
    Columns = new Dictionary<string, string> {
        { "id",        "UBIGINT" },
        { "userId",    "UBIGINT" },
        { "title",     "VARCHAR" },
        { "completed", "BOOLEAN" },
    }
};

List<Todos> jsonTodos = dbConnector.QueryBuilder().Build(query => query
    .Select()
    .From(readJSONFunction)
    .Where<Todos>("userId", 1)
).ToList<Todos>();
```

```sql
SELECT
    *
FROM
    read_json('todos.json',
        format = 'array',
        columns = {
            'id': 'UBIGINT',
            'userId': 'UBIGINT',
            'title': 'VARCHAR',
            'completed': 'BOOLEAN'
        }
    )
WHERE
    "userId" = 1
```

## Parquet
DuckDB provides Parquet support (functions like read_parquet and COPY FROM). The Query Builder exposes helpers for common Parquet workflows while preserving the full parameter surface available in DuckDB.
See the DuckDB docs for the complete set of Parquet parameters: https://duckdb.org/docs/stable/data/parquet/overview#parameters
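`read_parquet` also accepts glob patterns and lists of files, which is handy for partitioned datasets. A hand-written sketch (the path is a placeholder; `filename` and `union_by_name` are parameters documented by DuckDB, not Query Builder output):

```sql
-- Read every Parquet file under data/, tagging each row with its source file
SELECT *
FROM read_parquet('data/*.parquet',
    filename      = true,  -- add a column containing the originating file name
    union_by_name = true   -- align files by column name rather than position
);
```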
The examples below use the provided sample test.parquet file and a simple Unleasharp.DB mapping class.
https://www.kaggle.com/datasets/sahilsawantss/test-parquet
```csharp
[Table("parquet_test")]
public class TestParquet {
    [Column("ID", ColumnDataType.UInt64, PrimaryKey = true)]
    public ulong Id { get; set; }

    [Column("Date", ColumnDataType.DateTime)]
    public DateTime Date { get; set; }

    [Column("X1", ColumnDataType.Double)]
    public double X1 { get; set; }

    [Column("X2", ColumnDataType.Double)]
    public double X2 { get; set; }

    [Column("X3", ColumnDataType.Double)]
    public double X3 { get; set; }

    [Column("X4", ColumnDataType.Double)]
    public double X4 { get; set; }

    [Column("X5", ColumnDataType.Double)]
    public double X5 { get; set; }
}
```

### Parquet to Table
This workflow copies rows from a Parquet file into a database table (the equivalent of `COPY table_name FROM 'parquet_file.parquet'`).
> Note: With the current Query Builder you must create the destination table beforehand (for example with `CreateTable<T>()`).
```csharp
var readParquetFunction = new Unleasharp.DB.DuckDB.Functions.ReadParquetFunction {
    Path = "test.parquet",
};

dbConnector.QueryBuilder().Build(query => query.CreateTable<TestParquet>()).Execute<bool>();

int insertedFromParquet = dbConnector.QueryBuilder().Build(query => query
    .CopyIntoFromParquet<TestParquet>(readParquetFunction)
).Execute<int>();
```

```sql
COPY
    "parquet_test"
FROM
    'test.parquet';
```

If the table already exists you may also target a table by name (no class mapping required) and dump the Parquet contents directly into it:
```csharp
var readParquetFunction = new Unleasharp.DB.DuckDB.Functions.ReadParquetFunction {
    Path = "test.parquet",
};

int insertedFromParquet = dbConnector.QueryBuilder().Build(query => query
    .CopyIntoFromParquet("raw_table_name", readParquetFunction)
).Execute<int>();
```

```sql
COPY
    "raw_table_name"
FROM
    'test.parquet';
```

### Parquet to Rows
DuckDB can query Parquet files directly with regular SQL, no prior import required.
This method reads the contents of a Parquet file as a `List<T>` of rows; it is the equivalent of `SELECT * FROM read_parquet('parquet_file.parquet')`.
```csharp
var readParquetFunction = new Unleasharp.DB.DuckDB.Functions.ReadParquetFunction {
    Path = "test.parquet",
};

List<TestParquet> parquetEntries = dbConnector.QueryBuilder().Build(query => query
    .Select()
    .From(readParquetFunction)
    .Limit(100)
).ToList<TestParquet>();
```

```sql
SELECT
    *
FROM
    read_parquet('test.parquet')
LIMIT
    100 OFFSET 0
```