I tried using DuckDB in a recent project. Its Python API had trouble correctly handling binary blob inserts. I really should have started poking at the code and tried to submit a patch, but I also realized I'd probably need to write my own schema migration layer (I'm used to yoyo-migrations with sqlite3).
I ended up back on sqlite3. It seemed cool though; might pick it up again for a future project.
DuckDB is incredible for large stats projects using R. R can't really handle large datasets and simply runs out of memory if you try to load everything. The usual workaround is splitting your data into a bunch of RData files, but there's quite a bit of overhead in maintaining those, recreating them when you need more variables, etc.
With DuckDB you can just load the data once into the database and then use dplyr/dbplyr to transparently access it; afaik the queries are pushed down to the DB. And it runs anywhere and doesn't need local admin rights (unlike most database servers).
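A rough sketch of that workflow, for the curious (file, table, and column names here are just placeholders I made up):

    # load once into a persistent DuckDB file, then query it lazily via dbplyr
    library(DBI)
    library(duckdb)
    library(dplyr)

    con <- dbConnect(duckdb::duckdb(), dbdir = "analysis.duckdb")

    # one-time load; DuckDB can ingest the CSV itself
    dbExecute(con, "CREATE TABLE IF NOT EXISTS trips AS
                    SELECT * FROM read_csv_auto('trips.csv')")

    # dplyr verbs on this tbl are translated to SQL and run inside DuckDB;
    # nothing is pulled into R memory until collect()
    trips <- tbl(con, "trips")

    monthly <- trips %>%
      group_by(month) %>%
      summarise(n = n(), avg_fare = mean(fare, na.rm = TRUE)) %>%
      collect()

    dbDisconnect(con, shutdown = TRUE)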
(SQLite can't handle many columns without a custom build; the default limit is 2,000 columns per table.)
Have you figured out a way to add columns to DuckDB from R vectors? I'm not sure this is possible yet without using inefficient insert operations. Ideally, I'd like to pull in some columns from the DB, generate a new column vector and store it back in the DB by adding a column to an existing table.
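One possible approach, purely a sketch (table and column names are hypothetical, and I haven't benchmarked it against your current method): compute the vector in R, register the data frame with DuckDB keyed by a row id, then do an ALTER TABLE plus a single UPDATE ... FROM join instead of row-by-row inserts.

    library(DBI)
    library(duckdb)

    con <- dbConnect(duckdb::duckdb(), dbdir = "analysis.duckdb")

    # pull the columns needed to compute the new variable
    dat <- dbGetQuery(con, "SELECT id, x, y FROM mytable")
    dat$score <- dat$x * dat$y   # new column vector computed in R

    # expose the data frame to DuckDB as a virtual table (no copy into the DB)
    duckdb_register(con, "new_vals", dat[, c("id", "score")])

    # add the column, then fill it with one set-based update
    dbExecute(con, "ALTER TABLE mytable ADD COLUMN score DOUBLE")
    dbExecute(con, "UPDATE mytable SET score = new_vals.score
                    FROM new_vals WHERE mytable.id = new_vals.id")

    duckdb_unregister(con, "new_vals")
    dbDisconnect(con, shutdown = TRUE)

No idea whether this is actually efficient for your data sizes, but it at least avoids per-row INSERT operations.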