The simplest way of using this is to do no serialization at all: just store any information you want persisted in memory allocated from the region you've mmapped to the Optane DIMMs instead of the DRAM DIMMs.
The main reason I've been thinking about tools like that is that the persistent structure should probably keep working regardless of compiler settings/flags and code updates. If you directly mmap the structures, you have to worry about how things are packed, and whether a new compiler optimization causes the layout to come out differently (something gets eliminated in one version and not the other).
I would say that this is still a mentality of thinking of something as "data on disk", where the data should be in an "ABI-stable" format.
Think of persistent-memory data as more like data resident in the memory of a runtime which can experience a "hot code upgrade", like the Erlang runtime.
In Erlang, when you hot-upgrade your running code, you usually do so through a managed system of "relups" (RELease UPdates), which are sort of a cross between an RDBMS migration, and a traditional installer-package full of newer versions of code and assets.
The Erlang runtime takes this package, unpacks it, and then runs a master relup script, which can be authored to do arbitrary things (including, if ultimately necessary, fully rebooting the node, throwing away all that in-memory state.) Mostly, though, a relup script calls into individual "appup" scripts for each Erlang application. Those applications then specify how their corresponding running processes are to be updated—which can sometimes be fraught (if e.g. the new release requires that you add new service-processes or remove old ones, migrating in-memory state into a new architecture), but usually just means calling a "code_change" callback on all the service-processes.
This "code_change" callback is the thing that's most like an RDBMS migration: it is called from the event-loop running in the old version of the code of the service-process, and passes in the old in-memory state; and when it returns, it's returning the new in-memory state, to resume the event loop in the new version of the code of the service-process.
This is basically how I'd picture dealing with code updates (including ones due to build-setting changes) in software that deals with pmem: you'd architect your code such that the library that touches the pmem can have multiple versions of it dynamically loaded (though not running) at the same time; and then you'd stage a migration from the old code's pmem state encoding, to the new version's, by
1. dlopen(2)ing the new version of the lib;
2. telling the old version of the lib to stop any ongoing work;
3. handing off the toplevel pmem state-handle that the old version of the lib was using, to a "migrate" function in the new version of the lib;
4. replacing the old version's pmem state-handle with a dummy one;
5. telling the old version of the lib to terminate (and so do the trivial cleanup to the world it sees through the dummy handle);
6. telling the new version of the lib to initialize, using the handle to the now-migrated-in-format pmem;
7. dlclose(2) the old version of the lib.
Basically, picture what something like Photoshop would have to do to enable you to upgrade its plugins without restarting it or closing your working document, and you'll have the right architecture.
I mean, it depends on whether your pmem data is a bunch of copy-on-write persistent data structures like HAMTs; or maybe packed data structures like Vector<Foo>s where you can't easily rewrite one Foo to be a different size without rewriting the whole vector; etc.
If 1. the new format is just like the old format except for one little difference to one struct, and 2. structs point to other structs, rather than containing them; then it's just a matter of calling your within-mmap(2)ed-arena malloc(2)-equivalent function to get a new chunk of the pmem arena of the right size for the new version of the struct; and then rewriting the pointer in the other struct to point to it; and then calling your free(2)-equivalent on the old version of the struct.
If you change the structure of some fundamental primitive type like how strings are represented, then you're probably going to have to rewrite your whole pmem arena.
Though, also, you can just make your code deal with both old and new versions of the struct, and only migrate structs when they're getting modified anyway. (This is equivalent to the way you'd avoid having an RDBMS migration rewrite an entire table: instead you add a trigger that makes the migration happen to a row on UPDATE, and ensure that your business-layer can deal with both migrated and un-migrated versions of the row.)
> If you change the structure of some fundamental primitive type like how strings are represented, then you're probably going to have to rewrite your whole pmem arena.
That's part of the reason why I was thinking something like Cap'n Proto or Protocol Buffers might make sense for a lot of structures. You pay a bit of cost for writing but get to gracefully handle upgrades to the structure if you do it right. I'd imagine you want to use something higher-level just above them to organize the records. But this is all a really new area of thinking for me, so I'm probably being a bit obtuse about it.