It's unclear to me what "100%" refers to here, but surely it does not include the Linux kernel or drivers? (I've recently read conversations about how difficult this would be.)
I'm no expert, but as an interested amateur I thought the Linux kernel could already be built reproducibly?
There is some documentation at least... and I know several Linux distributions have been working on reproducible builds for a long time now - I'd be surprised if there hasn't been good progress on this.
In the container context I've heard a definition of "100% reproducible" that means even file timestamps are identical. Like your entire build is bit-for-bit precisely the same if you didn't modify any source.
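To make that concrete, here's a rough sketch of what the bit-for-bit check amounts to: hash both build outputs and compare. (The artifact paths are hypothetical; any two independently built copies of the same sources would do.)

    import hashlib

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical artifacts from two independent builds of the same sources.
    a = sha256("build-1/image.tar")
    b = sha256("build-2/image.tar")
    print("bit-for-bit identical" if a == b
          else "differs (embedded timestamps are a common culprit)")

Anything short of an exact hash match fails this definition, which is why timestamps, build paths, and archive ordering all have to be pinned down.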
I expect it refers to the proportion, weighted by filesize, of programs that are byte-for-byte reproducible across machines. (The assumption being that to take the same measurement in a dynamically-linked context would result in a number less than 100% due to machines having different copies of some libs.) In other contexts, it might simply be a proportion of whole packages, for example Arch Linux' core package set is 96.6% reproducible[1] (=256/(256+9)).
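As a sketch of that filesize-weighted measure (the directory names and layout here are hypothetical), you could walk two build trees produced from the same sources on different machines and weight each byte-for-byte match by its size:

    import hashlib
    from pathlib import Path

    def digest(p: Path) -> str:
        return hashlib.sha256(p.read_bytes()).hexdigest()

    def weighted_reproducibility(a: Path, b: Path) -> float:
        # Fraction of bytes (not files) whose rebuilt copy matches exactly.
        total = matched = 0
        for fa in a.rglob("*"):
            if not fa.is_file():
                continue
            fb = b / fa.relative_to(a)
            size = fa.stat().st_size
            total += size
            if fb.is_file() and digest(fa) == digest(fb):
                matched += size
        return matched / total if total else 1.0

    # Hypothetical trees built on two different machines from identical sources.
    print(f"{weighted_reproducibility(Path('build-a'), Path('build-b')):.1%}")

Counting whole packages instead, as in the Arch figure, is the same idea with each package weighted equally rather than by size.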
The Linux kernel's lack of a stable ABI (specifically [2]; many userspace APIs are stabilised) doesn't mean individual revisions can't be built reproducibly.
I don't think these are mutually incompatible. A reproducible build means the same sources produce the same results when built twice; the kernel's lack of stability guarantees refers to its evolution over time.