As far as I can see, it's not a problem in the CSPRNG itself, but in how it is used. More specifically, it seems like a lot of applications consume entropy faster than servers gather it under normal use. I'd say this is the result of not understanding how a CSPRNG works and how to use it safely. Adding more or better sources of entropy to your systems would solve this.
>I'd say this is the result of not understanding how CSPRNG works and how to use it safely.
My take from previous discussions is that once you seed a CSPRNG properly, you can take secure random numbers from it forever. So on a Linux server, once /dev/urandom has been properly seeded, you can read random numbers from it indefinitely with no issues.
So if what this research discovered is that "Linux's /dev/urandom entropy pool often runs empty on servers", that shouldn't really be much of an issue.
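For what it's worth, this is easy to sanity-check: on Linux, Python's os.urandom reads from the kernel CSPRNG (the same pool behind /dev/urandom), and once the system is up, repeated draws don't block no matter what the kernel's "entropy estimate" says. A minimal sketch:

```python
import os

# Once the kernel CSPRNG has been seeded at boot, os.urandom (backed by
# /dev/urandom / getrandom on Linux) never blocks; you can draw from it
# continuously regardless of the reported entropy count.
for _ in range(1000):
    buf = os.urandom(32)  # 256 bits per draw
    assert len(buf) == 32
```

The "pool runs empty" framing comes from the kernel's entropy accounting, not from any actual weakening of the output once the generator is seeded.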
I wonder if OpenBSD's arc4random_buf() is unaffected?
cc 'tptacek :)