The difference is between "given this particular hardware and OS setup, the driver will work correctly, guaranteed" vs. "on your (discontinued) Sony laptop with a strange hardware interface running the beta Slackware release, the driver will probably work".
It's also the difference between "using our tools implementing our API on our hardware" vs. "trying to figure out the right thing when every component has a slightly different take on the API spec, and the applications using the API make mistakes that we have to correct for with unreliable heuristics". Developers in the scientific computing world also care about correctness a lot more than game developers do, working as they are under unrealistic deadlines and with no commitment to long-term maintenance.
An OpenGL driver capable of running most commercial games is horrifically more complex than a CUDA driver. An OpenGL driver that merely works according to spec is useless in practice. To be practical for non-trivial use cases, an OpenGL driver has to take an attitude of "do what I mean, not what I say", much like web browsers and Windows' backwards compatibility. CUDA doesn't have those problems. NVidia never has to deal with developers complaining that their broken code worked fine on some other vendor's platform. They don't have to worry about programs relying on some esoteric decades-old feature that NVidia doesn't care about but had to implement anyway for standards compliance. And since CUDA operates in the professional segment of the market, they can take their time when it comes to compatibility with bleeding-edge versions of other OS components.
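To make the "do what I mean" point concrete: GPU drivers really do ship per-application workaround tables that detect a known-broken program and quietly change driver behavior for it (Mesa's drirc is a public example). Here's a minimal sketch of the idea; all executable names and workaround flags below are invented for illustration, not taken from any real driver.

```python
# Hypothetical sketch of a per-application "quirk table", the mechanism
# GL drivers use to paper over broken application code. Every name and
# flag here is made up for illustration.

QUIRKS = {
    # executable name -> workaround flags the driver silently enables
    "old_game.exe": {"ignore_redundant_state", "fake_extension_string"},
    "legacy_cad.exe": {"relax_depth_precision"},
}

def workarounds_for(executable: str) -> set[str]:
    """Return the set of driver workarounds to enable for an app.

    Unknown applications get strict, spec-conforming behavior
    (an empty workaround set).
    """
    return QUIRKS.get(executable, set())
```

A CUDA driver needs nothing like this: there is one vendor, one toolchain, and no pile of shipped binaries whose bugs the driver is obligated to reproduce.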