No, it's worse than that - the answer is "yes", because virtual memory + overcommit means that most of the time the OS will happily let you allocate more memory than physical+swap, essentially gambling that you won't actually need all of it (and it works this way because apparently that's almost always a winning bet).
The OS doesn't have to gamble that you won't actually need all the memory you allocate; it could just be a gamble that another memory-hogging process exits before you need all of yours, or that not every process needs all of its memory at the same time.
Yeah. And the issue is that the actual failure happens sometime later, when the application tries to use that memory. So you've replaced an error that is relatively simple to handle with something that is impossible to handle reliably.
At that point the operating system has to admit it doesn't have physical memory to back the area you are trying to use, but it no longer has a simple way to signal this to the application (there is no option to return an error code anymore), so either everything slows down (as the OS hopes another process will give back a little memory to keep things going for a while) or one of the processes gets killed.
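To make the failure mode concrete, here's a rough sketch, assuming a Linux box with the default vm.overcommit_memory=0 heuristic (the sizes are illustrative, and it will deliberately eat memory if you actually run it). Each individual allocation typically passes the kernel's per-mapping sanity check, so the loop can commit far more than RAM+swap without a single error; the trouble only starts when the pages are written to:

    /* Sketch: with overcommit on, reserving address space is nearly free.
     * Each malloc() below is checked, but none of them is expected to fail. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        enum { CHUNK = 1 << 30, CHUNKS = 64 };   /* 64 x 1 GiB = 64 GiB committed */
        char *blocks[CHUNKS];

        for (int i = 0; i < CHUNKS; i++) {
            blocks[i] = malloc(CHUNK);
            if (blocks[i] == NULL) {             /* rarely reached with overcommit on */
                perror("malloc");
                return 1;
            }
        }
        printf("committed %d GiB without a single error\n", CHUNKS);

        /* Touching the memory is what forces the kernel to find real frames.
         * There is no error code to return from this loop: the machine either
         * starts to thrash or the OOM killer picks a victim. */
        for (int i = 0; i < CHUNKS; i++)
            memset(blocks[i], 1, CHUNK);

        return 0;
    }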
Interestingly, Linux's OOMKiller actually gives you (the sysadmin or system designer) more control over what happens when the system is low on memory than disabling overcommit.
In a system without overcommit, every process is taking memory out of the shared pool, until some random process is the unlucky one that can't allocate more. In the happy case, that unlucky process also has some data that it can let go of. But this is entirely random - you could have a bunch of gigabyte-sized application caches in half of your processes, but the NTP daemon might be the one that ends up failing because it can't allocate a few more bytes. Even worse, it could be the SSH server or bash failing to spawn a new shell, preventing any kind of intervention on the system.
With OOMKiller, you can at least define some priorities, and ensure some critical processes are never going to be killed or stalled.
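For example, each process has /proc/<pid>/oom_score_adj, ranging from -1000 (never kill this one) to +1000 (kill this one first). A minimal sketch of a process adjusting its own score - note that writing a negative value needs CAP_SYS_RESOURCE/root, and the 500 here is just an illustrative value:

    /* Sketch: tell the kernel this process is a fine OOM victim (e.g. a big,
     * rebuildable cache) by raising its oom_score_adj. */
    #include <stdio.h>

    static int set_oom_score_adj(int adj) {
        FILE *f = fopen("/proc/self/oom_score_adj", "w");
        if (f == NULL)
            return -1;
        int rc = (fprintf(f, "%d\n", adj) < 0) ? -1 : 0;
        if (fclose(f) != 0)
            rc = -1;
        return rc;
    }

    int main(void) {
        if (set_oom_score_adj(500) != 0)   /* 500: prefer killing us over others */
            perror("oom_score_adj");
        /* ... rest of the cache process ... */
        return 0;
    }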
With overcommit disabled, the program will fail in a more predictable way. Moreover, if it allocates the memory successfully, nothing bad can happen to it later. So you have the option to allocate the memory right at the startup of your program and be sure it is not going to fail later.
With overcommit enabled you technically have more memory to work with. You could say that if the program has to fail anyway, then it might be better if it fails later at a higher memory usage.
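A sketch of that allocate-at-startup approach: with overcommit disabled, a successful malloc() is already backed by commit accounting, and even with overcommit enabled you can get most of the same predictability by faulting the pages in (and optionally mlock()ing them) right away, so that any failure shows up at startup instead of at some random later point. (The OOM killer can of course still pick the process later for reasons outside its control; the pool size below is just illustrative.)

    /* Sketch of "fail at startup or not at all". */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/mman.h>

    #define POOL_SIZE ((size_t)256 << 20)   /* 256 MiB working pool */

    int main(void) {
        char *pool = malloc(POOL_SIZE);
        if (pool == NULL) {
            fprintf(stderr, "not enough memory, refusing to start\n");
            return 1;
        }

        /* Touch every page now so the kernel has to commit real memory;
         * mlock() additionally pins it (subject to RLIMIT_MEMLOCK). */
        memset(pool, 0, POOL_SIZE);
        if (mlock(pool, POOL_SIZE) != 0)
            perror("mlock (continuing without pinning)");

        /* ... run the whole program out of `pool` ... */

        munlock(pool, POOL_SIZE);
        free(pool);
        return 0;
    }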
There are userspace OOM killers available as well; afaik Facebook developed and uses one (oomd). They can take much more fine-grained selection criteria into account before killing a process.
And we additionally end up in a feedback loop where coders don't check the return value from malloc(!), since why bother when it never seems to fail. So we don't build up enough resilience there either.
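Which is a shame, because the check is cheap, and malloc() can still return NULL even on an overcommitting system (address-space exhaustion on 32-bit, RLIMIT_AS, cgroup limits, or the heuristic refusing one absurdly large request). The boring wrapper pattern, with a hypothetical xmalloc helper:

    /* Sketch: check malloc() and fail loudly instead of segfaulting later. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static void *xmalloc(size_t n) {
        void *p = malloc(n);
        if (p == NULL) {
            fprintf(stderr, "out of memory allocating %zu bytes\n", n);
            exit(EXIT_FAILURE);               /* or unwind / degrade gracefully */
        }
        return p;
    }

    int main(void) {
        char *buf = xmalloc(1024);
        strcpy(buf, "allocation checked");
        puts(buf);
        free(buf);
        return 0;
    }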