This is like carrying around a pound of beef because you refuse to look up the address of a McDonald's 7 minutes away.
Set up quotas or implement some damn monitoring -- if you're not monitoring something as simple and critical as disk usage, what else are you not monitoring?
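Even a dumb cron'd check is better than nothing. A minimal sketch, assuming a cron job and your own alert hook (the 90% threshold and the print-as-alert are placeholders to swap for real alerting):

```python
import shutil

THRESHOLD = 0.90  # arbitrary: alert when a mount is more than 90% full

def check(mount: str = "/") -> None:
    # shutil.disk_usage returns (total, used, free) in bytes for the
    # filesystem containing the given path.
    usage = shutil.disk_usage(mount)
    used_fraction = usage.used / usage.total
    if used_fraction > THRESHOLD:
        # Placeholder alert: replace with email, PagerDuty, etc.
        print(f"ALERT: {mount} is {used_fraction:.0%} full")

if __name__ == "__main__":
    check("/")
```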
Monitoring doesn't prevent random things from spiking, and something like this makes it easier to recover.
Quotas are tricky to set up when multiple services share disk space, and a misjudged quota can make a service hit its limit and run out of space unnecessarily while the disk still has room.
Not all environments require a stringent SLA. Some of my servers aren't worth being woken up at night over if their disk is filling up fast.
It lets me remove that big file, which frees enough space for sudo to work again (I don't allow root SSH, and sudo fails on a full disk). Then I can clear up space on the system, bring it back up, and update logrotate or do whatever else is needed to prevent that case from happening again.
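For reference, a minimal sketch of that ballast-file trick in Python (the path /var/ballast and the 2 GiB size are assumptions; put it on whichever filesystem tends to fill up):

```python
import os

# Assumed path and size -- adjust for your filesystem and headroom needs.
BALLAST = "/var/ballast"
BALLAST_SIZE = 2 * 1024**3  # 2 GiB of reclaimable emergency headroom

def create_ballast() -> None:
    """Pre-allocate the emergency file once, at provision time."""
    fd = os.open(BALLAST, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        # posix_fallocate actually reserves blocks on disk. A sparse file
        # (e.g. via truncate/seek) would free nothing when deleted,
        # which would defeat the whole point.
        os.posix_fallocate(fd, 0, BALLAST_SIZE)
    finally:
        os.close(fd)

def release_ballast() -> None:
    """In an emergency, delete the ballast so sudo and log writes work again."""
    os.remove(BALLAST)

if __name__ == "__main__":
    create_ballast()
```

The key detail is that the file must have real blocks allocated; that's why the sketch uses os.posix_fallocate rather than just truncating to size.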
That sounds a lot more complicated (and time-consuming) than just having monitoring in place, noticing the disk is filling up, and fixing it before it leads to downtime.
Monitoring is in place, and it's usually caught in time. Downtime is acceptable in this environment; I don't think it's worth being woken up in the middle of the night when it can just be resolved in the morning.