Yes it is an issue. Under high memory pressure, we start digging into swap, and at that point, the UI is starting to significantly chug.
Worse, this often happens when there's plenty of cache to evict. I can and have restored a nigh-unusable desktop to normal operation many times with a painfully entered `echo 3 > /proc/sys/vm/drop_caches` from a new TTY, instantly resolving the pressure and giving me time to find and terminate the presumptuous program that thinks it's entitled to 3/4 of system memory (usually some flavor of web browser or Electron bloatware).
Why's the kernel so jealously guarding its cache allocation and making the UX suck harder? Not a clue. Whatever performance penalty I take from nuking the caches is far, far less than the cost of letting memory fill up and dealing with the pathological behavior that follows.
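For anyone curious how much that command can actually free: the reclaimable counters are right there in `/proc/meminfo`. A rough sketch, assuming Linux; note that `Cached` also counts shmem/tmpfs pages, which `drop_caches` won't touch, and dirty pages need a `sync` before they can be dropped.

```python
# Estimate roughly how much `echo 3 > /proc/sys/vm/drop_caches` could
# reclaim, from the page-cache counters in /proc/meminfo (values in kB).

def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.split()[0])
    return info

m = meminfo()
# Page cache + buffers + reclaimable slab (dentries/inodes). Overestimates
# a bit, since Cached includes shmem pages that can't be dropped.
reclaimable_kb = m["Cached"] + m["Buffers"] + m.get("SReclaimable", 0)
print(f"roughly {reclaimable_kb // 1024} MB of cache could be dropped")
```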
To note again: just because a program says it's using N MB of RAM doesn't mean that all of that RAM is actually paged in. Every thread you spawn gets an 8+ MB stack reservation, but for the majority of programs most of it is never touched.
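Easy to see for yourself: reserve a big chunk of address space and watch resident memory barely move until pages are actually touched. A minimal sketch, assuming Linux's `/proc/self/status`; a thread's default stack is the same kind of lazy reservation.

```python
import mmap

def rss_kb():
    # Resident set size of this process, from /proc/self/status (Linux).
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

before = rss_kb()
region = mmap.mmap(-1, 256 * 1024 * 1024)  # reserve 256 MB, untouched
after_reserve = rss_kb()
region[:4096] = b"\x00" * 4096             # touch a single page
after_touch = rss_kb()
# The 256 MB reservation barely moves RSS; only touched pages count.
print(before, after_reserve, after_touch)
```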
> we start digging into swap, and at that point, the UI is starting to significantly chug.
Only if you're constantly swapping pages in and out. Just putting something into swap and never retrieving it won't cause issues.
I'd generally recommend disabling swap altogether though and just letting OOM take out misbehaving processes.
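If you do go swapless, it's easy to confirm: `/proc/swaps` lists whatever's active. A quick sketch, assuming Linux; with the list empty, there's nowhere to page out to, so the OOM killer is what fires under pressure.

```python
# List active swap devices by reading /proc/swaps (first line is a header).

def active_swap_devices():
    with open("/proc/swaps") as f:
        lines = f.read().splitlines()[1:]
    return [line.split()[0] for line in lines if line.strip()]

devices = active_swap_devices()
print("swap devices:", devices or "none (swapless)")
```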
None of this is to say that using less memory is 'bad', but when people say 'oh, that program is such a memory hog' I wonder if they might be measuring incorrectly, or not realizing what it's doing with that memory.
>Only if you're constantly swapping in and out of swap.
Which, in the experience I just gave, is what's happening. System memory at some high 90s percent utilization, swap usage creeping up, kswapd with a ton of CPU usage, and worst of all, UI chugging. If it wasn't 'actual' memory usage, why does dropping caches, instantly freeing up some amount of memory, restore responsiveness?
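This is also where the `MemFree` vs `MemAvailable` distinction matters: 'free' can look terrifyingly low while the kernel still counts plenty of evictable cache as available. A sketch assuming Linux (`MemAvailable` needs kernel 3.14+):

```python
# Compare the raw free counter with the kernel's own estimate of how
# much memory is actually available without swapping (/proc/meminfo).

def mem_kb(field):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

free_mb = mem_kb("MemFree") // 1024
avail_mb = mem_kb("MemAvailable") // 1024
print(f"free: {free_mb} MB, actually available: {avail_mb} MB")
```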
I've tried operating swapless before, but that just means OOM killer kicks in even when there's cache to evict. That seems like a priority inversion to me - of anything paged in, shouldn't cache have the absolute lowest priority, and be the first thing to go when memory's needed for other things?
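For what it's worth, the kernel does expose a knob for part of this tradeoff: `vm.vfs_cache_pressure` biases reclaim toward (values above 100) or away from (below 100) the dentry/inode caches, though not the page cache itself. A sketch, assuming Linux:

```python
# Read the current vfs_cache_pressure tunable; the default is 100.
# Raising it makes the kernel reclaim dentry/inode caches more eagerly.

with open("/proc/sys/vm/vfs_cache_pressure") as f:
    pressure = int(f.read())
print("vm.vfs_cache_pressure =", pressure)
```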