Shit performance is what happens when every response to optimizations or overhead is immediately answered with "premature optimization is the root of all evil."
Or the always fun "profile it!" or "the runtime will optimize it" when discussing new language features and systems.
So often performance isn't just ignored, it's actively preached against. Don't question how that new runtime feature performs today or even dare to ask. No no no, go all in on whatever and hope the JIT fairy is real and fixes it. Even though it never is and never does.
There's a place for all the current tech, of course. Developer productivity can be more important at times. But the tradeoffs should be far better known than they are, and rough optimization guides far more common.
I think the issue isn't even individual developers, it's indeed the runtime itself. Anything you build on top of it is laggy.
Take my simple example of reading a file, processing it in memory, writing output. A process that should be instant in almost any case.
A common implementation of such a process in the front-end world is CSS compilation, where an SCSS file (which is 90% CSS) is compiled into normal CSS output. The computation is pretty simple: it's all in-memory, with some reshuffling of values.
In terms of what is actually happening (if we take the shortest path to solve the problem), this process should be instant. Not only that, it could probably handle a thousand such files per second.
Instead, just a handful of files takes multiple seconds. Possibly a thousand times slower than the shortest path. Because that process is a node package with dependencies 17 layers deep running an interpreted language. Worse, the main package requires a Ruby runtime (no longer true for this example, but it was), which then loads a gem and then finally is ready to discover alien life, or...do simple string manipulation.
To appreciate the absurdity of this level of waste, I'd compare it to bringing the full force of the US army in order to kill a mosquito.
It's in end user apps too, and spreading. Desktop apps like Spotify, parts of Photoshop, parts of Office365...all rewritten in Electron, React, etc.
I can understand the perspective of the lonesome developer needing productivity. What I cannot understand is that the core layers are so poor. It means that millions of developers are building millions of apps on this poor foundation. It's a planetary level of waste.
Hmm, I recently built a site on Zola, and rebuilding the whole blog (including a theme with around 10 files of Sass) compiles in a few dozen milliseconds, and in around 1 second on a 15 year old Core 2 Duo. But then again this is compiled Rust calling into libsass, which (despite Rust's dependency auditing nightmare) compiles to low-overhead executables. And apparently libsass is now deprecated in favor of Dart Sass, which relies on JS or the Dart VM.
In my experience, one of the most common causes of slowness is IO when there should be none. I’ve managed to speed up some computations at my company by over 1000x by batching IO and keeping the main computational pathways IO-free.
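To illustrate the batching idea (a minimal sketch with a hypothetical in-memory "store" standing in for a disk or database; the names and round-trip counter are illustrative, not from the original comment):

```java
import java.util.*;

public class BatchedIoSketch {
    // Hypothetical store: every call to it costs one round trip
    // (think disk seek, network hop, or database query).
    static int roundTrips = 0;
    static final Map<Integer, Integer> store = new HashMap<>();
    static { for (int i = 0; i < 100; i++) store.put(i, i * i); }

    static int fetchOne(int key) { roundTrips++; return store.get(key); }

    static Map<Integer, Integer> fetchBatch(Collection<Integer> keys) {
        roundTrips++; // one round trip covers the whole batch
        Map<Integer, Integer> out = new HashMap<>();
        for (int k : keys) out.put(k, store.get(k));
        return out;
    }

    public static void main(String[] args) {
        List<Integer> keys = new ArrayList<>();
        for (int i = 0; i < 100; i++) keys.add(i);

        // Naive: IO buried inside the computational loop -> 100 round trips.
        roundTrips = 0;
        long naiveSum = 0;
        for (int k : keys) naiveSum += fetchOne(k);
        int naiveTrips = roundTrips;

        // Batched: one IO call up front, then a pure in-memory loop.
        roundTrips = 0;
        Map<Integer, Integer> cache = fetchBatch(keys);
        long batchedSum = 0;
        for (int k : keys) batchedSum += cache.get(k);
        int batchedTrips = roundTrips;

        System.out.println(naiveSum + " " + batchedSum + " "
                + naiveTrips + " " + batchedTrips);
    }
}
```

Both versions compute the same result, but the batched one does 1 round trip instead of 100; when each round trip is milliseconds rather than nanoseconds, that ratio is where 1000x-style speedups come from.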
The Java and Python runtimes, which have much better test coverage and higher correctness standards than most enterprise applications, shipped a broken sort method for years because it was a few percent faster. Never mind that for some inputs the returned value wouldn't actually be sorted.
As an industry we're not qualified to even start caring about performance when our record on correctness is so abysmal. If you have a bug then your worst-case runtime is infinity, and so far almost all nontrivial programs have bugs.
Wouldn't "profile it!" be the exact opposite of ignoring performance wins? It tells you which optimizations will noticeably improve your performance and which are theoretical gains that make no difference to realistic workloads.
It's a dismissive answer. It'd be like someone asking "why does 0.2f + 0.1f print 0.30000000001?" and getting back the answer "use a debugger!" It's not strictly wrong; the debugger would provide you with the data on what's happening. But it doesn't actually answer the question or provide commentary on why.
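For what it's worth, the actual answer to that floating-point question fits in a few lines (shown here with doubles, where the effect is easiest to reproduce; the exact digits printed for the float version depend on the language's formatting rules):

```java
public class FloatDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no exact binary representation; each is stored
        // as the nearest representable double, and the two rounding errors
        // survive into the sum.
        System.out.println(0.1 + 0.2);        // 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3); // false
        // Comparing with a tolerance sidesteps the representation issue.
        System.out.println(Math.abs((0.1 + 0.2) - 0.3) < 1e-9); // true
    }
}
```

That's the difference between an answer and "use a debugger!": the tool shows you the bits, but only the explanation tells you it's inherent to binary floating point and not a bug in your code.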
Similarly, the "profile it!" answer is often used when the person answering doesn't actually know themselves, and is just shutting down the discussion without meaningfully contributing. And it doesn't provide any commentary on why something performs like it does or if the cost is reasonable.
Well, performance is rarely the most important thing nowadays. What's preached against is not performance, but a performance-first attitude.
I agree it would be nice to value performance a bit more, but not at all costs, and depending on the use case and context of the application not necessarily as the priority over security, maintainability, velocity, reliability, etc.
> What's preached against is not performance, but a performance-first attitude.
That's what's preached against in theory. But in practice any performance discussion is immediately met with that answer. The standing recommendation is to build it fully ignorant of all things performance, and then hope you can somehow profile and fix it later. But you probably can't, because your architecture and APIs are fundamentally wrong by then. Or your codebase is so pervasively infested with slow patterns that you can't reasonably dig out of them after the fact. Like, say, if you went all in on Java Streams and stopped using for loops entirely, which is something I've seen more than a few times. Or another example would be if you actually listen to all the lint warnings yelling at you to use List&lt;T&gt; everywhere instead of the concrete type. That pattern doesn't meaningfully improve flexibility, but it does cost you performance everywhere.
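The Streams-versus-loop point can be sketched concretely. This is a toy comparison, not a benchmark (real measurements need JMH-style harnesses, and the gap varies by workload and JIT behavior); it just shows the two styles side by side:

```java
import java.util.Arrays;

public class StreamVsLoop {
    public static void main(String[] args) {
        int[] values = new int[1_000_000];
        Arrays.setAll(values, i -> i % 100);

        // Stream version: concise, but lambda dispatch, Spliterator
        // machinery, and (for Stream<Integer>) boxing add overhead
        // that the JIT does not always eliminate.
        long streamSum = Arrays.stream(values).asLongStream().sum();

        // Plain for loop: the "shortest path" - a tight counted loop
        // the JIT compiles to straightforward machine code.
        long loopSum = 0;
        for (int v : values) loopSum += v;

        System.out.println(streamSum + " " + loopSum);
    }
}
```

The results are identical; the argument in the comment above is about what happens when the stream style lands in every hot path of a codebase, where its fixed overhead is paid millions of times.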
> What's preached against is not performance, but a performance-first attitude.
No, I can tell you this same record has been stuck on repeat since at least the mid 1990's. People want to shut down conversations or assign homework because it gets them out of having to think. Not because they're stupid (though occasionally...) but because you're harshing their buzz, taking away from something that's fun to think about.