It's death by a thousand paper cuts. Lots of things that aren't really that slow in isolation, but in aggregate (or under pressure) they slow down the system and become impossible to measure.
Let's take web development. Since you mentioned payloads: today they're bigger, often carry redundant fields, and sometimes they aren't even paginated! That slows down database I/O, takes more cache space, slows down serialisation, slows down compression, and eats more memory and bandwidth...
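For the unpaginated case the fix is old and boring. A keyset-pagination sketch, with a hypothetical `posts` table (names are made up):

```sql
-- Select only the fields the page needs, 20 rows at a time,
-- instead of dumping the whole table into one payload:
SELECT id, title, created_at
FROM posts
WHERE created_at < ?   -- cursor: the last created_at from the previous page
ORDER BY created_at DESC
LIMIT 20;
```

Keyset pagination (the `WHERE created_at < ?` cursor) also stays fast on deep pages, where `OFFSET` gets progressively slower.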
And then there's the number of requests per page. Ten years ago you'd make one request that served all the data in one go; today each page calls a bunch of endpoints. Each endpoint potentially has to authenticate/authorise, hit the cache, hit the database, and each payload is probably wasteful too, as in the previous paragraph.
About authentication and authorisation: one product I worked on performed about 20 database queries per request just to check the user's permissions. We moved authentication to a JWT-like token and pushed the authorisation into each query itself (adding "where creator_id = ?" to the object queries). No more 20 database queries before the real work started.
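A sketch of that second part, with made-up table and column names. The creator_id comes from the already-verified token, so the ownership check rides along with the fetch:

```sql
-- Before: ~20 permission lookups per request, then the real query.
-- After: authorisation and fetch in one round trip.
SELECT id, title, body
FROM documents
WHERE id = ?
  AND creator_id = ?;  -- creator_id taken from the verified token
-- Zero rows back means "not found or not yours" -- no separate check needed.
```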
Fifteen years ago I would have done it "the optimised way" simply because it was the easier way: SQL views for the complex queries. With ORMs it gets a bit harder, and it takes time to convince a team that SQL views are not just a stupid relic of the past.
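For anyone who hasn't met them: a view is just a named query that callers can treat like a table. A toy example, with illustrative table names:

```sql
-- The messy join/aggregate lives in one place...
CREATE VIEW order_totals AS
SELECT o.id AS order_id,
       o.customer_id,
       SUM(li.quantity * li.unit_price) AS total
FROM orders o
JOIN line_items li ON li.order_id = o.id
GROUP BY o.id, o.customer_id;

-- ...and every caller gets a simple query:
SELECT * FROM order_totals WHERE customer_id = ?;
```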
Libraries are an issue that often goes unnoticed too. I mentioned serialisation above: it was a bottleneck in a Rails app I worked on. Some responses were taking 600ms or more just to serialise. We switched to fast_jsonapi and the same payloads went from 600ms to under 20ms. And that app already tailored its responses to each request; imagine if we'd been dumping entire records into the payload...
Another common one is also SQL-related: when I was a beginner dev, our on-premises product was very slow at one customer. Some things in the interface took upwards of 30 seconds, and none of it showed up in tests or at smaller customers. A veteran sat down next to me, explained query plans, and after improving the indexing and removing useless joins we brought that number down to milliseconds.
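What the veteran showed me boils down to asking the database for its plan before guessing (exact EXPLAIN syntax and output vary per database; names here are made up):

```sql
-- Ask the database how it intends to run the query:
EXPLAIN SELECT * FROM invoices WHERE customer_id = 42;
-- A sequential scan over millions of rows here is the 30-second smell.
-- An index turns that scan into a lookup:
CREATE INDEX idx_invoices_customer ON invoices (customer_id);
```

The reason it only bit the big customer is that a full scan is invisible at 10k rows and brutal at 10M.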
A few weeks ago I caught an intern putting a JavaScript .sort() inside a .filter() callback. Accidentally quadratic (actually more like O(n^4)). He defended it with a "benchmark" showing it wasn't a problem; a co-worker then ran anonymised production data through it and it choked immediately. Now imagine this happening across hundreds of libraries maintained by volunteers on GitHub: https://accidentallyquadratic.tumblr.com
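For the curious, a minimal sketch of the shape of that bug (not the intern's actual code): the sort runs once per element instead of once.

```javascript
// Accidentally quadratic-ish: sorting inside the filter callback
// re-sorts the whole array once per element -- O(n^2 log n) overall.
function topHalfSlow(scores) {
  return scores.filter((s) => {
    const sorted = [...scores].sort((a, b) => b - a); // runs n times!
    return s >= sorted[Math.floor(scores.length / 2)];
  });
}

// Fix: sort once, compute the cutoff once, then filter -- O(n log n).
function topHalfFast(scores) {
  const sorted = [...scores].sort((a, b) => b - a);
  const cutoff = sorted[Math.floor(scores.length / 2)];
  return scores.filter((s) => s >= cutoff);
}
```

A toy benchmark on 100 elements won't show the difference; it only explodes once n looks like production.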
All those things are very simple, and you certainly know all of them. They're the bread and butter of our profession, but honestly somewhere along the way it became difficult to measure and change those things. Why that happened is left as an exercise.
> All those things are very simple, and you certainly know all of them.
I wonder about that. Most of the people I graduated with didn't know about algorithmic complexity. Some had never touched a relational database, and most probably didn't know about views. I doubt most of them knew what serialization meant.
I wonder if it’s a matter of background. I never really had tutorials when starting out. I never had good documentation, even.
(Sorry in advance for the rant)
I also remember piecing together my first programs from other people's code. Whenever I needed an internet forum I'd build one. Actually, all my internet friends, even the ones who didn't go into programming, were building web forums and blogs from scratch!
Today people consider that a heresy. “How dare you not use Wordpress”.
My generation just didn't care; we built everything from scratch because it was a badge of honor to have something made by ourselves. We didn't care about money, but we ended up with skills that pay a lot of cash. People who started programming after the 2000s just didn't do that...
I think it is visible that I sorta resent the folks (both the younger, and the older who arrived late at the scene) constantly telling me I shouldn’t bother “re-inventing the wheel”. Well, guess what: programming is my passion, fuck the people telling me to use Unity to make my game, or Wordpress to do my blog.