Ask HN: Have we screwed ourselves as software engineers?
409 points by tejinderss on May 4, 2022 | 415 comments
I cannot help but wonder where our software industry is heading. There are overly complicated solutions to simple problems, and a huge push to move to fancy stacks just for the sake of moving. Distributed systems? Kubernetes? Rust for CRUD apps? Blockchain, NoSQL, crypto, micro-frontends, and the list goes on and on. It's gone to the extreme, to the point where no one is exempt from these things anymore. A couple of years ago I thought: it's fine, as long as I am not involved in this complexity I can turn a blind eye towards it. But now this unnecessary complexity has seeped into my day job as well. Managers start talking about "microservices", "writing" Kubernetes operators in Go, and moving away from Python (because it's too "slow"); someone recently gave a talk in my company on how to make a 500-line Python script (which heavily involves inefficient handling of IO) go faster with Rust. Someone else says we need to move our poly-repos into a monorepo, because that's where the leaders of the industry are moving. Even recruiters have started asking questions like "have you looked at modern languages like Go?"

I cannot help but wonder whether we have screwed ourselves pretty badly, and whether there is any escape from it. The vocal minority pushes these overly complex solutions down everyone's throats, and management loves it, because it creates "work" for the sake of it, even though it doesn't add any real business value.

What are your thoughts on this? Will the industry move towards simple solutions after experiencing this churn, or are we doomed forever?



The way I look at it is: there are more tools in the toolbox than ever before, which makes our judgement (the thing they really pay us for) even more important. Kubernetes, for example, is a specific solution to a specific problem. The solution is complex, but so is the problem. If k8s gives you the right trade-offs for your situation, then it’s not busy work.

Of course, there are plenty of projects where judgement is thrown out the window in favor of adding a buzzword to everyone’s resume. I’ve heard it called “promotion-based architecture”, as in: you pick the technology most likely to get you promoted. (If that works, it says all sorts of not-great things about your organization.)

Regardless, I don’t think the availability of tools is the root problem. It’s an industry-wide lack of emphasis on identifying and understanding the problem first.


Promotion-based architecture is a self-fulfilling prophecy, at least in the BI/data world.

I see everybody around me moving to the cloud, without a really good explanation why. The only reasonable pattern I can see is that cloud experience on top of data skills gets paid 30% more. That made me consider the cloud a lot.

I was considering switching to the cloud just so I could put "experience with migration to cloud" on my CV.

For the next person commenting that it makes sense: it doesn't with a 200GB database and super predictable workload, growth, and usage.


>For next person commenting that it makes sense: it doesn't with 200gb database and super predictable workload, growth and usage.

Depends on the company. I've been working for marketing agencies for the last 15 years, and they're generally staffed by, at most, one IT person who is in charge of a third-party vendor relationship for managed IT services. Those IT resources (internal or vendor) don't specialize in data, often don't know how to deal with it well, predictable workloads or not, and often offer up solutions which are not appropriate (cost-wise or otherwise), whereas there are managed solutions from cloud vendors that are. BigQuery, for example, handles compute and storage for you and rolls it into one reasonable query price. No need to worry about managing anything there; just sit Data Studio (included) or any other BI tool (Tableau, etc.) on top and you're good to go.

I get your skepticism and welcome it, but you're being a little rough. We're cloud-native at my current company (I did a partial cloud migration at my last one, which was completed after I left) and it makes my life leading a Data Engineering and Data Science team MUCH easier, without the upfront hardware/software costs or long-term contracts, which were STAGGERING and left us with hardware that was much more difficult to maintain and upgrade, and that took up most of my job, as opposed to almost none of it today.

YMMV.


It partially depends, but for a company with one IT person who is not data-focused, there is a low probability that they will have that amount of data.

For anything above what we have, I think it weirdly depends on the country and salaries.

In the US, $100k per year for cloud is a no-brainer, as even one FTE would cost much more.

In non-western Europe, $100k is a deal-killer, as you can hire two senior DBAs and still have enough money left for quite a reasonable server.


With extensive experience in this vertical (marketing agencies and analytics), 200GB is a _partial_ day for one data source, let alone a table or total DB size.


But does the team have one IT/data person?

Edit: sorry, I see that you are the same person I was replying to. Then, to understand it better: who is the consumer of such a vast amount of data?


The company has one IT person and uses a third-party vendor for IT support. We have several data engineers and scientists, but we are not specialists in DBA work or cloud infrastructure; it's just one very small part of our work.


Interesting. That's correct; it's a segment of companies that I didn't take into consideration. Thanks.


> 1 IT person who is in charge of a third-party vendor relationship that offers managed IT services

That is something completely different from migrating your in-house applications to the cloud by introducing Kubernetes. I think your example shows nicely that there is not one solution for everything. As an industry, we have to put way more emphasis on deciding tradeoffs by understanding requirements and risks.


> I see everybody around me moving to cloud, without really good explanation why.

People just buy into the "cloud" marketing. They don't have the ability to think and reason, and so don't understand that "cloud" just means "renting someone else's computer."

I built a complex in-house medical system. Quick, reliable, and liked by the users.

I was in the middle of adding a major new feature when all of the management in the IT department quit. The new people immediately decided that the whole thing had to be "in the cloud." I was removed from the project, and they hired three people full-time to rebuild it.

That was three years ago. The new system is still not online. The users are still using my old system, and the feature I was working on never got added, because my presence and input were not welcome, since I "don't understand the cloud." So I got moved to other projects.

People talk about "the cloud" with the same fervor and language as members of a cult. And you will be an outcast if you dare challenge their way of thinking.


Given what you explained, it seems to me that the cloud was an excuse given by the new overlords to just get you out of the way. Cloud is fashionable now, but any excuse would work for them. This looks like a power grab.


I don't think it's about power. Because of the way the organization is structured, I am isolated from the IT department.

Moving me out brings no advantages to either part of the org. In fact, it's probably a disadvantage to IT because it has to burn three people from its allocated headcount. Where previously, it had a bonus person that didn't come out of the IT budget.


It's always about power and politics when bone-headed decisions like that are made. You correctly identified that _you_ did not report to IT, therefore _you_ were a threat to IT's power. Maybe you could have been re-orged under IT, but they probably saw you as a troublemaker who wouldn't kowtow.


At the risk of sounding cultish: I would argue “the cloud” is a bit more than just renting someone else’s computer. Hosting companies existed long before the “cloud”.

The difference is the abstraction layer. I would define “the cloud” as a layer that abstracts away the physical infrastructure. (Which works until it doesn’t but that’s a longer comment.)

You can even apply “the cloud” to your very own fleet of computers with the right software.

Like I said in the original comment, it’s a specific solution to a specific problem. Whether it’s right for your system I have no idea.


> You can even apply “the cloud” to your very own fleet of computers with the right software.

If it's on-prem, it's not a cloud, it's a fog.

At least, that's what I've been calling some of my deployments.


> People just buy into the "cloud" marketing. They don't have the ability to think and reason, and so don't understand that "cloud" just means "renting someone else's computer."

It sounds like you're trying to dismiss and downplay solutions to problems you're oblivious to.

Even if you want to simplistically frame "the cloud" as "renting someone else's computer", keep in mind that:

* It's not one computer but as many as you'd like, and get them at the click of a button,

* These are computers which are managed 24/7 by a team of highly trained specialists,

* These computers can be located anywhere in the world and simultaneously in multiple regions,

* These computers are designed to be fault-tolerant and to handle (and survive) way more than can conceivably be thrown at them.

> People talk about "the cloud" with the same fervor and language as members of a cult.

I'm sorry to say but you sound like you're militantly opposed to a solution to problems you either don't understand or refuse to understand.

"The cloud", even under the simplistic and clueless belief that it's just "renting someone else's computer", solves whole classes of technical and business problems that the box under your desk cannot solve.


It makes sense when datacenter hosting isn't your core business. Once you're done with your DRP and BCP (disaster-recovery and business-continuity planning), it's highly unlikely that your server-under-the-desk (as it usually is) is worth the risk.

By the way, moving 'to the cloud' never means a specific thing because people have made words (intentionally?) vague to the point where you have to explicitly specify all the factors you take into account with your work in order to figure out which 'cloud' they had in mind.

Running a static workload doesn't require elasticity, but a 'cloud' isn't just elasticity. If you want "a program on a server with some storage that is always there" without having to deal with hardware, software, networking, storage, backup, maintenance, upgrades, service contracts etc. then an arbitrary virtual machine where someone else takes care of the rest makes total sense.


And it's easy to compare the cost of running your workload in the cloud to the cost of physical hardware. However, it's much more difficult to compare the indirect costs between the two, and that's where I think many people go sideways.


There's a finance angle behind cloud stuff too that's irresistible for the bean counters: cloud spending is an operating expense (OpEx), while on-prem is a capital expense (CapEx). Unfortunately, these folks are heavily incentivized to favor OpEx.

I'm not a bean counter; all I know is those guys at my last job would rattle off about it like zombies. IIRC it's a tax thing.


OpEx is fully deductible in the year the expense occurs, whereas CapEx requires amortized depreciation over the life of the purchased item, which makes taxes harder to calculate.
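To make that concrete, here is a toy sketch with entirely made-up figures (a $120k one-time server purchase versus a $120k operating expense, assuming simple straight-line depreciation over five years):

```python
# Illustrative only: hypothetical figures and a simplified straight-line
# depreciation schedule; real tax rules vary by jurisdiction.

capex = 120_000              # one-time hardware purchase
useful_life_years = 5        # straight-line depreciation period

# CapEx: only the depreciation slice is deductible in each tax year
annual_depreciation = capex / useful_life_years

# OpEx: the whole expense is deductible in the year it occurs
annual_opex = 120_000

print(f"CapEx deduction per year: ${annual_depreciation:,.0f}")  # $24,000
print(f"OpEx deduction per year:  ${annual_opex:,.0f}")          # $120,000
```

Same money spent, but the cloud bill shows up as a full deduction immediately, while the purchased hardware's deduction is spread over its useful life.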


Should we call it Amortisation as a Service (AaaS) then?

Is that really the only benefit?


I mean, I'm not a bean counter. I'm just guessing that's why they like it.


It's not a tax thing, long-term leases or colocation is a liability for acquisitions. (You could also build a revenue-positive company, but that's the loser way out, apparently.)


Wrong. People have been leasing on-prem hardware for decades, since long before the cloud existed.

Depending on taxation rules you might want to buy or lease.


At this employer it seems most of the stuff in their datacenter was purchased.


> I see everybody around me moving to cloud, without really good explanation why.

In that case you certainly aren't looking very hard for that explanation.

The rationale of moving to "the cloud" is crisp and very well-defined, to the point where even AWS makes it their point to explain in very clear terms in their Architecting With AWS courses.

Basically it's all about being able to scale without bothering with procurement, not having to manage everything down to power bills and rentals, turning big capex into small opex, and being able to deploy globally and reliably with a click of a button.

That, however, comes at a steep cost. The more serverless you get, the higher the price tag.

Also, beyond a certain scale you're better off going back to managing your own infrastructure.

> Only reasonable thing I can see as an pattern is that cloud experience on top of data things gets paid 30% more. It made me consider cloud a lot.

People with cloud experience are paid more because they can deliver more value. A random guy with, say, experience in EC2 and CloudFormation and CloudWatch, can single-handedly put together a robust fault-tolerant webservice that's deployed globally and automatically scales to meet any demand fluctuation.


No one has given you a good explanation for why to move to the cloud?

Let me fix that for you! :)

I work for a Fortune 200 company who has their own data centers and has for decades. We just completed the construction of two new data centers about four years ago at a cost of $110 million dollars. Those new data centers are now at 70% of their capacity for power requirements. You should take a look at the specs on Intel's latest server-class processors. It's insane! We're talking half horse power and higher for each processor - and you want blades full of these things and then a rack of those blades and a row of those racks! Data centers now consume more power than industrial manufacturing! Most maddening of all - most of that power is going to go to heat, and guess what? You need giant chiller units to cool the place down. The power costs alone are insane.

You also lose agility. Need a new server? Go to your favorite cloud console and provision it. Better yet, use a serverless architecture and don't even worry about servers! Your own data center? Right now there's a 4-6 month wait time for new servers and storage equipment, and an 8-12 month wait time for networking equipment. Not exactly agile, is it?

You already know you need to staff to manage and patch your servers but you're also going to need staff to procure and manage your software licenses - and those licenses aren't cheap! Don't forget the ongoing tail - you pay for the license and then you get to pay 20% per annum forever after for that license. Those licenses also restrict your agility - you're pressured to use the software that's licensed in order to get better value. Try working with a procurement-based architecture!

You think you're going to avoid the software licensing hell by using open source? Good luck with that! That means you are responsible for the packaging, distribution, and support of whatever it is you're utilizing. That's more staff you need. They will inevitably miss patching some critical vulnerability that's going to land you in the news!

I can go on, and on, and on. But here's one thing I can say about applications that we've moved to the cloud: they cost 30% as much to host in the cloud as they do on-premise, and that's not even accounting for the entirety of all the costs I enumerated above! We have ultimate flexibility and agility, and we can quickly utilize managed open source platforms to solve business problems. On-prem? You lose all of that.

Faster, cheaper, better - that's why people are moving to the cloud.


> Go to your favorite cloud console and provision it. Better yet, use a serverless architecture and don't even worry about servers! Your own data center? Right now there's a 4-6 month wait time for new servers and storage equipment, and an 8-12 month wait time for networking equipment. Not exactly agile, is it?

Sure, that all works, as long as you have infinite money. My last job was at a company that was heavily invested into a serverless architecture, using the full suite of AWS tooling. They did an audit of one of their codebases and they found that generating a particular piece of data cost $20, almost entirely in fees to AWS. The company's roadmap involved scaling up the generation of this data, so they embarked on a huge process of optimizing and refactoring, to try to get the cost down. This effort tied up probably 40-45 engineers for at least six months. Probably cost them roughly 3 million dollars in salary alone, let alone opportunity cost in terms of other projects not delivered. No one amongst the management ever seriously considered moving to an on-prem solution, even though it was blatantly clear that the only value the company's massive usage of AWS Lambda, SQS, RDS and other managed services was delivering was additional profit margin for Amazon's balance sheet, and resume line items for the engineers who now got to check off the "used Terraform" box when applying to their next job.


You must have missed the part where I said: "But here's one thing I can say about applications that we've moved to the cloud: they cost 30% as much to host in the cloud as they do on-premise, and that's not even accounting for the entirety of all the costs I enumerated above!"

Every app I've migrated to the cloud so far has resulted in a 70% cost reduction, but I'm also not trying to take my entire application portfolio to the cloud. To wit, we're not planning on abandoning our data centers anytime soon and that's not just because we can't move our applications to the cloud fast enough. At this point in time we recognize there are some applications best kept on-premise - but they're a minority, maybe 20% tops of my application portfolio. That means 80%+ of my application portfolio can be moved to the cloud, makes sense to move to the cloud, and I can save 70% on costs by doing so. That's a fantastic deal!


Have you done an audit in the other direction? How many "cloud native" apps do you have, which would realize similar cost reductions from moving to on-prem? The point I was trying to make is that many companies, especially newer startups, don't even discuss on-prem as an option. They build everything cloud-native from day 1, which then leads to them painting themselves into a corner like I described above. They're having to totally redesign and rearchitect their application, not because their business requirements dictate it, but to conform with AWS' limitations.


It's May 6, 2022. You're launching your startup next month. Are you going to host on-prem? What, with server equipment backlogs now 4-6 months, and that's if you're a large enterprise customer? Network equipment backlogs are now in the 8-12 month range. Not to mention the cost of needing conditioned power, cooling, staff, space, etc. You're going to utilize the cloud. It'd be insane to do otherwise.

You do raise a fair point - what about the applications that are more economical to host on-prem? In my own portfolio I estimate 20% or so of my applications make more sense to host on-prem. What to do about those?

The most practical option is to mandate every application you build in the cloud have a plan for how to run on-prem. Note those that would be very difficult to run on-prem - those are your key cloud dependencies. For example, in my own portfolio I have a couple of applications depending on Lex and therefore would be very difficult to host on-prem.

Pay special attention to your so-called "crown jewels" - your apps that distinguish you from your competitors. If those were dependent on a key technology such as Lex and would be difficult to migrate on-prem then you need to identify other cloud providers you could migrate to. In this particular example I know Microsoft's and Google's clouds both offer similar services.

If you design and build your applications with no thought of the future then you're already in a bad position.


> It's May 6, 2022. You're launching your startup next month. Are you going to host on-prem? What, with server equipment backlogs now 4-6 months, and that's if you're a large enterprise customer?

On-prem or cloud, why would I be waiting until 1 month before launch to provision anything?

You're pretending like on-prem is full of all these physical limitations, but Amazon, Microsoft, Google, Oracle, et al. can just magically make hardware appear out of nowhere. It's the same hardware, subject to the same supply limitations, and I'm going to be paying either way. Either I pay more per hour for my cloud instances, or I pay a bunch up front to provision my on-prem servers.

Cloud makes sense if the load on your service is intermittent or unpredictable enough that you're okay paying AWS's premiums in order to have the ability to scale on demand. But if the load on the services is knowable, then hosting that service on the cloud is paying a 25% markup for no reason whatsoever. AWS is by far the most profitable portion of Amazon's business units. Those profits are coming from somewhere. They're coming from your business.
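That load-shape argument can be made concrete with a toy model (every number below is hypothetical, not a real price sheet): pay-per-use pricing wins when utilization is low, and flips once the load is steady.

```python
# Toy model with made-up prices: 100 cores of compute, either used
# intermittently (2 hours/day) or running continuously all month.

CORES = 100
RATE = 0.05                  # hypothetical $ per core-hour, on demand
ONPREM_MONTHLY = 1_500       # hypothetical amortized cost of owning the
                             # same capacity 24/7 (hardware, power, space)

def cloud_monthly(busy_hours_per_month):
    """Pay-per-use cloud bill for the given utilization."""
    return CORES * RATE * busy_hours_per_month

bursty = cloud_monthly(2 * 30)    # 2 h/day  -> $300:   cloud wins
steady = cloud_monthly(730)       # always on -> $3,650: on-prem wins
print(bursty, ONPREM_MONTHLY, steady)
```

With these numbers the bursty workload costs a fifth of owning the hardware, while the steady one costs more than double it, which is the premium the parent comment describes.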


Open source clearly does not mean packaging your own software. This casts a shadow of doubt on the rest of your valid points.


It does if you're not paying for it. There are plenty of companies that will gladly handle the packaging, assure all the interdependencies work well together, and even support the effort. They also do release management, ensuring that everything is on the most recent version that all works together. That has a cost. You can either pay someone to handle that work for you (smart - you and many other companies are sharing that cost) or you can do that work yourself (not so smart - now you shoulder all the costs).

Just as an example, consider Hadoop and its ecosystem. Another example is Elasticsearch. There's work involved in making these platforms production-ready and keeping them and their dependencies up-to-date and ensuring all your applications in your portfolio are using the same set of software. Most organizations are not prepared to take that on. So they turn to a vendor who will do it for them for much cheaper than they could do it themselves.

That's why the saying is "free as in freedom, not free as in beer" because open source software is not free as in beer. At least not when factoring in the total costs.


> Open source clearly does not mean packaging your own software.

The packaging argument was made in the context of complying with software licenses. Responsible companies which perform due diligence on the software they run have to track the provenance of all software that ships as part of their dependency closure. If you want to ensure you're not vulnerable to lawsuits then you can't simply apt-get stuff from a PPA. You need to build it yourself, and track exactly what goes into that build.

Meanwhile, if you opt to run managed service from a cloud provider, you don't have to bother with that because that's not your problem (or liability) anymore.


But that should apply to all the readily available container images on Docker Hub as well, right? Conceptually, that is no different from some guy's PPA.


> But that should apply to all readily available container images on docker hub as well, right?

Yes, and it does indeed apply to all readily available container images.

In fact, it applies to any and all software packages put together by third-parties.

I mean, who in their right mind downloads random stuff from the internet and expects to just drop it in production software which you build your business upon?


It is your problem though, your customers are affected.

Do people ever successfully sue their cloud providers for downtime?


There are non-IT perspectives on this as well.

My company offers two deployment scenarios: host it yourself and cloud hosted (SaaS). Many of our customers choose the latter because their internal IT systems require more process, and they just want to get something up and running.


Just to add, at my last company it could take up to 6 months to get a VM provisioned on prem. We could provision what we needed in Azure on demand.


It's a bit dismissive to just focus on scaling as a cloud advantage. You also have lower maintenance, easier redundancy, lower chance of outages, easier backup and more such non-functionals. A 200GB DB costs about $50/month on AWS (other providers are less), that's not an interesting amount of money for most businesses.


That's way off. With 64GB RAM we pay around $50/month on Hetzner, yes. But from asking around, the other BU, with very similar size and usage to ours, is somehow paying around $2,600-5,000/month on Azure.


Are they using Oracle or MSSQL where you're paying insane software licensing costs?

With Postgres[1], you would need to be using an absolute beast of a server to be paying that much, even if you're paying the 2x premium to go month-to-month, and a low-powered machine could be around $50/month for a 200GB DB.

[1] https://azure.microsoft.com/en-us/pricing/details/postgresql...


Yeah, it's true that they are using MSSQL.

Based on the link you provided, I calculated that 40GB of memory would be $290 and 80GB $570 per month (in the UK).

So the price still seems quite steep, considering that the same specs on-site would be ~$2,000 one-off (an 8-month return on investment).

Then, for this server, I honestly don't believe it's easier to do maintenance in the cloud, as you need to do lots of initial configuration there (be it IAM, permissions, firewall, etc.) compared to having it just on the local network.
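For what it's worth, the payback arithmetic can be spelled out using just the figures quoted above (hardware price only; power, cooling, and admin time would stretch the real payback):

```python
# Payback period for the on-site hardware using the figures quoted above:
# ~$2,000 one-off vs. $290/month (40GB) or $570/month (80GB) on Azure.
# Hardware price only; running costs are deliberately ignored here.

ONPREM_ONE_OFF = 2_000

for label, monthly in (("40GB tier", 290), ("80GB tier", 570)):
    months = ONPREM_ONE_OFF / monthly
    print(f"{label}: hardware pays for itself in {months:.1f} months")
```

On the hardware price alone the box pays for itself in well under a year on either tier, which is the heart of the "quite steep" complaint.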


> With 64GB RAM we pay around $50/month on Hetzner, yes.

A single box on Hetzner is by far not comparable to, say, a managed service like AWS RDS. It's not even apples to oranges; it's more like apples to freshly-squeezed orange juice served by a butler. Think about it: do you get any form of fault tolerance with your single box?


That's why I am saying that $50/month is way off. The price for that would be much higher.


> That's why I am saying that $50/month is way off. The price for that would be much higher

Ah yes I agree, sorry for not being clearer. I wanted to add on to what you said.

Even though I'm a big fan of Hetzner and a happy customer for years, it's important to know what we're buying and be mindful of what we're not having as a tradeoff.


There are 10, probably more, cloud consultants per developer. Hard to argue against that.

There is nothing to say against a cloud version of your office suite, but otherwise it is quite underwhelming: short-lived applications with a 90% API, because the developers want to keep the door open to imprison the users once the application has captured a critical mass of them.

I host on AWS because I don't pay the fairly high invoices. Less domain and TLS maintenance for me. Personally I rent a server and host my personal stuff there including software repositories.

edit: Guess that technically would also qualify as cloud, but I think a differentiation is necessary. Cloud mostly means product x or y instead of just another hosted server.


Hired onto a project as a specialist. The funders had also hired a project manager. The project manager wanted to use technology X (which has long since faded) instead of the well-entrenched and dev-hireable technology Y, because technology X had not yet entered the trough of disillusionment, and anyone who could honestly say they had managed a technology X project could 10x their hourly. The money was ignorant of this level of detail. The PM won, as they do. The whole thing was a money pyre.


> Only reasonable thing I can see as an pattern is that cloud experience on top of data things gets paid 30% more.

You say that like a pay rise of 30% is not a good enough reason all by itself for many people.

> For next person commenting that it makes sense: it doesn't with 200gb database and super predictable workload, growth and usage.

Perhaps not for technical or business financial reasons, but those are not the only possible reasons someone might do something. As mentioned before it can make a lot of sense to migrate to cloud if it means you can get a 30% pay rise.


Hell, I'm working with ~200TB and it still doesn't make sense if you work out the costs.

As far as I can tell it only makes sense if you have a ton of data, and only ever use a fraction of it at a time, for one-off jobs, infrequently.


I call it Resume Driven Development


I worked with someone who did this. I'm glad he has moved on. He would not do basic tasks correctly and had the attention span of a toddler. He was a DBA and couldn't Google simple problems. When I asked him to do something, he would say "give me 20 minutes." Two weeks later he would say "IT CAN'T BE DONE." Then I would do it myself in a few minutes after Googling the error.

Instead of applying knowledge, he would bring up all these buzzwords to meetings and not really understand what he was talking about. I get angry just thinking about it.


Hmm, incompetent folks exist and need to be let go. However, I think it is a slightly different category than RDD.


> I worked with someone who did this.

Nothing in your comment points to resume-driven development. You even failed to mention any project or design decision. You just decided to get angry at a coworker because of something.


Yes, this has been my observation as well. I also see it happening concurrently with what I call the "shiny object" problem: developers have their eye on a shiny new library, framework, language, etc., and have a deep-seated desire to use it, even at the expense of it being an inappropriate choice for the company.

I believe these two phenomena are producing a positive feedback loop in the industry. Selected technologies address one challenge but introduce complexity. The complexity becomes difficult to manage, so other technologies are incorporated to manage the complexity. In the midst of all this, core technologies are replaced, swapped out, or transitioned away from at the whims of the dev(s) - for example, switching from one JS rendering library to another while preserving the old legacy code. The complexity footprint keeps growing, and it doesn't stop, because the engineers themselves aren't entirely committed to the project. They can incorporate the new technology into the project, pad their resume, and bail to a new employer if things grow out of whack.


"promotion-based architecture" aka "CV-driven architecture" :)


No, it's called "résumé-driven design".


I’d argue the quality of tools is the problem. I am starting to view every new tool as just a new set of log messages I have to sort through to figure out why things aren’t working.

I spend more time trawling GitHub issues to find workarounds than I do actually using the tools.


> It’s an industry-wide lack of emphasis on identifying and understanding the problem first.

Or just no real incentives of the people involved to do so. As a dev, I don't get any real credit for biz outcomes.


the auld mortgage driven development strategy


This was very well put. +1


> “promotion-based architecture”

That's a great way to describe the phenomenon.


You are experiencing what I call "The Bisquick Problem". Bisquick is basically flour with some other stuff, like salt, premixed into it, and sold in a box in the USA. So instead of just buying flour and salt, you buy them together, which makes some things easier (like making pancakes), but it complicates literally everything else. You can't use it as flour, or as salt.

With software, the problem is even greater. You can use react, for example, but you will probably start with create-react-app, which adds a lot of things. Or you could start with Next.js, which adds a lot of things. You could use Java, but you will probably start with Spring Boot, or Dropwizard, which adds a LOT of things. Plus all of these starting points imply the use of new languages, configurations, and programs, in particular, builds.

In my view, all of these "Bisquicks" represent experiments-in-progress, with the ultimate goal of the systematic characterization of "software application", in general. In other words, they are systems of alchemy, pushing toward being a system of chemistry, which we don't have yet. So it is bound to be a confusing time, just as it was surely confusing to play with chemicals before Lavoisier systematized the practice.


I like the analogy, but it strikes me as missing the upside (and other potential causes of problems the author may be seeing).

Bisquick is a great solution if you have only one problem. I want more solutions like Bisquick: easy to map to the problem (pancakes -> Bisquick; not pancakes -> not Bisquick), hard to fuck up, low marginal cost above the inputs. It's great!

In the converse, we have lots of custom software solutions which are exorbitantly costly over the long term. Small companies without in-house expertise have to figure out how they will maintain software they depend on when their consultant (or the boss's nephew) leaves. Big companies with significant workloads and workforces can afford (and indeed, profit from: https://danluu.com/in-house/) high-complexity custom engineering, which is...fine...but even FAANG have trouble figuring out how to incentivize system maintenance and support.

At heart, I believe we haven't come to grips with the extreme disparity in capital costs vs unit costs of software, such that we don't really know how to pay for the unpredictable costs of bitrot and maintenance. As a result, every software development project is ultimately a question of "how are you going to pay for its ongoing support".


I think you're speaking to a different (but still legit) problem. The OP is asking about the nature of confusing complexity, as it might appear to a new programmer. The capital implications of making real, profitable software from interlocking systems of alchemy are indeed not a solved problem - but I would say that it's an interesting point of alignment between capitalists and beginner software programmers! Everyone wants lower maintenance costs, in terms of both time and brainpower. The $500k+ FAANG superstar programmer is a byproduct of the fact that alchemy is much harder to master than chemistry.


> Everyone wants lower maintenance costs, in terms of both time and brainpower. The $500k+ FAANG superstar programmer is a byproduct of the fact that alchemy is much harder to master than chemistry.

I think I've lost the analogy here a bit, could you help me? I thought "alchemy" were general purpose systems meant to get you off the ground.

I'm nominally one of these "$500k+ FAANG superstars" (though I'm not actually a superstar) and nearly all of our engineering time is spent wrangling home-grown abstractions built specifically for our business needs, which don't feel like "alchemy" in this analogy. Certainly a few teams are more involved with AWS services, linux, and dev-tooling but that constitutes maybe 5% of the company.

Would you still describe this other 95% as "alchemy"? If so, why?


I would say that you've paid the opportunity cost of learning the interlocking systems that compose into your application, and the processes required to update, test, and deploy it. This composition occurs across various boundaries, inside and outside the process, the network, and so on.

The FAANG-ist is one that can pay that opportunity cost quickly and completely, such that they are able to reason about the system with confidence, particularly when it comes time to reverse a feature or bug request into a set of commits. Or, more likely, a set of experiments and commits based on the outcome of those experiments. Given that there is no unified language of application design, doing this requires detailed, specific knowledge of every sub-system involved, the abstractions their authors used, and the ability to translate between them such that you can imagine the entire causal chain from input to output. This is hard, and it's why you make the big bucks. (Interestingly, I first wrote "bugs" instead of "bucks". :)


I think I understand your perspective now, thank you for such a good explanation! Also, my bugs are indeed big /crying


This is a very helpful view on things. It definitely seems like evolution is taking place. Many of the new solutions are a mixture of other solutions with a certain aspect being the most important.

NextJs, Remix, and RedwoodJs are each their own solution, adding what their creators feel is missing from React in terms of server-side functionality.

Eventually there will be a clearer picture of which tool to use for various tasks like internal tools, eCommerce, B2B or cross platform applications.


The problem usually isn't the next layer of abstraction. NextJs and Remix are just fine (I don't have experience with RedwoodJs). The issue is that developers tend to put "Here be dragons!" in their mental map of whatever is two levels down the abstraction chain from where they live.

When those layers are small additions, people end up not understanding (or even fearing) components of their stack that are quite close to the surface. It's hard to do quality software engineering when you can't reason about the foundation of what you're building.

But if you lived through the process of adding new layers or if you take the time to learn more layers of your stack, you'll be able to make the most of the tradeoffs inherent to adding the latest framework or library.


They say the only two ways in business to make money are bundling and unbundling. This is surprisingly applicable to software frameworks as well.


> Plus all of these starting points imply the use of new languages, configurations, and programs, in particular, builds.

Oh yeah, true. Some people keep saying a language is "just a tool" when trying to convince somebody to use it, but they forget, or omit for some reason, the fact that it's not "just a tool": it's a huge ecosystem that brings an enormous additional mental burden with it - build system(s), libraries popular within that specific ecosystem, language syntax, language quirks, project structure, its own conventions and so forth.

It's much simpler and more efficient (technically and labour market-wise) to write everything in a single language, as much as possible - unless you start going completely against the grain. Like, auxiliary scripts are usually written in CLI-centric languages like Bash, as opposed to API-centric ones like Python, because you get maximum convenience using CLI programs and composing them together.

But no, some people casually shoot themselves in the foot by jumping from language to language depending on the task because it's "just a tool". Ripgrep is just a tool. An entire programming language with its ecosystem is not.


As someone who grew up in a house that was never without a box of Bisquick in the cupboard, that phrase speaks to me.


I generally agree, but two things I'd like to point out.

If you're using Python for the web, you're already part of the complexity problem, at least from the perspective of someone deploying PHP 15 years ago. I use Python for web development, and I love it, but deploying webpages used to be: copy an Apache config and ftp/scp your files to some location. Now we need app servers and reverse proxies, and static files are served differently, and even though I've gotten used to it over the last decade, that doesn't mean it's good.

The other thing is that monorepos are pushing back against complexity for the sake of complexity. Why create a new repo when a directory will work just fine? I think a ton of people got repo-crazy because their corporate Jenkins server only allowed one project per repo, but it is trivial to check the latest git hash for a directory and base your deployment on that. ...I have a project I inherited that has 14 repos for a webpage with maybe 5 forms on it. I've mostly handed it off at this point, but every time I have to look at it I end up preaching about monorepos for weeks.
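The "check the latest git hash for a directory" trick mentioned above is a single `git log` invocation. A runnable sketch, using a throwaway toy repo (the `frontend`/`backend` layout is illustrative):

```shell
#!/bin/sh
# Sketch: per-directory change detection in a monorepo, so CI can deploy
# a component only when its directory's last-touching commit changes.
set -eu

repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email ci@example.com
git config user.name ci

# Toy history: one commit touches frontend, a later one touches backend.
mkdir frontend backend
echo 'v1' > frontend/index.js
git add frontend && git commit -qm "frontend work"
echo 'v1' > backend/app.py
git add backend && git commit -qm "backend work"

# Last commit that touched each directory; redeploy only when this changes.
front_hash=$(git log -1 --format=%H -- frontend)
back_hash=$(git log -1 --format=%H -- backend)
echo "frontend last changed in $front_hash"
echo "backend  last changed in $back_hash"
```

A deployment script can store the hash it last deployed and compare it to the current `git log -1 --format=%H -- <dir>` to decide whether anything in that directory actually changed.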


> If you're using python for the web you're already part of the complexity problem, at least from the perspective of someone deploying php 15 years ago.

The problem with deploying PHP is that it immediately gives you something for very little effort, but the effort scales incredibly disproportionately once you outgrow your need for the bare functioning minimum.

I personally prefer the "modern" approach of dropping a single, statically compiled binary, which exposes a listening HTTP socket. In case of Go or Rust+Actix, this could sit directly on port 80, but as soon as you add HTTPS, ACME, virtual hosts, etc the story is basically the same regardless of language/framework choice, PHP included.


Oh I agree, I still get chills when I think about troubleshooting PHP.

I was really just making the point that the effort to deploy Python as a webapp these days would have been considered overly complex by the average developer 15 years ago. Just as some of the current stuff seems to the OP - so maybe not all seemingly complex stuff is bad.


The debugging story around PHP is better than other major scripting runtimes like Node. With PHP and kcachegrind you can really understand where you're spending CPU easily. The tools for Node work but just aren't there yet. Another nice combo is Java + Mission Control.


> The debugging story around PHP is better than other major scripting runtimes like Node. With PHP and kcachegrind you can really understand where you're spending CPU easily. The tools for Node work but just aren't there yet. Another nice combo is Java + Mission Control.

Fair enough, but I was thinking more along the lines of debugging logic. With python I can open up a local repl or jupyter, run the affected functions/methods, and from there I can get a nice traceback, modify code, or monkey patch something to figure it out. If I can't reproduce the bug locally, I can run a repl directly on the server to see how the code runs differently.

My frustration with PHP is just memories of making an edit, refreshing the page, and crossing my fingers. Then, once it was working, forgetting to save it anywhere else because there wasn't a repo to begin with. I know the tooling has gotten much better, and PHP developers actually use deployment pipelines these days, and that it was mostly caused by me and my co-workers not knowing any better. Still, that's just how I remember it. And I don't think I'm the only one.


Fair enough as well, but the internet seems full of people who’ve used php decades ago, and spread falsehoods based on that as fact. Your REPL complaint specifically is void these days - you can do exactly the same with a php interpreter, there’s kernels for Jupyter, tracebacks, repl locally and in any production environment (readline is baked in).

It’s incredibly frustrating to see people discuss the language without any actual knowledge of it. Modern php involves one of the most stable package managers of any language, an extremely fast runtime with a JIT compiler and opcode cache, asynchronous code and coroutines (well, native green threads at least, but proper coroutines via an extension), mature frameworks, stateful application servers, solid standards among frameworks, containerised stacks,… it’s a joy to work with, yet people complain about standard library function names and form handling in templates from 1999.


This is why you shouldn't run Python in your downloads directory:

    python3 -m http.server 8000
Concepts of "app" have changed and superstitious ritual has built up around the sacrament of deployment.


If you're running malicious code it doesn't matter which directory you're running it from.

And I hope no one is using http.server to deploy their Python webapps. The documentation even has a warning right at the top:

https://docs.python.org/3/library/http.server.html


When did you last use PHP? It's not the PHP of 15 years ago. It scales fine when properly written and deployed.


I didn't mean scalability in terms of performance, but scalability in terms of maintenance effort. It's easy to SCP some files in, but the moment you want atomic deployments, it gets unnecessarily interesting.


PHP is the only language I've set up production environments for, but I can't see any reason why it would be more complicated than other languages.

I have a PHP project that handles atomic deployments by pointing the web server at a symlink named "latest". New versions get deployed into their own folder in the "versions" folder, and then the "latest" symlink gets pointed to the new version. Super simple to set up, and as long as you delete old versions as you go (keeping, say 3 versions so you can roll back as necessary) that's pretty much all there is to it. This all gets triggered by a Gitlab CI workflow, so I can hit a button to deploy after merging changes.

Another PHP-focused way to handle this would be deploying into a new folder, then updating the nginx config to point to that folder and running `service nginx restart`.

You can also use one of the many other atomic deployment options that replace more than just the running code, at which point the only difference between deploying a PHP webapp and deploying a Java/Go/Rust webapp is what gets included in your new server/container when you build it.


That's my entire point: the increase in complexity is very non-linear. With Python, Go, or whatever else, you pay a slightly higher upfront price, and you get things like atomic deployments out of the box; that price is however almost immediately amortised by the de-facto requirement to set up HTTPS/ACME, etc.

With PHP it's easier to just get started, but you've already mentioned versions, symlinks, CI, etc - that's the non-linear increase in complexity, you have to add a lot more pieces to get good ROI. With Python or Go you can continue using SCP to deploy for as long as it suits your needs, because no code changes will be picked up until you restart the process. If you need rollbacks, e.g. Go doesn't need symlinks or versioned directories - the entire app is a single executable, so you can just keep copies of these. You pay for what you use, and the returns are more linear and immediate.

If you only have experience deploying PHP, I would sincerely recommend trying other languages/runtimes/frameworks, even if for no other reason than to learn from what the rest of the world is doing. For me, learning to deploy PHP correctly was also a horizon-broadening experience.


Atomic deployments with PHP are basically as simple as "git pull" in a temporary working directory and copying it to a release directory.


For your "personal blog", git pull can be used as deployment - don't forget to exclude access to the .git directory in .htaccess ...

but for "serious" applications / deployment routines this is never an option ... you have to use some kind of deployment & configuration mechanism.


Regardless of how you're deploying things, having unrelated projects in the same git repository might be simpler (maybe?) but certainly seems worse at the same time.


> Regardless of how you're deploying things, having unrelated projects in the same git repository might be simpler (maybe?) but certainly seems worse at the same time.

Sure, if they're actually unrelated, or being managed by separate teams then split it up. Though I think the default should be to have one, and split it when there is an actual reason to, especially if it's for the same project.

The example I gave wasn't an exaggeration, 14 repos for one project. It was originally built and managed by one person who had them all in the same directory and would open them all in his IDE at once and basically worked on it as a mono repo. After that person quit, when others needed to figure out which repo to clone to fix things it was a nightmare.

I know monorepos can be extreme like google, but I just mean one per team, or at least one per project. You shouldn't have to worry about versions of your libraries when there is only one project using those libraries.

Edit:

For example, a project I did the layout for has a python backend, with a spa frontend, and some ansible playbooks for deployment, integration tests, and a few libraries only used by this project.

Each of those 5 things has a top-level directory, and we deploy a test server for every branch, plus most of us run a local server. We never have to worry about versioning between our own projects, because if they're in the same branch, that's the version to use. If we split it into separate repos, then every time we added a new field to the API and needed to update the frontend and tests, we would have to manually specify which versions of all the repos go together to build a test server, or even to run a local dev server.


> Sure, if they're actually unrelated, or being managed by separate teams then split it up

What if they become managed by separate teams? Or two projects in separate repos become managed by the same team? What about a service that basically everything else in the company relies on (for Google, accounts and auth for example).

Better to just keep things in a monorepo IMO, even if they seem unrelated.


> What if they become managed by separate teams? Or two projects in separate repos become managed by the same team? What about a service that basically everything else in the company relies on (for Google, accounts and auth for example).
>
> Better to just keep things in a monorepo IMO, even if they seem unrelated.

If there are multiple teams committing to the same repo you need controls over who has permission to commit to which directories and maybe a policy for handling merge conflicts across teams. I'm not sure what the tooling is like around that, but I could see the benefits as long as someone very high up was on board and had enough of a technical mind to keep order.

As far as two repos becoming one or one repo becoming two, you can split and merge repos while keeping the commit history.
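The splitting direction can be done with git's bundled `subtree` command, which rewrites one directory's commits onto their own branch. A toy sketch (repo names and paths are made up; it builds a throwaway monorepo so it runs as-is):

```shell
#!/bin/sh
# Sketch: extracting one directory of a monorepo into its own repo,
# history included, using git subtree (ships with git).
set -eu

work=$(mktemp -d)
cd "$work"

# A toy monorepo with a library and an app.
git init -q monorepo
cd monorepo
git config user.email dev@example.com
git config user.name dev
mkdir libfoo app
echo 'lib code' > libfoo/foo.txt
git add . && git commit -qm "add libfoo"
echo 'app code' > app/main.txt
git add . && git commit -qm "add app"

# Rewrite libfoo's commits onto their own branch (prefix stripped).
git subtree split --prefix=libfoo -b libfoo-only

# Pull that branch into a brand-new repository.
git init -q ../libfoo-repo
cd ../libfoo-repo
git pull -q ../monorepo libfoo-only
```

The new repo ends up containing only `foo.txt` and only the commits that touched `libfoo`. Merging two repos back together is the reverse: add one repo as a remote of the other and merge with `--allow-unrelated-histories`.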

edit: Do you have experience working somewhere with a monorepo that stretched across multiple teams? if so, what was it like?


> As far as two repos becoming one or one repo becoming two, you can split and merge repos while keeping the commit history.

I don't have experience doing that but it sounds like a huge headache to me. Any tooling, deployment processes, links in documentation, cross-repo references etc would need to be updated right?

The company I was in before my current one used a mono-repo for the whole company, I thought it was pretty great! We had ~150 engineers, so not Google-scale but still many individual teams and services. Similar to the Google strategy, a pull request / code review would be against master, and then it would be merged directly to master. The tooling would automatically rebase your change on master before merging. Deployment would happen automatically or manually (the team could decide) and would be pinned to a git commit hash. If automatic, the CI tool would detect if there were any changes to your binary that required deployment (so a totally unrelated PR would not trigger a deploy). You could manually deploy an older commit hash if you wanted.


i'm wondering what engineering decisions drove the project to be split in 14 parts?

is it like micro-services thing or what?


Yes it was mostly micro services, with a separate frontend repo, and a separate repo for deploy scripts, some separate libraries that were shared between the microservices, there may have even been a separate repo for documentation, I don't remember exactly.

I think it really just came down to dividing the repos into chunks that the deployment scripts could use/trigger off of, instead of developing in a way that makes sense for developers and bending the deployment to fit. Since it was all in one directory on his computer, it was basically a monorepo from his perspective. Committing from the IDE just committed to whichever repo was changed, and since he was the only developer he never saw the downside. When I had to take it over, ghorg[0] really came in handy. It's a script to clone all repos from a user/organization on GitLab, GitHub, and others. Then once I opened up all the repos as one PyCharm project I was able to get some stuff done, but at that point I might as well have just had one repo with a separate directory for each.

[0]https://github.com/gabrie30/ghorg

EDIT: I also just remembered that gitlab is much better than github if you are going to go the route of multiple repos for one project. Gitlab lets you create namespaces to group your repos, so if they're all in the same namespace, you could have documentation to just tell people to clone all of them.


In the last year, there has been a concerted push by certain influential engineers to split our mono repo up. This was first done by splitting things into two repos that were supposedly independent. But they really weren't. Naturally, there was code and configuration that we wanted to be common between the two repos. So now, the solution to every one of these problems is to break off more code into its own repo. As the saying goes, as soon as you have two objects, you will soon want a third. It has become a complete nightmare to work with, and as far as I've seen so far has had zero tangible benefits.


Why does it seem worse? This is perhaps just each of our individual biases and values hiding behind a preference but I can't identify an objective reason why it's worse other than Jenkins or devtool of choice not handling it out of the box.

Don't get me wrong. CI/CD and other dev tools not handling monorepos well is a totally reasonable objective reason to not use monorepos. But it's also mostly about the tool, not the monorepo concept itself.


> If you're using python for the web you're already part of the complexity problem, atleast from the perspective of someone deploying php 15 years ago. [...] deploying webpages used to be copy an apache config and ftp/scp your files to some location.

There is a MASSIVE difference in the value proposition of a proper deployment from version control and using something like docker w/ docker-compose to facilitate running your project locally (which is not present in your PHP example) versus what the OP is talking about, which is the idea that you should run EVERYTHING on Kubernetes and write Rust for CRUD apps.


I mean I tend to agree now, but I don't know how I'll feel in 15 years, I certainly felt much different 15 years ago.


I have just finished three years of beating a department of 50 developers into breaking up a fifteen year old mono-repo. The rewards have been quite considerable.

Mono repos come with a particular challenge: if you have five projects {A,B,C,D,E} in the same mono-repo, you definitely do not want to be building B, C, D, and E every time someone commits code to project A! This is unimportant at small scales, but as the team grows, building and continuously deploying 'all the things' on every commit just doesn't work out.

So the first naive solution is say "we can enumerate all the things that need to be built for project A". This rapidly breaks down when someone figures out they can abstract a shared dependency for A,B and D into some other part of the mono-repo.

So now we enter build dependency tools: first Make, then some other flavour, then we jump straight to Bazel because someone read about it in a Google publication, then to some custom build scripting because Bazel didn't do this hyper-specific workflow thing someone wanted... In the end, maintaining a mono-repo build process becomes a hyper-specialised job function that is almost always kicked to the wayside.

In small companies, with fewer developers than you can count on your fingers, or at truly huge Googles that have their own VCS flavour, it can be shown to work well. But I have yet to hear a story of something in the middle succeeding with a mono-repo.


I work for a company with 1000+ engineers that runs a Bazel + Git + GitHub monorepo. As with all workflows it has its pain points, but I am quite fond of it. It only needs a team of 8 to maintain all the integrations and performance optimisations.

GitHub is taking the problems of monorepo users seriously (https://github.blog/2021-03-16-improving-large-monorepo-perf...) and has been responsive to the bugs and poor performance that monorepos typically cause and exercise.


Monorepos solve the problem of cross-project "externalities" (sarcasti-quotes) like cross-project testing and CI/CD commonalities.


If you believe that's all there is to those ideas, maybe you need to step away and think about them for a while. Sure, there's going to be some resume padding happening in larger orgs. But all those ideas solve real problems too.

I think you're just in a very negative space if you start with "Distributed systems" as something overly complicated. At some scale getting a bigger machine either doesn't make financial sense or is just not possible to implement efficiently. Some ideas are taken too far or implemented where not needed. But I'd rather recommend you to learn where each one of them started and why. Criticize for valid reasons, but don't become a curmudgeon.


>> If you believe that's all there is to those ideas, maybe you need to step away and think about them for a while. Sure, there's going to be some resume padding happening in larger orgs. But all those ideas solve real problems too.

They do solve real problems. The question is whether or not they solve the problem at hand, and if they create other issues in doing so.

I was re-decking a back yard bridge with a friend and he brought a framing hammer. I'd never used one before and I always had a finishing hammer and didn't know the difference. I learned a new tool and even ran out and bought my own for the project, which worked fantastically well. It's still in my toolbox and hasn't been used since. You just don't use that thing to hang pictures on the wall because it may well f-- up the wall a bit. Using the right tool for a job is way more important than using a particular tool for any other reason.


I’ve never liked this best-tool-for-the-job mental model with software engineering. A given project has multiple needs, and unlike more physical tools there is a very high marginal cost for each incremental tool you use. So there is a huge balance between many well suited tools or a few but more generalized, less fit tools. This isn’t to say there’s an obvious place where to strike the balance, but the “best tool for the job” metaphor undermines recognizing it, and I’ve found this balance to be at the core of good tool picking for a given project.


>> I’ve never liked this best-tool-for-the-job mental model with software engineering.

I don't either. As photographers say, the best camera is the one you have with you. Same thing, the best tool for the software job is the one you already have and everyone knows how to use. Why anyone would want to introduce these high-end fads to an organization without proper reasoning is beyond me.


That’s definitely part of it. Another part is just the various multipliers you get from having a small number of tools. Team mastery, more mobility of personnel (no/low ramp up times), unexpected opportunities for re-use, high quality onboarding due to narrow scope, etc.


Thank you for calling out BTftJ as a mental model. Recognizing the very idea of mental models is a huge step forward. Arguably, it could be used as a metric for how advanced an organization (or even society) has become.


I think the metaphor is more apt than you’re giving it credit for. You just don’t have enough experience with the trade-offs involved with “the best physical tool for the job”. :)


I also dislike the BTftJ (best tool for the job) cliche.

Best according to who?

Based on what priorities?

Relative to what other options?

Based on what kind of experience and expertise with the various options?

My takeaway: BTftJ must only be a starting point for dialog and discussion. Shortly thereafter, it is time to get real about each of our experiences and biases. Otherwise, BTftJ is only a thin veneer.


I agree with you. But don't add a new tool (even a more general one) unless it fills a gap or has other specific benefits.


I use a framing hammer to hang pictures on my wall, and I've never messed up my wall.


Well that's because you don't drive the nail in all the way to hang a picture. Would not recommend it for molding for example - any time you want to drive the nail flush without scuffing the material around it. My bridge had plenty of cross-hatch marks around the nails when done ;-)


> But all those ideas solve real problems too.

All of them, except for blockchain. That one can go die on the trash heap of history.


Well, it's a good tool for money laundering and purchasing drugs.


That's not fair, it makes a great Ponzi scheme, too.


Keeping a permanent log for one's illegal activities seems... suboptimal.


It's not.


How many "darknet markets" accept anything else these days? What's the alternative, cash in the mail? Venmo?


> That one can go die on the trash heap of history.

As long as there are Rust jobs or blockchain-related jobs out there, you're going to have to cope and wait a very, very long time for that 'die on the trash heap of history' to happen.


I'm not sure you realize just how quickly that can all go away (and does). Blockchain companies are a dime a dozen. They don't all last forever.


> I'm not sure you realize just how quickly that can all go away (and does).

So why hasn't it already died, years ago, as many incorrectly predicted?

> Blockchain companies are a dime a dozen. They don't all last forever.

And who said that the 'companies' did last forever? Why do you think I said as long as?

I'm just wondering if the whole thing is guaranteed to totally go away, 100%, and to be used by absolutely no one, since clearly someone also thinks so. That is my question.


I'll never understand the bitter hatred toward blockchain at hackernews. Are you guys just mad you knew about bitcoin when it first started but didn't get any? You can't possibly truly believe blockchain solves zero problems can you?


I know blockchain enthusiasts claim that it solves some problems, but in reality it fails to solve them because they are not fundamentally tech problems. You could think of it as a performance art documentary of a group of people learning to recreate the world's financial regulations in a bottle, but the price of admission is a trail of fraud victims. Furthermore, it is wasting the world's energy supplies, and eating up our attempts to move to renewable resources.


Blockchain itself solves trust problems. Decentralized database/ledger solves issues where trust matters.

As for bitcoin itself, which I assume the latter part of your comment is talking about:

It’s predictable trustless money. It’s not perfect. But where you totally Lose me is:

> it is wasting the world's energy supplies

This is the latest POLITICAL attack vector. When you parrot messages like this, you expose yourself as a political victim that is not educated about the topic. Literal parrot.

If you have an interest beyond being a political foot soldier, do your own OBJECTIVE research. Learn about bitcoin miners using renewables, how much energy the industry actually uses, the carbon footprint of payment networks and fiat etc.

It’s an effective political attack vector because in all likeliness you won’t do your homework. You’ll read my comment, experience cognitive dissonance, judge me as a bitcoin cultist, and move on with your busy life.


If you will point me to sources on those numbers you consider reliable, I'll take a look at them.


It’s a good store of value. Like gold used to be. As a currency I’m not sure it will ever work.


Access to the dollar is a real problem. Stablecoins help resolve that.

I am not a crypto stan but it has at least one usage.

e: Sorry, forgot it was verboten to say anything contrary to the "crypto has no uses whatsoever" line.


What’s crazy is the scale a modern computer can operate at. There _are_ problems that need more scale than that but they are the minority. Meanwhile it’s out of fashion to spend time improving application performance and instead people go horizontal early, with devastating complexity issues.


That's why I mentioned "doesn't make financial sense" too. If you can throw another $X/mth box at it, or have engineers spend weeks improving the system's performance for the same effect... there's a point where scaling out makes way more sense. Whether that leads to complexity issues is really case-by-case.


> At some scale getting a bigger machine either doesn't make financial sense or is just not possible to implement efficiently.

You don't even need to get scaling into the picture.

You do not get any form of fault tolerance by deploying stuff in a single lonely box. If you care about reliability and resilience, you have to have multiple deployments up and running at the same time.

Also, you already have a distributed system if you have a browser calling your server.

And lastly, if you happen to manage a service used globally or regionally, and perceived performance matters, then you have no good alternative to regional deployments.


I don't disagree with you and your examples are definitely over-engineering / busy work. In my experience a lot of it is driven by the desire for young engineers to learn a new language. If someone paid me to move something to Rust, I would do it. I heard good things about Rust and I would love to get paid to learn it.

But has being a software engineer become easier or harder over the last 30, 20, 10, 5 years? I wasn't an engineer for that long, but my impression is that programming today is a lot easier. Dev tools, compilers and linters are very good. There's also a lot more community documentation on Stack Overflow. Some of the complexity is hidden from the developer, which is good and bad. It can bite you in the ass later, but in 95% of cases it's a good trade-off in my experience. For instance, my preferred stack is Serverless on AWS. I can set up a single config and have cloud storage, an API, a database, logging, and auth all permissioned and in a file I can check in. And with a generous free tier, it's pretty much free. I'll admit if something goes wrong it's not fun to debug, but it's remarkably fast and simple for me to spin up a CRUD API.
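For concreteness, the single-config workflow described might look roughly like this (a hypothetical sketch assuming the Serverless Framework; the service, table and handler names are made up, not from any real project):

```yaml
# serverless.yml - one checked-in file declaring an API, a function and a database
service: crud-api

provider:
  name: aws
  runtime: nodejs18.x
  environment:
    TABLE_NAME:
      Ref: ItemsTable   # wire the generated table name into the function

functions:
  createItem:
    handler: src/items.create
    events:
      - httpApi:        # API Gateway HTTP API route
          path: /items
          method: post

resources:
  Resources:
    ItemsTable:         # plain CloudFormation for the data store
      Type: AWS::DynamoDB::Table
      Properties:
        BillingMode: PAY_PER_REQUEST
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
```

A single `deploy` command then provisions the API, function, permissions and table together, which is the "fast to spin up, not fun to debug" trade-off the comment describes.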


No, it has definitely got worse. I've been doing this 15 years; the sweet spot was the Rails revolution. Before that, a lot of frameworks were a bit too much magic, with not enough understanding of how browsers, HTTP and HTML worked.

Simple MVC stacks spread to all languages, with jQuery front-ends doing enough but not a lot: JavaScript-enhanced, easy-to-reason-about server-side stacks.

You used to spend a couple of days a year, yes YEAR, mucking around with tooling. Now it wouldn't be too much of an exaggeration to say you spend a day or so a week fighting some part of your stack because it's been over-engineered.

IDEs can't keep up so you have to run inscrutable command line tools that fail if you have deviated even slightly from whatever error-prone rain-dance some moron claiming 'best practice' has forced into the build process.

Programming used to be about writing code to solve business problems. The shift to DevOps has been a massive productivity drain and most stacks are now incredibly brittle.

The worst part has been debugging, which you touch upon. Native calls, simple call stacks, easy error logging. All gone.

Moving everything into hard to debug http calls has been a disastrous productivity sink.

The irony has been that as languages have got significantly better our productivity has actually dropped massively because of the ridiculous amounts of over-engineering in "modern" code bases.

I recently worked on a project with 2 devs that took 3 months with a modern stack. The prototype in a standard MVC stack I'd made to demo to the client took 2 days.

It's utterly ridiculous and sometimes I feel like the boy in the story about the emperor with no clothes.


People still do deploy production ready systems using RoR, Django/Python or whatever "sweet spot" framework you want to mention. Some run quite successful businesses.

You can't generalise from your experience over the last few months.

> Programming used to be about writing code to solve business problems. The shift to DevOps has been a massive productivity drain and most stacks are now incredibly brittle.

Some businesses _have_ to "shift to DevOps" in order to operate at the scale and resilience required.

Some businesses have unnecessarily migrated to over engineered infrastructure because monkey see monkey do.

Saying "everything is ruined" completely misses the dynamic.

As an earlier poster explained there are a lot more tools in the box now. Making the right choice requires experience, a good understanding of the problem to be solved and discipline in implementation.

Get it wrong one way and you end up with an over-engineered mess that takes forever to get work done with.

Get it wrong another way and you end up overwhelmed by traffic, unable to scale in response and forever fighting fires.


God I am sick of these apologetics every time someone expresses skepticism.

> Get it wrong another way and you end up overwhelmed by traffic, unable to scale in response and forever fighting fires.

To nitpick this specifically: over my 12-year career toiling over this stuff, there has never been a scenario where this has required a radical rework to solve. Boring-ass B2B shit rarely requires that level of engineering and, at least in my case, the workloads were fairly predictable and increased in a linear fashion. The one time I did accidentally end up DDoSing ourselves, I temporarily stood up nginx instead of our aging Apache install and was able to serve enough requests to fix the problem. (We then transitioned our app to run on nginx.)

It was one fire, and it took a little bit of brainpower to fix. Then the DO droplets were humming along perfectly, and last I heard they continue to do so to this day.

The operational aspects of this double-digit-millions-per-year business ran on a postgres database that compressed to 6gb.

The next business I worked for did billions per year in business value and, until HQ mandated migrating everything to GCP, was humming along perfectly fine on Heroku for a monthly spend well under five digits. Ironically, I think they initially wanted us to be on-prem but couldn't support our stack and would have left all of that to our devops guys. GCP was the compromise (oof!).


Sounds like you’re a bit too emotionally wrapped up in this; there’s no reason to get so mad at other commenters with different experiences. Maybe take a walk. It also seems like you're convinced that your experience can be generalized to the entire massive industry, which is a little presumptuous.


> God I am sick of these apologetics every time someone expresses skepticism

That's a highly obnoxious response to a measured and reasonable comment.


Because it is a recurring theme on these boards: someone describes a nonconventional or old-school way of doing things, and someone else always jumps in with why overcomplicated cloud shit should be the solution to everything.

Most businesses are never going to be a Netflix or a TikTok or whatever. Yet their businesspeople are absolutely infected with this mindset. And don't get me wrong, it is a disease and its primary symptoms are exactly what OP is complaining about.

So instead of building practical, easy-to-maintain systems we cater to the dreams of excessively optimistic (can you even be anything else as management?) nontechnicals who think every idea is going to require the kind of scale and tooling that someone like Facebook has at their disposal. And they're backed up by magpie engineers more fascinated with interesting infrastructure than functional infrastructure.

Ironically, that very tooling and scale ends up demanding even more resources which demands even more tooling to manage. What a virtuously (hah!) profitable phenomenon for the vendors of these tools.

Good grief.


> and someone else always jumps on with why overcomplicated cloud shit should be the solution to everything.

I didn't say anything like that. The opposite, in fact.

Next time, try reading and understanding a comment before going off like an obnoxious jerk.


You didn't say it, I did. Pot, meet kettle. This "obnoxious jerk" has an axe to grind.


well stop bloody venting on me.

jerk


What's with the name calling? Are we five years old? And you call me a jerk?


I was looking for a new job recently. Almost every single job advert listed "microservices".

So yes, it is affecting our entire industry. Every aspect of it.

Very few organisations actually have any sort of need for a microservice architecture.

Worse still, actually talk to these orgs and they'll say they have a "hybrid" microservice architecture. Which is basically the worst of both worlds: all the pains of managing microservices, without any of the benefits you get in a normally built application (derisively called a 'monolith', with all the negative connotations that word has). Half your calls disappear into the black hole of HTTP calls. No pressing F12 on a method call and going straight to the code. No easy stepping through code in the debugger. No simply downloading the code, pressing play and having it all work.

I like solving business domain problems, not tooling problems. Tooling problems are incredibly boring and frustrating to me. But a certain type of programmer, rather than actually doing their job, absolutely loves introducing tooling problems as busy work, because the actual business domain problems don't interest them. Then they switch roles once they've got the new hotness on their CV, before they have to maintain the craziness they've introduced to the code stack.

Case in point on a project I helped get over the line recently. I joined after 1.5 years had already been spent on the project, and development had slowed to a crawl. The lead architect had designed a system of DDD, event sourcing, message queues and microservices. Just to add a new field I had to edit 10 files. To add a new form I had a PR which edited or added 40 different files. How it actually worked completely flummoxed the junior and mid-level devs; it was beyond them.

All for a 10 page form that would have at most 150,000 uniques in one month per year. Roughly 1 request per second, assuming a ten hour day and 1 request per form page. Child's play.
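The ~1 request per second estimate checks out on a napkin (my arithmetic, using the figures quoted above):

```python
# Back-of-envelope load check: 150,000 uniques in the peak month,
# a 10-page form, one request per form page, ten-hour serving days.
uniques_per_month = 150_000
pages_per_visit = 10

requests_per_month = uniques_per_month * pages_per_visit  # 1,500,000 requests
serving_seconds = 30 * 10 * 60 * 60                       # 30 ten-hour days

rps = requests_per_month / serving_seconds
print(round(rps, 1))  # roughly 1.4 requests/second
```

Even with generous headroom for traffic spikes, that is single-server territory, which is the point being made.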

A standard stack would have easily handled that load, probably even on a VM. A dedicated server would never have gone over 10% CPU. It would have been massively easier to develop, and cost 1/10th in dev time.

At one point I had a quick go at rewriting a section without the trendiness, just to see, as I'd never have got it through the politics involved in that PR. I switched the event-sourcing, microservices, 5-tier craziness for a simple, easy-to-understand service. Took me half a day, tests passed, reduced DB calls. Over a thousand lines removed, 100 added. Absolutely nuts.

Millions wasted on trendy architecture. Of course the architect left a year into the project for greener pastures.


> The worse part has been debugging, which you touch upon. Native calls, simple call stacks, easy error logging. All gone.

> Moving everything into hard to debug http calls has been a disastrous productivity sink.

There are a lot of good things coming out of the latest big experiments but this has been a major blow. I have worked on software where the intended debugging approach was to write some code, manually push it out to a shared dev environment, read CloudWatch logs for debugging. It is by far the worst way to debug code that I have ever seen. Things that would take me minutes to debug in a normal setup can take hours or days. Projects like LocalStack aim to improve this a little bit but it's completely counter to the ethos of many "cloud-first" developers.


Serious question - if it was so much better then, why are approximately zero new companies building a Laravel/Rails/whatever app and using jQuery for the front end? If it's that much of an advantage I would expect at least someone who is trying that (because surely some are) to succeed with their lean, mean tech stack. Why wouldn't you have just written the project in that standard MVC stack instead of a modern one?

You won't get any objection from me on the debugging point; it's much harder, especially when you're crossing environments - e.g. running the front end locally but hitting a remote dev or QA backend. I will point out, though, that there are logging tools that support the pretty standard practice of having correlation/transaction/trace IDs on your requests, such that you put in a GUID from an error and it shows you the entire request and anything that request spawned.
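The correlation-ID practice mentioned is only a few lines of plumbing. A minimal framework-agnostic sketch in Python (the names here are made up for illustration, not tied to any particular logging tool):

```python
import logging
import uuid
from contextvars import ContextVar

# Holds the ID of the request currently being handled.
request_id: ContextVar[str] = ContextVar("request_id", default="-")

class RequestIdFilter(logging.Filter):
    """Stamp every log record with the current request's correlation ID."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.request_id = request_id.get()
        return True

logger = logging.getLogger("app")
logger.addFilter(RequestIdFilter())
# A formatter using %(request_id)s would then show the ID on every line.

def handle_request(headers: dict) -> str:
    # At the edge: reuse an incoming ID (so a trace spans services) or mint one.
    rid = headers.get("X-Request-Id") or uuid.uuid4().hex
    request_id.set(rid)
    logger.info("handling request")  # record now carries request_id
    # ... forward the same X-Request-Id header to any downstream calls ...
    return rid
```

Searching your log aggregator for one GUID then pulls up everything that request touched, across services.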


Yea people absolutely still build companies with these tools. If you just want to start a Saas company as a side project you would do yourself a disservice if you didn't use something like Rails, Django or Laravel.

The problem is in larger companies: developers stopped caring about just solving the business problems and moved on to solving non-existent technical issues to build resumes for the next job and a higher salary. They go from company to company like a parasitic infection, leaving each one to rot after introducing Kubernetes, React, Go microservices and Rust CLI tools.

Also it might be an issue of not being fulfilled in life outside of work.


APIs plus frontend can be simple too and I find SPAs better from a UX perspective.

The real biggie when switching to microservices is that you suddenly have to reinvent all the goodies a relational database gives you for free.


The majority are simply going to follow popular opinion regardless of the merits, and developer efficiency is often not that important. I also think the efficiency gains are bigger for smaller and inexperienced teams.

Also, people are getting used to app-like experiences and designers are designing for it. Building an app-like experience is more natural as a Single Page Application, which basically means taking on the modern frontend stack. There are places that push against this, but to do so requires buy-in to the engineering side over product and design. Even then, the engineering side has to be knowledgeable enough to not follow popular opinion and come to the determination that Laravel/Rails/Django is actually the right tool, which isn't always the case.


Indie makers famously do just that. Pieter Levels is making millions with PHP and jQuery.

I consult primarily in Node and React with all sorts of transpiler mess and shitty bundlers. The stuff of nightmares - we waste so much time making things work - but simplifying the stack will never get traction among the not-anymore-technical principal engineers, or even among the other developers who need a fancy CV for their next gig.

My side businesses use python, django, jQuery, old node.js without modules, rust, svelte.

Engineers hired in big companies want to work on shiny technologies and build their cv.


> Pieter Levels is making millions with PHP and jQuery

no, the reason Pieter Levels makes millions is not PHP and jQuery, but because he's also a brilliant sales/marketing/business person


OP didn't say PHP and jQuery were why Pieter Levels is making millions - just that they're tools he's using to do it.

Ergo, they are at least fit for that purpose. They aren't bad enough to prevent him from making millions.


> I recently worked on a project with 2 devs that took 3 months with a modern stack. The prototype in a standard MVC stack I'd made to demo to the client took 2 days.

Can you expand on the modern stack and that standard stack? Is the "standard stack" you propose easier to work with now than it was 15 years ago? I guess the "standard stack" is just out of fashion?

My preferred stack (aws serverless BE, nextjs front end or just aws app sync) keeps me in one language (typescript), with some stuff like GraphQL that you have to know. But the tooling around that helps keep my errors in compile time for the most part.


In that case it was a simple .NET Core MVC stack vs an Angular/Web API stack.

For some inscrutable reason, these days you can no longer build a simple 3-page form with a results page as anything other than a SPA without someone claiming you're doing it wrong. It's nuts.

Worse still, the Angular app has all sorts of weird bugs. Auto-complete somehow screws with the validation of inputs, the back button doesn't work properly as you lose all your data, URLs don't work, etc., etc. There's also all sorts of craziness, like the whole admin frontend being bundled with the client-facing part because the front-end developer didn't know how to split them up.

Utterly preposterous.

The problem is often that very skilled Devs give advice about incredibly specific stacks that only someone of their abnormally high-skill level can maintain.

Throw in a few juniors or a few mediocre developers and the whole project turns into a complete and utter mess.

And don't get me started on your stack that's mainly inappropriate for most applications.

GraphQL. Talk about the next NoSQL fiasco in the making. Perhaps you missed the whole saga of everyone doing new development in NoSQL and then a few years later we had the flurry of blog posts about "Why are we losing data? I didn't realize ACID was so important..."


What is it you don't like about GraphQL? It seems to do what it was designed to do pretty well. I take it the issue is more with the pseudo-databases it's used as a front-end for?


Because it's now the 'right' way to do data.

Even though it's not 90% of the time.


Huh, interesting. I'm not that plugged in to tech fashions, but I was looking at GraphQL just the other day. My conclusion was that it didn't fit, though, mostly because I wanted something that supported truly ad-hoc querying and mutation. With GraphQL it's really more like an RPC protocol, so you have to figure out your queries ahead of time; there was nothing like SQL's ability to do ad-hoc querying.


Is being in one language really that much of a benefit? I've worked on several large production apps with node backends and I'd rather use almost any other backend language.

And I take issue with saying you "have to know" GraphQL, that's a pretty specialized tool for pretty specific problems. Most things are not graph structures.


Maybe I'm biased because I've been using node.js for 10+ years but node.js has a decent API.

Simple, slightly opinionated. You just need to be careful with dependencies and whatever the kool kids are doing these days. GraphQL is a complete nightmare.

If you don't transpile anything and are conservative with your dependencies, you're golden.


> Moving everything into hard to debug http calls has been a disastrous productivity sink.

I'll admit for the vast majority of web apps out there, this is not needed, but there is definitely a scalability concern if your entire stack is a single ruby on rails mvc application.


This is the misconception that will just never die.

I've worked on multiple products with 1 million+ users running on stock Rails MVC backends. Everything scales just fine. Until you hit a DB hardware limit you can just keep adding more web instances behind your load balancer. And DB Hardware limits these days are astronomical.

If you're actually hitting the limit of scaling your web backends horizontally and you don't have the money to deal with the problem, you might need to take a hard look at your product.

And I'm confident this is true for whatever your backend is: Django, Rails, Laravel, whatever.


Your comment seems to imply that computer programming == web development. Would you say your comments also apply to embedded, mobile, games, data science/ML and scientific computing?


I've been in the industry since 1998. Things have gotten worse. Part of it is that more is expected of software today in certain ways, but that's not the whole story.

Currently I'm involved in working on a "data intensive" app which munches a few dozen gigabytes of data and presents summarizations through a web based UI. Usually in the form of charts, time series plots, heatmaps etc. Adding small, incremental features to it is an ordeal. The "backend team" needs to mock and then provide the API endpoints, then the frontend team needs to "break down the work on the React components", then DBAs need to oversee the new queries. The security team needs to review risk impacts etc. One change is at least a few weeks of work.

Contrast that with the work that I did in 1998. It was a "data heavy" desktop application that visualized 3D seismic data collected by ultrasound probes. The data volumes were in the order of hundreds of megabytes up to a gigabyte. The app was built as a desktop app using FLTK and OpenGL. It took about a day or two to roll out an incremental feature once we knew what was being asked. By two weeks it was in all of our customers' hands. While there was no Stackoverflow.com and we relied primarily on official documentation with the help of "Effective C++" and "More Effective C++", the build/debug/test cycle was much shorter, the tool chain was much simpler and more transparent, the UI toolkit was much easier to grok and, even though it didn't have an "elegant state framework", somehow we made it work well without turning it into a mess.

As an industry, we gave up on localized desktop processing in the name of "consumer preference" (or so we were told), while we assumed an immense amount of complexity in order to serve the same computation from a centralized place.


I’ve been working in software engineering for 30+ years, so I can say that yes, things are definitely much easier. Debuggers in the 80s/90s were finicky beasts shrouded in esoterica, and as a result it was usually much easier to debug code by adding print statements than it was to actually use a debugger. I’m still somewhat amazed by the capabilities of contemporary debuggers.

Libraries outside those provided by the OS/compiler tended to be hard to come by. Certainly the universe of freely available library code that we have now was nonexistent (I’d argue that CPAN was a big factor in the spread of Perl in the 90s—well, that and the assumption, widespread back then, that CGI scripts had to be written in Perl).

As a community, we’ve collectively learned a lot of important lessons as well. Legacy systems like Perl and LaTeX tend to install common code in a single universal directory for everyone, rather than making it application/document specific as became the case with Java. Their repositories will also only give you the latest version of an artifact¹, which has tended to lead to stagnation since backwards compatibility becomes non-negotiable. Some lessons haven’t stuck, though (like the fact that Rust’s crates.io only has a single-level namespace for artifacts).

1. Not sure if this is absolutely the case with CPAN. CTAN, which was quite possibly the first centralized repo² does not manage multiple versions of artifacts.

2. I remember when it was first being set up, since there was no guarantee that any tooling beyond TeX would be available to consumers, there being TeX scripts to do things like encode/decode binary files into emailable ASCII format. The original CTAN FTP server in the UK also had the remarkable feature that it would generate directory.zip for any directory that you requested.


> I wasn't an engineer for that long but my impression is that programming today is a lot easier. Dev tools, compilers and linters are very good.

What sticks out to me when I'm looking at some older code editors/IDEs, is how crazily spartan they are - to the point where they are just inconvenient to use.


Some things are easier, but then we've made it so quick we become the lone typist implementing the vision of the all-important UX consultant.


Well, some things got easier and other things got harder. Some of the harder things are just due to higher consumer expectations or more competition, but, harder they are.

The easier things are mostly obvious. Better languages, open source libs etc. Things that are harder now than before:

1. UI. The web is not a good UI platform, sorry. Designing UI in the 90s was easier, except for the need to do manual memory management if you weren't using Visual Basic. Partly because there was little expectation of branded UI, so you could easily re-use large control libraries that came with the OS which were/are pretty feature complete and well documented.

2. Cross language interop. Microsoft had this nailed. COM was a beast, but it worked and there was an actual real market of cross-language, auto-bound objects and GUI controls (COM objects, OCX controls). There was actually a thriving ecosystem of languages on Windows which are now mostly forgotten (Delphi, FoxPro, VB6, Paradox, Visual Prolog etc). Nowadays cross-language interop is a joke. Transpile to JavaScript or go via a C FFI, maybe, if you're lucky.

3. Expectation of supporting multiple platforms. This causes a lot of dysfunction because the lowest common denominator is really low, especially if you treat the browser as a "platform". In the 90s Microsoft had a monopoly. That had its own problems, but, it meant you could write a Windows app, once, and everyone would accept it. Nowadays you want web+mobile, and "web" is not really a platform in the sense Windows was. If you're doing anything serious you still need a desktop app.

4. Over-specialization/staffing. Back then everyone was "full stack". The developer was also a DBA, at least to some extent, and you could just throw up a quick GUI Windows app that connected directly to the database. The DBA would then manage security and backup. It wasn't really a full time job, at least not on a per app basis. Nowadays even simple projects feel absurdly over-staffed. Do you really need a backend guy, a frontend guy and a devops guy for an ordinary LOB app? Probably not.

5. Process overkill. Waterfall is underrated. It got a bad rap because it requires you to understand your customer, and for your customer to understand what they actually want, and for them to not change their mind every five minutes. Not always possible. Nonetheless, agile has become some kind of monster. Half the words in the average agile methodology are made up and a lot of it is really questionable. If your business domain weren't totally unstable and your users weren't totally incompetent (often the case!), then you could sit down and write an actual spec, which people would read and sign off on, and then you could build it. And it'd work, and after the initial debugging / shakeout period, people would be happy. Many of those apps never broke! Just imagine the output of the typical agile web AWS-based web stack teams today lasting 20 or 30 years without a dozen rewrites along the way. Very hard to imagine that.


Developers/engineers/programmers do this because we crave complexity. Then throw in a dash of elitism/gatekeeping and a sprinkle of CS-trivia-driven hiring (aka leetcode interviews), and you will find these behaviors. Organizations may fear losing top people because other organizations, maybe brand-new startups, use shiny new tech to solve the same old problems, so it looks appealing. It's a shiny-new-tech arms race with a JS framework proliferation (I couldn't resist taking a jab at JS, sorry).

Here's a walk down memory lane for you about rewriting apps, circa year 2000: https://www.joelonsoftware.com/2000/04/06/things-you-should-... If you replace names and versions of things with "Go", "Rust", etc. it is pretty much what you describe.


My advice is: work with more senior people. It seems to me that people with 10/15+ years of experience will judge this hype train more severely than younger ones.

The dangerous spot is engineers with 5-10 years of experience who have become good enough at writing huge piles of unnecessary code and making it work.


> judge this hype train more severely than younger ones.

Meanwhile the EMs are afraid to hire us because we're "tired" and not "team players" (i.e. we have a backbone, having seen this pattern 20x across our careers with 0 successful outcomes).


This is what I was going to say. I think that the great problem of our time might be unforced errors.

I've said it before, but I take issue with just about everything that's happened in tech in the last 20 years. I've been programming since I was 12, so have about 32 years of experience. I feel that things really were the very best from about 1995-1999, but since then we've mostly had uninspired and brute-force solutions endlessly doubling down on the status quo, because that's where the money is apparently.

I just see such quick embrace of things like async, which I view as an anti-pattern. We're paying people $150/hr to build out complex solutions that nontechnical people were putting together in FileMaker and Microsoft Access in the 80s and 90s. Even our hardware has endlessly pursued DSP vector processing on GPUs, forcing us to manually convert our software to shaders or something proprietary, when scalable transputers and 1000+ core systems on a chip with local memories appearing as a single context (what we might call.. desktop computing) would have been so much simpler and better. I could go on about this stuff literally forever.

What's the solution? It's so simple that it's right under our noses: make the opposite decisions from the ones we have been making. Old school. Practice radical inclusion and hire people immediately based on their credentials and experience, rather than putting them through endless interview rounds. Bootstrap, and when you make it, pay it forward and help others make it. Get away from all this insubstantial profit-oriented disruption stuff and solve the actual problems in people's lives like how to reduce their dependency on handouts from the rich under trickle-down economics. We need automated food/clothing/shelter that's too cheap to meter, and we need it yesterday.


100% yes.


Still, part of the industry is moving towards simple solutions.

A refreshing experience was building a mobile app for an Apple device with Swift and SwiftUI. It was a real joy: it works as expected, produces concise code and small files, with live preview and reasonably fast build times. Sure, it's a closed environment, but the last time I felt so productive doing UI dates back to Visual Basic.

Counter-example: a simple web app, nothing fancy, and my node_modules filled with around 500MB of files, hundreds of declarations of injected things everywhere.

But nobody forces us to use Kubernetes, nobody forces us to climb the Rust learning curve, nobody forces us to use this multi-platform framework that solves all the problems of the universe.

I try to stick to standard solutions, oft proposed by the vendor: Kotlin on Android, Swift on Apple, C# on Windows. Server code: stick to Java, or try the simple Golang (another refreshing language).

Also, I try to adopt tech late: I'm just now starting to be confident in Docker, and in a few years I'll see whether Kubernetes could be useful.

But an architect needs complex solutions to justify their job, a team lead needs as many devs as possible to boast about at the next family dinner, and the new dev wants to try this fancy new tech to put on their resume. So they are all fine with that. Just don't tell the company ownership.


> Sure, it's a closed environment, but the last time I felt so productive doing UI dates back to Visual Basic.

The "closed" nature seems to have made such IDEs better integrated, such that you didn't need "layer specialists" for each layer: you just "did it".

And the rocket science needed to get "responsive" UI's right is crazy. If only 3% use an app on a mobile device, you bloated your UI by a complexity factor of about 10x to get that extra 3%. The labor math doesn't support it. Vulcan accountants are puking. (And mobile friendly apps tend to waste screen real-estate, increasing scrolling and back-and-forth navigating. GUI multi-panels are a productivity miracle, use 'em!)

WYSIWYG is cheap, easy, and consistent; you can save a lot by telling responsive to go to Bloat Hell. (Maybe someday a responsive UI framework will make it easy, but that will probably arrive with flying cars, hover-boards and Mr. Fusion.)

Being obsessed with "web scale" when most biz apps have only a few thousand users is also a resource drain. Stop putting phallic symbols into your stack, people! A dinky winky is sufficient for 95% of apps.

Choice of sub-parts by itself is good, but if it has the psychological side-effect of creating a layered mess, then perhaps a KISS Bouncer of some kind is needed to trim and factor the options. Otherwise, "cool" ends up trumping boring-but-productive. (I have more to say about this elsewhere around here.)


Two thoughts on this:

1. This industry absolutely has, and has had for a long time, a problem with "oooooh, shiny!" chasing. We collectively obsess over using the latest and greatest, newest and shiniest, "sexy" technologies of the day. And sometimes (often?) this obsession overrides good judgment and we try to use this stuff regardless of whether or not it's actually a better fit than something older and more prosaic.

2. However, sometimes the "new, shiny" is actually better, for at least certain applications. And we should always be willing to use a newer, better, "sexier" tool IF it actually helps to solve a real problem in a better way.

Unfortunately (1) often seems to trump (2) and we get stuck using the "newest and shiniest" for no particularly good reason, other than the simple fact that it is the "new shiny".

I have no expectation that this trend will ever abate.


It's another way to favor incumbents with huge resources: an anti-competitive psyop to raise barriers to entry and stymie startups with endless shiny-thing chasing. The incumbents are scared of disruption, so the big players that move a bit slower look for ways to slow down the scrappy players that can move fast. Meanwhile, those "fast movers" get drunk on the Kool-Aid of buzzwords manufactured on blogs by those same incumbents (trends like being agile, microservices, etc.), willingly injecting themselves with massive technical debt from overcomplicated frameworks. And nobody's allowed to question it, otherwise it's civil flame war.

Even if you don't believe it's a conspiracy, you have to admit that the dynamic favors incumbents, and that the big frameworks often come from the big incumbents. And if you are a big incumbent, even if you weren't manufacturing this conspiracy but saw its possibility, I mean, why not take advantage of it?

The outlier is the scrappy indie developer like Pieter Levels, who runs his multi-million-dollar-a-year business basically on his own, on a single machine, using PHP, and who only recently started using git. That may be an extreme example, but it paints a picture of what radical effectiveness and efficiency look like, and it's vastly different from the Kool-Aid. But don't mention it, otherwise the mob will come for you.

May the no-code & indie scene save us all. Amen


> nobody's allowed to question it otherwise it's civil flame war.

With enough years in this industry I'm starting to see how much of the culture is just "cargo cult Silicon Valley": people doing things just because the latest/greatest unicorn did it that way, frequently with a kind of Zen/Chan disregard for meaning or learnings from the past. "Move fast and break things" often plays out as "disregard knowledge and act like there won't be consequences". I frequently see threads/posts on HN by experienced people laughing that "we've known this since the 70s", in reference to The Mythical Man Month or other bits of computer science, and yet most managers think they know better than decades of industry experience.


Did you learn the word 'incumbent' yesterday?


Aaw, so cute. My smart piece threatened you and you want to prove you're smarter than me. Nice try but you failed.... I learn all the time, you should try it. XD


I saw your original post and your cringe edited one. I won't even comment.


Oh don't cringe, it was made for you, Haha. I guess you're cringing at your own daftness in the face of my brilliant comment. You were meant to see it. You mean this one? https://en.m.wikipedia.org/wiki/Pine_Gap

As for you not commenting, it seems that would be a good idea on your part. You keep trying, and keep failing. Do you often do things you say you won't? You lack self discipline? Anyway, thanks for obsessively checking my edits.


Take meds, get vax, and touch grass.


Is that what you do? How come it's not working for you?


Agreed.

You should know that in the past when OOP was not common, we had to work a little harder doing things like managing our own memory or building a LAMP server to publish our web pages.

There was a thriving market for language and UI add ons. The result was that each company had their own internal dev tools and recruiting people outside of the company who had experience with those tools was nearly impossible.

All that said, we were at a point where entry to programming was easy (think Visual Basic in the 90s). The quality generally went down as everyone was pushing their "first project" as if it was a polished product. Finding actual good programs on PC is close to the situation on mobile where most of the apps are trash.


OOP to me would be classified as the industry blindly moving towards unnecessary complexity. It is indeed the definitive example of how the industry over-engineers things.

While still prevalent today, there's a huge sentiment against this paradigm. Modern languages and frameworks such as React, Golang and Rust show how the tide is turning against this.


Three object oriented languages prove how there's sentiment against OOP?

Or are you saying that Rust/Golang/React are simplistic in a world of over complexity. React I would generally agree with, the other two not really.


All three are moving away from OOP.

Functional components in react. Zero classes in golang or Rust.

It's actually quite an obvious paradigm shift. No classes with these languages yet every language prior to this has classes.

Yet I don't understand why people still insist on calling Go and Rust object-oriented, or why they try to argue this point when there is such an obvious change. I mean, sure, you can twist the language into something that looks like OOP, but come on, man: Haskell is OOP if that's the case. Let's not argue about whether these languages are OOP or not. The point is that there is an obvious movement AWAY from OOP, through the subtraction of popular and critical syntax and features.

For React , the trends are harder to see. See here: https://hackernoon.com/react-functional-components-are-the-f...

The React team recommends functional components over classes, and React itself was derived from functional languages and concepts. I believe the team would ideally like to see ReasonML as the future, and React with JS syntax is more of a necessary evolutionary in-between.


Functional can be tricky to debug, as the frowned-on "intermediate state" of imperative programming makes for great x-ray examination points during debugging.

The backlash against OOP is partly because existing OOP engines are limiting in many languages. Passing, grouping, and custom scoping of "blocks of code" (BOCs) should be more flexible, so that we are not forced into hierarchies or spaghetti scoping. We need new languages that make managing and scoping BOCs more flexible. I want to define the scope, not let Nadella or Bezos do it. The distinction between a lambda and a method would then be fuzzier; BOCs would no longer be forced to be one or the other. (One of these days I may make a proof-of-concept language.)


Imperative and OOP are not the same though.

None of the languages I mentioned are FP. Though FP has benefits and many of the languages or frameworks mentioned above are either moving towards FP or borrowing concepts from FP.

You're not wrong about intermediate state; I shouldn't have emphasized FP here, that wasn't my point. An imperative program that is just imperative, without OOP or FP, tends to be simpler than OOP itself.


React is extremely complex internally


I think react was an attempt at simplification. OOP isn't the only thing that influences complexity. Simply switching a project from OOP to a more FP like paradigm doesn't necessarily make the framework simpler because of other confounding factors.


Very true, but unlike backend languages you can generally get away without knowing much about the internals. Unless you're making dev tools of course.


I think that as developers, we need to resist these trends and go with working stuff. But that's difficult, because for every developer that will go "Java is fine", there will be a dozen, usually younger developers, who are hyped up to use whatever is cool at this point in time.

But this is where the senior developers and architects should come in; they need to make a firm stand. Make things like https://boringtechnology.club/ mandatory reading. Have people that want decision power write ADRs so they are made to explain the context, the problem, the possible solutions, the tradeoffs, and the cost of picking technology X to solve problem Y.

It's too easy to bung in new technology or change things around; the short term, initial cost is low and the long term cost is not visible, because it quickly becomes "this is how things are done around here". Make it more expensive to change things.

And make people responsible for these things. If an individual advocates for technology X, they have to train and hire for technology X as well, and be responsible for it long-term. Learn to recognize a "magpie developer", the type that will always go for the latest shiny; you can use those to introduce new technology, maybe, but keep them away from your core business because they will make a change that impacts your product and your hiring, without taking long term responsibility for it.

anyway, random thoughts.


I'm a proponent of the boring technology school of thought, but it's not a great look when the images on that site don't load. Apparently lazy loading can now be done with a single img attribute. And those flat-color slides should be PNGs.


One hypothesis (but not a single answer) is that complexity creates jobs. Engineering something complex and clever creates job security and consulting hours. Fads, trends, and ideas come and go like the tide, and it makes the big wheel go around. Kubernetes, which 99% of developers have no use case for, is indeed "job security" for what could equally be achieved with Unix and a few shell scripts. (I'm being deliberately provocative here.)


This is an important point. Looking at open job reqs can give the impression that X is very successful, when the reality is that any organization using X has an explosion in how many developers it needs, and also a high burnout rate among the ones it has. Plenty of other developers are quietly and more-or-less happily working with non-X, and you never see their jobs on the job boards.


There’s no ‘we’. We don’t coordinate. We don’t design up front. Did Brendan Eich consult ‘us’ before he unilaterally made JavaScript the fucking ‘assembly language of the web’ for the following 25 years? No, he just got it in there, and the whole industry that burgeoned afterwards did whatever they could with what was already there. Who has time (or clout) to design a sane application stack and appropriate tools, when there’s money to be made?

There’s no we. There’s a million moths slapping into everything and being drawn to various bright, often false, lights.


> Who has time (or clout) to design a sane application stack and appropriate tools, when there’s money to be made?

Sun Microsystems had a go, but as you say, unfortunately it wasn't a money-maker.


It has come in waves like this for all of the forty years I have been doing this for a living. "The New Thing" shows up, people run around like chickens with their heads cut off chasing "The New Thing", then "The New Thing" becomes "The Old Thing". Wash. Rinse. Repeat.

What you have to do as a developer is try to keep up with the hundred new things, possibly dabble in them to see what they are, and decide for yourself how much effort you want to put into that particular "thing". You have to use judgement or you will burn yourself out.

I never bothered with Pascal. I learned enough Java to be dangerous, but it didn't really apply to my problem/solution domain. I did learn C++ and also learned to distinguish between when a solution looked like an object (C++) and when it did not (ANSI C). If anyone tells you to 'always use C++' ignore them.

I learned Perl, because I found it more useful than AWK for large problems, but AWK still reigns supreme for 'one liners'. Then I learned Python, and discovered that the problems then fall into Python or AWK. I rarely use Perl anymore.
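As a toy illustration of that split (my example, not the parent's): summing a column really is an AWK one-liner, `awk '{s += $2} END {print s}'`, while the Python version is already a small script, though a more comfortable home for anything that later grows:

```python
import sys

def sum_second_column(lines):
    """Rough Python equivalent of: awk '{s += $2} END {print s}'.

    Like AWK's numeric coercion, this treats missing or non-numeric
    second fields as contributing nothing, rather than erroring out.
    """
    total = 0.0
    for line in lines:
        fields = line.split()
        if len(fields) >= 2:
            try:
                total += float(fields[1])
            except ValueError:
                pass
    return total

if __name__ == "__main__":
    print(sum_second_column(sys.stdin))
```

For a true one-off at the shell, the AWK form wins on keystrokes every time; the Python form pays off once you need tests, error handling, or a third column.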

I tried my hand at Go. I don't find it very satisfying. I am looking at Rust.

Everything else I have ignored, by virtue of choosing what problem domains I am interested in solving.

So no, software engineers are not screwed, not by any margin. Just choose the problem/solution domain that you want to work in, narrow down the tools you want to be competent in, and move forward. Try to avoid the "Look! A squirrel!" response mode as much as possible, but do poke your head up to see what the world is doing on occasion. Be aware, though, that a lot of it is useless noise.


Software Engineers are generally intelligent people.

Intelligent animals need stimulation or they get bored and depressed.

I think collectively, "let's move to Rust" is at least partially because we're not challenged enough by writing the same CRUD app for the 20th time in the same language we've been using for the last 5-10 years, and we want to leave our mark in a new ecosystem by implementing whatever is missing.

Some people want to optimise for "fun/exciting/different" while others seem to be aiming for "known/just works, incidentally boring".

We probably need to find the right middle; how do we keep it fun and challenging while keeping it simple and maintainable.


On the web backend side, I find that Rust has a nice future supporting other backends, considering how difficult it is to manage websockets in PHP or memory in Node.js.


To keep it fun someone probably has to do the boring work of making things backward compatible.


Highly depends on where you work. In my company we stick with "use boring stuff" and have a limited amount of "innovation spending". I look at the complexity of these other things mostly as a computer science theory and ways that language and solving problems could be done.


I'll take the devil's advocate:

* All these new tools, they give us options, no? Use the right tool for the job, the ability to switch if something becomes old/unmaintained.

* Is this actual complexity or perceived complexity given your experience? The node ecosystem looked very complex for me (someone coming from Python) until I actually got into it. Now it seems pretty run-of-the-mill.

* Is k8s really all that hard? Build a container and you don't have to worry about provisioning it and deploying it again.

There may be good reasons to use some of the technologies you pointed out. And that's a strong may because I can easily come up with arguments in the other direction in addition to yours. I say all this to mean you just shouldn't dismiss it because it seems hard. It may be and it may not be, and if it is it may still be worth your time if the payoff is great enough. There you have to do the legwork to figure that out.


> Is k8s really all that hard? Build a container and you don't have to worry about provisioning it and deploying it again.

No, the deployment of a container to Kubernetes isn't hard. And it better not be, that's supposed to be the advantage.

What's hard is literally everything else about it. And that may be a fine tradeoff if you are at the scale to need it and have a team to manage it. But there are many organizations where that tradeoff does not make sense.


> What's hard is literally everything else about it.

Absolutely. To me it only makes sense in large orgs that can dedicate resources to managing the platform. But I did bring it up because I, as the user of such a platform, don't have to worry about writing CloudFormation or Terraform because the infrastructure is provided to me. But yes, it's tough otherwise.


Unneeded complexity is the greatest enemy of the software engineer.

Unnecessarily complicated is the default. Choose the elegant thing wherever possible (it's not always possible, but it often is).

Actively avoid complexity, or it will shackle you.


>There are overly complicated solutions to simple problems

You sure about that? Sometimes the seemingly simple problems are quite complicated. Partly because we are building software for a world that is fraught with (security) landmines.

But point taken, sometimes you can overcomplicate the architecture.

>Distributed systems? Kubernetes? Rust for CRUD apps? Blockchain, NoSql, crypto, micro-frontends and the list goes on and on.

Each of those are particular tools for particular problems (though I'm not sure why Rust for CRUD apps is so terrible).

>moving away from python (because its too "slow");

Not only is it slow, but the lack of compiler support for typing leads to an inordinate number of (stupid) runtime problems. I say this because I recently inherited an entire inventory of Python software built up over the years at my current employer. Right now, I have a bug backlog full of runtime blow-ups (dating back years) because of careless typing. Coming from the unsexy world of C# and Java, I'm still trying to see why Python would ever be used for anything but scripting and (maybe) prototyping: it's slow as molasses, with no compiler support.
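The failure mode described here is easy to sketch. The function and the bad call below are hypothetical, invented for illustration, but the pattern is the one a static checker (mypy, pyright) would reject before shipping, while CPython only fails when the line actually executes:

```python
# Hypothetical sketch of a "careless typing" runtime blow-up.
# The annotation documents intent, but CPython does not enforce it.

def tag_orders(order_ids: list[str]) -> list[str]:
    # Annotated to take a list of strings; nothing checks this at runtime.
    order_ids.append("REVIEWED")
    return order_ids

# Works as intended when the caller passes a list:
assert tag_orders(["A1", "B2"]) == ["A1", "B2", "REVIEWED"]

# A careless caller passes a comma-separated string instead.
# A C#/Java compiler (or mypy) rejects this call statically;
# CPython raises AttributeError only when this path finally runs.
try:
    tag_orders("A1,B2")
except AttributeError as exc:
    print(f"runtime blow-up: {exc}")
```

Running a checker in CI (e.g. `mypy script.py`) is the usual partial remedy, though it only helps where annotations actually exist.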


My thoughts: quit working at tech companies. There are tons of small and medium-sized companies with business outside of the tech industry that need software development done. Many of them will choose to outsource it, but many of them don't. In the last 10 years of my 20-year career, only 1.5 were at a tech company (3 years ago), and those were definitely the worst.

I work at a foreign language instruction firm. I'm making a virtual reality training environment for them. It's the best job I've ever had. I don't have anyone micromanaging my work, because nobody understands my work. I barely understand their work, and that's ok. We understand that about each other and we actually collaborate.

In the last 3 years I've not once been yelled at, talked down to, berated, cajoled, pressured into working overtime, any of it. I've not seen it happen to anyone else, either. I have an office of my own. I can work from home whenever I want. People just trust me to be an adult and do my work and it's the greatest thing ever: basic human decency.


For me it's a miracle each time my PC boots. We often forget the sheer marvel of modern computers and need to appreciate what we have. Remember Steve Jobs on stage showcasing how you could send an email with an iPhone. Back then it was amazing, but now it's common and we're all jaded about it. We need to recapture the joy of computing, not build large overarching abstractions.


The iPhone is very pretty but the software was not technically impressive. I have much older handheld PCs that could send emails and can still do more than the current iPhone.


You completely missed the point.


Can you do augmented reality on your handheld PC? How about edit HD video?


Are you kidding? It has always been this way in one form or another.

It’s a peculiar feature of human nature that we want to make things more complicated than they need to be. The more something relies upon a combination of our skills, and the more esoteric those skills, the more insulated that thing is from outside influence, ownership, and control.

My bet is the frustration you feel is less about complexity and more about your inability to effect change. You're just one of many competing solutions to the same set of problems, and people will think your ideas are just as complicated, because they're not their ideas. They understand their own ideas better than they understand yours, and vice versa.

And we all live under this umbrella, together. I think that’s why the biggest asset you have as an engineer is to influence people who make decisions. Unfortunately, the best way to influence them is to convince them you have important, complicated knowledge they don’t. Self reinforcing loop.


The ecosystem is flooded with over-engineered bespoke solutions, and is sorely lacking in standardized approaches and best practices. It's a core reason why software development cannot reasonably be called engineering, as engineering principles are either never applied or discarded on a whim.

It won't change until we can form a guild (professional association) and turn it into a bona fide profession. Right now, code that one developer creates may be unrecognizable by another developer, even though both are working in the same domain. It would be a disaster if one lawyer could not follow a brief written by another, or a doctor could not decipher which techniques were used by another to perform a particular surgical procedure.

"Just because you can drive a car with your feet if you wanted to, doesn't make it a good fucking idea!" --Chris Rock.


Yes, but not in the way you describe. Software engineers have power right now, we should be unionizing (even if the union is only pushing for things like IP clauses and non competes to be less draconic). Build the union while we are strong so it's there when we are weaker.

A union doesn't have to be a huge monstrosity. It can be simple and fight for a few basic standards in the industry.


The purpose of a union is to create a labor cartel which tends to standardize the price of labor above the rate at which the market would likely set it. IP clauses and non-competes are a small part of the issues plaguing our industry. Putting constraints on the labor supply is probably not a good thing; it leads to the market for labor relocating to less union-friendly climes. Ask anyone from Detroit how well that worked out.


Software engineering doesn't rely on factories, and less labor friendly areas produce worse products (ask Boeing about that.)

Apple and Google aren't going to just shut down in California because a union asked them to give some concessions that materially improve engineers lives.


> The purpose of a union is to create a labor cartel

No, that is not the "purpose of a union"

Some unions, especially in the USA, have functioned in such a way, but this is far from universal.


'Labor market' is a purely American concept. You shouldn't be switching jobs constantly and relocating. It takes a toll on personal life and makes no sense anyway.


It's still a "market" so long as there are supply and demand, regardless of how frequently a given participant is conducting transactions in that market.

The housing market is probably a good example. Many participants probably only purchase property once or twice in their lives (less even if you consider the case of a married couple buying a house, that's 0.5 purchases per person).


I've worked for three different employers and have never left this spare bedroom. Relocate? Where? To my living room? :-)


Unless you live in a communist country, you are part of a labor market.


I don't think there are any purely communist economies operating in the world today, and even if there are, there is still a market. It's a badly distorted market that performs poorly, but a market nonetheless. If you have human beings involved, there's a market; it goes with the territory as part of the human condition.


Worker co-ops are neither a labor market nor necessarily communist. If everyone democratically operates the company and shares equally in the profits, there's no wage labor being performed.


Among many offers, I chose a company on absolutes, not relatives. The culture is good, and I want to make serious products that serve a purpose. I get paid less than my peers but enough for a living. Is that communistic?


You are simply pricing in external variables to your compensation, which is part of how markets work.


I get what you're saying but I wouldn't program web apps for 3x the pay even. I don't "feel" the American way of switching jobs and being a general purpose programmer.


Too late for that now. With Remote Working becoming mainstream, you'll need to coordinate workers globally to create a union.


Everyone just needs to send me $1,000 a month in dues and i'll start sending google and amazon some very strongly worded emails on our behalf.


I'll do it for $950/month!


You landed on the surprising root cause:

“Doesn’t add business value”

But do you know how the business makes money (the actual processes)? Can anyone tell you how to add value in concrete terms?

Because in over a decade of consulting on technical leadership, Agile, lean and DevOps, the most consistent issue I’ve seen is that those questions are unanswerable for almost anyone in almost any company.

In the absence of a clear path to value creation, everyone optimizes locally for “best practices” because…

the root problem is almost all decisions have to be explained to people who know next to nothing about your area & you need to still sound rational.

The local maximum for that usually is “this is how _____ does it & it’s the new trend now.”


The complexity caused the variety, not the other way around. Networked systems are inherently complex. Most of the technologies you mention are attempts to solve that complexity in some way, and the ones that stuck ended up being ideal for specific use-cases but not others.

The industry trends towards the most useful solution, not the simplest one. React isn’t internally simple, but it killed the frontend JS framework experiments which used to come out daily because it really established a useful paradigm that covers a lot of the web GUI usecases.

The process is messy, but it's not illogical.


And nature is messy. And this is a very natural way for us to do things. And much in the spirit of evolution. And competition in an open market.

It's a healthy thing though some software projects probably should just go and die already.


> moving away from python (because its too "slow"); someone recently gave a talk in my company, how to make a 500 line python script (which heavily involves in-efficient handling of IO) go faster with Rust.

I'm pretty good about shutting those types of talks down in my own org. Usually when "slow" is mentioned, you have to take the presenter's word for it; rarely do they include metrics. And when they do, once we delve into the code it usually becomes obvious why something is slow. Usually "slow" comes from using bad abstractions.
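One lightweight way to demand those metrics is to require a reproducible timing harness alongside any "it's slow, rewrite it" claim. A sketch with the stdlib's `timeit`; the "slow" function here is invented for illustration (quadratic string concatenation, a classic bad abstraction that profiling surfaces without any language change):

```python
import timeit

def build_report(n: int) -> str:
    # Invented stand-in for the "slow" code under discussion:
    # repeated string concatenation in a loop.
    out = ""
    for i in range(n):
        out += f"row {i}\n"
    return out

def build_report_fast(n: int) -> str:
    # The boring fix a profile usually points at: build once with join.
    return "".join(f"row {i}\n" for i in range(n))

if __name__ == "__main__":
    # Numbers in hand, the "Python is too slow" claim can be tested
    # against "the abstraction was bad" before anyone reaches for Rust.
    for fn in (build_report, build_report_fast):
        secs = timeit.timeit(lambda: fn(5_000), number=20)
        print(f"{fn.__name__}: {secs:.3f}s for 20 calls")
```

For anything less trivial, `python -m cProfile script.py` gives the per-function breakdown that makes the conversation concrete.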


It's always been complex, and getting more complex, it is likely that you are just becoming aware of the complexity, when you thought it was simple. There's also the phenomenon of companies using tools/techniques that others are using without understanding why others are using them. I work for a company that needs k8s, kafka, and distributed systems. The previous company I worked for did not need that, at all, and so they didn't. The company before that didn't need them either, but thought they did and tried to move their single, relatively simple ETL pipeline to k8s, and it was a disaster.

But companies that need those tools really, really need them. We don't use "nosql" databases (hate that term; it's a really dumb term, since using SQL or not is completely orthogonal to the problem; "non-relational" or "non-OLTP" is better) because we think they are cool tech; we use them because traditional, relational, OLTP databases don't work for our use cases. But if someone comes to me and asks what database they should use, I always say "postgres", unless they can present a compelling reason postgres won't work.

The problems we face are tremendously complex, though, and only getting more complex. We fight back with tools, but there are years where the tools are failing to keep up.


agreed on postgres being the default choice

postgres delivers 200% on top of what you need and if you still aren't satisfied, there are countless forks and extensions


When my last company grafted yet another version of React onto our aging Rails app (making it the THIRD such front-end framework present in that codebase) to build an SPA loan application form, where the only externally fancy thing it does is real-time validation, I knew the realm of ridiculousness was long past us; instead, we were deep behind enemy lines in the zone of absurdity.


Yes, but it has very little to do with any of the things you mentioned. Software became so prevalent that "software is eating the world" became a mantra, but people in IT have little to no choice in what's put on the plate. They've been convinced that positions of responsibility and authority are bad and should be left to the MBAs, and they've eschewed the kind of industry groups that provide the semblance of protection nearly every other profession has adopted. Doctors, lawyers, actuaries, accountants: all have professional organizations that are powerful and provide some protections. I believe that most of the pathology you see is the result of a push and pull between management and IT, as people in IT seek those protections in other ways.

Just one example: Scala. First, I'm not criticizing the language itself; it has its place. But what I saw was programmers trying to create a protected space that would provide higher bill rates. Java was everywhere and hiring a Java developer was easy. Scala was new and had a steep enough learning curve that you could drastically shrink the candidate pool while selling the shiny new toy to management. They could create complex, arcane code that kept new developers from getting up to speed, with the excuse that those developers were inferior and not smart enough to keep up. It didn't work for very long, as management caught on that they weren't getting much other than higher labor costs. Go seems to be the latest incarnation of that, while Rust is a bridge too far to sell to management.

So it's this back and forth, provide something to management that they can sell to their superiors something new. Management buys into it as long as they can get promoted before it inevitably blows up and the developers who sold it move on to new projects, rinse and repeat.


> They've been convinced that positions of responsibility and authority are bad and should be left to the MBA's.

Meh. From my experience, many developers actively don't want to go into management, because usually your whole day is filled with management crap and you can't go and actually code any more. And developers who do switch to management often end up as miserable bosses because their bosses don't care about "leadership trainings".

Additionally, many companies have the non-management track end at senior level, which means zero career progression for those who do not wish to transition to management.


>> your whole day is filled with management crap and you can't go and actually code any more.

These are clearly the wrong people to be pushing into management. Good management (it does exist) includes people who have coded, but are willing to give that up to enable others to do that. They get satisfaction from being enablers and making space for their underlings to be creative and make decisions.

Further, there are many excellent managers who don't have a typical developer background, but can recognize what success means for their team within an organization and how to achieve it. I've been managed by many excellent managers with backgrounds in chemical engineering and the classics.

The developers you describe should decline these positions and find a better fit where they can make better use of their time. Choosing to accept positions like this hurts them, as well as others.


Because software is so malleable, and its inner complexity largely invisible when it is actually running, the complexity tends to grow as much as it can (because it’s easier to add than to remove), that is, until it reaches the point where it is barely (not) manageable anymore. Hence we tend to live on that boundary of maximum still-barely-manageable complexity. Any time a new tool or technology makes some things simpler, the complexity just expands in some other direction (for example in the number of tools).

The only way to counteract this natural course is to explicitly and continuously take the time to simplify and consolidate things and to bear the extra cost of that continuous effort. But the incentives are stacked against that. As long as it (barely) works, the one taking shortcuts and increasing complexity, or just adding something new, will have an edge. It’s also much easier to create yet another leaky-abstraction layer on top of an existing system than improving the underlying system, because the latter is already in use by too many parties and the necessary changes cannot be done without breaking compatibility.

Another factor is that the field is still learning (e.g. type systems, how best to handle concurrency, distributed computing, etc., not to speak of changes in hardware affecting what works best, e.g. cache locality, parallelism, GPUs) and to some degree is still in its infancy. Maybe at some point in the future we will have it all figured out and reach a point of stability where we can concentrate on making everything as simple and coherent as possible, in the ways we by then know work best. But maybe not, and certainly not within our lifetimes.

So, yes, for the time being I’d say there is no real escape. But you can probably find a niche where things are calmer and slower, and stay away from the areas that are the most crazy and quick-moving.


I agree that we lean on things that are often too complex for our given tasks (the industry encourages it), but I'm going to push back with a little Rust rant. I'd argue that Rust actually represents a simplification, not a complication. Perhaps GC was an over-complication. Perhaps the app/system language separation was an over-complication. Perhaps the idea that memory/CPU limits don't matter, because hardware keeps improving and there is always more on the way, royally screwed us. Perhaps OOP is a disastrous over-complication. Perhaps energy costs and efficient resource management are actually a hell of a lot more important than we believed. Look at all the bloat we blindly accept in the name of productivity, which in many cases is dubious. Rust may not be the be-all and end-all of languages, but it does shine a bright light on the brain rot that has consumed the software industry.


I think the problem is that the majority of architects copy/paste solutions to big companies' problems (Google, Twitter, ...), but 99.99% of businesses don't have those problems. And when you tell a manager that this is used at Google and will cost the company $0 (open source), he will say "OK, do it."


The fact that a FAANG spent hundreds of man-years on their bespoke solution already implies that theirs is not a solution for small problems. Small-problem solvers should not look to FAANG for solutions, but to other small businesses.

More often than not, small problems require small solutions...


This question is getting a lot of attention. As such, it is consuming our time. I’d suggest it can be improved and revised. Currently, it is rather vague. I suspect the author could take more time and make it clearer. There are a lot of interesting themes — it deserves to be unpacked and clarified.


True that, but perhaps asking the question and getting these answers is part of how that happens, and the end result is a longer-form blog post somewhere.


It's the last stage of the second Dotcom bubble. The industry has been taken over by parasites, and the well of innovation left over from the ARPA/PARC years is starting to run dry, despite some of the better ideas having yet to see their heyday. It will be interesting to see what happens as the money spigot dries up while the Federal Reserve implements Quantitative Tightening over the next few years. I'm optimistic, as all of this might "reset the board" enough to slow trends toward digital enclosure. Worth looking at:

https://web.law.duke.edu/pd/papers/boyle.pdf


Stand up and fight.

I am:

https://htmx.org


I also like htmx.


Finally.


I feel like this is the natural cycle of abstraction. Things get more and more abstracted and virtualized as they become easier to build and manage, but then the cost of all those abstractions begins to add up, and a movement of ultra-simple, specific performant solutions springs up, outcompetes all the bloated abstractions but suffers inflexibility, then begins to get abstracted and the cycle repeats.

I don't think this is a bad thing at all. Every time we learn things, every iteration software improves. The pendulum swings back and forth -- mainframe to PC to server-side to browser based apps to whatever's next, and every one of those offers benefits to what came before.


It’s more a cyclic thing than a pendulum.

We are back to mainframes with the insanity of serverless, lambda functions, etc.

Now your code runs on the mainframe again. Debugging is super hard. And of course aws happily takes your money for the time you spend on their “mainframe”.


I think things are more complex - overly complex. This is partly driven by rates and CV embellishment: it's great for the individual to implement some new technology, both for their CV and for the better money. The complexity begets more complexity, and the cycle goes on...

I think the industry is waiting for AI to come through. They want the business analysts to be able to write their specs in English, and have the AI do the coding. In such a scenario lots of developers will lose out - some will still be needed - but from a business perspective, this will be even better than outsourcing.


It seems to me the 'problem' has very little to do with software engineering and mostly to do with marketing and a significantly wider gradient in skills and quality.

"Back in the day" you really couldn't get things done if you didn't actually know how things worked. Today, you can do a little learning (which is still a dangerous thing) and, based on low-quality requirements, create a buzzword-friendly application with 1000 dependencies you neither know about nor have to check. That, however, is just a side effect of something that is a good thing: composability.

But that dives into the technical side of things, in reality, the marketing of technologies as a product and the involvement of human middleware (management) in things they have no business being involved in causes most of the perceived problems. That is not something that is really caused purely by software engineers, nor can it be solved by just them.

Two ways to go about it could be:

  1. Having all the human overhead go through the same requirements and QA process as everything else
  2. Be better at marketing your own solution (but make sure it has the correct technical and business underpinnings)

This doesn't work in legacy hierarchical work environments, and you're essentially just screwed if you are stuck in one of those. Best to either stop worrying about the technology in one of those situations, or move on to somewhere else.


I rant about the excess complexity of our CRUD dev tools all the time. Example:

https://news.ycombinator.com/item?id=31217253#31240227

It's roughly 3x the complexity and labor compared to the 1990's desktop-oriented IDE's like VB, Delphi, PowerBuilder, Clarion, FileMakerPro, etc.

I realize deployment (installing, updating) was harder compared to web apps, but I'm not sure it has to be either-or, simplifying deployment at the expense of development. Oracle Forms seemed to do the CRUD job sufficiently without installing a new EXE for each app update; it was almost a "GUI browser". A stateful GUI markup standard may help us get closer to that again.

OF was not perfect, but we should have learned from what worked well and improved upon it. Instead, we threw out the productive baby with the bathwater.

We have over-focused on social media and "web scale", but ordinary CRUD still does most of the real office work. Making our apps "mobile friendly" has crippled them, despite the fact that most real work is done with mice. It's time to return to YAGNI, KISS, and real GUI standards. It's not about nostalgia; it's about NOT accepting the waste and bloat our current dev tools have. "Hello World" has a zillion lines of code behind it now.


Wholeheartedly agree.


A lot of things are over-engineered, but this has always been true.

It is not necessarily fair to say that the majority of software engineering jobs actually require or involve the en vogue tools.

1. Just because tech stacks gain traction in headlines does not mean that they are truly mainstream, but rather that they are of significant interest to the community where the links are submitted/discussed.

2. Recruiters and job ads are written to target software engineers and are gamed towards this goal, dropping buzzwords left, right, and center, sometimes quite nonsensically. Front-end jobs frequently demand experience with Angular, React, and jQuery for something that turns out to be a Vue.js app, and so on. This too can make certain tech stacks and frameworks appear more prevalent than they in fact are.

So, yes, there are lots of overly complicated tech stacks out there, but no I don't think anyone is screwed. Often those tech stacks will have been chosen to solve a specific business problem and then it's not overly complicated, it's appropriately complicated.

If anything, there's just more noise to filter out when selecting a place to work. Lots of buzzwords and nonsensical jargon-dropping, or indeed questionable decisions for the solution of a relatively simple business problem, are good indicators of places at which you probably shouldn't work.


Yes it’s gotten worse.

1. The cloud made it possible to increase the available computing power at the expense of simplicity. This brought in the whole DevOps suite of problems (Kubernetes, microservices and what not).

2. The data science hype brought in Python everywhere, which creates contention both culturally and technically.

3. The rise of mobile, means you no longer can escape portability.

4. And then there's the general hype about the latest new thing. I don't think that has changed fundamentally.

I think things will get better eventually, because 1, 2 and 3 are still relatively new.


The new data tools I've seen are complex under the hood, but offer elegant user experiences, giving the best of both worlds.

You referenced a 500 line Python script being refactored with Rust and make me think of the Polars project: https://github.com/pola-rs/polars

Polars uses Rust to make DataFrame operations lightning fast. But you don't need to use Rust to use Polars. Just use the Polars Python API and you have an elegant way to scale on a single machine and perform analyses way faster.

I'm working on Dask and our end goal is the same. We want to provide users with syntax they're familiar with to scale their analyses locally & to clusters in the cloud. We also want to provide flexibility so users can provide highly custom analyses. Highly custom analyses are complex by nature, so these aren't "easy codebases" by any means, but Dask Futures / Dask Delayed makes the distributed cluster multiprocessing part a lot easier.

Anyways, I've just seen the data industry moving towards better and better tools. Delta Lake abstracting away all the complications of maintaining plain vanilla Parquet lakes is another example of the amazing tooling. Now the analyses and models... those seem to be getting more complicated.


Or you just replace Python with Nim and get the performance of Rust.


Eventually all things will be abandoned and/or rewritten. It will all be thrown away and then there will be more work ahead ($$$). I cringe thinking about the wasted life, though.


While a lot of this is valid, it isn't just a result of managers being enamored by conventions (though that is part of it).

In the cloud, if you want to run a performant platform, you can typically also run it much cheaper if you migrate away from maintaining actual systems.

The problem is that the DevOps and system engineering jobs have become much, much more complex in order to accommodate the cloud, and as a side effect developers now have to meet them halfway as the line between the two blurs.

If you want to run a product that processes a million records a minute, you are likely going to want to go serverless, and that means writing atomic lambda operations. We are shortly not going to live in a world where you can just do all this on your laptop, which will be good in some ways and bad in others.
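An "atomic lambda operation" in this sense is just a small, stateless handler: each invocation takes one batch of records, does its work, and returns, holding nothing between calls. A hedged sketch (the handler name and event shape here are illustrative, not any real AWS event schema):

```python
import json

def handler(event, context=None):
    """A small, stateless record processor in the Lambda handler style.

    Each invocation is atomic: it processes one batch of records and
    returns, keeping no state between calls. The event shape is made
    up for illustration.
    """
    processed = []
    for record in event.get("records", []):
        body = json.loads(record["body"])
        # The "business logic": tag each record as processed.
        body["processed"] = True
        processed.append(body)
    return {"statusCode": 200, "body": json.dumps(processed)}

# Local invocation with a fake event, no cloud required:
result = handler({"records": [{"body": json.dumps({"id": 1})}]})
```

The upside is that scaling becomes the platform's problem; the downside, as the comment notes, is that you can no longer trivially reproduce the whole system on your laptop.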

You will never have to worry about environments anymore; you just write code against the AWS, Google, or Azure SDK and it will run on an obfuscated, identical system you are never aware of... which also has its pros and cons.

You are right for most companies. Normal SaaS products need to get over themselves and realize Kubernetes might not be that useful. But this complexity exists because the larger companies were having trouble maintaining the old way of doing things at the scale the world demands. As long as millions of new users adopt the internet every year, this complexity is only going to get worse. The world of 2030 doesn't exist without Kubernetes and Rust and Lambda, imo... for better or worse, it's going to keep getting complicated.


what you’re referring to is called cargo-culting

small companies copy their technical and even hiring decisions from behemoths like Google

why? market powers!

the unfortunate reality is that these companies can't compete elsewhere, so they use hype technology that allows them to better market themselves (at the said conferences, for example)

the employees can also use this opportunity to put “managed Kubernetes cluster” on their resumes to get more job offers

solution for you would be to find a company that doesn’t focus on technology, but on the problem itself


You're mixing up a lot of things. Overall, things are often born out of a need. The problems start when it's not a business need but a career-advancement (or other political) need. Think about React. We'd probably be better off as an industry if it weren't so popular. I hope someone at FB got their promotion for building that massively complicated framework, and I hope they learned what KISS means after they read the codebase of Preact, which achieved the same API with a fraction of the code.

Using Go or Rust instead of Python is not inherently more complicated, it's just a different language.

NoSQL is not complicated but it's fairly useless for most of its users (despite being so popular). At the same time, it has its uses for companies that need massive scale (think Google, not your average startup).

Kubernetes is fairly complicated but it can be the easiest option (even if it's not the most resource efficient) to do something because of the ready made tools available for it.

Don't worry anyway, we haven't screwed up ourselves, we just created tons of artificial work we can spend our employers' money on and that we can use to inflate our cvs and possibly land some more money in the next role.

When you build your own company, be conscious of this, and just use jQuery and PHP like Pieter Levels does.


You're not wrong. The problem is the fancy stacks actually solve some stuff, but they bring their own disadvantages with them. No-one is denying that kubernetes is extremely powerful and you might need that in some use cases, but then you're suddenly writing operators for it, instead of business logic for your customers.

I think the problem is that we're building tools to solve specific problems, and then expanding each of those tools until they become massive and need other tools to help them. So Docker solved a problem, but then it created problems that you need Kubernetes for, and so on.

One of the reasons I'm working on darklang is that I think the root cause of this complexity is solvable. The solution, in my opinion, is to build tools that cover multiple layers of the stack - that removes join points where you might be tempted to customize.

For example, Firebase covers multiple layers you might otherwise need: a DB, a connection pooler, a firewall, an API server, an autoscaler for the API, a load balancer, etc. Instead, the only surface area you have is the Firebase API. There are lots of similar tools that cover multiple layers of the stack like this; Netlify, Glitch, Darklang, and Prisma are some examples.


Sounds to me like you've worked at trendy tech companies and want to keep working at trendy tech companies. That means you're going to have to work with trendy technology.

A massive number of companies, maybe even the majority, don't do this. They use what works and upgrade when needed, not when it's cool to use the new thing. They just don't tend to pay like trendy companies, and don't look as good on a resume.


As someone who purposely -doesn't- work at a trendy company, it seeps in here too. People abuse us to get jobs at those trendy tech companies, basically.

Understand we have a relatively small web presence with not a lot of traffic. High dollar, low volume type stuff. About 6 years ago someone decided to move all our stuff to self hosted kubernetes and microservices, because reasons. As you may imagine, he took a job as an infra guy for a much larger company a few months later, leaving others to figure out and clean up the mess.

Not long ago, another did the same thing, but with GraphQL. Why? Because it's a graph! How cool. Again, he left for a sexy tech company not long after, and now our tiny API service is stuck under a gazillion lines of autogenerated code.

All of this points to the real problem I guess, which are middle managers. They eat buzzwords up like candy and trust that new and flashy means that person is smarter.


Yeah I get it. But that is an issue with technical management making a long term decision without planning correctly.

Sounds like your managers are either looking at those trendy tech jobs too (and want to have the buzz words on their resumes) or they think the trendy tech will attract candidates.


> they think the trendy tech will attract candidates.

This. Management says semi-openly that the job itself is too boring to attract solid candidates, so they at least need to make the tech stack interesting and good for the CV. Working with a hot stack is basically part of the compensation.


I think every single field has complexified throughout history. Agriculture now has dozens of chemicals and large machinery. Woodworking now has advanced machinery and a handful of ways to do the same cuts. Music evolved from Gregorian Chant to Bach to Jazz. Software is no different, humans invent more tools and techniques in each field over time.

I don't think engineers are willingly screwing themselves. Does anyone here choose to adopt something they know will screw them over? We may be forced into decisions by higher-ups or by colleagues or associates, but those people generally have some reason behind their actions, they don't willfully screw engineers for fun.

The field as a whole, none of us individually can control where it goes. If your org sticks with proven older tech, it will do zero to prevent new frameworks from cropping up everywhere else. If you adopt any newer technology, you're now becoming a user, increasing its relevance, helping to test it and prove it, finding bugs and errors.

So no, "we" have not "screwed ourselves". It's simply human nature to complexify and add more tools over time.


You're right.

There are a few problems.

First, the gray beards that expect everyone to know "the basics" had built stuff so complex and convoluted that nobody can use it 100% correctly without their domain knowledge. It's fine, computers are complicated, but expecting everyone to keep it all in their heads is unreasonable. So people buried that stuff below a layer of abstraction, but that didn't solve the fundamental problem and so even these higher level tools are convoluted and cumbersome.

Then you've got the people doing this on purpose to ensure that they are unfirable. This is pretty self-explanatory, but there's a perverse incentive to overcomplicate your job so you come off as indispensable; it's like CIA and Wall Street lingo, but for devs.

Of course it doesn't help that many people just go along with it for a paycheck.

You've got people who want to sell their cool shiny thing as a solution to anything and everything, never mind the consequences. Everyone knows that these decisions ripple through time, but they don't care about that.

And finally, there are the guys who just don't know what they're doing, bit off more than they can chew, and are in over their heads.

All of this leads to miles of technical debt, an industry made of it, increasingly unusable systems that require teams to understand and maintain.

I don't know that there is a solution. I don't know that it could happen any other way. But I do know that regardless of that, these systems cannot stand the test of time when built this way. If you want a future where computers serve humans and are ubiquitous, this path won't get you there.


You as a developer have the power to say no.

So many engineers have no backbone - use your leverage. You are the one writing code, not some PM.

There are sane escape hatches today that will give your team productivity multipliers and allow you to blitz past these "resume driven development" companies.

Render.com, traditional server-side rendered frameworks, etc.

Advocate for yourself and your team. You will be surprised how much leverage and control you have.


> Will industry move towards simple solutions after experiencing this churn down the line or are we doomed forever?

That depends on the individual developer. For example, I'm working to clean up the mess that has become app dev w/ JavaScript (https://github.com/cheatcode/joystick), but I expect many will dismiss it short-term because it's not "what everybody else is doing" (despite being far simpler and clearer than the state-of-the-art).

And therein lies the problem: groupthink. There are very few people asking "how do we simplify this" or "how can we make this more clear" and a whole lot of people trying to impress each other with their galaxy brain knowledge of unnecessary tech.

The good news is that it's not a technical problem, but a cultural one. People are afraid to think independently and so they just go along with whatever the "best and brightest" say to do (which is usually an incentivized position due to existing relationships/opportunities).


You have screwed yourselves.

I'm still doing my projects with LAMP technology, with my own framework: a 150-line kernel, routed via the filesystem, with maximum simplicity as principle number one.

Postmodern web development lost the Doherty threshold.

I measure my page load in tenths of a millisecond. The average page is generated in 1-9 milliseconds, including the typical 2-6 simple local SQL queries.

Your complexity is my competitive advantage.
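For a sense of how small filesystem routing can be: here is a rough, hypothetical sketch in stdlib Python (the commenter's stack is LAMP, so their kernel is presumably PHP; the directory layout and names below are invented), mapping URL paths directly to handler files on disk:

```python
import tempfile
from pathlib import Path

# Build a throwaway "pages" tree so the sketch runs anywhere.
root = Path(tempfile.mkdtemp())
PAGES = root / "pages"
(PAGES / "about").mkdir(parents=True)
(PAGES / "index.py").write_text("# home page handler")
(PAGES / "about" / "team.py").write_text("# team page handler")

def resolve(url_path):
    """Map a URL path to a handler file; the filesystem IS the router."""
    parts = [p for p in url_path.strip("/").split("/") if p]
    candidate = PAGES.joinpath(*parts)
    # "/" -> pages/index.py ; "/about/team" -> pages/about/team.py
    for target in (candidate / "index.py", candidate.with_suffix(".py")):
        if target.is_file():
            return target
    return None
```

With routing reduced to a directory lookup, there is no route table, no middleware chain, and nothing to configure, which is the whole argument.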


Excellent question. Esp. the final bit where you ask about 'the business value'. It is actually closer to just 'value' - as experienced by the users.

If you are in a position where you see piling up complexity does not bring in more satisfied users and more money that is a great time to set up a simpler competitor that will do things on the cheap in a less complex way.


Complexity is bad when you are designing and maintaining a system. The ecosystem of humanity's software development isn't something you are designing or maintaining, here complexity is good because it provides abundance and diversity of tools and solutions. Don't need it? Don't use it. But stop advocating either as the right or good way for everyone.


As someone who is relatively young, these "over-complications" have been the norm for me, which is one of the reasons I find it difficult to relate to HN, especially regarding the development of web apps.

HN's perspective on FE development is that it should be a meager skill that can be onboarded with ease, and there is frustration with frameworks (most notably React), because what was once done with HTML/CSS and a sprinkle of jQuery has exploded into an actual field and specialization.

And, personally, I think it's warranted.

The explosion in what the front end demands simply tracks the specialization of the field.

15 years ago, we were just doing blog posts with form submissions; now, we're trying to pave the way for applications like Photoshop to be accessible on the web.

HN is getting old. There's no doubt about it.

And the young ones are starting to lap you guys.

I just hope I don't grow as bitter and hostile as some of y'all when my brain can no longer pick up new esoteric material with ease.


I would argue with the notion that this complexity is unnecessary.

But most importantly, if it really is, as you say, you can profit from that. Open a consultancy and solve clients' problems without these over-engineered solutions. If your competition truly wastes a lot of time, then you will be able to solve the same problems faster and more effectively.


That is a nice theory, but then you need to do sales, and if the people in charge are swayed by buzzwords like microservices and monorepos, and your marketing language only goes as far as, say, "reliable, proven technologies", you'll be passed by.


We lean into complexity because it's easier. Simple solutions are actually much more difficult to create.

Software engineers are lazy, don't want responsibility, just want to have fun and be creative. That's not a recipe for good engineering. The industry will continue to chase its tail as long as we don't treat it like a real engineering discipline.


> Someone else talks about that we need to [...] X [...] because that's where the leaders of the industry are moving to

This is the definition of cargo cult, and companies in our industry have a higher than average tendency to behave this way.

Most technology, principles, methodologies, programming patterns, project management patterns, etc., are subjective, in that they work well for certain projects... not all. Even the massive over-complexity we see, for example in containers, is sometimes worth it; it has its place. The issues come when people start behaving as you have found, copying what others are doing because they draw an over-simplistic connection between their chosen tools and their success as a business.

Either convince your peers, or even superiors that mimicry is a poor basis for technological choices (best argued by doing the analysis yourself and pointing out the real world applicability), or find a different company that understands this (they do exist).


The opportunity is to locate experienced software engineers from before this nonsense, and create new, efficient software that runs circles around the new modern complexity.

My last employer did this: a facial-recognition developer in the enterprise space. Where every competitor in the industry has a server stack for their solution, we had a single integrated application replacing the entire competitor stack, and the whole thing runs perfectly fine on an Intel Compute Stick. The kicker is that such a solution is exponentially less expensive to own and operate, and exponentially less expensive to create, because the types of people with these tight skills cannot find work: they are burned-out game developers with extremely high optimization and complex-simulation experience. They look at the complex world of web/mobile/modern development and simply want to cease writing code. I find them and we create enterprise killers.


good luck selling something inexpensive to enterprises

department heads need a reason for big budgets


Who said the solution is cheap? Our expense to create is exponentially lower in every respect, and our customers' expense to own is significantly lower and less complex. That does not mean the price of the product is lower; in fact, the price is a tad higher because of these efficiencies.


So, you have valid points, but...

1) some of these things (e.g. Node, microservices) already peaked a few years back after being overapplied, and now the pendulum is swinging the other way

2) others (e.g. Kubernetes, React, monorepo) were developed at large, profitable companies that others wish to emulate (or work at someday), so they find excuses to use them. This case takes longer to reach a point where things swing against it, because everyone wants to pretend their company is the size of FAANG or will be soon, but the same process of overapplication and backlash happens eventually

3) in the midst of all that noise, there are some new things which are in fact a good idea for most developers. I don't know Rust or Go, but perhaps they are examples of that.

The key for us as developers is, unless we wish to work at FAANG, try to spot (3) in the forest of (1) and (2), and don't let (justified) annoyance at (1) and (2) blind us to the fact that (3) is out there as well.


The reality is all the tools you mentioned solve certain problems really well.

Need container orchestration? K8s is the best on the planet far and away

Need accelerated compute? Rust is a fantastic language that saves us from C++.

These tools are all fantastic and we should be very grateful we have them. If people are using them outside their use cases then that’s just bad engineering.


I'm also jaded from all the new frameworks and "paradigms". (Similar to how every exploit must have a catchy name nowadays.) However, I genuinely love the innovation and ingenuity of software engineering. The industry will find simple solutions, but not in the way you think: the language and mental models will advance, making what now seems complex into a simple thing.

Phone calls are stupid complex nowadays compared to the old point-to-point wiring, but we can still very easily "pick up the phone and dial." It's an abstraction/mental model that's held since PBXs became automated.

When I studied machine learning 20 years ago, it was barely used, and everything was "from basics." The applied stuff was very simple, like an autoencoder. Today, the way you think about, and teach, ML is not "a matrix here and a vector there," but combinations of ANN layers.
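That shift in mental model can be caricatured in a few lines (pure Python, toy weights, no framework, purely for illustration): the layer still is "a matrix here and a vector there" inside, but the working abstraction is composition of layers.

```python
# The old "from basics" view: a layer is explicit matrix/vector arithmetic.
def affine(W, b):
    def layer(x):
        # y_i = sum_j W[i][j] * x[j] + b[i]
        return [sum(w * xi for w, xi in zip(row, x)) + bi
                for row, bi in zip(W, b)]
    return layer

def relu(x):
    return [max(0.0, v) for v in x]

# The modern mental model: a network is just a composition of layers.
def network(layers):
    def forward(x):
        for layer in layers:
            x = layer(x)
        return x
    return forward

# A tiny two-layer net with made-up weights.
net = network([
    affine([[1.0, -1.0], [0.5, 0.5]], [0.0, 1.0]),
    relu,
])
out = net([2.0, 3.0])  # affine gives [-1.0, 3.5]; relu clips to [0.0, 3.5]
```

The matrix math never went away; it just moved below the abstraction line, exactly like the phone-switching example.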


Blame it on the "move fast and break things" mentality. The level of deprecation in this industry is really bad. And it is to a large extent trend-driven.

There also seems to be increased saturation in the development community: many people learn to leetcode before they learn the basic whys.


I see it as a tech debt bubble cycle and almost inevitable in any industry/field. It will always ebb and flow.

On one hand it makes work unnecessarily complicated and in some cases creates political problems, because the more complicated a solution is, the more governance it needs, and the more governance, the more politics. A lot of these solutions get put in place because the people promoting them are incentivized to be the one who found the "solution".

On the other hand it creates new opportunities for those who have the courage to not accept other people's assumptions about what "good" is and to find out for themselves and separate the wheat from the chaff. If you can adeptly use Occam's razor to decide for yourself what works and what doesn't you'll be ahead of the curve. Just keep calm and code on.


Sounds like you're getting caught up in the hype. You don't have to use any of those tools or technologies to do your job, and indeed the vast majority of people working in software don't.

The only people who are screwed are the people who follow hype. The rest of us are just fine.


It’s hard for managers sometimes to keep this stuff out even if they want to. Engineers like to play with tools and always always always over-engineer.

I try hard to walk my talk here but I catch myself doing it too. Simplicity is much harder than complexity. It requires more thought and deeper conceptual integration. Right now I am rethinking some older things and trying very hard not to second-system-effect it.

On top of this you have an industry pushing this stuff and cloud vendors who love the added cost it brings from managed services and more overhead. Cloud makes money off complexity. Makes it harder to move too which improves lock in.

Lastly, you have the fact that our industry is cash flush. There has been little need to trim the fat. Just raise more VC or add more billable SaaS or… well, crypto comes with its own casino revenue.


> Simplicity is much harder than complexity.

Amen!!! The skill of practical parsimony is way under-valued. Warren Buffett said one of his top "skills" is saying "no" to financial gimmicks and peer pressure.

Somebody e-shoot the packrats.


Another manifestation of problems in higher education. It’s not just software.

If you read up on the sociology of professional specialization you'll learn that most technical complexity in a field is there for competitive purposes. Jargon exists more to exclude and obscure than to facilitate.

So one predicts less productivity as increased competition leads to the complexification of professions. This is all because higher education is broken. One of the functions of higher education, perhaps its most important function, is allocating human capital efficiently. It's fully derelict in this, preferring instead to sell credentials to labor that labor doesn't need, at the expense of the debt holders and students, to the delight of corporations. The result is zero productivity growth going back to the early '70s.


I personally feel like it was somehow worse in the days of enterprise Java BS. Check out the daily WTF and it doesn't really seem to have substantially changed over the decades: some folks in the industry will have great success with some technique that happens to work for their particular case, others will repeat, some successfully, some not, a myth grows from the successes that people do hear about, there's intense FOMO (what is _your_ micro service strategy), and at the end of it there's "cargo cult technical strategy" from people with little understanding of the circumstances in which something is applicable but try to get to success by applying successful people's techniques regardless of circumstances.


This is why I moved to data science, so I can focus more on solving problems than on picking frameworks and libraries. We are not completely immune to this problem, but by and large the tooling ecosystem is much smaller and the focus is on problem solving, not the tech stack.


Definitely not immune, and far from perfect. I was talking to folks at PyCon about this problem. There's definitely "framework fatigue" in data engineering.

Luigi, Airflow, Argo, Prefect, Dagster, bash + cron, MLFlow. Pandas, Dask, Spark, Fugue, etc.


I miss the days where anybody could learn HTML/CSS/JQuery and make web pages, and Rails, or a LAMP stack was enough server-side.

Building a website today requires learning a ton of different tools, languages and frameworks. All of them being moving targets, so by the time your website is done, some of its components are already deprecated.

And you could go a long way with a $5/month shared web hosting service. Now, "the cloud" is not only super expensive, but it's also very hard to even guesstimate how much the next bill is going to be.

Most people who would have made their own website now just turn to platforms such as Squarespace, or don't even have a website any more and rely only on Instagram, Twitter and TikTok.


I think the market is always right. As complexity increases in areas as you describe, it will create opportunities for solutions that simplify things.

IMO, it's one of the reasons that Phoenix LiveView is so appealing for people because it removes so much complexity from building otherwise complex tooling.

I actually just had to come face to face with this because I've been developing a lesson plan to teach my son to program...and after looking at everything I settled on Linux command line + HTML/CSS + SQL. Then the decision came down to which language to teach and I narrowed the field to Ruby, PHP and Elixir.

Ended up settling on Elixir, simply because of the functional style and its total capabilities without having to introduce tons of additional technologies.


> I think the market is always right.

How long does it take to be always right? A lot of these comments at best imply significant lag, and at worst imply the market is perversely wrong.


It means that enough options exist to solve the use cases that people have and if there's any type of gap that isn't being addressed, enough people will see the opportunity and address it. People will flock to that solution if they feel it's beneficial.

Pretty much always works if there's not interference.


There are two separate arguments here, that growing complexity in software stacks is bad and adopting modern languages is redundant.

I don't know where you work, but the majority of enterprise jobs out there have always been focused on what's dominant in the industry. Today that is, as you say, Blockchain, NoSql, crypto, micro-frontends, etc. While hearing trendy words like that makes me want to hurl, that's how these run-of-the-mill companies operate. They don't have time for an optimal bare-bones approach, or for building things from scratch. Again, I could be completely wrong; maybe you work somewhere really cool that does more exciting work outside of business logic, react development, and dockers that dock other dockers into kloud goobernety docker sockets. But the point I'm trying to make is that 90% of tech companies aren't very pretty, and as I'm sure you know, that's part of why they pay so well and are stable.

Non-developers in tech like recruiters always seem to focus on new languages as if they're inherently better. And in some ways they're unwittingly right: there are fewer issues with backwards compatibility, at least, and thus more room for new features that eventually become backwards-compatibility concerns of their own. And of course in some ways they're wrong: newer languages are less mature, and some argue that programming hasn't really changed since the '70s. This isn't very reassuring when the compelling features are GC, serialization, concurrency, and really dumb things like attractive syntactic sugar that you care less about once you're in the woodwork anyway.

That Python to Rust talk does sound kind of stupid though. Almost like a higher up with enough power to make subordinates listen to them ramble about their favorite sports team programming language. I almost want to guess that whatever it was could've been done in C++ 20 years ago, but like I said, language wars are stupid and trivial. Interpreted languages are slower though, but that's pretty obvious.

I don't think we're screwing ourselves into something we're locked into, these are just dominant in enterprise roles.


I think the problem is not the fads and hype trains (those have always existed and always will - remember "network computing", "netpc", "thin client", "i-anything", ".com", etc.?).

The problem is:

a) Inexperienced developers that confuse jumping on hype with "modern" and sound engineering, especially when the project is not something to be deployed and forgotten about but something that will need to be maintained for a decade or more (will your Kubernetes or blockchain be still around in 10+ years?).

b) Clueless managers that allow it to happen (or, worse, actively push it)

c) Spineless hucksters that would sell you the Moon as long as they get their commission.

None of this is the fault of the technology or of the engineers who created it.

Heck, I have recently witnessed a representative of a company manufacturing mining excavators (this type of equipment: https://daemar.com/wp-content/uploads/2018/12/dreamstime_m_8... - company is not Daemar, though) giving a breathless talk about how they "innovate in metaverse" by giving their customers the opportunity to buy NFTs of the pictures of their excavators. Seriously, not making that one up ...

That's just general lack of common sense, general lack of understanding of who your market is and what your customers are actually asking for (hint, NFT it probably isn't unless you are in the business of yet another crypto Ponzi scheme) combined with FOMO.

And the company management either gets it and tamps down on it, or the company will go out of business at some point.

This is not really about software - all of those things have their places and can have great benefits when used in the right way for the right purpose (not because it is trendy, modern or because the competition is doing it too) and by people who actually understand them (and the consequences of deploying them).


"Tell me you don't work at FAANG/MAAMON without saying you don't work at FAANG/MAAMON".

Google/Meta/Amazon are the last bastions of sanity. They use stable technology that works. And they keep taking more and more of the total software market as a result. The methods of avoiding this kind of buzzword-driven development are increasingly only extant within big tech. Other companies make do with perpetual-junior developers and people who can't hack it at big tech. These companies will never develop an engineering culture, and they'll never break away from the "CTO heard about web3, and now we all must use web3, somehow" dynamic.


There's a big leap from "a few people at work are overengineering things" to "we have screwed ourselves as software engineers". I don't think that you can generalize to the entire industry based on your experience.


Just because Ruby and Python are great doesn't mean that they ought to be used for every project. I love being able to write code that compiles to a single statically-linked executable in Rust that pushed me to write more correct code from the start. I also appreciate what Erlang/Elixir offer in terms of fault tolerance, extensive pattern matching and functional programming. There are so many ways to solve problems and that's a good thing. Every one of these languages has tradeoffs, though. People don't move to fancy stacks just for the sake of moving. They're trying to solve old problems by creating new ones!


You can go a very long way with terraform, html, JavaScript, and golang/Java/python/rust/whatever API language you prefer.

If these things aren't at 100% you're just adding to problems with more things, not solving them.


The pendulum always swings back the other way.

Perhaps after a downturn, things will revert to the mean.


To what degree is this valid logic?

Under what contexts does “reversion to the mean” apply?

Over what time frame?

The “mean” implies one dimension. What quantity are you referring to?


https://www.econlib.org/is-there-a-swing-of-the-pendulum/

> Is There a Swing of the Pendulum?

> By Pierre Lemieux

> People are often tempted to see social (including economic and political) phenomena in terms of a “swing of the pendulum.” In this perspective, problems such as wokism (just to give an example) will be corrected when the pendulum swings back. I suggest that this approach is easily misleading and seldom useful.


From Wolfram Math World

> Reversion to the mean, also called regression to the mean, is the statistical phenomenon stating that the greater the deviation of a random variate from its mean, the greater the probability that the next measured variate will deviate less far. In other words, an extreme event is likely to be followed by a less extreme event.
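For the textbook case, that quoted definition can be written out explicitly. Assuming (my gloss, not Wolfram's) that two successive measurements $X_1, X_2$ are jointly normal with common mean $\mu$, common variance $\sigma^2$, and correlation $\rho$:

```latex
\mathbb{E}[X_2 \mid X_1 = x] = \mu + \rho\,(x - \mu),
\qquad
|\rho| < 1 \;\Rightarrow\; \bigl|\mathbb{E}[X_2 \mid X_1 = x] - \mu\bigr| < |x - \mu|
```

so an extreme first measurement is expected to be followed by a less extreme one, purely as a statistical artifact of imperfect correlation.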

It is a stretch (and invalid, generally) to extrapolate this well-defined phenomenon to a complex system. There are many complex systems that maintain or increase their complexity.


> There are many complex systems that maintain or increase their complexity.

Increased complexity is only achieved with additional energy input. I'm suggesting that if the energy (read: money) going into the system decreases rather than increases, there will be a reduction in complexity.


Consider a counterexample: an open source project that loses favor / funding / contributors. Does it become less complex? Probably not.


When it comes to software, it is easier to engineer a complex solution than a simple one, so in that sense I would agree with you.


The software industry has significant energy input.


> What quantity are you referring to

How about global IT spending (measured in trillions). Or perhaps spending in "new technologies." If there is a downturn, there would probably be a reduction in IT spending, and so many of these overly complex application solutions could get cut, delayed, complexity downgraded, etc.


Not everywhere is like that. Smart small companies will eschew this unnecessary complexity to get more done with less.

For example, we have deliberately stuck with a single Node.js/Next app in a single repo using Postgres running on Heroku. We are 5 engineers now and plan to keep it that way for the foreseeable future, even as the team grows.

There is some complexity we probably don't need – the JavaScript ecosystem is notorious for this – but what we use is all reasonably boring tech at this point, and it allows us to stay productive as a team, focusing on delivering value instead of just maintaining things or chasing trends.


Just say no. I know this isn't easy. I'm the tech lead at my company and I've continuously steered us away from stuff because I couldn't understand why we needed to do things differently, and no one else could come up with a reasonable argument for why any of the things we had that worked needed changing. I have dabbled with all the things but not found a compelling reason to change anything. Sometimes people go off on a tangent and sometimes they discover something useful, but I'm in no rush to get there. What we have is fine and we have stuff to do.


Software ate the world, but didn't digest it quite properly, and now the world is in many ways broken.

I think some of the complexity stems from trying to make digital things that simply aren't or shouldn't be.


I believe there's some level of real innovation in all the new trends, but I would say the majority is just recycling of old ideas implemented at a different layer (e.g. CommonLisp/Smalltalk vs. Java vs. Javascript, jails vs. docker, Thin clients vs. SPAs, ...) and hyped by some BigTech™.

The industry seems to be constantly spinning its tires, putting a lot of effort into rediscovering mostly the same things every decade, while the really hard problems remain unaddressed. That's clear when you see that most of the important algorithms were published before the '80s.


> management loves [overly complex solutions], because it creates "work" for the sake of it, but it doesn't add any real business value.

Don't managers understand that development is a constrained resource? They have to choose which projects move forward, where people are assigned, and increasingly, which outsourced service to use because they don't have enough in-house resources to turn to.

My cynical view of the move to complexity is management (or their C-level superiors) are often sold on new platforms or "standards" that require it.


No. It's fine. Idk how long you've been in the industry but the Bad Idea Graveyard is already a mile high. It's great to see innovation and competition and it necessarily will include some duds or some things that inexplicably succeed. I've seen a lot of orgs experiment and then back off a lot of these kinds of things. Sometimes they end up getting strong adoption. There's plenty of smart ways to manage it. The industry keeps growing and evolving like crazy and the overall trajectory has been nothing but positive.


The internet blew the doors off conventional business. Instead of a local community, you're now exposed to 5 of the 8 billion people in the world. Most of the increase in complexity is coming from corporations. They have a tendency to run with a lot of inefficiency. New technologies are going to be introduced at an ever more rapid pace and fragmentation will only increase. I wouldn't say we're doomed, quality of life around the world is rapidly improving. It's a lot to stay on top of and overwhelming to be sure.


Not to mention shoehorning MongoDB into everything because "MySQL is slow".

This is at a startup that doesn't even have 100 concurrent users, and their data and queries are nothing special.


I'm over here jobbing from home on my couch (#winning) thinking about how screwed it feels that my task for the past couple days has been to send transaction details to a "webhook" so that a template email can be sent to the customer for 3rd party compliance purposes.

Why the heck can't we trigger an email from our internals? Oh, we don't even host our own email... because we're using a different company to host ALL our emails, documents, filestorage, etc...

i'm_in_danger.gif


I think the tools may have become too good.

There are so many different ways to build web services and the hardware (CPU/GPU/RAM/network bandwidth) and the software (OS/Nginx/Python/PHP etc.) have become so good that at the end of the day, they all work, more or less, which means that such complexity can always be justified.

I feel like software written for embedded systems to work with physical world suffers less of these issues because the environment is just less forgiving.


> I cannot help but wonder, that we have possibly screwed ourselves pretty bad, and there is no escape from it.

Just focus on making something great, and don't get too caught up in all the fashion. Software lasts way longer than people think. No one cares what brand and type of hammer a builder used to make an amazing atrium. Likewise, no user ever thought, "this video editor would be better if it were written in Rust and ran on Kubernetes."


In general, people can only move their career forward by pushing for new things, which inevitably become increasingly complex over time as all the low-hanging fruit is picked.


I laugh when these "brave new age" practices find their way to academia, where they actively interfere with every step of the intentionally perpetually-unfinished, half-assed, single-use software that they make.

But why are s/w developers worried? As long as tons of advertising money find their way into glorified blogs, they will get paid, no matter how much complexity they invent to justify their workload.


Explicit tools are complex on the face - implicit tools are complex in implementation.

Kubernetes is complex, and FTPing some .rb files would be simpler: until one of about 145 different situations arises that Kubernetes forced you to account for ahead of time.

Whenever you find yourself complaining about the complexity of a tool: ask yourself “am I smarter than everyone in my industry, or do I possibly not understand the problem entirely?”


I was thinking about this yesterday as it relates to infrastructure and hosting systems, then I stumbled on, via a semirelated article, the phrase "cloud repatriation".

https://deft.com/blog/cloud-repatriation-isnt-a-retreat-but-...


I don’t think you’re wrong in your observation (though perhaps a bit hyperbolic in the doom) but I’m perplexed why you think there’s “no way out”.

Software is malleable, people are generally smart. It may take longer than you hope it does but things will shake out just fine as teams/companies are forced to look critically at their infra spend vs utilization and adjust accordingly.


I think about it more from the perspective of "building stuff that is useful and interesting". I can very quickly build a lot of cool, useful stuff with JS + Node + React + Postgres.

Yeah there is a lot of overbuilding and BS in our industry, but I don't think we're unique in that regard. It is safe to block out the noise and focus on what excites you.


The reason for this is Resume Driven Development and promos for flashy projects. Management in tech companies is completely broken.


This is why, as much as people love to complain about it, the big tech companies might actually be doing hiring right: hire for core competence and skills, not for frameworks/languages. The thing that's missing from the process is that the core skills should be tested only once, not every time you interview.


I believe this issue was discussed about 7 years ago. I believe this article still holds true.

https://pingineering.tumblr.com/post/116038532184/learn-to-s...

(let me know if someone has a better link.)


I’m deeply saddened that they chose MySQL as their hill to die on. I understand why, but boring technology doesn’t have to corrupt your data silently by default.

As for better links, I’m sure the concept of “choose boring technology” evangelises and explains the broader point that your article makes: https://boringtechnology.club/


Just a quick Google search of the author: I guess he died on that hill and got resurrected as an internet icon. Feel free to DM him your thoughts.

https://twitter.com/martyweiner


Agree with everything except the monorepo comment. The polyrepos I've experienced were more complicated than the monorepos


ALL systems and processes (not just IT) tend towards complexity well past the sweet spot (up until the sweet spot, they were beneficial).

So to answer your question - most of the industry will not move to simpler solutions. It goes without saying that a small fraction of the industry does require those complex systems, but they are relatively rare.


Before you categorise something as "unnecessary complexity", maybe it is worth taking some time to understand whether or not the problem your company is trying to solve aligns with the goals of the idea being presented.

We probably are doomed if there is no push back and debate with the vocal minority. Silence is often mistaken for complicity.


First software ate the world. Now software is eating software too. My point is, what you write makes sense if the target isn't moving. Writing software will inevitably become more complex because the envelope is always being pushed.

What you say about needless complexity is a very valid point, but it's just growing pains imo.


I wonder if "growing pains" fully captures what's going on here. It might just be natural for people to grab more layers and tools when they run into problems. Everyone loves to demo how the new thing works with "just a simple YAML file."

Honestly everyone would be better off doing everything in code (python, bash, go, rust, c it doesn't matter) directly. They're easy to debug, flexible, and everyone already knows how to work with them.


I suppose grabbing more layers and tools instead of thinking deeply about why you're having the problem and resolving it at its core is the growing pain I'm referring to.

On the other hand, perhaps the industry is being ever-increasingly led by a younger workforce who already came into this ecosystem, and there's less chance of a full retro/introspective about why things are the way they are.


Yes, these things are done for IMPACT, personal impact to get that raise/promotion. Most of it totally unnecessary for the 99.999% cases. Same with leetcode questions. You start with leetcode at the gate and then continue with micro-fe refactor to get a raise.

Industry is infested with people who hate programming but love status.


I get that complicated problems have complicated solutions, but what I don’t like is pushing that complexity onto the developers/users. Better interfaces would make all the difference in the world.

Whoever develops the “Visual Cloud” IDE for writing scalable web apps where everything “just works” will be about $10B richer…


The problem isn't with the tools you list. In my experience, management is looking for a silver bullet to solve all their problems or to use something as a marketing term. It seems many non-tech companies are chasing the tech that actual tech companies use even if their use cases don't justify it.


Cars today are more complex than the Model T, though we could have settled for faster horses ;)

I don't think we're screwed.

I agree that at times complex solutions are prioritized for the wrong reasons - e.g. to create more work, generate buzzwords, or look nice for hiring and investors. But ultimately these are tools with tradeoffs.

I happen to like K8s, monorepo, and Go because they solve problems that I have personally run into. I think crypto goes too far and doesn't really solve anything.

In terms of complexity, I don't see these tools as going from algebra to calculus, but more like re-learning variations of algebra over and over - sure it's tedious, but it's not rocket science.

However if you don't like dumb industry trends that don't create business value you can always go work for a series A startup. They DGAF about the frills or buzzwords, they just want fast results.


We're not programming like Turing award winners. We're not treating software as the science to make a product but as the mechanics to put it together and patch it.

We should be less original and try to copy mathematics - their theorems are valid for thousands of years. Our codebases last maybe a decade.


I have a thought experiment: if these tools are indeed not adding business value, and organizations are becoming "unnecessarily complex", then it should be easy to undermine their position in the market with a product that chooses the tools that you deem to be simple, right?


Browsers are the next Russia - we rely on what they offer because that's the easiest solution nowadays, but all of that is a huge pile of technical debt waiting to blow up eventually. We should be using a well-engineered cross-platform native UI technology instead.


I think it's the polar opposite. We've created a problem for businesses and a great situation (high pay, high job security) for developers.

The developers don't really lose in this situation unless they are owning the businesses.

I'm not saying this is a good thing. Just assessing the reality.


Paradoxically, we as engineers have to place more trust in the business people who are focused on finding the most economical and effective solution to a problem.

Engineers love to overengineer, because they can. And because it’s a lot of fun.

And then they end up shooting their own legs with unjustified complexity.


This is the kind of question that Rich Hickey (inventor of Clojure) dealt with here: https://www.infoq.com/presentations/Simple-Made-Easy/


Sounds like the place where you work is going bad. Companies have this lifecycle where they start off small and scrappy with a good product; then, if they succeed, they become big, bloated and bureaucratic, and the product doesn't matter any more.

I'd suggest go work somewhere else.


I think the author is suffering from a common problem these days: "I see one thing is broken, therefore it's all broken." Instead of taking this approach, try to think about how you can improve things; instead of seeing only what is wrong, see what can be fixed.


Yes luckily FAANGchads like myself are helping the industry by constantly job hopping to max TC.


The question to ask is: has software gotten better? I say yes; cloud, mobile and web have exploded and are of much higher quality than they were in the past.

So I wouldn't say we've screwed ourselves.

Would we be better off if we had taken a different path? No one can or will ever know.


Most blithely, no, software "engineers" are making out like bandits.

Recently, one of Alan Kay's talking points has been that "software engineering is an oxymoron", and I couldn't agree more. What he means by this is that, instead of the principled approach to design and development characteristic of other engineering disciplines, software people do little more than what amounts to tinkering. Part of the blame lies in the shift to agile methodologies, adopted wholeheartedly with little understanding of what the old-style process was doing. Projects, moving incrementally, get stuck in local maxima in the name of "product-market fit".

That's the demand side of things; you've described the supply side pretty well. Developers like dealing with problems, so they naturally and unconsciously seek out more complexity. If you look at how even mediocre developers can make >200K easily now, it's not hard to see how that's a massive problem for everyone. All this complexity, especially from getting the various separately developed components to work together, gatekeeps the profession and business of making software. I'm at one of the companies that doesn't spend the most to hire, or have the shiniest perks, and let me tell you, we're desperate to get anyone we can get. This is unsustainable, and I worry we need to solve it before AI takes the means of programming out of our hands.

So, what is to be done? There are plenty of examples of software that gave the non-programming masses a means to build. Spreadsheets like Excel are by far the most popular, and have driven corporate computer adoption since VisiCalc came out in 1979. When they were simple, scripting languages like PHP and Perl could be handled by a non-engineer, as long as the admin side was taken care of. But I think the most interesting cases are those of full, contained programming and authoring environments, like Smalltalk and HyperCard. By being the entire system, they could cut out all the accidental complexity brought on by these interfacing components, and instead let users focus on building software. Importantly, they don't deal with the concept of files for code - instead it lives alongside whatever object it's relevant to. For better or for worse, object-oriented code is easier to reason about and empathize with. The more imperative code gets, the more the programmer is forced to play computer, which I think is the determining factor in gatekeeping programming today. The way forward is having the computer explain itself, be visible, and be unsurprising, which modern stacks seem to be moving away from.


YES. The software industry produces enough unnecessary complexity to keep everybody busy.

Furthermore, it's becoming more and more hype/marketing driven.

Solutions are adopted because they are popular or "cool". CV-driven development is becoming the norm.


I think you're overreacting (and I think the comments here are overly negative).

Web tooling is better than ever. I can very quickly spin up a full-fledged production grade app with very little investment. I don't worry about blockchain or NoSQL or any of that. I just use tools that make me a productive engineer and that's ultimately what companies are interested in. If you're worried about recruiters asking you if you've looked at modern languages, then you've got some bigger fish to fry. If you don't know the language, the answer you should feel like giving is "I can learn anything, and I'd be happy to prep for the job."

I'm currently working on a statistics website for a game named Smite. The ingestion engine is powered by Go/Redis/PSQL/Docker, and the frontend is Next.js deployed on Render.

This is hardly complex. The Go binary reaches out to the Hirez API service, requests some data, caches it on Redis (in case we need to run the ingestion multiple times during development and to avoid service quotas), and then stores the data in a normalized data structure in Postgres. With Postgres I can now run SQL queries on top to gather stats about the playerbase and the games. All of this is done on my local machine. My MacBook has about 1 TB of hard disk space, which used to be unheard of a couple of years ago, so I have no worries about my database growing to a size I can't manage (old matches are also pruned and removed).

The next part is the frontend part, which is what I'm working on now. But this is also super simple. I'm using Next.js to statically render a website using SSG. I basically reach out to the Postgres database locally, grab the data points I need, render the UI into static HTML files, and then I just take that build, push it to Git and it triggers a job to deploy it on Render. All of this tooling is ridiculously and refreshingly simple.

I think you're really overthinking it.


Even as a British person, I am not sure if this is sarcasm and a wonderful example of a satirical take or not.


If you want it to be satire, sure :D


I think you are mostly wrong about this.

Newer programming languages (the ones from 10 years ago like Go and Rust) are much better than the ones from 30 years ago like Java and Ruby. This doesn’t mean that they should be used for everything but especially the simplicity of Go is always putting a smile on my face whenever I can use it. Compare that to Gradle Maven Spring Boot whatever Java stack - there goes your unnecessary complexity.

What you also have to understand is that many of the things you complain about are solving non-technical problems. Monorepos are great at breaking up silos between teams and enabling vertical development of features across the stack in an organization. They come with added complexity in terms of tooling and automation needed. It's a trade-off that might look bad if you only take the tech aspects of it into account.

Kubernetes in the cloud and its sister systems may look more complicated to you as a developer, but if you compare it to managing a physical data center including all the staff needed to operate and maintain it, it’s really much more simple especially when dealing with hardware failures, dynamic scaling etc.


Also, folks on the business side feel like they're missing out on something if your solution is simple. When they meet their compatriots for dinner, they come back and ask me, "We don't use AWS?". They don't care that the bill is 1/10th and probably think you're inept for proposing a simple RDBMS-based solution instead of some monstrosity using NoSQL and micro-services :-)


Cloud pay-as-you-go may help balance this out. With an in-house cluster, the capital cost is paid, sunk. With a cloud deployment, there is a tangible impact of wasteful code right there on the balance sheet every month.


A few months ago I found out that front-ends in separate git repos aren't cool anymore - it's back to monorepos now: https://rushjs.io/


I think there's a huge amount of 'boring' software development going on that doesn't touch any of this kind of stuff. Java and C++ still run a lot of things.

I'm still suspicious of Guava, let alone Rust.


A question for the people here who have a favorable view of kubernetes: what is your level of experience with it, and what problems does it solve that aren't already solved by cloud managed services?


I've been using DigitalOcean's managed k8s to host quite a few services over the last few years.

There's definitely some complexity and learning curve involved, but it comes with some nice advantages for my particular use case:

- Low vendor lock-in: I actually migrated to DO from a different hosted k8s, and I was able to reuse the majority of my configuration.

- Reproducibility: your projects are derived from the resources you upload, so it's hard to end up with a setup that you can't easily reproduce (or migrate!) elsewhere.

- High flexibility: I can do some relatively strange things with e.g. routing without k8s batting an eye. There isn't really ever a point of "oh no, I've found something I can't really do".

- I've found it to be cheaper than cloud managed services once you're hosting as much as I am.
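On the reproducibility point: because everything is declarative, a service's entire deployment can live in a manifest like this minimal Deployment (name and image are purely illustrative), which applies the same way on any conformant cluster:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api          # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
        - name: example-api
          image: registry.example.com/example-api:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
```

Keep a handful of these in git and a migration between providers is mostly a `kubectl apply` against the new cluster.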

More generally, I like the...standardized(?) style. It feels like a sort of "build your own cloud", but the blocks you use look like everyone else's, despite the total product looking a bit different for everyone. I can use k8s managed, in a business setting, or fully self-hosted, and the essentials still work the same way.

A lot of the bad experiences wrt complexity I hear about come from running the cluster yourself, but nowadays distributions like k3s make this, dare I say, pretty easy. If you don't want to manage a VPS yourself, DO managed k8s is very nice.


I think things will settle down, but not for a few decades more. It's still early days for software, so there's lots of churn and reinventing the wheel, and new possibilities keep surfacing.


If the solutions are indeed overly complicated, then eventually natural selection will weed them out, assuming that eventually the supply of money funneled into these things becomes limited. There's a precedent for this in the way that mid-2000s startups eschewed the heavyweight J2EE and/or "object request broker" architectures for simpler HTTP calls.

But whether or not they're overly complicated, I think the reason why these things are grating is because they're less fun than coding up solutions to problems 15-20 years ago. Configuring containers is a pure exercise in versioning hell, and with the emergence of devops, it's impossible for developers to avoid.


If we didn't artificially complicate everything, people would realize that most programming isn't fundamentally very complicated and isn't worth a six-figure salary.


you are right, but the problem is that most of us engineers are too caught up in the chasing of the next hot tool or language or busy grinding leetcode and complying with whatever the next ridiculous demand is by hiring companies and their recruiters.

nobody is focusing on the key issues of our industry and what is important and what needs to be changed, to have healthy, productive, respected and comfortable professional careers. to make this happen a shift in focus needs to come about.


Making technology decisions based on what others are doing is not engineering, it's lazy. Solutions should be analyzed and determined based on the problem not out of FOMO.


https://mcfunley.com/choose-boring-technology

This is an important piece on the topic.


It's true, but I'd be careful what you wish for. When the industry finally does coalesce around sets of simple, solid, dependable tools, compensation takes a dive.



I agree with the general sentiment, but I would add that Go is a very simple language. It's probably the simplest language I have ever used (besides C).


A very simple language with low expressiveness leads to very big, bloated codebases.

k8s is an excellent example.


All this thrashing & change is normal. There are many reasons, but here are a few:

- Trend-following (Mgmt. FOMO) is real

- Resume-driven development is real

- Sometimes the 'new' stuff is better


Just let them create these complex monstrosities, eventually it will open up opportunity for simpler tools and systems that will eat them for breakfast.


Something I think the industry and everyone in it should put into practice more: Simplicity maximises the amount of work not done.


The problem comes from this apparent ease of communication. I don't think things will get much better till people get used to it.


Given that we profit from greater need to write code, maybe this is annoying, but otherwise keeps us in demand.


agree, some folks just redo things because "I can do it a different way". that might be over-simplifying the situation, but essentially it is for these folks. the reasons could be different: resume, personal preferences/skills, it could even be work politics ...


100% agree with all your points.


Wait until it breaks one day and no one can understand it enough to fix it !!!


We are already there.


Will industry move towards simple solutions after experiencing this churn down the line or are we doomed forever?

IIRC I found this site because of this essay by Paul Graham:

http://www.paulgraham.com/icad.html

TL;DR: don't worry about industry, what's the most efficient way of doing things? Do that.


Just spin up a Rails app, pop on a Postgres DB, tailwind CSS, job done.

(Only half joking.)


It has always been this way. You are getting old and it will get worse.


We went from soap to nswag generated openapi clients. Full circle.


This is just the Blub Paradox in a different form.


> where is our software industry heading?

It was never our industry. There was a brief window ~2008-2010 where software engineers had a lot of power within their orgs, but at the end of the day we were always the laborers, never the owners of this industry.

Capitalists loathe a monopoly on skill; it gives labor a dangerous amount of leverage.

The people who own our industry, mostly venture capitalists and other investors, are interested in capturing value at all costs and in limiting the power that engineers had.

This was the drive behind the "MVP" and "ship it!" cultures that are partially responsible for this mess. But complexity is also valued by management because it reduces the ability of an individual engineer to have an impact, thereby reducing their monopoly on skill. In addition, we've seen an industry pop up which focuses exclusively on rushing in new, minimally skilled devs looking to make a quick buck. These are people who only know how to fiddle knobs in a big complex machine.

This is also why the hiring process is even more awful today than it was a decade ago. A decade ago, anyone who was passionate about programming and had a GitHub repo filled with cool projects could get hired. This has been transformed into a machine that seeks to make sure every engineer is the same, trained only to pass a series of algorithmic puzzles from LeetCode and HackerRank. These are even different from what they emulate: the old Google challenges were hard, but given by devs who knew what they were doing. Half of the algorithm puzzles I've been given in recent years are clearly by devs who only understand what the answer is, but don't really have any deeper insight into the problem.

> are we doomed forever?

Only until this latest wave of tech (it's not really a bubble) crashes. Once demand for software skill plummets, it will likely be like the dot-com bust valley of 2004-2010. The only people doing software then were people who cared about it, and because salaries crashed many good engineers found other niches they could apply their skills in. That's when you saw some really interesting problem solving going on in the field.


Why would there be no escape from it? Just do something else if you don't like the current state of affairs. No need to get all pearl-clutchy about it.


Yes. This sh*t is unmanageable.


Nim is easy to use but performant.

I'm releasing a (web) development platform for it this month. Just getting the code ready.


I don't believe the issue being addressed is a lack of web development platforms here.


The author mentioned CRUD with Rust, which sounds like they think Rust is overkill for CRUD.


There is a pattern that has been bothering me that I think feeds in to this but I haven't had the time to fully flesh out an essay on it yet.

We as a profession spend a lot of time _solving the same problems_. This isn't necessarily a bad thing, different implementations allow for specialization in unique but important ways. Where I think we've gone wrong is that we can no longer generically re-use a lot of code between code bases because a lot of those libraries are written in dead-end languages.

What I'm referring to as dead-end languages are any programming language where you can't use library code independently outside of its ecosystem. Golang, Erlang, Javascript, Python, Ruby, the entire Java land of languages are all one big ball of intertwined dead end ecosystems, even Rust to a lesser extent. Any library written in one of those languages is locked in to that ecosystem and will never have a chance at becoming a generic foundational building block for systems outside their ecosystem.

One of the reasons we're even able to rapidly build so many complex systems is the foundational libraries like libcurl that have "solved" a problem well enough and is reliable enough that it is effectively an easy default decision to use them. These are libraries that have more or less solved some hard problems sufficiently that other engineers can mental model them away without knowing the implementation or protocol details.

I've seen others compare these modern methods and tools to old-school in-house one-off development and how difficult that made things. This is the same effect, but rather than lock-in at a company level, it's lock-in at a language or library level (don't get me wrong, this is generally better than random in-house one-offs). If you're familiar with the Golang net/http package, that mental model can't be transferred to another language, and there is no way to expose that functionality to a language other than Golang due to how the language itself is designed.

As frustrating, old, decrepit, and unsuitable for a lot of things as the C ABI is, any language that can produce a library exposing its functionality through the C ABI is table stakes right now for avoiding the sprawling landscape of language lock-in. Even in languages that support exporting libraries using the C ABI, there is always that concept of 'other' that seems so problematic to me. It's not _bad_ writing a library in Rust, but that boundary between Rust-land and the other is uniquely un-interoperable or overly repetitive in its own ways, requiring layers of abstraction and special behavior to work.

For example, if you have two separate system libraries written in Rust that do all their work behind the scenes using tokio, are they actually going to be sharing that runtime? No. There is no common libtokio.so file on the system, no cooperation or resource management between the two, and no common library to update if a security vulnerability gets detected (for the pedantic, I'm referring to pre-compiled distributed libraries as a system building block, not the common source you can compile on your own). This specific problem of bundling specific versions into the compiled artifacts makes inconsistencies in the systems running the code have a lesser effect, but you end up having to deal with log4j-like situations where you're entirely dependent on the packager, maintainer, or vendor to handle your security updates, and trust that they got it right.

I think one of the big reasons we're experiencing this spiral of complexification is that we're not generating those foundational building blocks any more. There is no refinement tuning out the complexity of the system and distilling best practices into library defaults. There are no common underpinnings being generated that can be maintained, understood, and diagnosed system-wide. We can't reason about this utterly shattered set of walled ecosystems.


I don't think we're screwed.

I think there's a lack of theory for software complexity. Complexity is a loaded word with several definitions, so when I use it here I mean software complexity in the sense that we don't have a theory to explain how things should be modularized and grouped. Beyond that, how you group and modularize protects your code from future technical debt, but each modularization may also come with an associated performance cost. There just isn't a theory that unifies all of these things together. There isn't even a theory that explains just the modularization part without the performance cost.

When we don't have a theory for something, we have a word for how we operate in that realm: "design." This sort of stuff still exists in the realm of design. Anything that lives in the realm of design is very hard to fully optimize. Industries that incorporate the word "design" tend to move in trends, which can ultimately be repeating circles, because each change or paradigm shift is a huge unknown. Was this "design" more optimal than the last "design"? Is modern art better than classic art? Who knows? In fact the needle can often move backwards. The actual answer may be "yes", the current design is worse than the last design, but without a quantitative theory giving us a definitive answer we don't fully know, and people argue about it and disagree all the time. There are art critics but there are no math critics.

Take for example the shortest distance between two points. This is a well defined problem and mathematically it's just a line. You don't "design" the shortest distance between two points. You calculate it. This is what's missing from software. Architecture and program organization needs to be calculated not designed. Once we achieve this, the terms "over engineering" and "design" will no longer be part of the field.

If you squint you can sort of see a theory behind software architecture in functional programming. It's sort of there, but FP doesn't incorporate performance costs into the equation. Even without the performance metric, it's very incomplete, there is still no way to say one software architecture is definitively better than another. There may never be a way. Software may be doomed for a sort of genetic drift where it just constantly changes with no point.

The complexity of software will, however be always bounded by natural selection. If it becomes too complex such that it's unmaintainable, people will abandon the technology and it will be culled from the herd of other technologies. So in terms of being "screwed" I think it's fine. But within the bounds of natural selection, there will always be genetic drift where we endlessly change between technologies and popular design paradigms.

https://www4.di.uminho.pt/~jno/ps/pdbc.pdf


Trends, trends everywhere!


Lordy, yes. It's human nature:

https://xkcd.com/2347/

You are awesome and cool and raking in the kudos and bucks if you are piling yet more stuff (especially big, complex, and unstable stuff) on top.

You are a stupid nobody loser if you are the dutiful maintainer in Nebraska.


My cheeky answer:

> Any headline that ends in a question mark can be answered by the word no

https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...


Welcome to senior.


> Rust for CRUD apps?

Are all CRUD APIs insensitive to performance? Correctness?


No, but Rust is a low-productivity, high-formality language; one needs to weigh whether that formality is really that important, given that most CRUD services are just a layer on top of a database.

This is the reality of software development: you have a budget and you have a goal. If all your budget is used up on making it correct without finishing it, you have a problem.

Anyway, it's just a CRUD app, it doesn't have to be as formal.


>No, but Rust is a low productivity, high formality language;

Setting aside performance (Python is slow as molasses) that still requires some qualification because there is the 'maintenance' dimension. I would take Rust over Python for CRUD any day, because the 'formality' is a feature not a bug for maintenance. I would take Java/C# over Rust because the former balance performance and formality very well. In fact, I wouldn't use a dynamically typed language like Python or Ruby for backend infrastructure code if there were any performance or long-term maintenance requirements.


I'm far from being proficient in rust but my productivity writing crud apps in it is pretty good (just throw some Rocket + SQLX and you're flying).

Rust is a very high level language, it's not like writing assembly. You rarely run into complicated situations with the borrow checker when writing trivial crud apps.

The ecosystem is not as mature for web apps compared to other languages, so it may be more efficient to pick something else, but ultimately the most important factor (when building trivial stuff) is picking a language you and your team are comfortable with. Whether that's Rust or node.js, it won't matter much.

If I were to make something else which is better supported in Rust, eg. a videogame, using bevy + ecs would run circles around a bunch of other languages and their game frameworks.


Claiming that Rust is a low productivity language, generally, is too broad of a claim to even be plausible.

The parent commenter may have some narrow or situational definition of productivity.


I asked the question rhetorically. The answer is "no". Characterizing an application as CRUD does not exempt it from some mix of performance goals.


I believe simpler solutions will occur through the ballyhooed "decentralization". What actually decentralizing tech entails is a rethink of the types of business that are in play and therefore the technologies needed for them.

Right now the assumptions are heavily entangled with the platforms: we know that our audiences are on "X". Therefore "X" becomes part of our strategy, and our end goal (from an evolutionary success standpoint) is to make them standardize on our "Y". This happens from all parties: devs, consumers, firms, governments.

Thus when I boot up Windows I'm confronted with a cacophony of different updaters, notifications, etc. All of them trying to exploit the platform I'm currently using to pull me deeper into some other platform.

If we fast forward this process, we can see that the nature of computing ecosystems is to be a jungle that exceeds understanding. And in jungles, there are a multitude of niches. Compatibility is situational. Although one could point to apex predators, they don't exactly "rule" the jungle.

Which means that the appropriate goal to achieve in a software's lifecycle is most likely a sustainable niche that only needs to know about a few things. But our industry is not doing this yet. Why? Because software has not eaten the software industry yet.

That is the culmination of all these decades of churning on code: eventually we end up with software that is better at coordinating information and activities for society than any human-mediated organization could be. And you look at the technologies we have, and assume there's a logistical function to them, and it's like: OK, maybe AI can do that. Maybe blockchain can do that. Maybe cloud and no-code frameworks can do that. Maybe if you bodge those things together, you end up in a place where the professional developer isn't dealing with as many details, like photography vs painting. And if that's really the case then you don't have to write nearly as much of an app: it will start hooking into the ecosystem readily, instantly presenting the views on information that you need and filtering noise for you.

We haven't had a really fundamental realignment of the economy since the end of World War II. And if you look at movies from then, the economy that emerges is sensible within its concepts of how economies should move forward: information was still expensive and while many novel things could be mass produced, you needed firm structures to coordinate them(Newspapers multiple times a day! Icebox and milk deliveries! Mail-order houses!), and you needed a new set of infrastructure to animate this action. Highways, supermarkets, shipping containers, and TV were all representative of where the world was going: an ecosystem of "products and services". And you could learn what products and services a city had by walking through the phone book and making calls.

But over the past few decades, it's saturated into an "attention economy". There are so many goods available that you'll never know about all of them, so the information systems have to take up the task of digesting it and leading us towards our best lives. So the task of making software simpler is also a task of making economic coordination simpler. And we are still going to be using the products and services framing for some time, but it's likely to get weird.


Does software evolve? Do we evolve it?

1. Legacy codebases accumulate not only a ton of "normal" tech debt but larger codebases within larger orgs have the battle scars of a sort of "forced evolution". Bandaids on bandaids, pulling in new frameworks and patterns all requiring often heavy transplants. Like a future archaeologist finding artifacts of the steam engine, combustion engine, nuclear reactors and lithium batteries you can infer the good intentions but unlike the original implementors you can clearly see, with hindsight's 20/20, the "unforeseen" side effects they couldn't (or were just incentivized not to). Unlike those societal-scale energy innovations the microcosm of human engineering optimism and naïveté that is your organization's legacy codebase had a more rushed "artificial" evolution that was likely a casualty of the kind of short-sighted, rushed product-roadmap dynamics we're all too familiar with. Less a million-year evolved shark and more a "cute" purebred pug. Can we go right to the shark? Probably not. We'll make hundreds of thousands of pugs before we ever get to the shark. In the meantime the optimist will see these pugs as forcing-functions that help evolve the encompassing ecosystem at-large to be more conducive for the entrant of a "shark".

2. You ask "have we screwed ourselves?" and I think this might imply too much confidence in the perception of how "separate" we are from these iterative codebases. To go back to the evolution metaphors, we're just introducing mutations under real-world conditions. Each product of that evolution is a reflection of those conditions more so than the potentially great ideals any single stimuli in the petri dish possessed at the time she took part in the engineering. By the very nature of large-scale, cooperative engineering we bake-in our collective foibles and the engineering disciplines we hold at any given time are just one, usually smaller, dynamic at play.

3. I may sound pessimistic, over-deterministic, or like I think our efforts are futile in light of larger dynamics, but I'm not; I'm sipping coffee right now, I'm good. Acknowledging that we have an outsized perception of our effect in these situations may help free you from the oftentimes infuriating pain that accompanies the mundane banalities of daily software engineering. Champion the better ideas, sure, but maybe with a good-humored flexibility that comes from knowing today's great ideas are just memes bootstrapping the evolution of tomorrow's much greater ideas, and in this chaotic soup there's some beauty.

Trust me, this is not the Zen outlook I'm able to stay within at all times (or even most of the time) but I want to try to and also remind myself why I thought all this software stuff was cool in the first place. I'll end with a quote I'm reminded of.

“For the simplicity on this side of complexity, I wouldn't give you a fig. But for the simplicity on the other side of complexity, for that I would give you anything I have.” ― Oliver Wendell Holmes


In an inflationary environment, this is fine. Just wait when we have interest rates at 12%.


This comment is irrelevant to the point of being a non-sequitur



