Hacker News | da_chicken's comments

Sure, but not needing a failure to cascade to yet another failsafe is still a good idea. After all, all software has bugs, and all networks have configuration errors.

Yes, exactly this. Judging a document font based on how well it functions as a programming font is weird.


What about the launch rail on an aircraft carrier?


Sweden's population is only around 11 million people, and you're geographically concentrated in the southern mainland provinces or near Stockholm. Both of those make things a lot more practical to manage and a lot harder to abuse, because you don't have the scale to make profit as attractive or the distance to make oversight more difficult. You're also relatively culturally similar.

It doesn't seem like those things should matter so much, but they really do make everything about democracy easier.

Things get much weirder when the population isn't so low or isn't relatively concentrated.


I mean, I can do all my voting, tax filings, etc., all the way from Mexico, with no issues. You're right that most of the Swedish population resides in the south, but, as someone who grew up in Northern Sweden, it's not like we're marginalised or anything, not really.


So, the line BE is just the line CB extended. It's the same line. And we know that the angles of a triangle add up to 180. And we know that the line BD is defined as perpendicular to AB.

That means the angle ABC and angle DBE must add up to 90. But that's also true of the angles ABC and CAB. That means that angle DBE and angle CAB must be the same. Triangles ABC and BDE are both right triangles, so that means angles ABC and BDE are the same. So they're similar triangles: they have all the same angles.

Additionally, the point D is placed so that the lengths of segments BD and AB are the same: c. Since we know that the hypotenuse of triangle ABC is c, and the hypotenuse of triangle BDE is also c, and we know they're similar triangles, these triangles must be congruent as well.
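The angle chase above can be sketched in symbols (assuming, as the construction implies, right angles at C in triangle ABC and at E in triangle BDE):

```latex
% E lies on line CB extended and BD \perp AB, so the angles at B satisfy
\angle ABC + 90^\circ + \angle DBE = 180^\circ
  \;\Longrightarrow\; \angle ABC + \angle DBE = 90^\circ.
% In the right triangle ABC the acute angles are complementary:
\angle ABC + \angle CAB = 90^\circ
  \;\Longrightarrow\; \angle DBE = \angle CAB.
% Equal angles plus equal hypotenuses (BD = AB = c) force congruence.
```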


Thank you! Rephrasing for my own understanding: The point of attack that I was missing was that angles BDE and ABC are equal, and now we have two equal angles (which immediately gives us the third) and one equal side, so we're good to go.


I can't follow your reasoning at all. "Drop the height h" is completely ambiguous.

And the nice thing about Garfield's proof is that all it requires is that you know the area of a right triangle and the basic Euclidean premises. You can easily get the area of a trapezoid from that.
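For completeness, Garfield's computation written out: two copies of the right triangle (legs $a$, $b$, hypotenuse $c$) plus the isosceles right triangle between them tile a trapezoid with parallel sides $a$ and $b$ and height $a+b$, so equating the two ways of computing the area gives the theorem:

```latex
\tfrac{1}{2}(a+b)(a+b) \;=\; 2\cdot\tfrac{1}{2}ab \;+\; \tfrac{1}{2}c^2
\;\Longrightarrow\; a^2 + 2ab + b^2 \;=\; 2ab + c^2
\;\Longrightarrow\; a^2 + b^2 \;=\; c^2.
```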


> I can't follow your reasoning at all. "Drop the height h" is completely ambiguous.

I'm referring to the classic proof where you drop the height perpendicularly to the hypotenuse from the opposite corner.
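Spelled out, with the usual labels (legs $a$, $b$, hypotenuse $c$, and the altitude splitting the hypotenuse into $p$ under $a$ and $q$ under $b$): the altitude creates two smaller triangles, each similar to the original, so

```latex
\frac{a}{c} = \frac{p}{a} \;\Rightarrow\; a^2 = cp,
\qquad
\frac{b}{c} = \frac{q}{b} \;\Rightarrow\; b^2 = cq,
\qquad
a^2 + b^2 = c(p + q) = c^2.
```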


It's easy to know what day of the year it is because leap days are at the end.


What do you mean? The .Net ecosystem has been generalized chaos for the past 10 years.

A few years ago even most people actively working in .Net development couldn't tell what the hell was going on. It's better now. I distinctly recall when .Net Framework 4.8 had been released, and a few months later .Net Core 3.0 came out and they announced that .Net Standard 2.1 was going to be the last version of that. Nobody had any idea what anything was.

.Net 5 helped a lot. Even then, MS has been releasing new versions of .Net at a breakneck pace. We're on .Net 10, and .Net Core 1.0 was 9 years ago. There's literally been a major version release every year for almost a decade. This is for a standard software framework! v10 is an LTS version of a software framework with all of 3 years of support. Yeah, it's only supported until 2028, and that's the LTS version.


The only chaos occurred in the transition from .NET Framework to .NET (Core). Upgrading .NET versions is mostly painless now because the breaking changes tend to only affect very specific cases. Should take a few minutes to upgrade for most people.


Except it is a bummer when one happens to have such specific cases.

It never takes a few minutes in big corp, everything has to be validated, the CI/CD pipelines updated, and now with .NET 10, IT has to clear permission to install VS 2026.


If you can't get permission to update or change your IDE, the company processes aren't working at all, tbh. Same if CI/CD is in another department that doesn't give a shit.


That is pretty standard in most Fortune 500 companies whose main business is not selling software, where most development is done via consulting agencies.

In many cases you get assigned virtual computers via Citrix/RDP/VNC, and there is a whole infra team responsible for handling tickets of the various contractors.


Similar story at my prior job. Heck, we still had one package that was only built using 32-bit .Net Framework 1.1. We were only just starting to see out-of-memory errors due to exhausting the 2 GB address space in ~2018.

I love the new features of .Net, but in my experience a lot of software written in .Net has very large code bases with a lot of customer specific modifications that must be supported. Those companies explicitly do not want their software framework moving major supported versions as quickly as .Net does right now, because they can't just say "oh, the new version should work just fine." They'd have to double or triple the team size just to handle all the re-validation.

Once again, I feel like I am begging HN to recognize not everyone is at a 25 person microservice startup.


I might be missing something, but the combination of "we mustn't break anything" and "we can't test it without 2-3× the team size" sounds like release deadlock until you can test it.

The migrations at the places I've worked have always been a normal ticket/epic. You plan it in the release, do the migration, do the other features planned, run the system tests, fix everything broken, retest, fix, repeat until OK, release.

Otherwise you're hoping you know exactly how things interact and what can possibly have broken, and I doubt anyone knows that. Everyone has, at some point, broken things that at first sight seemed completely unrelated to their changes. Especially in large systems it happens constantly. Probably over 1% of our merges break the nightly in unexpected places, since no one has the entire system in their head.

Or you're keeping a dead product just barely alive via surgical precision and a lot of prayers that the surgeon remains faultless prior to every release.


On the migrations... read the comments throughout this thread. There are many, and none have mentioned any significant pain points at all, just hypothetical ones from people like you who aren't actually actively using it.

As to the CI/CD pipelines... I just edited my .github/workflows/* to bump the target version, and off to the races... though if you're deploying to bare metal as opposed to containers, it does take a couple extra steps.
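A minimal sketch of what that bump can look like, assuming an SDK-style project and the `actions/setup-dotnet` action (file name and version numbers here are illustrative):

```yaml
# .github/workflows/build.yml -- pin the SDK to the new major version
steps:
  - uses: actions/checkout@v4
  - uses: actions/setup-dotnet@v4
    with:
      dotnet-version: '10.0.x'   # was '9.0.x'
  - run: dotnet build -c Release
```

The matching change in each project file is the one-line target framework bump, e.g. `<TargetFramework>net9.0</TargetFramework>` → `<TargetFramework>net10.0</TargetFramework>`.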

As to the "permission to install..." that's what happens when companies don't trust the employees who already write the software that can make or break the company anyway... Developers should have local admin privs, or a jump box (RDP) that does... or at the very least a Linux environment you can remote-develop on that, again, has local admin privs.

I'm in a locked-down environment currently, for a govt agency, and it hasn't been an issue. Similar for past environments, which include major banking institutions.


Each one is their own anecdote.


You're describing a specific case of working in a big rigid enterprise. It doesn't have anything to do with .NET itself, does it?


Guess where most .NET developers' employers happen to be?


I have no idea about most .NET developers. At my current job (a public software company in US with thousands of employees) it's up to engineers to decide when to upgrade. We upgraded our main monolith app to .NET 10 in the first week.


For me, the customers' IT and their management decide.


I've been using .Net since late 2001 (ASP+), including in govt and banking, and have rarely had issues getting timely updates for my local development environment. In the past decade it's become more likely that the dev team controls the CI/CD environment and often the deployment server(s)... though I prefer containerized apps over bare-metal deployments.


Some devs get lucky.


You can't be serious. "The standard was adopted, therefore it must be able to be implemented in any or all systems?"

NIST can adopt and recommend whatever algorithms they might like using whatever criteria they decide they want to use. However, while the amount of expertise and experience on display by NIST in identifying algorithms that are secure or potentially useful is impressive, there is no amount of expertise or experience that guarantees any given implementation is always feasible.

Indeed, this is precisely why elliptic curve algorithms are often not available, in spite of a NIST standard being adopted like 8+ years ago!


I'm having trouble understanding your argument. Elliptic curve algorithms have been the mainstream standard for key establishment for something like 15 years now. The NIST standards for the P-curves are much, much older than 8 years.


> You can't be serious. "The standard was adopted, therefore it must be able to be implemented in any or all systems?"

If we did that we'd all be using Dual_EC...


I don't agree, and this feels like something written by someone who has never managed actual systems running actual business operations.

Operating systems in particular need to manage the hardware, manage memory, manage security, and otherwise absolutely need to shut up and stay out of the fucking way. Established software changes SLOWLY. It doesn't need to reinvent itself with a brand new paradigm every 3 years.

Nobody builds a server because they want to run the latest version of Python. They built it to run the software they bought 10 years ago for $5m and for which they're paying annual support contracts of $50k. They run what the support contracts require them to run, and they don't want to waste time with an OS upgrade because the cost of the downtime is too high and none of the software they use is going to utilize any of the newly available features. All it does is introduce a new way for the system to fail in ways you're not yet familiar with. It adds ZERO value because all we actually want and need is the same shit but with security patches.

Genuinely, I want HN to understand that not everyone is at a 25-person startup running a microservice they hope to scale to Twitter proportions. Very few people in IT are working in the tech industry. Most IT departments are understaffed and underfunded. If we can save three weeks of time over 10 years by not having to rebuild an entire system every 3 years, it's very much worth it.


Just for context: I'm employed by a multi-billion-dollar company (more than 100k people).

Here, I'm in charge of some low-level infrastructure components (the kind on which absolutely everything relies; 5 seconds of downtime = 5 seconds of everything being down).

In one part of my scope, I've inherited a 15-year-old junkyard.

The kind with a yearly support contract.

The kind that costs millions

The kind that is so complex, and has seen so few changes over the years, that nobody knows it anymore (even the people who were there 15 years ago).

The kind that slows everybody else down because it cannot meet other teams' needs.

Long story short, I've got a flamethrower and we are purging everything

Management is happy, customers are happy too, my mates also enjoy working with sane tech (and not braindamaged shit)


Yes, this is the key distinction: old software that works vs old software that sucks.

The one that sucks was a so-so compromise back in the day, and became a worse and worse compromise as better solutions became possible. It's holding the users back, and is a source of regular headaches. Users are happy to replace it, even at the cost of a disruption. Replacing it costs you but not replacing it also costs you.

The one that works just works now, but it used to, too. Its users are fine with it, feel no headache, and loathe the idea of replacing it. Replacing it is usually a costly mistake.


But that software was probably nice, back in the day

It slowly rots, like everything else.


Or it doesn't. Because "software as an organic thing", like all analogies, is an analogy, not truth. Systems can sit there and run happily for a decade, performing the needed function in exactly the way that is needed, with no "rot". And then maybe the environment changes and you decide to replace it with something new because you decide the time is right. Doesn't always happen. Maybe not even the majority of the time. But in my experience running high-uptime systems over multiple decades, it happens. Not having somebody outside forcing you to change because it suits their philosophy or profit strategy is preferable.


My guess is that most stuff is part of a bigger whole, and so it rots (unless it is adapted to that ever-changing whole)

Of course, you can have stuff running in a constrained environment.


Or more likely the "whole" accesses the stable bit through some interface. The stable bit can happily keep doing its job via the interface, and the whole can change however it likes, knowing that for that particular task (which hasn't changed) it can just call the interface.


Sounds like that is a different issue. I prefer to avoid spending a few weeks migrating software that I understand and support to a new OS when I don't have to. Some of it is 30 years old, but it has had all the bugs worked out.


You're talking about software. The other person is talking about OS. Big difference.


This is exactly the same thing: an OS is nothing but software. And in this specific case, we are talking about appliance-like stuff, where the OS and the actual workloads are bundled together and sold by a third party.


> I am employed by a multi-billion company (which has more than 100k people)

In my personal experience, this could mean that you're really good, or that you're completely incompetent and unaware that computers need to be plugged into a power outlet to function.


Having started my IT career in manufacturing, this 100%. We didn't always have a choice. Our support contracts would say Windows XP was the supported OS. We had lines that ran on DOS 5 because replacing them would have cost several million in hardware and software, not counting downtime of the line, and would the new stuff even be compatible with the PLCs and other equipment?


> .. they don't want to waste time with an OS upgrade because the cost of the downtime is too high and none of the software they use is going to utilize any of the newly available features

Oopsie, you got pwned and now your database or factory floor is down for weeks. Recovery is going to require specialists, and costs will be 10 times what an upgrade with controlled downtime would have cost.


Not at all, it depends on the level of public exposure of the service.

In a factory, access is the primary barrier.

It's like an onion: the outer surface has to be protected very well, but as you get deeper into zones where fewer and fewer services have access, the risk/urgency is usually lower.

Many large companies are consciously running with security issues (even Cloudflare, Meta, etc).

Yes, on paper it's better to upgrade; in the real world, it's always about assessing the risk/benefit balance.

Sometimes updates can bring new vulnerabilities (e.g. if you upgrade from Windows 2000 to the "better and safer" Windows 11).

In your example, you're guaranteed to take down the factory floor for an unknown amount of time (what if PostgreSQL doesn't restart as expected, or crashes at runtime in the updated version?).

This is essentially a (hopefully temporary) self-inflicted DoS.

Versus an almost non-existent risk if the machine is well isolated, or even better, air-gapped.


> Versus an almost non-existent risk if the machine is well isolated, or even better, air-gapped.

Anyone else remember Stuxnet?


I don't understand this comment. What exactly do you think LTS is?


Kernel live patching takes care of everything.

There's a difference between old software and old OS. Unless you've got new hardware, chances are you never really need a new OS.


I can't upvote this hard enough. It's nice to know there's at least one other person who feels this way out there.

Also, this is the most compelling reason I've seen so far to pay a subscription. For any business that merely relies upon software as an operations tool, it's far more valuable business-wise to have stuff that works adequately and is secure, than stuff that is new and fancy.

Getting security patches without having feature creep trojan-horsed into releases is exactly what I need!


I'm reminded of the services that will rebuild ancient electric motors to exact spec so they can go back on the production line like nothing happened. For big manufacturing operations, it's not even worth the risk of replacing with a new motor.

