Hacker News

Differentiation from Linux is a hot topic whenever FreeBSD is discussed. I could get behind “converge to a fixed scope, and polish relentlessly” as the primary ethos.

Wouldn’t it be great for FreeBSD to be the first OS to have a release that was actually, truly finished? At the moment, all OS releases are just waymarkers on an endless, goalless roadmap of perpetual development.



>Wouldn’t it be great for FreeBSD to be the first OS to have a release that was actually, truly finished?

I don't mean this in a rude or dismissive way but these comments make me want to pull my hair out. Deciding your software has a fixed scope and is finished doesn't stop the wheels of progress from turning. Worst case, the rest of the world will decide the scope you chose was crap, for reasons beyond your control, and then your project is dead. I get that it is an engineer's fantasy to build a theoretically "perfect" system but such a thing can't exist in practice. I think you'd have a better chance of proving P=NP or something.


I disagree.

There is no good reason why one should strive for a piece of software to be eternal.

It would be much better to say, well this is what we wanted to accomplish. The program does a good job now.

Later it might be time to build something else.

You build a nice warm and cozy cabin in the woods, and you like it. Keeps you warm and dry. Success.

Now 15 years later, the area is no longer in the woods, rather it is now zoned within a city to have duplexes.

You can then think that you built such a great building that now you need to remodel it into a duplex, and then a quadplex and then a skyscraper.

I think saying "my cabin was very good, now I have to let go of it and build something else" is a much better approach.

It is nearly delusional to think that you will write a program that will be good enough to be prepared for what the future brings.

We need to let things die and build again.


I don’t mean finish FreeBSD forever and go home - I mean finish a release.

Set some goals, release software that achieves those goals, and keep fixing bugs that are found.

If you have some new goals (support new hardware, add new feature), work on those in the next release. But don’t abandon the old release, because it still works to achieve the original goals.

What I’m suggesting to avoid is the situation where a bug is found and the user is told “it’s fixed in the next release”. If the bug compromises the goals of a release, it should be fixed in that release!

Sure you’d have to support old releases for much longer, but that’s not the hard part.

The hard part is deciding what the goals are in a precise and coherent way (for this purpose, goals should be the set of promises made to the user, which when broken give rise to a bug). I don’t think any large software project has good discipline here. I don’t think it’s been done before for an OS. But I’m talking about differentiation here.


Thanks for the clarification. In practice what that means to me is you need more people backporting fixes to the old branch. My experience: it's boring, thankless work and it doesn't pay unless there is some serious investment behind it, think heavily-audited systems deployed in government offices that have to stay the same for decades.


Off the top of your head, what is there in Office 2021 that you genuinely want to use and isn't in Office 2019?


Python scripting support?

There is a good reason for new features. But it might be nice if those features came as part of an official set of extensions around a stable core, to get the best of both worlds.


If they're just extensions, they increase the support matrix greatly. Trade-offs.


Not necessarily. They're all features that currently exist in the core product, making that product larger and more error prone. Thus, the support story is either the same, or smaller if one of those features isn't activated.

Having them run as extensions doesn't mean that there's a public extension API, you can keep that interface all to yourself. It means that from the developer's perspective, there's a core offering that the most senior members can handle, but the smaller features are both bundled, and can be more easily passed to other teams.

It's the same idea behind using libraries, instead of shoving all the code into one place. But it forces you to have a single structured interface to extend the product, instead of touching every other file when you want to add something.


Spend any amount of time in the FreeBSD kernel, and you'll figure out that the "polish" aspect is just a myth. The Linux kernel is MUCH cleaner.

Kernel subsystems are written by a couple of developers at most, in private, and submitted as a +20k/-5k single-file diff on a public ML for "discussion".

Some developers I spoke with privately were able to make things move a bit by getting their commit bit and making aggressive changes without discussion or consensus. That wasn't my style.


> Wouldn’t it be great for FreeBSD to be the first OS to have a release that was actually, truly finished?

Hardware eventually fails. Replacement hardware is eventually incompatible with previous hardware. "Finished" would in this case be a synonym for "dead".


Freeze a driver ABI, release drivers separately; ta-da, your OS is still finished within its scope, and new hardware works.


> I could get behind “converge to a fixed scope, and polish relentlessly” as the primary ethos.

> Wouldn’t it be great for FreeBSD to be the first OS to have a release that was actually, truly finished?

By OS do you mean just the kernel or also the userspace?

Then I think the challenge is to determine what gets included under the purview of relentless polishing.

Assuming you mean just the kernel, then probably bug fixes are included. But what about drivers for new hardware? And when researchers at Stanford develop a new file system algorithm with foo bar baz properties? Or a new scheme for sandboxing user programs?


Maybe all new features go in a new major version? New FS = next release. But drivers should be buildable by the HW vendor, right? And porting a driver to the next kernel version shouldn't be crazy-hard.


> Maybe all new features in new Major version?

I don’t think I understand the (many) purpose(s) of OS versioning. Vendors use them to make specific maintenance commitments? “We will continue to make bug fixes for version XYZ until 2025.”

And if end users care about a particular program, "Okay, version XYZ of my favorite OS supports programs A and B."?

> But drivers should be buildable by the HW vendor right?

The vendor may not create a driver for this OS. “Only 10k people use this OS, therefore it’s likely not profitable for us to make a driver for it.”


All the companies making tivoized devices with walled gardens on a BSD/MIT basis would absolutely love it if the worldwide group of enthusiasts finally came together to eliminate most of their in-house software development costs by delivering the perfect free solution, which only needs some branding, marketing, and lock-in applied.

Sarcasm aside, I agree that lots of software could feel more "finished"/refined. I'm trying to think which of the xkcd comics fits best here...


MS-DOS was pretty much finished there at the end. People still run the last version.


The non-portable part of MS-DOS is essentially implemented in the BIOS. When PCs no longer support BIOS booting, MS-DOS will stop working, but until then it is sustained by a stable interface.


Novell Netware let us load MS DOS over the network and boot locally.

Did you mean that MS-DOS relies on BIOS function calls to work?


Doesn't it just use interrupts, directly?


Yep



