Hacker News | koverstreet's comments

soon :)


"Experiment" is a misnomer. Rust has been around long enough and demonstrated more than enough real advantages in writing reliable code that we know it's what we want to be doing.


But WHERE is that code in the kernel? That, I think, is the OP's point. Where is that demonstration?


The main real-world example of Rust kernel code is the Asahi GPU driver, which has not merged upstream yet, but it does use the upstream interfaces you're seeing.


It's been moving slowly at first because you need a lot of bindings done before you can do interesting work, and bindings/FFI tend to be fiddly, error prone things that you want to take your time on and get right - that's where you deal with the impedance mismatch between C and Rust and have to get all the implicit rules expressed in the type system (if you can).

It'll go faster once all the bindings are in place and people have more experience with this stuff. I've been greatly looking forward to expanding bcachefs's use of rust, right now it's just in userspace but I've got some initial bindings for bcachefs's core btree API.

Real iterators, closures, better data types, all that stuff is going to be so nice when it can replace pages and pages of macro madness.
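As a sketch of what that trade looks like, here's the kind of filter/map pipeline that in kernel C would usually be a hand-rolled macro loop. The names (Extent, total_bytes_past) are invented for illustration, not actual bcachefs APIs:

```rust
// Hypothetical example: sum the sizes of all extents at or past a cutoff.
// In C this would typically be a list_for_each_entry-style macro loop with
// a manually maintained accumulator.
struct Extent {
    offset: u64,
    len: u64,
}

fn total_bytes_past(extents: &[Extent], cutoff: u64) -> u64 {
    extents
        .iter()
        .filter(|e| e.offset >= cutoff) // keep extents at or past the cutoff
        .map(|e| e.len)                 // project out the length
        .sum()                          // fold into a single u64
}

fn main() {
    let extents = [
        Extent { offset: 0, len: 4096 },
        Extent { offset: 4096, len: 8192 },
        Extent { offset: 16384, len: 4096 },
    ];
    assert_eq!(total_bytes_past(&extents, 4096), 12288);
}
```

The pipeline compiles down to the same loop the macro version would produce, but the compiler checks the types at every stage.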


See, for example, the binder driver merged for 6.18. It's out there, and will land when it's ready.


In discussions like this, I sometimes feel that the importance of related work like the increasing use of Rust in Android and MS land is under-appreciated. Those who think C is fine often (it seems to me) make arguments along the lines that C just needs to have a less UB-prone variant along the lines of John Regehr and colleagues' "Friendly C" proposal,[0] which unfortunately Regehr about a year and a half later concluded couldn't really be landed by a consensus approach.[1] But he does suggest a way forwards: "an influential group such as the Android team could create a friendly C dialect and use it to build the C code (or at least the security-sensitive C code) in their project", which is what I would argue is happening; it's just that rather than nailing down a better C, several important efforts are all deciding that Rust is the way forward.

The avalanche has already started. It is too late for the pebbles to vote.

0: https://blog.regehr.org/archives/1180 1: https://blog.regehr.org/archives/1287


Oof. That's a depressing read:

> This post is a long-winded way of saying that I lost faith in my ability to push the work forward.

The gem of despair:

> Another example is what should be done when a 32-bit integer is shifted by 32 places (this is undefined behavior in C and C++). Stephen Canon pointed out on twitter that there are many programs typically compiled for ARM that would fail if this produced something besides 0, and there are also many programs typically compiled for x86 that would fail when this evaluates to something other than the original value.
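For contrast, here's how Rust pins down that same operation. This is a minimal sketch: instead of leaving an out-of-range shift undefined, the standard library makes the programmer pick a behavior explicitly:

```rust
fn main() {
    let x: u32 = 1;

    // In C, `x << 32` on a 32-bit int is undefined behavior, which is
    // exactly why ARM-targeted and x86-targeted code came to rely on
    // different results. Rust forces an explicit choice instead:

    // checked_shl: out-of-range shift is reported as None
    assert_eq!(x.checked_shl(32), None);

    // wrapping_shl: the shift amount is masked (32 % 32 == 0),
    // matching what x86's shift instruction happens to do
    assert_eq!(x.wrapping_shl(32), 1);

    // A plain `x << 32` panics in debug builds rather than silently
    // producing a target-dependent value.
}
```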


Some parts of the industry with a lot of money and influence decided this is the way forward. IMHO Rust has the same issue as C++: it is too complex and a memory safe C would be far more useful. It is sad that not more resources are invested into this.


I'm entirely unconvinced that a low-level† memory safe C that is meaningfully simpler than rust is even possible, let alone desirable. IMHO Basically all of rust's complexity comes from implementing the structure necessary to make it memory safe without making it too difficult to use††.

Even if it is though, we don't have it. It seems like linux should go with the solution we have in hand and can see works, not a solution that hasn't been developed or proved possible and practical.

Nor is memory safety the only thing rust brings to the table, it's also brings a more expressive type system that prevents other mistakes (just not as categorically) and lets you program faster. Supposing we got this memory safe C that somehow avoided this complexity... I don't think I'd even want to use it over the more expressive memory safe language that also brings other benefits.

† A memory-safe managed C is possible of course (see https://fil-c.org/), but it seems unsuitable for a kernel.

†† There are some other alternatives to the choices rust made, but not meaningfully less complex. Separately you could ditch the complexity of async I guess, but you can also just use rust as if async didn't exist, it's a purely value added feature. There's likely one or two other similar examples though they don't immediately come to mind.
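A toy illustration of the "prevents other mistakes, just not as categorically" point: modeling states as an enum makes the compiler reject code that forgets a case. The names here (BlockState, needs_writeback) are invented for illustration:

```rust
// Hypothetical example: a state enum for some block. Because `match` must
// be exhaustive, adding a new variant later forces every call site to be
// updated - the kind of slip that C's switch-on-int lets through silently.
#[derive(Debug, PartialEq)]
enum BlockState {
    Free,
    Allocated,
    Dirty,
}

fn needs_writeback(s: &BlockState) -> bool {
    match s {
        BlockState::Dirty => true,
        BlockState::Free | BlockState::Allocated => false,
        // deliberately no default arm: a forgotten variant is a
        // compile error, not a runtime surprise
    }
}

fn main() {
    assert!(needs_writeback(&BlockState::Dirty));
    assert!(!needs_writeback(&BlockState::Free));
}
```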


I don't think so. First, Rust did not come from nowhere, there were memory safe C variants before it that stayed closer to C. Second, I do not even believe that memory safety is that important that this trumps other considerations, e.g. the complexity of having two languages in the kernel (even if you ignore the complexity of Rust). Now, it is not my decision but Google's and other company's influence. But I still think it is a mistake and highlights more the influence of certain tech companies on open source than anything else.


> First, Rust did not come from nowhere, there were memory safe C variants before it that stayed closer to C.

Can you give an example? One that remained a low level language, and remained ergonomic enough for practical use?

> Second, I do not even believe that memory safety is that important that this trumps other considerations

In your previous comment you stated "a memory safe C would be far more useful. It is sad that not more resources are invested into this". It seems to me that after suggesting that people should stop working on what they are working on and work on memory safe C instead you ought to be prepared to defend the concept of a memory safe C. Not to simply back away from memory safety being a useful concept in the first place.

I'm not particularly interested in debating the merits of memory safety with you, I entered this discussion upon the assumption that you had conceded them.


> Can you give an example? One that remained a low level language, and remained ergonomic enough for practical use?

They can't, of course, because there was no such language. Some people for whatever reason struggle to acknowledge that (1) Rust was not just the synthesis of existing ideas (the borrow checker was novel, and aspects of its thread safety story like Send and Sync were also AFAIK not found in the literature), and (2) to the extent that it was the synthesis of existing ideas, a number of these were locked away in languages that were not even close to being ready for industry adoption. There was no other Rust alternative (that genuinely aimed to replace C++ for all use cases, not just supplement it) just on the horizon or something around the time of Rust 1.0's release. Pretty much all the oxygen in the room for developing such a language has gone to Rust for well over a decade now, and that's why it's in the Linux kernel and [insert your pet language here] is not.

BTW, this is also why people are incentivized to figure out ways to solve complex cases like RCU projection through extensible mechanisms (like the generic field projection proposal) rather than ditching Rust as a language because it can't currently handle these ergonomically. The lack of alternatives to Rust is a big driving factor for people to find these abstractions. Conversely, having the weight of the Linux kernel behind these feature requests (instead of e.g. some random hobbyist) makes it far more likely for them to actually get into the language.


I don't think there are many new ideas in Rust that did not exist previously in other languages. Lifetimes, non-aliasing pointers etc all certainly existed before. Rust is also only somewhat ready for industry use because suddenly some companies poured a lot of money in it. But it seems kind of random why they picked Rust. I do not think there is anything which makes it particularly good and it certainly has issues.


"Lifetimes" didn't exist before. Region typing did, but it was not accompanied by a system like Rust's borrow checker, which is essential for actually creating a usable language. And we simply did not have the tooling required (e.g. step-indexed concurrent separation logic with higher order predicates) to prove a type system like that correct until around when Rust was released, either. Saying that this was a solved problem because Cyclone had region typing or because of MLKit, or people knew how to do ergonomic uniqueness types because of e.g. Clean, is the sort of disingenuous revisionist history I'm pushing back on.

> But it seems kind of random why they picked Rust. I do not think there is anything which makes it particularly good and it certainly has issues.

Like I said, they picked Rust because there was literally no other suitable language. You're avoiding actually naming one because you know this is true. Even among academic languages very few targeted being able to replace C++ everywhere directly, as the language was deemed unsuitable for verification due to its complexity. People were much more focused on the idea of providing end to end verified proofs that C code matched its specification, but that is not a viable approach for a language intended to be used by regular industry programmers. Plenty of other research languages wanted to compete with C++ in specific domains where the problem fit a shape that made the safety problem more tractable, but they were not true general purpose languages and it was not clear how to extend them to become such (or whether the language designers even wanted to). Other languages might have thought they were targeting the C++ domain but made far too many performance sacrifices to be suitable candidates, or gave up on safety where the problem got hard (how many "full memory safety" solutions completely give up on data races, for example? More than a few).

As a "C++ guy", Rust was the very first language that gave us what we actually wanted out of a language (zero performance compromises) while adding something meaningful that we couldn't do without it (full memory safety). Even where it fell short on performance or safety, the difference with other languages was that nobody said "well, you shouldn't care about that anyway because it's not that big a deal on modern CPUs" or "well, that's a stupid thing for a user to do, who cares about making that case safe?" The language designers genuinely wanted to see how far we could push things without compromises (and still do). The work to allow even complex Linux kernel concurrent patterns (like RCU or sequence locking) to be exposed through safe APIs, without explicitly hardcoding the safety proofs for the difficult parts into the language, is just an extension of the attitude that's been there since the beginning.


Rust isn't perfect, but it's basically the most viable language currently to be used in software such as Linux. It's definitely more of a C++ contender than anything else, but manages to be very usable in most other cases too. Rust 1.0 got a lot of things right with its compile-time features, and the utility of these features for "low-level" code has been demonstrated repeatedly. If a language is to replace Rust in the future, I expect it will take on many of the strengths of Rust. Moreover, Rust is impressive at becoming better. The work for Rust-for-Linux, alongside various other improvements (e.g. next trait solver, Polonius and place-based borrowing, parallel rustc frontend), shows that Rust can evolve significantly without a huge addition in complexity. Actually, most changes should reduce its complexity. Yes, Rust has fumbled some areas, such as the async ecosystem, the macro ecosystem, and pointer-width integers, but its mistakes are also considered for improvement. The only unfortunate thing is the lack of manpower to drive some of these improvements, but I'm in it for the long run. Frankly, I'd say that if the industry had to use only one language tomorrow, Rust is the best extant choice.

And, it's really funny that GP criticizes Rust but doesn't acknowledge that of course blood, sweat, and tears have already gone into less drastic variants for C or C++. Rust itself is one of the outputs of that solution space! Sure, hype is always a thing, but Rust has amply demonstrated its utility in the free market of programming languages. If Rust was not as promising as it is, I don't see why all of these companies and Linus Torvalds would seriously consider it after all these years of experience. I can accept if C had a valid "worse is better" merit to it. I think C++, if anything, has the worst value-to-hype ratio of any programming language. But Rust has never been a one-trick pony for memory safety, or a bag of old tricks. Like any good language, it offers its own way of doing things, and for many people, its way is a net improvement.


For example Cyclone, Checked C, Safe-C, Deputy, etc.

I agree that memory safety is useful, but I think the bigger problem is complexity, and Rust goes in the wrong direction. I also think that any investment into safety features - even if not achieving perfect safety - in C tooling would have much higher return of investment and bigger impact on the open-source ecosystem.


> In the comments on HN around any bcachefs news (including this one) there are always a couple throwaway accounts bleating the same arguments - sounding like the victim - that Kent frequently uses.

And every time something like this comes up, I end up with every sort of accusation pointed at me, and no one seems to be willing to look at the wider picture - why is the kernel community still unable to figure out a plan to get a trustworthy modern filesystem?

> This is your problem to fix.

No, I've said from the start that this needs to be a community effort. A filesystem is too big for one person.

Be realistic :) If the community wants this to happen, the community will have to step up.


Look, if you're here saying "the throwaway comments aren't me" then I believe you. It'd be nice if you said that clearly though.

Please please don't forget I want you to succeed - that's why I bunged nearly $800 your way in this endeavour - but I'm not the only person who thinks you come across as completely immune to criticism, even when it's constructive and from your supporters.

>> This is your problem to fix.

> No, I've said from the start that this needs to be a community effort. A filesystem is too big for one person.

Right now that "community effort" is looking a bit unlikely, eh ?

I would hate to have to deal with these people as my primary occupation and I totally get why you don't want to continue.

That's said, nobody else has the power, skill or inclination to make bcachefs that wonderful filesystem of the future for Linux - only you. That's what I meant by "this is your problem to fix".

I wish you the best of luck with the new DKMS direction. And I'll get on board and actually try it out soon :D


> Please please don't forget I want you to succeed - that's why I bunged nearly $800 your way in this endeavour - but I'm not the only person who thinks you come across as completely immune to criticism, even when it's constructive and from your supporters.

You have to understand, I get an absolute _ton_ of "constructive criticism" from people who basically are assuming everything is going off the rails and are expecting massive, and unrealistic changes; for bandwidth reasons if nothing else. I have to stay focused on the code and getting it done.

> Right now that "community effort" is looking a bit unlikely, eh ?

Actually, the big surprise from the DKMS switch is just how much the community came together to make it happen.

This project has looked like a one man show for a long time, and the core of it probably always will be for the simple reason that there are precious few people with the skillset required to do core filesystem engineering; that requires a massive amount of dedication and investment of time to get good at, there's a hell of a learning curve.

But there are still a lot of people with the willingness and ability to help out in other areas. People have been helping since even before the DKMS switch, to be honest; it's just that a lot of it is boring, invisible, but extremely necessary QA work - and that stuff is work, and people have helped out a lot there.

You have to be involved in the community, in the IRC channel to see this stuff going on. It's really not just me.

And now with the DKMS switch, a lot more people jumped in and started helping, and that's how we were able to get every major distro supported before the 6.17 release. That happened _fast_, and only a small fraction of the work was mine, mostly I was just coordinating.

Honestly, looking back, I don't think I could have planned this better - the timing was perfect. We're nearly done with stabilization, so it was the right time to start focusing more on distro integration and building up those working relationships, and the DKMS migration was just the kick in the pants to make that happen. Now we're pretty well positioned to get bcachefs into distro installers perhaps six months out.

The community is real, and it's growing.

(I would still _fucking love_ to have more actual filesystem engineers though, heh).


Eh? Linus has called it "experimental garbage that no one could be using" a whole bunch of times, based on absolutely nothing as far as I can tell.

Meanwhile just scan the thread for btrfs reports...


> Eh? Linus has called it "experimental garbage that no one could be using" a whole bunch of times, based on absolutely nothing as far as I can tell.

Where did Linus call bcachefs "experimental garbage"? I've tried finding those comments before, but all I've been able to find are your comments stating that Linus said that


I need to write up a proper patreon post on all this stuff, because there's a lot of misinformation going around.

No, I was not "ignoring the merge window". Linus was trying to make and dictate calls on what is and is not a critical bugfix, and with a filesystem eating bug we needed to respond to, that was an unacceptable situation.


Finding defects is a good thing, and fixing defects is a good thing. Adding new features can be a good thing as long as it doesn't introduce or uncover too many new defects in previously stable code. But what is lacking in your development process that you keep finding "critical" defects that could affect a large number of users during the merge window?

It seems like bcachefs would benefit from parallel development spirals, where unproven code is guarded by experimental flags and recommended only for users who are prepared to monitor and apply patches outside of the main kernel release cycle, while the default configuration of the mainline version goes through a more thorough test and validation process and more rarely sees "surprise" breakage.

It certainly appears that Linus can't tell the difference between your new feature introductions and your maintenance fixes, and that should trigger some self-reflection. He clearly couldn't let all of the thousands of different kernel components operate in the style you seem to prefer for bcachefs.


> It seems like bcachefs would benefit from parallel development spirals, where unproven code is guarded by experimental flags and recommended only for users who are prepared to monitor and apply patches outside of the main kernel release cycle, while the default configuration of the mainline version goes through a more thorough test and validation process and more rarely sees "surprise" breakage.

Maybe if you're a distant observer who isn't otherwise participating in the project.

What's the saying about too many cooks in the kitchen?

These concerns aren't coming from the people who are actually using it. They're coming from people who are accustomed to the old ways of doing things and have no idea what working closely with modern test infrastructure is like.

There wasn't anything unusual about the out-of-merge-window patches I was sending Linus except for volume, which is exactly what you'd expect in a rapidly stabilizing filesystem. If anything I've been more conservative about what I send than other subsystems.

> It certainly appears that Linus can't tell the difference between your new feature introductions and your maintenance fixes, and that should trigger some self-reflection. He clearly couldn't let all of the thousands of different kernel components operate in the style you seem to prefer for bcachefs.

If Linus can't figure out which subsystems have QA problems and which don't, that's his problem, not mine.


Concerns about general policy need to be handled separately from individual cases. You needed to play along, advocate for your position, explain your position (e.g. demonstrate how your processes eliminate certain classes of errors; or find some objective way to measure QA problems and present the stats), and push for a holistic process change that supports your use-case. That process change would need input from other people, and might end up very different to your original proposal, but it could have happened. Instead, you've burned (and continue to burn) bridges.

Linus's job is not to make the very next version of the kernel as good as it can be. It's to keep the whole system of Linux kernel maintenance going. (Maintaining the quality of the next version is almost a side-effect.) Asking him to make frequent exceptions to process is the equivalent of going "this filesystem is making poor decisions: let's hex-edit /dev/sda to allocate the blocks better". Your priority is making the next version of bcachefs as good as it can be, and you're confident that merging your patchsets won't break the kernel, but that's entirely irrelevant.

> If Linus can't figure out which subsystems have QA problems and which don't, that's his problem, not mine.

You have missed the point by a mile.


> Concerns about general policy need to be handled separately from individual cases.

Citation needed.

> Linus's job is not to make the very next version of the kernel as good as it can be. It's to keep the whole system of Linux kernel maintenance going. (Maintaining the quality of the next version is almost a side-effect.) Asking him to make frequent exceptions to process is the equivalent of going "this filesystem is making poor decisions

You're arguing from a false premise here. No exceptions were needed or required, bcachefs was being singled out because he and the other maintainers involved had no real interest in the end goal of getting a stable, reliable, trustworthy modern filesystem.

The discussions, public and private - just like you're doing here - always managed to veer away from engineering concerns; people were more concerned with politics, "playing the game", and - I'm not joking here - undermining me as maintainer; demanding for someone else to take over.

Basic stuff like QA procedure and how we prioritize patches never entered into it, even as I repeatedly laid that stuff out.

> > If Linus can't figure out which subsystems have QA problems and which don't, that's his problem, not mine.

> You have missed the point by a mile.

No, that is very much the point here. bcachefs has always had one of the better track records at avoiding regressions and quickly handling them when they do get through, and was being singled out as if something was going horribly awry anyways. That needs an explanation, but one was never given.

Look, from the way you've been arguing things - have you been getting your background from youtube commentators? You have a pretty one sided take, and you're pushing that point of view really hard when talking to the person who's actually been in the middle of it for the past two years.

Maybe you should reevaluate that.


Bureaucracies run on process. You call that "politics", but (when arguing with programmers) I call it code. The Linux kernel project is a bureaucracy.

> citation needed

I'm sure you're familiar with "separation of concerns" in programming: it's the same principle. My knowledge of bureaucracies comes from experience, but I'm sure most good books on the topic will have a paragraph or chapter on this.

> bcachefs was being singled out because he and the other maintainers involved had no real interest in the end goal of getting a stable, reliable, trustworthy modern filesystem.

I imagine they would dispute that.

> No exceptions were needed or required,

I know that Linus Torvalds would dispute that. In this HN thread and elsewhere, you've made good arguments that treating your approach as an exception is not warranted, and that your approach is better, but you surely aren't claiming that your approach is the usual approach for kernel development?

> undermining me as maintainer

Part of the job is dealing with Linus Torvalds. You're not good at that. It would make sense for you to focus on architecture, programming, making bcachefs great, and to let someone else deal with submitting patches to the kernel, or arguing those technical arguments where you're right, but aren't getting listened to.

People are "more concerned with politics" than with engineering concerns because the problem is not with your engineering.

> bcachefs has always had one of the better track records at avoiding regressions and quickly handling them when they do get through

That's not relevant. I know that you don't see that it's not relevant: that's why I'm saying it's not relevant.

> have you been getting your background from youtube commentators?

No, but I'm used to disputes of this nature, and I'm used to dealing with unreasonable people. You believe that others are being unreasonable, but you're not following an effective strategy for dealing with unreasonable people. I am attempting to give advice, because I want bcachefs in the kernel, and I haven't a hope of changing Linus Torvalds' mind, but I have half a hope of changing yours.

Rule one of dealing with unreasonable people is to pick your battles. How many times have I said "dispute" or "argument" in this comment? How many times have these been worthy disputes? Even if I've completely mischaracterised the overall situation, surely you can't claim that every argument you've had on the LKML or in LWN comments has been worth bcachefs being removed from the kernel.


Look, it wasn't just this one thing. There had been entirely too many arguments over bugfixes, with the sole justification on Linus's end being "it's experimental garbage that no one should be using".

I can't ship and support a filesystem under those circumstances.

The "support" angle is one you and a lot of other people are missing. Supporting it is critical to stabilizing, and we can't close the loop with users if we can't ship bugfixes.

Given the past history with btrfs, this is of primary concern.

You've been looking for a compromise position, and that's understandable, but sometimes the only reasonable compromise is "less stupidity, or we go our separate ways".

The breaking point was, in the private maintainer thread, a page and a half rant from Linus on how he doesn't trust my judgement, and then immediately after that another rant on how - supposedly - everyone in the kernel community hates me and wants me gone.

Not joking. You can go over everything I've ever said on LKML, including the CoC incident, and nothing rises to that level. I really just want to be done with that sort of toxicity.

I know you and a lot of other people wanted bcachefs to be upstream, and it's not an ideal situation, but there are limits to what I'm willing to put up with.

Now, we just need to be looking forward. The DKMS transition has been going smoothly, more people are getting involved, everyone's fully committed to making this work and I still have my funding.

It's going to be ok, we don't need to have everything riding on making it work with the kernel community.

And now, you guys don't have to worry about me burning out on the kernel community and losing my last vestiges of sanity and going off to live in the mountains and herd goats :)


> Linus was trying to make and dictate calls on what is and is not a critical bugfix, and with a filesystem eating bug we needed to respond to, that was an unacceptable situation.

That's literally his job?


His job is to create drama, and argue over things that don't need to be argued?

Linus has never once found a mistake in the bcachefs pull requests, but he has created massive headaches for getting bugfixes out to users that were waiting on them.


this crowd is wild. the author's answer is voted down :)

edit: by the time i commented it was already dark text. guess it recovered.


if you read the LKML archives closely, you'll find this sort of reply from koverstreet (deflecting all responsibility while taking none) as typical as it is deeply misguided.


Just looking for somewhere to stick that pitchfork, eh?


this was my first time speaking publicly about my opinion on this after reading as much of the history as i could with an open mind. i was really looking forward to bcachefs landing in the mainline kernel because btrfs doesn't meet my needs. i was and still am rooting for you, but your social skills really are comically bad and it's impacting the potential of the project. it makes me sad. P.S. your response here doesn't surprise me at all and is completely in character based on everything else i've seen from you.


I'm waiting for AR glasses to get higher res, but yes.

Also, if we're posting our wishlist - Preonic form factor.

https://drop.com/buy/preonic-mechanical-keyboard


heh, it's really not


what was the NixOS kernel regression?

NixOS and Arch are the two distros that are making the DKMS transition the smoothest.


I run NixOS unstable. At some point in the last few weeks, the kernel supplied with boot.supportedFilesystems = [ "bcachefs" ]; in my config went from version 6.16.0 to 6.12.45, and I had very long boot times (30+ minutes) with a lot of messages. My solution was to switch to the latest kernel with boot.kernelPackages = pkgs.linuxPackages_latest; which bumped me back up to kernel 6.16.8 and smooth sailing.


Rust really is attractive to a filesystem developer. Over C, it brings generics for proper data structures, iterators (!), much better type safety, error handling - all the things Rust is good at are things you want.
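As a sketch of the error-handling point: typed errors plus the ? operator replace the goto-based cleanup and errno conventions of C. Everything here (DecodeError, read_superblock, the magic bytes) is invented for illustration, not a bcachefs API:

```rust
// Hypothetical example: decoding a tiny on-disk header. Each failure mode
// is a distinct, compiler-checked variant, and `?` propagates errors
// without the manual `if (ret) goto out;` chains you'd write in C.
#[derive(Debug, PartialEq)]
enum DecodeError {
    TooShort,
    BadMagic,
}

fn read_superblock(buf: &[u8]) -> Result<u32, DecodeError> {
    // bounds checks are explicit Options, turned into errors by `?`
    let magic = buf.get(0..4).ok_or(DecodeError::TooShort)?;
    if *magic != [0xbc, 0xac, 0x4e, 0xf5] {
        return Err(DecodeError::BadMagic);
    }
    let ver = buf.get(4..8).ok_or(DecodeError::TooShort)?;
    Ok(u32::from_le_bytes(ver.try_into().unwrap()))
}

fn main() {
    let good = [0xbc, 0xac, 0x4e, 0xf5, 1, 0, 0, 0];
    assert_eq!(read_superblock(&good), Ok(1));
    assert_eq!(read_superblock(&good[..2]), Err(DecodeError::TooShort));
}
```

The caller can't silently ignore the Result: the compiler warns on an unused one, and matching on it forces both error variants to be handled.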

For me, the things that would make it just perfect would be more ergonomic Cap'n Proto support (eliminate a ton of fiddly code for on disk data structures), and dependent types.


it remains an open question how a system built with these higher level constructs would compare, in reliability, performance, and efficiency, to the highly optimized low level stuff you'd see in a mature linux filesystem project.

i suspect the linux stuff would be far more space and time efficient, but we won't know until projects like this mature more.


Eh? That's not an open question at all anymore; Rust has a drastically lower defect rate than C and good Rust is every bit as fast as good C.

Now, the engineering effort required to rewrite or redo in Rust, that's a different story of course.


i'd be curious how many of the higher level features and libraries would be best avoided if attempting to match the performance and space efficiency of a filesystem implemented in purpose designed highly optimized c.


I'm rewriting some of my Arduino projects into Rust (using Embassy and embedded-hal).

It's _so_ _much_ _better_. I can use async, maps, iterators, typesafe deserialization, and so on. All while not using any dynamic allocations.
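As a sketch of the "no dynamic allocations" point: the iterator and map machinery all lives in core, so the same style works on a microcontroller without an allocator. This toy example (the sensor-sample framing is invented) runs entirely on the stack:

```rust
// Everything used here comes from `core` - arrays, iterators, map, sum -
// so no heap is touched. On an embedded target you'd add #![no_std] and
// the identical code would still compile.
fn main() {
    // a fixed-size buffer of (hypothetical) sensor samples
    let samples: [u16; 8] = [10, 12, 11, 13, 12, 14, 13, 15];

    // widen to u32 before summing to avoid overflow, then average
    let avg = samples.iter().map(|&s| s as u32).sum::<u32>()
        / samples.len() as u32;
    assert_eq!(avg, 12);

    // find the peak sample, again with no allocation
    let peak = samples.iter().copied().max().unwrap();
    assert_eq!(peak, 15);
}
```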

With full support from Cargo for repeatable builds. It's night and day compared to the regular Arduino landscape of random libraries that are written in bad pseudo-object-oriented C++.


Sure, I believe it. The question I have is: if one were to try to match the resilience and the storage, memory, and time efficiency of the well-optimized C implementations of mature Linux filesystems, and one were to use Rust, would they be using all these high-level language features and libraries out of the box, or would non-canonical use of the language be necessary? And in either case, how would the resulting implementation compare from a readability perspective?


I've worked in Linux kernel land.

Calling it "optimized" is a stretch. A veeeery big one. The low-level code in some paths is highly optimized, but the overall kernel architecture still bears the scars of C.

The most popular data structure in kernel land is the linked list, a.k.a. the most inefficient structure for modern CPUs. It's so popular because it's the only data structure that is easy to use in C.

The most egregious example is the very core of Linux: the page struct. The kernel operates on the level of individual pages. And this is a problem in case you need _a_ _lot_ of pages.

For example, when you hibernate the machine, the hibernation code just has a loop that keeps grabbing swap-backed pages one by one and writing the memory to them. There is no easy way to ask: "give me a list of contiguous free page blocks". Mostly because these kinds of APIs are just awkward to express in C, so developers didn't bother.

There is a huge ongoing project to fix it (folios). It's been going for 5 years and counting.


> It's so popular because it's the only data structure that is easy to use in C.

Is this reasoning really true? A quick search reveals the availability of higher-level data structures like trees, flexible arrays, hashtables, and the like, so it's not as if the Linux kernel is lacking in data structures.

Linked lists have a few other advantages - simplicity and reference stability come to mind - but they might have other properties that make them useful for kernel development beyond how easy they are to create.


> Is this reasoning really true?

Well, yes. The kernel _now_ has all kinds of data structures, but you can look at the documentation from 2006 and see the horror of even the simplest rbtrees back then: https://lwn.net/Articles/184495/

A simple generic hashtable was added only in 2013!

> Linked lists have a few other advantages - simplicity and reference stability come to mind, but they might have other properties that makes them useful for kernel development beyond how easy they are to create.

The main property is that it's easy to use from C. Think about growable vectors as an exercise.


Where do I find a basic vector type? :)


Why, right here: https://elixir.bootlin.com/linux/v6.16.9/source/rust/kernel/... !

(XArray in regular C-based Linux also kinda qualifies)


there's a whole library inside linux and it's really good too!


> It's so popular because it's the only data structure that is easy to use in C.

I don't understand that statement. Linked lists are no easier or harder to use than other data structures. In all cases you have to implement it once and use it anywhere you want?

Maybe you meant that linked lists are the only structure that can be implemented entirely in macros, as the kernel likes to do? But even that wouldn't be true.


Think about a growable vector. Another basic structure that everyone uses in the userspace.

You can iterate through it fine, it's just an array after all. But then you want to add an element to it. At this point you need to handle reallocation. So you need to copy or move the elements from the old array. But this will break if the elements are not simple data structures.

So you need to have a copy function. There are copy constructors in C++, but not in C. So you need to reinvent them.


While I agree that Rust is safer, there are similar libraries for C++ - naturally not the Arduino ones, which feel like Orthodox C++.


Yeah it's only an open question if you have your eyes closed.


To be fair, Cap'n Proto's C++ API is hardly ergonomic.


Many C++ libraries are unfortunately not ergonomic, because they are tainted by C culture.

We see this in the compiler frameworks that predated the C++ standard, and whose ideas lived on in Java and .NET afterwards.

Too many C++ libraries are written for a C++ audience, when they could be just as ergonomic as those in other high-level languages. And being C++, there could be a two-level approach, but unfortunately that isn't how it rolls.


Doing it right needs lenses (from Swift)


The only thing btrfs took from ZFS was the feature set: COW, data checksumming, snapshots, multi-device support. ZFS was a much more conservative design; btrfs is based on COW b-trees (with significant downsides), and if you had to place it in any lineage it would be ReiserFS.

