> As a side note, we also discovered a local vulnerability (a race
condition) in the uutils coreutils (a Rust rewrite of the standard GNU
coreutils -- ls, cp, rm, cat, sort, etc), which are installed by default
in Ubuntu 25.10. This vulnerability was mitigated in Ubuntu 25.10 before
its release (by replacing the uutils coreutils' rm with the standard GNU
coreutils' rm), and would otherwise have resulted in an LPE (from any
unprivileged user to full root) in the default installation of Ubuntu Desktop 25.10.
Rust cannot help you if a race condition crosses an API boundary. No matter what language you use, you have to think about the system as a whole. Failing to do that results in bugs like this one.
The bigger problem here is that the Rust utilities seem to have been rushed out without extensive testing or security analysis, simply because they are written in Rust. And this isn't the first serious flaw to come out of that.
Doesn't surprise me coming from Canonical though.
At least that's the vibe I'm getting from [1] and definitely [2]
> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.
> The Rust language, its type system and its borrow checker (and its community!) work together to encourage developers to write safe, sound, resilient software. With added safety comes an increase in security guarantees, and with an increase in security comes an increase in overall resilience of the system - and where better to start than with the foundational tools that build the distribution?
So yes, it sounds like the primary official reason is "enhanced resilience and safety". Given that, I would be interested in seeing the number of security problems in each implementation over time. GNU coreutils does have problems from time to time, but... https://app.opencve.io/cve/?product=coreutils&vendor=gnu only seems to list 10 CVEs since 2005. Unfortunately I can't find an equivalent for uutils, but just from news coverage I'm pretty sure they have a worse track record thus far.
> Performance is a frequently cited rationale for “Rewrite it in Rust” projects.
Rewrite from what? Python/Perl? If the original code is in C there _might_ be a performance gain (particularly if it was poorly written to begin with), but I wouldn't expect wonders.
Could be. The thing is, it kinda doesn't matter; what matters is, what will result in the least bugs/vulnerabilities now? To which I argue the answer is, keeping GNU coreutils. I don't care that they have a head start, I care that they're ahead.
That's short-sighted. Having the fewest bugs now isn't the only thing that matters. What about 5 years from now? 10 years? That matters too.
To me it seems inarguable that eventually uutils will have fewer bugs than coreutils, and also making uutils the default will clearly accelerate that. So I don't think it's so easy to dismiss.
I think they were probably still a little premature, but not by much. I'd probably have waited one more release.
It's extremely early to say if things are rushed or not. It's unsurprising that newer software has an influx of vulnerabilities initially, it'll be a matter of retrospectively evaluating this after that time period has passed.
It's a little different with software since you don't usually have the code or silicon wearing out, but aging software does start to have a mismatch with the way people are trying to use it and the things it has to interact with, which leads to a similar rise of "failure" in the end.
It's not even about API boundaries; it's about logic, and the language isn't really responsible for that.
Expecting it to prevent this would be as gullible as expecting it to prevent a TOCTOU or any other kind of non-trivial vulnerability.
That's why, even though I appreciate the role of these slightly safer languages, I still have a bit of a knee-jerk reaction to the exaggerated claims of their benefits and of how much of a piece of crap C is.
Spoiler: crappy programmers write crappy code regardless of the language, so maybe we should focus on teaching students to think about the code they're writing from a different perspective, and to focus on safety and maintainability rather than "flashiness".
So I was saying that Rust monolithicism is NOT based on ignorance and naivety.
Do you see what I mean by nuance? I think you just glanced at the comment, saw that there were negative words around Rust, and lossy-compressed it into "Rust bad".
No. Race conditions are a normal part of our world. In the same way, it's not a memory error if you coded the discount feature so that people can apply more than one 10%-off coupon to an order, and as a result the nine different "10% off" offers that marketing seeded sum to a 90% discount that bankrupts you.
An example race condition: Mike and Sarah both wake up, notice there's no milk, and decide to grab milk on the way home that evening. They both work a full day, drop past the store, and arrive home with a carton of milk. But now there are two cartons of milk, which is too much milk. Oops. This is called a "Time of Check versus Time of Use" race, or TOCTOU race.
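The filesystem version of the milk problem is easy to write in entirely safe Rust. The sketch below uses a hypothetical `careful_remove` helper (my name, not uutils code, and not the actual Ubuntu bug): it checks that a path is a regular file, then deletes it. The borrow checker is perfectly happy, yet the gap between the check and the delete is exactly a TOCTOU window.

```rust
use std::fs::{self, File};
use std::path::Path;

// Hypothetical "careful remove": only delete regular files.
// Compiles in 100% safe Rust, but between the metadata check and the
// remove_file call another process could swap the path for something
// else entirely. The type system has no way to see that gap.
fn careful_remove(path: &Path) -> std::io::Result<()> {
    let meta = fs::symlink_metadata(path)?; // time of check
    if meta.is_file() {
        fs::remove_file(path)?; // time of use: the path may have changed since
    }
    Ok(())
}

fn main() -> std::io::Result<()> {
    let path = Path::new("toctou_demo.txt");
    File::create(path)?;
    careful_remove(path)?;
    println!("still exists: {}", path.exists());
    Ok(())
}
```

The happy path works; the point is that nothing in the language stops a second actor from acting inside the check-to-use window.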
(Safe) Rust does prevent data races, which can be seen as a specific, very weird type of race condition. Unlike other race conditions, a data race reflects a difference between how humans understand computers in order to write software and how the machine actually works.
Humans are used to experiencing a world in which things happen in order. We write software for that intuitive world; this is called "sequential consistency". A happens before B, or B happens before A; one of these must be true. Mustn't it? But a modern multi-core CPU cannot afford sequential consistency; we give it up for more speed, so any appearance of sequential consistency in concurrent software is an illusion maintained for our comfort. (Safe) Rust promises the illusion is maintained: if you try to shatter it, the compiler says "No". Languages like C or C++ just say: well, if you accidentally destroy the illusion, your program might do absolutely anything at all, good luck with that.
I like your idea of illustrating a race condition with buying milk - that should become the default method of explaining them. (Either that or bartenders serving customers which is my usual method of understanding work queues)
It can be about any resource. You get it when two concurrent functions access the resource without a queue, atomic operation or wait, and one of them modifies it.
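For the "atomic operation" option mentioned above, a brief sketch: an atomic counter makes the whole read-modify-write a single indivisible step, so concurrent writers can't interleave even without a lock or queue.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

// fetch_add is one indivisible hardware operation, so two threads can
// never both read the same old value and each write back old+1.
fn atomic_hits(threads: usize, per_thread: usize) -> usize {
    let hits = Arc::new(AtomicUsize::new(0));
    let handles: Vec<_> = (0..threads)
        .map(|_| {
            let hits = Arc::clone(&hits);
            thread::spawn(move || {
                for _ in 0..per_thread {
                    hits.fetch_add(1, Ordering::Relaxed);
                }
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    hits.load(Ordering::Relaxed)
}

fn main() {
    println!("{}", atomic_hits(4, 1000)); // 4000
}
```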
> (a Rust rewrite of the standard GNU coreutils -- ls, cp, rm, cat, sort, etc), which are installed by default in Ubuntu 25.10.
Zero benefits and only risks involved. Users are forced to choose between a worse new version and an older version that will no longer be supported. Like systemd all over again.
It feels like there is a phenomenon where software devs (especially Open Source) have to keep developing even when just doing nothing would result in a better product. Like there's some monetization incentives to keep touching the thing so that you can get paid.
The happy path for me is with Erlang: due to the concurrency model, the blast radius of an error is exceptionally small, so the programming style is to let things crash when they go wrong. So really you are writing the happy-path code only (most of the time). Combine this approach with some very robust tests (does this thing pass the tests / behave how we need it to?) and you're close to the point of not really caring about the implementation at all.
Of course, I still do, but I could see not caring becoming possible down the road with such architectures.
Eh, maybe. I work on a big, mature, production Erlang system with millions of processes per cluster, and while the author is right in theory, these are quite extreme edge cases and I've never tripped over them.
Sure, if you design a shit system that depends on ETS for shared state there are dangers, so maybe don't do that?
I’d still rather be writing this system in Erlang than in another language, where the footguns are bigger.
Treating ETS as the only footgun misses a few ugly ones, because a bad mailbox backlog or a gen_server chain can turn local slowness into cluster-wide pain before anything actually crashes.
Erlang does make some failure modes less nasty. It also hides latency debt well enough that people think the model saved them, right up until one overloaded process turns the whole system into a distributed whodunit.
There are a bunch for sure! Turns out writing concurrent reliable distributed systems is really hard. I’ve not found anything else that makes them easier to deal with than BEAM though.
I’d switch if something better came along that also happened to be as battle-hardened. I’ll be waiting a while, I think.
Generally agree; all the problems I’ve had with Erlang have been related to full mailboxes, or to having one process type handling too many different kinds of messages, etc.
These are manageable, but I really, really stress- and soak-test my releases (max possible load / redline for 48+ hours) before they go out, and since doing that things have been fairly fine; you can usually spot such issues in your metrics that way.
Yeah, but the current administration making their own people's lives worse by starting a war and inviting attacks such as this one, wouldn't manufacture any consent for those things.
If anything, it would manufacture opposition. The US general public blames the administration for any negative consequences resulting from the administration's war of choice: Attacks, high energy prices, further loss of US credibility, etc.
While true, the engineer would have to be a weapons-grade tit to get themself into such legal trouble, and honestly deserves whatever criminal charges come their way.
No, you definitely can’t. Or at least, not 3B+. I wound up buying https://www.amazon.com/ACEMAGICIAN-M1-Computers-Computer-3-2... which was $50 less a month ago (!!) because so many things don’t fit well. Immich is amazing, but you wouldn’t get a lot of the coolness of it if you can’t run the ai bits, which are quite heavy.
Let’s not forget that if even three engineers work on this migration for only a week, your cost is now tens of thousands for this couple-hundred-euro saving.
(assuming avg all-in engineer costs in europe)
It mostly makes no sense to optimise infrastructure cost; it does make sense to make it faster, since almost all your spend is on engineers.
Spending thousands to save hundreds is not a healthy business.
I’ve never really found there to be all that much of a market specifically for C++ developers. If you do decide to look for work more seriously, I wouldn’t get too hung up on language: if you can code in one you can pretty much code in all of them. I’ve never hired a developer for a specific language skill, outside a few rare cases where it’s something really specific we’re trying to fix (e.g. Erlang), and even then it wouldn’t be a complete showstopper.
YMMV but that’s coming from a guy who writes in at least 3 languages at current $dayjob.
Because the hordes of people who can't find the ` or | or $ on their keyboard outnumber competent people 100:1. I had this exact experience too; so frustrating. I moved to a strictly referral model where I pay my SWEs $10k if a candidate they refer gets hired.
A person who has only real industry experience can very easily have never needed git at all. I know this shocks people who only have hobby or startup experience, but git works very poorly at large scale, and there are many big organizations that don't use it, either because their solutions predate git or because they are newer companies that simply have good taste.
I’ve been in dev since CVS was a thing and did migrations across all of them: svn, hg, and finally git. I’ve worked in a broad swathe of industries, and never once in 25 years have I seen an org not using source code management.
Shurely Shome mistake, not a vuln in holy rust!