Hacker News | cyberpunk's comments

> As a side note, we also discovered a local vulnerability (a race condition) in the uutils coreutils (a Rust rewrite of the standard GNU coreutils -- ls, cp, rm, cat, sort, etc), which are installed by default in Ubuntu 25.10. This vulnerability was mitigated in Ubuntu 25.10 before its release (by replacing the uutils coreutils' rm with the standard GNU coreutils' rm), and would otherwise have resulted in an LPE (from any unprivileged user to full root) in the default installation of Ubuntu Desktop 25.10.

Shurely Shome mistake, not a vuln in holy rust!


Rust cannot help you if a race condition crosses an API boundary. No matter what language you use, you have to think about the system as a whole. Failure to do that results in bugs like this.

The bigger problem here is that it seems like the Rust utilities were rushed out without extensive testing or security analysis, simply because they are written in Rust. And this isn't the first serious flaw to come of that.

Doesn't surprise me coming from Canonical though.

At least that's the vibe I'm getting from [1] and definitely [2]

[1] https://cdn2.qualys.com/advisory/2026/03/17/snap-confine-sys... [2] https://bugs.launchpad.net/ubuntu/+source/rust-coreutils/+bu...


The best discussion I can find for the official reasons for switching is https://discourse.ubuntu.com/t/carefully-but-purposefully-ox... -

> But… why?

> Performance is a frequently cited rationale for “Rewrite it in Rust” projects. While performance is high on my list of priorities, it’s not the primary driver behind this change. These utilities are at the heart of the distribution - and it’s the enhanced resilience and safety that is more easily achieved with Rust ports that are most attractive to me.

> The Rust language, its type system and its borrow checker (and its community!) work together to encourage developers to write safe, sound, resilient software. With added safety comes an increase in security guarantees, and with an increase in security comes an increase in overall resilience of the system - and where better to start than with the foundational tools that build the distribution?

So yes, it sounds like the primary official reason is "enhanced resilience and safety". Given that, I would be interested in seeing the number of security problems in each implementation over time. GNU coreutils does have problems from time to time, but... https://app.opencve.io/cve/?product=coreutils&vendor=gnu only seems to list 10 CVEs since 2005. Unfortunately I can't find an equivalent for uutils, but just from news coverage I'm pretty sure they have a worse track record thus far.


> But… why?

> Performance is a frequently cited rationale for “Rewrite it in Rust” projects.

Rewrite from what? Python/Perl? If the original code is in C there _might_ be a performance gain (particularly if it was poorly written to begin with), but I wouldn't expect wonders.


Probably because many of those tools were around for 20-ish years before 2005.

Could be. The thing is, it kinda doesn't matter; what matters is, what will result in the least bugs/vulnerabilities now? To which I argue the answer is, keeping GNU coreutils. I don't care that they have a head start, I care that they're ahead.

>>> I don't care that they have a head start, I care that they're ahead.

Nice


That's short sighted. The least number of bugs now isn't the only thing that matters. What about in 5 years from now? 10 years? That matters too.

To me it seems inarguable that eventually uutils will have fewer bugs than coreutils, and also making uutils the default will clearly accelerate that. So I don't think it's so easy to dismiss.

I think they were probably still a little premature, but not by much. I'd probably have waited one more release.


fileutils-1.0 was released in 1990 [1]. shellutils-1.0 was released in 1991 [2], and textutils-1.0 was released a month later in the same year [3].

Those three packages were combined into coreutils-5.0 in 2003 [4].

[1] https://groups.google.com/g/gnu.utils.bug/c/CviP42X_hCY/m/Ys... [2] https://groups.google.com/g/gnu.utils.bug/c/xpTRtuFpNQc/m/mR... [3] https://groups.google.com/g/gnu.utils.bug/c/iN5KuoJYRhU/m/V_... [4] https://lists.gnu.org/archive/html/info-gnu/2003-04/msg00000...


It's extremely early to say if things are rushed or not. It's unsurprising that newer software has an influx of vulnerabilities initially; it'll be a matter of retrospectively evaluating this after that time period has passed.

> influx of vulnerabilities initially

https://en.wikipedia.org/wiki/Bathtub_curve

It's a little different with software since you don't usually have the code or silicon wearing out, but aging software does start to have a mismatch with the way people are trying to use it and the things it has to interact with, which leads to a similar rise of "failure" in the end.


It's not even about API boundaries, it's about logic and the language isn't really responsible for that.

Expecting it to prevent that would be as gullible as expecting it to prevent a TOCTOU or any other type of non-trivial vulnerability.

That's why, even though I appreciate the role of these slightly safer languages, I still have a bit of a knee-jerk reaction to the exaggerated claims of their benefits and of how much of a piece of crap C is.

Spoiler: crappy programmers write crappy code regardless of the language, so maybe we should focus on teaching students to think about the code they're writing from a different perspective and prioritise safety and maintainability rather than "flashiness".


[flagged]


Yeah we get it you don't like rust and you want everyone to know how weird you are by tearing down asinine arguments no one actually made. How boring.

[flagged]


> based on ignorance and naivety.

About as nuanced as your bait framing of what a mere language ought/can do. Oh you're a python backend developer, guess that explains it.


So I was saying that rust monolithicism is NOT based on ignorance and naivety.

Do you see what I mean by nuance? I think you just glanced at the comment, saw that there were negative words around Rust, and lossily compressed it into "Rust bad".


Your post is very badly written. It's confusing and starts off with a totally weird comment about wasting revolutionary capacity. Expect downvotes.

Rewrite tools in new language, get new exciting bugs!

That's optimistic. Use a search engine to find:

   JWZ CADT

Very good point

Is a race condition a memory related error?

Not this particular kind. This is a race between separate processes, and the target is the file system, not a location in memory.

No. Race conditions are a normal part of our world, in the same way that it's not a memory error if you coded the discount feature so that people can apply more than one 10%-off coupon to an order, and as a result the nine different "10% off" offers that marketing seeded sum to a 90% discount which bankrupts you.

An example race condition would be Mike and Sarah both wake up, notice there's no milk and decide to grab milk on the way home that evening, they both work a full day, drop past the store and arrive home with a carton of milk. But, now there are two cartons of milk, which is too much milk. Oops. This is called a "Time of Check versus Time of Use" race or ToCToU race.

(Safe) Rust does prevent Data Races which can be seen as a specific very weird type of Race Condition, unlike other race conditions a Data Race reflects a difference between how humans understand computers in order to write computer software and how the machine actually works.

Humans are used to experiencing a world in which things happen in order. We write software for that intuitive world, this is called "Sequential consistency". A happens before B, or B happens before A, one of these must be true. Mustn't it? But actually a modern multi-core CPU cannot afford sequential consistency, we give that up for more speed, so any appearance of sequential consistency in concurrent software is an illusion for our comfort. (Safe) Rust promises the illusion is maintained, if you try to shatter it the compiler is going to say "No", languages like C or C++ just say well, if you accidentally destroy the illusion your program might do absolutely anything at all, good luck with that.
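The milk story above maps directly onto the filesystem. A minimal sketch in Rust (hypothetical path and helper names, not from any real codebase): the racy version checks `exists()` and then writes, leaving a window in which another process can create or replace the path; the atomic version uses `create_new` (O_EXCL underneath), which collapses the check and the create into a single kernel operation.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::path::Path;

// Racy: the "check" (exists) and the "use" (write) are two separate
// system calls, so another process can create or symlink `path` in between.
fn create_racy(path: &Path, data: &[u8]) -> std::io::Result<()> {
    if path.exists() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::AlreadyExists,
            "path already exists",
        ));
    }
    // <-- TOCTOU window is right here
    std::fs::write(path, data)
}

// Atomic: create_new(true) maps to O_CREAT|O_EXCL, so check-and-create
// happen as one operation and no other process can sneak in between.
fn create_atomic(path: &Path, data: &[u8]) -> std::io::Result<()> {
    let mut f = OpenOptions::new().write(true).create_new(true).open(path)?;
    f.write_all(data)
}

fn main() -> std::io::Result<()> {
    let path = Path::new("/tmp/toctou-demo");
    let _ = std::fs::remove_file(path); // clean slate for the demo
    create_racy(path, b"first")?; // "works" only because nobody raced us
    assert!(create_atomic(path, b"second").is_err()); // file already exists
    std::fs::remove_file(path)
}
```

Note that safe Rust happily compiles both versions; the language has no way to know that two system calls were supposed to be one.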


I like your idea of illustrating a race condition with buying milk - that should become the default method of explaining them. (Either that or bartenders serving customers which is my usual method of understanding work queues)

It can be about any resource. You get it when two concurrent functions access the resource without a queue, atomic operation or wait, and one of them modifies it.
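A minimal Rust sketch of the synchronized case (hypothetical counter example): the unsynchronized form, two threads mutating the same counter, is exactly the data-race subset discussed above and won't even compile in safe Rust; wrapping the resource in a Mutex makes each check-and-modify a single step under the lock.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared resource. Without the Mutex, safe Rust rejects the program:
    // two threads mutating the same u64 is a data race by definition.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..2)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..100_000 {
                    // Each increment happens while holding the lock,
                    // so no update can be lost to interleaving.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
    // Deterministic despite concurrency: 2 threads * 100_000 increments.
    assert_eq!(*counter.lock().unwrap(), 200_000);
}
```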

> (a Rust rewrite of the standard GNU coreutils -- ls, cp, rm, cat, sort, etc), which are installed by default in Ubuntu 25.10.

Zero benefits and only risks involved. Users are forced to choose between a worse new version and an older version that will no longer be supported. Like systemd all over again.

It feels like there is a phenomenon where software devs (especially in open source) have to keep developing even when doing nothing would result in a better product. Like there's some monetization incentive to keep touching the thing so that you can get paid.


The happy path for me is with Erlang: due to the concurrency model, the blast radius of an error is exceptionally small, so the programming style is to let things crash if they go wrong. So really you are writing the happy-path code only (most of the time). Combine this approach with some very robust tests (does this thing pass the tests / behave how we need it to?) and you're close to the point of not really caring about the implementation at all.

Of course, I still do, but I could see not caring being possible down the road with such architectures.


Eh, maybe. I work on a big, mature, production Erlang system which has millions of processes per cluster, and while the author is right in theory, these are quite extreme edge cases and I've never tripped over them.

Sure, if you design a shit system that depends on ETS for shared state there are dangers, so maybe don't do that?

I’d still rather be writing this system in Erlang than in another language, where the footguns are bigger.


Treating ETS as the only footgun misses a few ugly ones, because a bad mailbox backlog or a gen_server chain can turn local slowness into cluster-wide pain before anything actually crashes.

Erlang does make some failure modes less nasty. It also hides latency debt well enough that people think the model saved them, right up until one overloaded process turns the whole system into a distributed whodunit.


Oh, I didn’t mean it’s the only one.

There are a bunch for sure! Turns out writing concurrent reliable distributed systems is really hard. I’ve not found anything else that makes them easier to deal with than BEAM though.

I’d switch if something better came along and happened to also be as battle hardened. I’ll be waiting a while, I think.


In ten years of BEAM I've written a deadlock once, and zero times in prod.

I'd say it's better to default to call instead of pushing people to use cast because it won't block.


Generally agree; all the problems I've had with Erlang have been related to full mailboxes or having one process type handling too many kinds of different messages, etc.

These are manageable, but I really, really stress and soak test my releases (max possible load / redline for 48+ hours) before they go out, and since doing that things have been fairly fine; you can usually spot such issues in your metrics that way.


Boots on the ground? Nukes? Internment camps? It can go much further.

Yeah, but the current administration making their own people's lives worse by starting a war and inviting attacks such as this one wouldn't manufacture any consent for those things.

If anything, it would manufacture opposition. The US general public blames the administration for any negative consequences resulting from the administration's war of choice: Attacks, high energy prices, further loss of US credibility, etc.


While true, the engineer would have to be a weapons-grade tit to get themself in such legal trouble, and honestly deserves whatever criminal charges come their way.

Caddy has Tailscale integration too, I think, so your foo.bar.ts.net “just works”.

You could easily run all of that on a rpi…

No, you definitely can’t. Or at least, not on a 3B+. I wound up buying https://www.amazon.com/ACEMAGICIAN-M1-Computers-Computer-3-2... which was $50 less a month ago (!!) because so many things don’t fit well. Immich is amazing, but you wouldn’t get a lot of the coolness of it if you can’t run the AI bits, which are quite heavy.

It’s even dumber than that.

Let’s not forget that if even three engineers work on this migration for only a week, your cost is now tens of thousands for this couple-hundred-euro cost saving.

(assuming avg all-in engineer costs in europe)

It mostly makes no sense to optimise cost for infrastructure; it does make sense to make it faster, since almost all your spend is on engineers.

Spending thousands to save hundreds is not a healthy business.
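A back-of-the-envelope version of that arithmetic, where the €180k all-in annual cost per engineer (salary plus taxes and overhead) is an assumed, illustrative figure, not a quoted one:

```rust
fn main() {
    // Assumed all-in cost (salary + taxes + overhead) per engineer, per year.
    let all_in_per_year: f64 = 180_000.0;
    let engineers = 3.0;
    let weeks = 1.0;

    // One week of three engineers' time: roughly €10k.
    let migration_cost = all_in_per_year / 52.0 * weeks * engineers;

    // The "couple hundred euros" of infrastructure savings, per year.
    let yearly_saving = 300.0;

    println!("migration cost: ~{migration_cost:.0} EUR");
    println!("payback period: ~{:.0} years", migration_cost / yearly_saving);
}
```

Under those assumptions the migration pays for itself in roughly three decades, which is the point being made: the engineering time dwarfs the infrastructure saving.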


I’ve never really found there to be all that much of a market for specifically C++ developers. If you do decide to look for work more seriously, I wouldn’t be too hung up on language: if you can code in one, you can pretty much code in all of them. I’ve never hired a developer for specific language skill outside of a few rare cases where it’s something really specific we are trying to fix (e.g. Erlang or something), and even then it wouldn’t be a complete showstopper.

YMMV but that’s coming from a guy who writes in at least 3 languages at current $dayjob.


This. Especially now with LLMs, the value of grinding C++ trivia gets close to nothing.

“Oh, you know 12 ways to initialize a value in C++? That’s cute”


It’s bananas how many utterly crap candidates we get at the moment too.

One low point was an interview with a guy who connected with his current work laptop and couldn’t find the $ key on it for a basic scripting question.

I can’t make it make sense either.


Because the hordes of people who can't find the ` or | or $ on their keyboard outnumber competent people 100:1. I had this exact experience too; so frustrating. I moved to a strictly referral model where I pay my SWEs $10k if a candidate they refer gets hired.

What is their plan once they're hired somehow? Say inspiring words and delegate 100% of their actual work to others? AI?

We had a guy interview for a senior C++ position who hadn't used Git before and had no idea what a "merge" was.

A person who has only real industry experience can very easily have never needed Git at all. I know this shocks people who only have hobby or startup experience, but Git works very poorly at large scale, and there are many big organizations that don't use it, either because their solutions predate Git or because they are newer companies that simply have good taste.

I’ve been in dev since CVS was a thing and did migrations across all of them, really: SVN, hg, and finally Git. I’ve worked in a broad swathe of industries and never once over 25 years have I ever seen an org not using source code management.

I just don’t believe this at all.


I think you misread what I said.

It's almost like the current system favours cheaters spamming fake CVs, while most real people have imperfections (e.g. gaps in employment).
