They have, but they also just announced this week that business and enterprise plans are switching Codex from usage quotas to token-based pricing, and I would expect that to eventually propagate to all their plans for all the same reasons.
I’d be surprised if that propagated to personal subscription plans, simply because it would put them at a huge competitive disadvantage against Anthropic, which they’ve already signaled they care about by saying they allow third-party harnesses. But I wouldn’t be surprised if they required third-party harnesses to use per-token billing, since that’d put them on par with Anthropic.
You're not looking at the albedo of the solar panels in isolation though, you're comparing it to asphalt and cars. Typical solar panels have an albedo of ~0.3. Asphalt around ~0.05.
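As a back-of-envelope comparison using the approximate albedo figures above (the fraction of sunlight absorbed is roughly 1 − albedo; `absorbed_fraction` is just an illustrative helper):

```rust
// Fraction of incoming sunlight absorbed, given an albedo (reflectivity).
fn absorbed_fraction(albedo: f64) -> f64 {
    1.0 - albedo
}

fn main() {
    let panel = absorbed_fraction(0.3);    // ~0.70 of sunlight absorbed
    let asphalt = absorbed_fraction(0.05); // ~0.95 of sunlight absorbed
    println!(
        "panels absorb ~{:.0}% of sunlight vs ~{:.0}% for asphalt",
        panel * 100.0,
        asphalt * 100.0
    );
}
```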
Using a release less than two months old is hardly “so far behind”. The 1.24 series had considerable regressions that took a number of patch releases to fix; it stands to reason that the same would be true of newer releases. Given that miscompilations were still being fixed as late as 1.25.8, and that 1.25 brought in large changesets for the new experimental GC, sticking with 1.24 while it was still getting patches a mere handful of weeks ago is not unreasonable.
Ubuntu isn’t too big to target; if anything, its dominance makes it the obvious target. When you look at the trajectory over the years and some of Canonical’s decisions, it’s hard not to raise an eyebrow. Major distros like Ubuntu and Fedora didn’t scale globally without taking big-tech money, and money rarely comes with no strings attached. At some point, players like Microsoft are going to expect a return on that investment.
What fearmongering has the anti-systemd crowd been selling you? Genuinely curious because I wish I wasn't running systemd. My perspective is that the things they (we?) are saying are basically correct. But the service manager works well enough that most distros have accepted the downsides.
> it is really fearmongering when the systemd people literally founded a company to develop attestation for linux?
Considering it has changed nothing about what they actually work on in systemd, I would give this a yes. Every time I hear "they will do this or that", it just never really happens. So far it feels more like "the boy who cried wolf" than a slippery slope to me. But maybe I am missing something?
A lot of the devs have, here and there, added features for secure/measured boot and image-based OSes, and things that make those more usable to daily-drive (hermetic /usr/, UKIs, sysext, portable services, mkosi, DDIs, ...). Many of these make image-based systems more modifiable and user-accessible without compromising on the general security aspect.
If they really wanted to lock Linux users into a single blessed image from them, they would have had a better chance while Lennart was working at Microsoft (generally the only preinstalled Secure Boot CA) rather than by starting a "competing" company (which, from what I understand, targets a different niche).
This, and locking everyone down to a single blessed Linux distro would be... rather difficult, given how widespread Linux is and just how many distros exist. It is one thing for each distro to decide "Hey, let's use systemd". GNOME requires it, but that's GNOME; nothing stops you from using XFCE, or i3, or KDE, or... It is a totally different thing to make every Linux distro stop working (and have said distro go along with that) because it isn't the "blessed" one.

Microsoft can pull this off because they're Microsoft and have total control over one of the most dominant operating systems. Apple can pull this off because they're Apple and control everything from the hardware up. Linux is neither of these. I would go so far as to argue that the BSDs have a better chance of pulling off something like this than Lennart does. Red Hat may have a lot of influence in the Linux world, but it certainly doesn't have some secret god-mode switch it can flip to universally make every distro conform to its wants and desires.
> it is really fearmongering when the systemd people literally founded a company to develop attestation for linux?
Can you prove beyond a reasonable doubt that they intend to force this on you without any way of disabling it, or that they have already done so? Because unless they plan to do this (and you have concrete proof of such and not just "well they could do this" claims) or they have already done it across a significant portion of the Linux distribution ecosystem (and no, distros voluntarily switching to systemd is not forcing anyone to do anything), this is fearmongering. Simple as that.
> especially with the really short name limit of only 24 characters.
And with no meaningful separator characters available! No dashes, underscores, or dots; numbers and lowercase letters only. At least S3 and GCS allow dashes, so you can put a little organization prefix on them or something and not look like complete gibberish.
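A quick sketch of the constraint as described above (lowercase ASCII letters and digits only, at most 24 characters; `is_valid_name` and the example names are hypothetical, and the real service rules may have further restrictions):

```rust
// Encode the naming rules described in the comment: lowercase ASCII
// letters and digits only, non-empty, at most 24 characters.
fn is_valid_name(name: &str) -> bool {
    !name.is_empty()
        && name.len() <= 24
        && name.chars().all(|c| c.is_ascii_lowercase() || c.is_ascii_digit())
}

fn main() {
    assert!(is_valid_name("contosoprodlogs01"));          // ok, but hard to read
    assert!(!is_valid_name("contoso-prod-logs"));         // dashes rejected
    assert!(!is_valid_name("contosoproductionlogsarchive")); // 28 chars, too long
}
```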
What a ridiculous take. Many people loudly raised the question and objected to the practice from the beginning, but a handful of companies ignored the objections and ran faster than the legal system. If they were in the wrong, legally or morally, they still deserve to face repercussions for it.
it is a take, ridiculous or not. the fact that you rage against it implies it's not as improbable as you may want it to be. besides, ridiculousness is a very subjective matter, right? many things are super ridiculous in 2026 from a 2020s perspective, and this just piles on top.

to me it's superbly ridiculous to shun the comment, though. but we'll be having this split for a while, that's for sure.
It is a measure of the intended level of care that the users of your interface have to take. If there's no unsafe in the interface, then that implies that the library has only provided safe interfaces, even if it uses unsafe internally, and that the interface exposed enforces all necessary invariants.
It is absolutely a useful distinction on whether your users need to deal with unsafe themselves or not.
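A minimal sketch of that idea: the `unsafe` stays internal, and the function enforces the invariant at the boundary, so the public interface can be safe (`first_byte` is an illustrative example, not from any particular crate):

```rust
// Unsafe used internally, behind a safe public interface. Callers
// never write `unsafe`; the function upholds the invariant itself.
fn first_byte(bytes: &[u8]) -> Option<u8> {
    if bytes.is_empty() {
        None
    } else {
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }
}

fn main() {
    assert_eq!(first_byte(&[42, 7]), Some(42));
    assert_eq!(first_byte(&[]), None);
}
```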
It's useful, to be sure, but I wouldn't want to use a library with a safe public interface that is mostly unsafe underneath (unless it's a -sys crate, of course). I think "this crate has no unsafe code" or "this crate has a minimal amount of carefully audited unsafe code" are good things to see, in general.
Sure, it's a useful distinction for whether users need to care about safety but not whether the underlying code is safe itself, which is what I wrote about.
Little to no unsafe internal code, with what little there is verified, is the bar for many Rust reimplementations. It's also what keeps the code memory-safe.
I guess I don't write enough rust to say this with confidence, but isn't that the bare minimum? I find it difficult to believe the rust community would accept using a library where the API requires unsafe.
Not at all. Some things are fundamentally unsafe. mmap is inherently unsafe, but that doesn’t mean a library for it shouldn’t exist.
If you’re thinking of higher level libraries, involving http, html, more typical file operations, etc, what you’re saying may generally be true. But if you’re dealing with Direct Memory Access, MCU peripherals, device drivers, etc, some or all of those libraries have two options: accept unsafe in the public interface, or simply don’t exist.
(I guess there’s a third option: lie about the unsafety and mark things as safe when they fundamentally, inherently are not and cannot be safe)
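A sketch of the honest option: when the library cannot check the invariant itself, it passes the obligation to the caller via `unsafe fn`, as an mmap-style wrapper would have to (`view_raw` is an illustrative example, not a real crate's API):

```rust
use std::slice;

/// Borrow raw memory as a byte slice.
///
/// # Safety
/// `ptr` must be non-null, aligned, and valid for reads of `len`
/// bytes for the duration of the returned borrow. The library cannot
/// verify this, so the contract is pushed to the caller.
unsafe fn view_raw<'a>(ptr: *const u8, len: usize) -> &'a [u8] {
    slice::from_raw_parts(ptr, len)
}

fn main() {
    let data = [1u8, 2, 3];
    // SAFETY: `data` is live and exactly `data.len()` bytes long.
    let view = unsafe { view_raw(data.as_ptr(), data.len()) };
    assert_eq!(view, &[1, 2, 3]);
}
```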
> I guess I don't write enough rust to say this with confidence, but isn't that the bare minimum?
I have some experience and yes, unless you're putting out a library for specifically low-level behavior like manual memory management or FFI. Trivia about the unsafe fn keyword missed the point of my comment entirely.
They've all found ways to optimize for higher cache hit rates with their own harness: common system prompts and the like. And more users hitting the cache makes the cost of inference go down dramatically.

What bothers me about a lot of the discussion around here of providers disallowing other harnesses on subscription plans is the complete lack of awareness of how these economies of scale, from common caching practices across more users, enable the higher, cheaper quotas that subscriptions give you.
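A back-of-envelope sketch of the effect (prices and hit rates here are entirely hypothetical, not any provider's real numbers; cached input tokens are assumed to cost 10% of the full rate):

```rust
// Blended cost of `tokens` input tokens when a fraction of them hit
// a prompt-prefix cache that bills at a discounted per-token rate.
fn blended_input_cost(tokens: f64, cache_hit_frac: f64, full: f64, cached: f64) -> f64 {
    tokens * (cache_hit_frac * cached + (1.0 - cache_hit_frac) * full)
}

fn main() {
    let full = 3.0 / 1e6;   // hypothetical: $3 per 1M input tokens
    let cached = 0.3 / 1e6; // hypothetical: cached tokens at 10% of full price
    let cold = blended_input_cost(1e6, 0.0, full, cached); // $3.00
    let warm = blended_input_cost(1e6, 0.9, full, cached); // $0.57
    println!("cold: ${cold:.2} per 1M tokens, 90% cache hits: ${warm:.2}");
}
```

With a shared harness pushing hit rates that high, the provider's marginal cost per request drops by an order of magnitude, which is what funds the generous subscription quotas.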
I feel like a lot of this would go away if they made a different API for the “only for use with our client” subscriptions. A different API from the generic one, that moved some of their client behaviors up to the server seems like it would solve all this. People would still reverse engineer to use it in other tools but it would be less useful (due to the forced scaffolding instead of entirely generic completions API) and also ease the burden on their inference compute.
I’m sure they went with reusing the generic completions API to iterate faster and make it easier to support both subscription and pay-per-token users in the same client, but it feels like they’re burning trust/goodwill when a technical solution could at least be attempted.
> I feel like a lot of this would go away if they made a different API for the “only for use with our client” subscriptions.
They literally did exactly that. And that's what gets cut off for people who "reverse engineer to use it in other tools": Antigravity access, i.e. the private "only for use with our client" subscription - not the whole account, btw.
Nothing here is new or surprising; the problem has been the same since Anthropic released Claude Code and the Max subscriptions - the first thing people did then was try to authenticate regular API use with Claude Code tokens, so they wouldn't have to pay the API prices they were supposed to.
What I was getting at is that the current API is still a generic inference endpoint, just with OAuth instead of an API key. What I'm suggesting is that they move some of the client logic up behind the OAuth endpoint so it's no longer a generic inference endpoint (e.g. the system prompt is static, context management is done on the server, etc.). I assume they could get it to a point where it's no longer useful for a general-purpose client like OpenClaw.
cargo-audit is not quite at an equivalent level yet, it is lacking the specific features discussed in the post that identify the vulnerable parts of the API surface of a library. cargo-audit is like dependabot and others here in that it only tells you that you're using a version that was vulnerable, not that you're using a specific API that was vulnerable.
Sadly, since it relies on the Cargo.lock being correct, it is also affected by bugs that place dependencies in the Cargo.lock even though they are not compiled into the binary. E.g., weak features in Cargo currently cause unused dependencies to show up in the Cargo.lock.
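For reference, the weak-feature syntax in question looks like this (crate and feature names here are just illustrative):

```toml
[dependencies]
serde = { version = "1", optional = true }

[features]
# The `?` makes this a "weak" dependency feature: it enables serde's
# `derive` feature only if serde is enabled by something else. Even
# when serde ends up entirely unused, current Cargo still records it
# in Cargo.lock, producing the false positives described above.
json = ["serde?/derive"]
```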