
(This post is directed to all software that shoves features like this in my face, and especially Microsoft more than Firefox.)

My problem with all software that shoves these AI features in my face is that I don’t use features under duress.

If you interrupt what I’m doing to push me to use a feature, I won’t use it. If you’re a web designer and you block the page to tell me to sign up for an account, I close the tab and vow to never create an account. If you stop what I’m doing to ask me to rate your app, I’m going to give it 1 star. Et cetera.

Now I’ll be the first to admit this is childish… it’s a flaw in my character. When I feel pushed, I push back, and software pushing me makes me irrationally angry for reasons I can’t quite articulate. In some ways I wish I wasn’t like this. But I can’t be alone. I’m certain there is a non-negligible number of people like me, and when a browser immediately shoves AI features in my face on first launch, well, the first thing I’m going to do is disable them.

The especially tragic part is that I personally find LLMs useful! And I’m at the point where I sorta want to install a Firefox extension for ChatGPT now. But the actual browser AI features were pushed on me in a way that made me feel violated, so I can’t use them on principle. Maybe in a few years I guess.

If these companies had just dialed it back several notches, I would have had the curiosity to try these features out myself, and I’d likely be using them by now. But the way they’ve tried so hard to force them on me has destroyed my trust, and now, not only am I not using whatever feature they promote, I hate their product more than I otherwise would.

Firefox isn’t actually that bad here, and now that there’s a simple kill switch, I may actually try their chatbot sidebar thing. But for companies like MS, I will never, ever, ever use any of their AI features for the reasons above. (I’ve literally uninstalled Windows now, it’s gotten so bad.)


Tragically, capitalism drives corporate behavior.

Whatever works on large numbers of users is what will happen.

But overall, you and I (and many) will try to push back and insist on consent.

The sign-up form with an unchecked "sign me up for your newsletter" option.

The first-run experience with a question... "do you want us to notify you of new features?"

But this is not the norm, and even if good actors get rewarded by a few childish customers, bad actors seem to get rewarded much more by a massive infusion of funds.


IME, a 1.0 version is usually when a project starts taking backwards compatibility seriously. A pre-1.0 library may be plenty stable enough in terms of bugs, but being pre-1.0 means they’re likely going to change their mind on the API contract at some point.

That is the major problem for me… I don’t actually mind that much if a library has bugs… those can always be fixed. But when a library does a total 180 on the API contract, or removes things, or just changes their mind on what the abstraction should be (often it feels like they’re just feng shui’ing things), that’s a major problem. And it’s what people mean when they say “immaturity”: if I build on top of this, is it all going to break horribly at some point in the future when the author changes their mind?

People often say “just don’t update then”, but that’s (a) a surefire way to accumulate tech debt in your codebase (because the day may come when you must update), and (b) a way to miss what could be critical updates to the library.


I don’t think GP is moving the goalposts at all, rather I think a lot of people are willfully misrepresenting GP’s point.

Rust-to-Rust code should be able to be dynamically linked with an ABI that has better safety guarantees than the C ABI. That’s the point. You can’t even express an Option<T> via the C ABI, let alone the myriad other things Rust puts together to make it a safe language.

You can look to Swift for prior art on how this can be done: https://faultlore.com/blah/swift-abi/

It would be very hard to accomplish. Apple was extremely motivated to give Swift a resilient/stable ABI, because they wanted to author system frameworks in Swift and have third parties use them from Swift code (including globally updating said frameworks without any apps needing to recompile). They wanted these frameworks to feel like idiomatic Swift code too, not just a bunch of pointers and manual allocation. There’s a good argument that (1) Rust doesn’t consider this an important enough feature and (2) it doesn’t have the resources to accomplish it even if it did. But if you could wave a magic wand and make it “done”, it would be huge for Rust adoption.


> You can’t even express an Option<T> via the C ABI

But you can express Option<Foo> for a concrete Foo. Do you really need any more than that?


> But you can express Option<Foo> for a concrete Foo

I don’t think that’s true?

https://users.rust-lang.org/t/option-is-ffi-safe-or-not/2982...

You could say that a pointer can be transmuted to an Option<&T>, because the niche optimization makes Option<&T> use null as its None value; that particular optimization is guaranteed for references, Box, NonNull, and function pointers, so those cases are FFI-safe. But it doesn’t generalize to other payloads: for instance, Option<u32> has no spare bit pattern to hide None in, so it needs a separate discriminant whose layout is unspecified. You could get lucky if you launder your Option<T> through repr(C) and the compiler versions match and don’t mangle the internal representation, but there are no guarantees here, since the ABI isn’t stable. (You even get a warning if you put a struct that doesn’t have a stable repr(C) in an extern function signature.)
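A quick sketch of that distinction, checkable with size_of (the equality in the first assert is the guaranteed pointer-like case; the second shows a payload with no niche to reuse):

```rust
use std::mem::size_of;

fn main() {
    // Guaranteed: Option<&T> is pointer-sized, with None represented as null.
    assert_eq!(size_of::<Option<&u8>>(), size_of::<*const u8>());

    // Not guaranteed FFI-safe: Option<u32> needs an out-of-band discriminant,
    // so it is larger than u32 and its exact layout is unspecified.
    assert!(size_of::<Option<u32>>() > size_of::<u32>());
}
```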


You're right that there isn't a single standard convention for representing e.g. Option<bool>, but that's just as true of C. You'd just define a repr(C) compatible object that can be converted to or from Option<Foo>, and pass that through the ABI interface, while the conversion step would happen internally and transparently on both sides. That kind of marshaling is ubiquitous when using FFI.
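A sketch of that marshaling pattern (CMaybeI32 is a hypothetical name, not a standard type; only the repr(C) struct crosses the boundary, and the Option conversion happens on each side):

```rust
/// Hypothetical repr(C) stand-in for Option<i32> at an FFI boundary.
#[repr(C)]
pub struct CMaybeI32 {
    has_value: bool, // repr(C) bool matches C's _Bool
    value: i32,      // only meaningful when has_value is true
}

impl From<Option<i32>> for CMaybeI32 {
    fn from(opt: Option<i32>) -> Self {
        match opt {
            Some(v) => CMaybeI32 { has_value: true, value: v },
            None => CMaybeI32 { has_value: false, value: 0 },
        }
    }
}

impl From<CMaybeI32> for Option<i32> {
    fn from(c: CMaybeI32) -> Self {
        if c.has_value { Some(c.value) } else { None }
    }
}

fn main() {
    // Round-trip through the C-compatible representation.
    let some: Option<i32> = CMaybeI32::from(Some(7)).into();
    assert_eq!(some, Some(7));
    let none: Option<i32> = CMaybeI32::from(None).into();
    assert_eq!(none, None);
}
```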

> but that's just as true of C

Right, that's the whole point of this thread. The only stable ABI rust has is one where you can only use C's features at the boundaries. It would be really nice if that wasn't the case (ie. if you could express "real" rust types at a stable ABI boundary.)

As OP said, "I don't think deflecting by saying "but C is no safer" is super interesting". People seem intent on steering that conversation that way anyway, I guess.


> You can look to Swift for prior art on how this can be done: https://faultlore.com/blah/swift-abi/

> It would be very hard to accomplish.

Since Rust cares very much about zero-overhead abstractions and performance, I would guess if something like this were to be implemented, it would have to be via some optional (crate/module/function?) attributes, and the default would remain the existing monomorphization style of code generation.


Swift’s approach still monomorphizes within a binary, and only has runtime costs when calling code across a dylib boundary. I think rust could do something like this as well.

> I don’t think GP is moving the goalposts at all

Thank you :-)

> It would be very hard to accomplish.

Yeah it's a super hard problem especially when you provide safety using the type system!

The work the Swift team did here is hella impressive.

> But if you could wave a magic wand and make it “done”, it would be huge for rust adoption.

Yeah!


> IMO, it's best to keep things that are "your fault" (e.g. produced by your editor or OS) in your global gitignore, and only put things that are "the repository's fault" (e.g. build artifacts, test coverage reports) in the repository's gitignore file.

Very well put. This should be in the git-ignore manpage.
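For reference, the “your fault” half of that split has a dedicated home: git’s per-user excludes file, configured via core.excludesFile (the ~/.gitignore_global path here is conventional, not required):

```shell
# Point git at a global, per-user ignore file for editor/OS cruft
git config --global core.excludesFile ~/.gitignore_global

# Things that are "your fault" go here, not in the repository
echo '.DS_Store' >> ~/.gitignore_global
echo '*.swp'     >> ~/.gitignore_global
```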


Was this translated automatically from C? I picked a spot totally at random and saw in https://github.com/Ragnaroek/iron-wolf/blob/main/src/act1.rs in place_item_type:

    let mut found_info = None;
    for info in &STAT_INFO {
        if info.kind == item_type {
            found_info = Some(info);
            break;
        }
    }
When typically in rust this is just:

    let found_info = STAT_INFO.iter().find(|info| info.kind == item_type);
Now I want to go through and feng shui all the code to look more like idiomatic Rust, just to waste some time on a Saturday...

(equivalent C file: https://github.com/id-Software/wolf3d/blob/master/WOLFSRC/WL... )

> Was this translated automatically from C?

I'll note that when I convert code between languages, I often go out of my way to minimize on-the-fly refactoring, instead relying on a much more mechanical, 1:1 style. The result might not be idiomatic in the target language, but the bugs tend to be a bit fewer and shallower, and it assists with debugging the unfamiliar code when there are bugs - careful side-by-side comparison will make the mistakes clear even when I don't actually yet grok what the code is doing.

That's not to say that the code should be left in such a state permanently, but I'll note there are significantly more changes in function structure here than I'd personally put into an initial C-to-Rust rewrite.

The author of this rewrite appears to be taking a different approach, understanding the codebase in detail and porting it bit by bit, refactoring at least some along the way. Here's the commit that introduced that fn, doesn't look like automatic translation to me: https://github.com/Ragnaroek/iron-wolf/commit/9014fcd6eb7b10...


I actually find 1:1 to be helpful when learning a language.

How debug-able is the internals of the rust lambda version?

I will often write the code so I can simply insert a break point for debugging versus pure anonymous and flow-style functions.

C# example:

    #if DEBUG
    const string TestPoint = "xxxx";
    #endif

    var filtered = items.Where(x =>
    {
        #if DEBUG
        if (x.Name == TestPoint)
            x.ToString(); // a statement to set a breakpoint on
        #endif
        .....
    });
vs

    var filtered = items.Where(x => ....);

As a non-Rust guy, I keep writing the example above. I didn't even know about the second option!

If you do that, please share a link so I can learn from you! This is awesome!
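On the Rust side, one breakpoint-friendly trick is the iterator adapter `inspect`, which gives you a statement-shaped spot inside a chain without restructuring it into a loop (the data here is illustrative):

```rust
fn main() {
    let items = vec!["a", "bb", "ccc"];

    let filtered: Vec<&str> = items
        .iter()
        .inspect(|s| {
            // A plain statement you can break on (or dbg! inside),
            // conditioned on the value you're hunting for.
            if s.len() == 2 {
                // dbg!(s);
            }
        })
        .filter(|s| s.len() > 1)
        .copied()
        .collect();

    assert_eq!(filtered, vec!["bb", "ccc"]);
}
```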


Look into Rust iterators and their associated functions for the Rust-specific implementation. For the extreme end of this style of programming, look into functional programming à la lambda calculus and Haskell, if you’d like to learn more about it.

Yes, the code is _very, very_ close to the C-Code. All over the place.

Sounds like something an LLM agent might be good at?

It probably would. But this port was mostly done to understand Wolfenstein 3D in detail, not for the source port itself. I could have generated big parts of the code, but I wouldn't have learned anything by doing that.

Literally nothing to do with that distinction.

> The question is: Whose job is it to manage the nulls. The language? Or the programmer?

> These languages are like the little Dutch boy sticking his fingers in the dike. Every time there’s a new kind of bug, we add a language feature to prevent that kind of bug. And so these languages accumulate more and more fingers in holes in dikes. The problem is, eventually you run out of fingers and toes.

I'm going to try my best to hide my rage at just how awful this whole article is, and try to focus my criticism. I can imagine that reasonable people can disagree as to whether `try` should be required to call a function that throws, or whether classes should be sealed by default.

But good god man, the null reference problem is so obvious, it's plain and simply a bug in the type system of every language that has it. There's basically no room for disagreement here: If a function accepts a String, and you can pass null to it, that's a hole in the type system. Because null can't be a String. It doesn't adhere to String's contract. If you try to call .length() on it (or whatever), your program crashes.

The only excuse we've had in the past is that expressing "optional" values is hard to do in a language that doesn't have sum types and generics. And although we could've always special-cased the concept of "Optional value of type T" in languages via special syntax (like Kotlin or Swift do, although they do have sum types and generics), no language seems to have done this... the only languages that seem to support Optionals are languages that do have sum types and generics. So I get it, it's "hard to do" for a language. And some languages value simplicity so much that it's not worth it to them.

But nowadays (and even in 2017) there's simply no excuse any more. If you can pass `null` to a function that expects a valid reference, that language is broken. Fixing this is not something you lump in with "adding a language feature for every class of bug", it's simply the correct behavior a language should have for references.
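A minimal Rust sketch of the distinction being argued for: a plain reference type cannot be null, and absence is a separate type the caller is forced to acknowledge.

```rust
fn shout(s: &str) -> String {
    // s is guaranteed to be a valid string: no null check possible or needed.
    s.to_uppercase()
}

fn maybe_shout(s: Option<&str>) -> Option<String> {
    // Absence is spelled out in the type and must be handled explicitly.
    s.map(|v| v.to_uppercase())
}

fn main() {
    assert_eq!(shout("hi"), "HI");
    assert_eq!(maybe_shout(Some("hi")), Some("HI".to_string()));
    assert_eq!(maybe_shout(None), None);
}
```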


One thing that distinguishes macOS here is that the mach kernel has the concept of “vouchers” which helps the scheduler understand logical calls across IPC boundaries. So if you have a high-priority (UserInitiated) process, and it makes an IPC call out to a daemon that is usually a low-priority background daemon, the high-priority process passes a voucher to the low-priority one, which allows the daemon’s ipc handling thread to run high-priority (and thus access P-cores) so long as it’s holding the voucher.

This lets Apple architect things as small, single-responsibility processes, but make their priority dynamic, such that they’re usually low-priority unless a foreground user process is blocked on their work. I’m not sure the Linux kernel has this.


That is actually quite simple and nifty. It reminds me of the four priority levels RPC requests can have within the Google stack: from 0, “if this fails, the user gets a big fat error,” down to 3, “we don’t care if this fails, because we’ll run the analysis job again in a month or so.”

IIRC in macOS you do need to pass the voucher, it isn’t inherited automatically. Linux has no knowledge of it, so first it has to be introduced as a concept and then apps have to start using it.


Being explicit is a good thing, especially for async threads that may handle work for many different clients with different priorities, and may delegate work to other processes.


There is automatic priority donation across a handful of APIs.


This sounds like Solaris doors. The remainder of the time slice of the door client is given to the door server.


Vouchers are related to turnstiles, which are from Solaris.


This is also how Binder works in Android.


> Now, those 600 processes and 2000 threads are blasting thousands of log entries per second, with dozens of errors happening in unrecognizable daemons doing thrice-delegated work.

This is the kind of thing that makes me want to grab Craig Federighi by the scruff and rub his nose in it. Every event that’s scrolling by here, an engineer thought was a bad enough scenario to log it at Error level. There should be zero of these on a standard customer install. How many of these are legitimate bugs? Do they even know? (Hahaha, of course they don’t.)

Something about the invisibility of background daemons makes them like flypaper for really stupid, face-palm level bugs. Because approximately zero customers look at the console errors and the crash files, they’re just sort of invisible and tolerated. Nobody seems to give a damn at Apple any more.


Are you sure they don’t get sent to Apple as part of some telemetry / diagnostics implementation?


Oh they absolutely are. But Apple clearly doesn’t care enough to actually fix them. They seem to get worse every release.


You don't need them to be sent to Apple. And if errors in console get sent to Apple, it's surely filtered through a heavy suppression list. You can open the Errors and Faults view in Console on any Mac and see many errors and faults every second.

They could start attacking those common errors first, so that a typical Mac system has no regular errors or faults showing up. Then, you could start looking at errors which show up on weirdly configured end user systems, when you've gotten rid of all the noise.

But as long as every system produces tens of thousands of errors and faults every day, it's clear that nobody cares about fixing any of that.


I wouldn't call UBI a "game plan" so much as a thing people can point to in order to justify their actions to themselves. It helps you pretend you're not ruining people's lives, because you can point to UBI as the escape hatch that will let them continue to have an existence. It's not surprising that so many in the tech industry are proponents of UBI. It helps them sleep at night.

Never mind that UBI has never actually existed, it probably never will exist, and it's very, very likely that it won't even work.

People need to face the possibility that we will destroy people's way of life the way we're headed, and to not just wave their hands and pretend that UBI will solve everything.

(Edited to tone back the certainty in the language: I'm not actually sure whether AI will be a net positive or negative on most people's lives, but I just think it's dishonest to say "it's ok, UBI will save them.")


OK, maybe take it down a few notches?

I'm only "in the tech industry" in the literal sense, not in the cultural sense. I work in academia, making programs for professors and students, and I think the stuff "the tech industry" is doing is as rotten as you appear to.

UBI has never existed because the level of production required to support it has only just started to exist. (It's possible that we're actually not quite there, but that's something we can only determine by trying it out—and if we're not, then I'm 100% confident we can get there with further refinement of existing processes.) If we have the political will to actually, genuinely do UBI—enough to support people's basic needs of food, clothing, shelter, and a little bit of buffer, without any kind of means testing or similar requirements—then it's very, very likely that it will work. All the pilot programs give very positive data.

I'm not pushing UBI because I think it's a fix to the problem of automation. I'm pushing UBI because I think it's the fulfillment of the promise of automation.


There's no reason why UBI wouldn't work.

The reason why it doesn't exist is that, for all that those in positions of power love to talk about it, they very consistently shoot down any actual attempt to implement it. For starters, it'd mean much higher taxes, and especially higher taxes on those very people (who currently pay lower rates on capital gains than people who actually produce value pay on their wages). When was the last time you saw one of the Big Tech luminaries advocate for higher capital gains taxes?

