1996 is not now. This comparison makes little to no sense.
I'm sure if Apple provided support for installing your own OS on their M series laptops it would be incredibly popular. And I don't need to guess at this using weird 1996 research on microkernels because Asahi Linux exists and clearly there is interest in it.
Do you forget what Apple in '96 was? Or are you saying that Tahoe is too polished for the Apple of '96?
Apple was not a bastion of quality in the 90's. They couldn't modernize the Mac OS, and it continued as little more than window dressing over what was released in the 80's. The Mac lineup was a horrible mess of barely different models that needed a Ph.D. to figure out what was different. The company was bleeding money and seriously close to bankruptcy.
The Apple of the mid 90's wishes it could release something like Tahoe.
Ya ok, unless you looked at it wrong, then it crashed.
OS 8 was a platinum theme over System 7. Which was a slightly better System 6, which wasn't significantly different than System 4.
System 7 was good for the time, OS 8 and 9 were not, and Apple's inability to improve the OS was really starting to show. Windows 95 was a more stable OS than OS 8. Tahoe is better.
Yeah - an OS that crashed every time you launched Netscape and you as an end user had to manually allocate memory to apps?
Not to mention that the OS itself was still mostly 68K emulated code even on PPC Macs and holding the mouse down over the menu caused all apps to stop running.
Apple circa 1996 would be charging for its updates and licensing out the software to Power Computing and UMAX. They were making a lot of "interesting" decisions.
> even after Donald Trump vowed to backstop trade through the key oil chokepoint
I mean you have to be completely insane if you take Trump's word for anything at this point. Pretty sure even people high up in the administration have given up on pretending that anything he says makes sense.
Presumably there is already a law around why I can't just go borrow a book from my library, type out some 95% regurgitated variant on my laptop, and then try to publish it somewhere?
Edit: I looked it up, and the thing that stops you from publishing a bootleg "Harold Potter and the Wizards Rock" is the legal framework around "The Abstractions Test".
I like that the language of "fueling" is used here instead of the typical causal framing, as though using AI means you will go insane.
I would completely agree that if you are already 1x delusional then AI will supercharge that into being 10x delusional real fast.
Granted you could argue access to the internet was already something like a 5x multiplier from baseline anyway with the prevalence of echo chamber communities. But now you can just create your own community with chatbots.
Hm. It shouldn’t be too hard to add something to models to make them do that, right? I guess for that they would need to know the user’s time zone?
Can one typically determine a user’s timezone in JavaScript without getting permissions? I feel like probably yes?
(I’m not imagining something that would strictly cut the user off, just something that would end messages with a suggestion to go to bed, and saying that it will be there in the morning.)
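For what it's worth, in a browser the Intl API reports the user's IANA time zone without any permission prompt, which would be enough for this. A minimal sketch (the late-night cutoff is just an illustrative assumption, not anything a real chatbot uses):

```javascript
// The Intl API exposes the IANA time zone with no permission prompt.
const tz = Intl.DateTimeFormat().resolvedOptions().timeZone;

// The local hour alone is enough to decide whether to append a
// "maybe get some sleep, I'll be here in the morning" nudge to a reply.
const hour = new Date().getHours();
const suggestSleep = hour >= 1 && hour < 5; // arbitrary cutoff, purely illustrative

console.log(tz, suggestSleep);
```

VPNs don't hide this, since it comes from the OS clock settings rather than the IP address, though a user can of course change their system time zone.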
Chatbots already have memory, and mine already knows my schedule and location. It doesn't even need to say anything directly, maybe just shorter replies, less enthusiasm for opening new topics. Letting conversation wind down naturally. I also like the idea of continuing topics in the morning, so if you write down your thoughts/worries, it could say "don't worry about this, we can discuss this next morning".
I know a few people who work 3rd shift. That is, people who have good reason to be up all night in their local timezone. They all sleep during times when everyone else around them is awake. While this is a small minority, it is enough that your scheme will not work.
I actually was considering those people. That’s part of why I suggested it shouldn’t be a hard cut-off, but just adding to the end of the messages.
Of course, one could add some sort of daily schedule feature thing so that if one has a different sleep schedule, one can specify that, but that would be more work to implement.
It's funny that you frame it that way, because it's the mirror of (IMO) one of their best features. When using one to debug something, you can just stop responding for a bit and it doesn't get impatient like a person might.
I think you're totally right that that's a risk for some people, I just hadn't considered it because I view them in exactly the opposite light.
Claude will routinely tell me to get some sleep and cuddle with my dog. I may mention the time offhandedly or say I'm winding down, but at least it will include conversation stoppers and decrease engagement.
from my (limited) experience of ChatGPT versus Claude, i get the same. ChatGPT will always add another "prompt" sentence at the end like "Do you want me to X?" while Claude just answers what i ask.
looking at my history recently, Claude's most recent response is literally just "Exactly the right move honestly — that's the whole point."
My understanding of LLMs with attention heads is that they function as a bit of a mirror. The context will shift from the initial conditions to the topic of conversation, and the topic is fed by the human in the loop.
So someone who likes to talk about themselves will get a conversation all about them. Someone talking about an ex is gonna get a whole pile of discussion about their ex.
... and someone depressed or suicidal, who keeps telling the system their own self-opinion, is going to end up with a conversation that reflects that self-opinion back on them as if it's coming from another mind in a conversation. Which is the opposite of what you want to provide for therapy for those conditions.
The real question to me here is not the computer. It's why there is such a segment of the population that is so willing to listen to a machine. Is it upbringing, societal, circumstance, mental health, genetic?
I know the Milgram obedience to authority experiments but a computer is not really an authority figure.
In a way this kind of reminds me of how in some religions or cultures, they may try to warn you away from using Ouija boards or Tarot, or really anything where you are doing divination. I suppose because in a way, it could lead to an uncharted exploration of heavy topics.
I’m not a heavy user of LLMs and I’m not sure how delusional I could be, but I wonder if a lot of these things could be prevented if people could only send like one or two follow up messages per conversation, and if the LLM’s memory was turned off. But then I suppose this would be really bad for the AI companies’ metrics. Not sure how it would impact healthy users’ productivity either. Any thoughts?
Not just the metrics, the actual utility. For the things the LLMs are good at, the context matters a lot; it's one of the things that makes them more than glorified ELIZA chatbots or simple Markov chains. To give a concrete example: LLMs underpin the code editing tools in things like Copilot. And all that context is key to allow the tool to "reason" through the structure of a codebase.
But they should probably come with a big warning label that says something to the effect of "IF YOU TALK ABOUT YOURSELF, THE NATURE OF THE MACHINE IS THAT IT WILL COME TO AGREE WITH WHAT YOU SAY."
The issue is that ultimately blaming people doesn't really solve things, unless it's genuinely a one-of-a-kind case. But if this happened once it's probably going to happen again, and this isn't the first such case of LLM hallucinations in law.
It's weird to think this way, because it's easy to just point at a person for a specific instance. But when you see something repeat over and over again, you need to consider that if your ultimate goal is to stop something from happening, you have to adjust the tools, even if the people using them were at fault in every case.
> You absolutely should be preventing users from being able to copy a private key!
> Asking you to put basic protections in place and collaborate with the ecosystem/industry is hardly "anti-user-choice mentality".
> the lack of identifying passkey provider attestation (which would allow RPs to block you, and something that I have previously rallied against but rethinking as of late because of these situations).
Does this guy deflate his neighbors' tires before going to work to save them from car accidents?
I cannot believe he has this ridiculous paternalistic behaviour while simultaneously having these bullet points on his personal website that he linked to.
> digital identity ● urban mobility
> user choice ● boston bruins
I'm curious how much this one guy, all on his own, has stalled passkey adoption.
In theory, this issue could never touch average users. It's only power users who use standalone open-source password managers. All the options normal users are funnelled into aren't going to expose passkeys in plain text (except maybe Firefox?), and thus aren't going to be phishable in any meaningful sense.
But this guy opted to tell the open-source community that having exportable passkeys is wrong, full stop, and that open-source implementations might get banned for allowing this, planting a gigantic red flag right next to the very idea of passkeys, making every single power user who sees that post (which is linked on every thread which touches on passkeys) either completely reject the idea, or approach it with extreme caution. And thus no power user will recommend it to anybody else, not to mention the general usability problems they have.
I guess if it weren't him, the same ideas would have been made clear in other ways.
I'm the guy you're talking about. Always easy to crap on people when you selectively quote what they said. The core pieces you left out are:
> I don't quite understand why requiring file protection/encryption can't be a temporary minimum bar here.
> or at a minimum require file protection/encryption.
If you think helping users to be safe online (which includes putting basic safeguards in place, like not leaving hundreds of unencrypted private keys on someone's desktop or downloads folder in plain text) isn't an important part of designing solutions for global scale, then we think about things very differently.
Where we see things differently is that I don't conflate *text stored inside a password manager* with *plaintext files left on someone's desktop or downloads folder*.
You clearly do, and even apply this philosophy to highly technical users. What I find ridiculous is that being able to copy sensitive information out of it is like 99% of what I do with password managers. It's the primary use case.
But it ultimately doesn't even matter because they contain nothing of value anyway. For example googling G0F6 in google patents yields this weird one from yesterday.
This shit patent is effectively claiming to have invented a "layer" that takes user prompts in a service, determines whether the prompts need to be responded to in "real time mode", and if so routes the prompt to an LLM that runs quickly and returns the results. (As opposed to some batched API, I suppose?)
I mean this is just routing requests based on whether the query is prioritized. It's a patent claiming to have invented an IF statement. Most patents are of this quality or worse.
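To make the point concrete, here is roughly the entirety of what the claim seems to describe. The model names and the shape of the result are made up for illustration; none of this comes from the patent's actual text:

```javascript
// The claimed "layer": route a prompt to a fast model when it needs a
// real-time answer, otherwise send it to a batched pipeline.
// That's the whole "invention".
function routePrompt(prompt, needsRealTime) {
  if (needsRealTime) {
    return { model: "fast-llm", mode: "realtime", prompt }; // hypothetical names
  }
  return { model: "batch-llm", mode: "batched", prompt };
}
```

One if statement, two return values.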
Might as well read viXra papers for better ideas. And I mean this sincerely, because at least they aren't as obfuscated, and the authors at least pretend to have ideas.