It's so important to remember that, unlike code, which can be reverted, most file system and application operations cannot be.
There's no sandboxing, no snapshots, no revision history, no rollbacks, nothing.
I expect to see many stories from parents, non-technical colleagues, and students who irreparably ruined their computer.
Edit: most comments are focused on pointing out that version control & file system snapshots exist: that's wonderful, but Claude Cowork does not use them.
For those of us who have built real systems at low levels, I think the alarm bells go off seeing a tool like this, particularly one targeted at non-technical users.
Frequency vs. convenience will determine how big of a deal this is in practice.
Cars have plenty of horror stories associated with them, but convenience keeps most people happily driving every day without a second thought.
Google can quarantine your life with an account ban, but plenty of people still use gmail for everything despite the stories.
So even if Claude cowork can go off the rails and turn your digital life upside down, as long as the stories are just online or "friend of a friend of a friend", people won't care much.
Considering the ubiquity and necessity of driving cars is overwhelmingly a result of intentional policy choices, irrespective of what people wanted or what was good for the public interest... actually that's quite a decent analogy for integrated LLM assistants.
People will use AI because other options keep getting worse and because it keeps getting harder to avoid using it. I don't think it's fair to characterize that as convenience though, personally. Like with cars, many people will be well aware of the negative externalities, the risk of harm to themselves, and the lack of personal agency caused by this tool and still use it because avoiding it will become costly to their everyday life.
I think of convenience as something that is typically a "bonus" on top of normal life. Something that becomes mandatory to avoid being left out of society no longer counts.
What has gotten worse without AI? I don't think writing or coding is inherently harder. Google search may be worse but I've heard Kagi is still pretty great. Apple Intelligence feels like it's easy to get rid of on their platforms, for better and worse. If you're using Windows that might get annoying, personally I just use LTSC.
The skills of writing and coding atrophy when replaced by generative AI. The more we use AI to do thinking in some domain, the less we will be able to do that thinking ourselves. It's not a perfect analogy for car infrastructure.
Yeah Kagi is good, but the web is increasingly dogshit, so if you're searching in a space where you don't already have trusted domains for high quality results, you may just end up being unable to find anything reliable even with a good engine.
I am a car enthusiast so don't think I'm off the deep end here, but I would definitely argue that people love their cars as a tool to work in the society we built with cars in mind. Most people aren't car enthusiasts, they're just driving to get to work, and if they could get to work for a $1 fare in 20 minutes on a clean, safe train they would probably do that instead.
Of course they wouldn't, owning and operating a plane is -incredibly- inconvenient. That's what we are discussing, tradeoffs of convenience and discomfort; you can't just completely ignore one reality to criticise the other (admitting some hypocrisy here, since that ideal train system mentioned earlier only exists in a few cities).
Is this some culture or region or climate related thing? I’ve never heard of BO brought up as a reason to avoid public transport or flying commercial in northern parts of Europe. Nor have I experienced any olfactory disturbance, apart from the occasional young man or woman going a tad overboard with perfume on the weekends.
Should we restructure society so that having a private airplane is easier and cheaper, but if you don't have one you'll have serious trouble in daily life?
No, people hate being trapped without a car in an environment built exclusively to serve cars. Our love of cars is largely just downstream of negative emotions like FOMO or indignation caused by the inability to imagine traveling by any other mode (because in most cases that's not even remotely feasible anymore).
That's what I am saying though. Anecdotes are the wrong thing to focus on, because if we just focused on anecdotes, we would all never leave our beds. People's choices are generally based on their personal experience, not really anecdotes online (although those can be totally crippling if you give in).
Car crashes are incredibly common, and likewise automotive deaths. But our personal experience keeps us driving every day, regardless of the stories.
Airbags, yes. But you can't just make it provably impossible for a car to crash into something and hurt/kill its occupants, other than not building it in the first place. Same with LLMs - you can't secure them like regular programs without destroying any utility they provide, because their power comes from the very thing that also makes them vulnerable.
And yet in the US 40,000 people still die on average every year. Per-capita it's definitely improving, but it's still way worse than it could/should be.
Yes, and a photo you put on your physical desktop will fade over time. Computers aren't like that, or at least we benefit greatly from them not being like that. If you tell your firewall to block traffic to port 80, you expect all such traffic to be blocked, not just the traffic that arrives in the moments when it wasn't distracted.
> So even if Claude cowork can go off the rails and turn your digital life upside down, as long as the stories are just online or "friend of a friend of a friend", people won't care much.
This is anecdotal, but "people" care quite a lot in the energy sector. I've helped build our own AI agent pool and roll it out to our employees. It's basically LibreChat with our in-house models, where people can easily set up base instruction sets and name their AIs funny things, but it's otherwise similar to using Claude or ChatGPT in a browser.
I'm not sure we're ever going to allow AIs access to filesystems; we barely allow people access to their own files as it is. Nothing that has happened in the past year has altered the way our C level views the security issues with AI in any direction other than being more restrictive. I imagine any business that cares about security (or is forced to care by legislation) isn't looking at this the way they do cars. You'd have to be very unlucky (or lucky?) to shut down the entire power grid of Europe with a car. You could basically do it with a well placed AI attack.
Ironically, you could just hack the physical components, which probably haven't had their firmware updated for 20 years. If you even need to hack it, because a lot of it frankly has built-in backdoors. That's a different story that nobody at the C level cares about, though.
Once upon a time, in the magical days of Windows 7, we had the Volume Shadow Copy Service (aka "Previous Versions") available by default, and it was so nice. I'm not using Windows anymore, and at least part of the reason is that it's just objectively less feature complete than it used to be 15 years ago.
Somewhat related is a concern I have in general as things get more "agentic", and it ties into the prompt injection concerns: without something like legally bullet-proof contracts, aren't we moving into territory where we're basically "employing" what could turn out to be "spies" at every level? From the personal (AI company staff having access to your personal data/prompts/chats), to business/corporate espionage, to domestic and international state-level actors who would also love to know what you are working on, what you are thinking and chatting about, and maybe what mental health challenges you are working through with an AI chat therapist.
I am not even certain if this issue can be solved since you are sending your prompts and activities to "someone else's computer", but I suspect if it is overlooked or hand-waved as insignificant, there will be a time when open, local models will become useful enough to allow most to jettison cloud AI providers.
I don't know about everyone else, but I am not at all confident in allowing access and sending my data to some AI company that may just do a rug pull once they have an actual virtual version of your mind in a kind of AI replication.
I'll just leave it at that point and not even go into the ramifications of that, e.g., "cybercrimes" being committed by "you", which is really the AI impersonator built based on everything you have told it and provide access to.
Q: What would prevent them from using git-style version control under the hood? The user doesn't have to understand git; Claude can use it for its own purposes.
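To illustrate what the question is asking for, here's a rough sketch of "git under the hood" for a single workspace directory (the path and commit messages are hypothetical):

```
cd ~/CoworkWorkspace                                        # hypothetical agent workspace
git init --quiet                                            # hidden .git the user never sees
git add -A && git commit -qm "baseline before agent run"

# ... agent creates/edits/deletes files ...

git add -A && git commit -qm "after agent action"

# If the user rejects the result, roll the workspace back one step:
git reset --hard HEAD~1
```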
I didn't actually check out the app, but some aspects of application state are hard to serialize, and some operations are not reversible by the application, e.g. sending an email. It doesn't seem trivial to accomplish this for all apps.
So maybe on some apps, but "all" is a difficult thing.
Maybe not for very broad definitions of OS state, but for specific files/folders/filesystems, this is trivial with FS-level snapshots and copy-on-write.
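As a concrete sketch of that on Btrfs, assuming the agent's working area lives on its own subvolume (paths here are hypothetical):

```
# Read-only, copy-on-write snapshot before letting the agent loose (near-instant).
btrfs subvolume snapshot -r /home/me/cowork /home/me/.snapshots/cowork-pre-run

# ... agent runs ...

# Pull individual files back out of the read-only snapshot if it made a mess.
cp -a /home/me/.snapshots/cowork-pre-run/report.docx /home/me/cowork/
```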
Let's assume that you can. For disaster recovery, this is probably acceptable, but it's unacceptable for basically any other purpose. Reverting the whole state of the machine because the AI agent (a single tenant in what is effectively a multi-tenant system) did something incorrect is unacceptable. Managing undo/redo in a multiplayer environment is horrific.
I wonder if in the long run this will lead to the ascent of NixOS. They seem perfect for each other: if you have git and/or a snapshotting filesystem, together with the entire system state being downstream of your .nix file, then go ahead and let the LLM make changes willy-nilly; you can always roll back to a known good version.
NixOS still isn't ready for this world, but if it becomes the natural counterpart to LLM OS tooling, maybe that will speed up development.
Well, there is CRIU on Linux, for what it's worth, which can at least snapshot the state of an application, and I suppose something similar must be available for filesystems as well.
Also, one can simply run a virtual machine, which can do that, but then the issue becomes how apps from outside connect to the VM inside.
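For what it's worth, the basic CRIU workflow is roughly this (the PID and image directory are made up, and plenty of applications need extra options or won't checkpoint cleanly):

```
sudo criu dump -t 12345 -D /tmp/app-ckpt --shell-job     # freeze process 12345 and write its state to disk
sudo criu restore -D /tmp/app-ckpt --shell-job           # bring it back later from the saved images
```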
Ok, you can "easily", but how quickly can you revert to a snapshot? I would guess creating a snapshot for each turn change with an LLM would become too burdensome to allow you to iterate quickly.
Git only really works for text files. Everything else is a binary blob which, among other things, leads to merge conflicts, storage explosion, and slow git operations.
Indeed there are, and this is no rocket science: Word documents offer a change history, deleted files go to the trash first, there are undo functions, Time Machine on macOS, similar features on Windows, even sandbox features.
I mean, I'm pretty sure it would be trivial to tell it to move files to the trash instead of deleting them. Honestly, I thought that on Windows and Mac, the default is to move files to the trash unless you explicitly say to permanently delete them.
Yes, it is (relatively, [1]) trivial. However, even though it is the shell default (Finder, Windows Explorer, whatever Linux file manager), it is not the operating system default. If you call unlink or DeleteFile or use a utility that does (like rm), the file isn’t going to trash.
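On a Linux desktop the difference is easy to demonstrate from the shell, assuming gio (shipped with GLib) is available; macOS and Windows have analogous trash APIs, but anything that calls unlink directly bypasses them (the file name here is made up):

```
gio trash report-draft.docx   # moved to the freedesktop trash, recoverable
rm report-draft.docx          # unlink(2): no trash, gone for good
```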
Everything on a ZFS/BTRFS partition with snapshots every minute/hour/day? I suppose that depending on what level of access the AI has it could wipe that too, but it seems like there's probably a way to make this work.
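For reference, the "every minute" part really is just a cron line (the dataset name below is hypothetical; in practice tools like zfs-auto-snapshot, sanoid, or zrepl handle this plus pruning old snapshots):

```
# crontab entry: snapshot the hypothetical dataset tank/home once a minute
* * * * * /usr/sbin/zfs snapshot tank/home@auto-$(date +\%Y\%m\%d-\%H\%M)
```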
I guess it depends on what its goals at the time are. And access controls.
It may just trash some extra files due to a fuzzy prompt, or it may go full psychotic and decide to self-destruct while looping "I've been a bad Claude", intentionally deleting everything or the partitions to "limit the damage".
A "revert filesystem state to x time" button doesn't seem that hard to use. I'm imagining this as a potential near-term future product implementation, not a home-brewed DIY solution.
Reverting a whole filesystem to its state at a point in time is VERY complicated to use. A granular per-file revert should not be that complicated, but it needs to be surfaced easily in the UI and people need to know about it (in the case of Cowork I would expect the agent to use it as part of its job, so it's transparent to the user).
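On ZFS the granular per-file revert already exists in a crude form: every snapshot is browsable read-only under the hidden .zfs/snapshot directory at the dataset's mountpoint, so restoring one file is just a copy (paths and snapshot names here are hypothetical):

```
ls /home/me/.zfs/snapshot/                                             # list available snapshots
cp -a /home/me/.zfs/snapshot/auto-20260112-0930/thesis.docx /home/me/  # restore just one file
```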
>>I expect to see many stories from parents, non-technical colleagues, and students who irreparably ruined their computer.
I do believe the approach Apple is taking is the right way when it comes to user facing AI.
You need to reduce AI to being an appliance that does one, or at most a few, things perfectly right, without many controls or unexpected consequences.
The real fun will be robots. And I'm not sure that nobody is hurrying up on that end.
>>Edit: most comments are focused on pointing out that version control & file system snapshot exists: that's wonderful, but Claude Cowork does not use it.
Also, in my experience this creates all kinds of other issues. Going back up a tree, for example, creates confusion and leaves the system inconsistent with regard to whatever else it is you are doing.
You are right in your analysis that many people are going to end up with totally broken systems.
In theory the risk is immense and incalculable, but in practice I've never found any real danger. I've run wide-open PowerShell with an OAI agent and just walked away for a few hours. It's a bit of a rush at first, but then you realize it's never going to do anything crazy.
The base model itself is biased away from actions that would lead to large-scale destruction. Compound that over time and you probably never get anywhere too scary.
Most of these files are binary and are not a good fit for git's text-based diffing and delta compression... you basically end up with a new full-sized blob for every file version. It works from a versioning perspective, but it is very inefficient and not what git was built for.
Restic works on Linux, Windows, macOS, and BSD. It's not locked to Apple's ecosystem. You can back up directly to local storage, SFTP, S3, Backblaze B2, Azure, Google Cloud, and more; Time Machine is largely limited to local drives or network shares. Restic deduplicates at the chunk level across all snapshots, often achieving better space efficiency than Time Machine's hardlink-based approach. All data is encrypted client-side before leaving your machine, whereas Time Machine encryption is optional. Restic supports append-only mode for protection against ransomware or accidental deletion. It also has a built-in check command to verify integrity.
Time Machine has a reputation for silent failures and corruption issues that have frustrated users for years. Network backups (to NAS devices) use sparse bundle disk images that are notoriously fragile. A dropped connection mid-backup can corrupt the entire backup history, not just the current snapshot. https://www.google.com/search?q=time+machine+corruption+spar...
Time Machine sometimes decides a backup is corrupted and demands you start fresh, losing all history. Backups can stop working without obvious notification, leaving users thinking they're protected when they're not. https://www.reddit.com/r/synology/comments/11cod08/apple_tim...
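For context on the restic side, the core workflow is only a handful of commands (repository and paths below are just examples); the less friendly parts are key management and scheduling:

```
restic init --repo /mnt/backup/restic             # create an encrypted repository
restic -r /mnt/backup/restic backup ~/Documents   # deduplicated, client-side-encrypted snapshot
restic -r /mnt/backup/restic snapshots            # list snapshots
restic -r /mnt/backup/restic check                # verify repository integrity
restic -r /mnt/backup/restic restore latest --target ~/restored
```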
Restic is fantastic. And restic is complicated for someone who is not technical.
So there is a need for something that works, even if not in an optimal way, and saves people's data.
Are you saying that Time Machine does not back up data correctly? But then there are other services that do.
Restic is not for the everyday Joe.
And to your point about "ignorant people": it is as if I said that you are an ignorant person because you do not create your own medicine, or produce your own electricity, or paint your own paintings, or build your own car. For a biochemist specializing in pharma (or Walt in Breaking Bad :)) you are an ignorant person unable to do the basic stuff: synthesizing paracetamol. It is a piece of cake.
IIUC, this is a preview for Claude Max subscribers - I'm not sure we'll find many teachers or students there (unless institutions are offering Max-level enterprise/team subscriptions to such groups). I speculate that most of those who will bother to try this out will be software engineering people. And perhaps they will strengthen this after enough feedback and use cases?
I hope we see further exploration into immutable/versioned filesystems and databases where we can really let these things go nuts, commit the parts we want to keep, and revert the rest for the next iteration.
I would never use what is proposed by OP. But, in any case, Linux on ZFS that is automatically snapshotted every minute might be (part of) a solution to this dilemma.
Yes, and I think we're already seeing that in the general trend of recent Linux work toward atomic updates. [bootc](https://developers.redhat.com/articles/2024/09/24/bootc-gett...) based images are getting a ton of traction. [Universal Blue](https://universal-blue.org/) is probably a better brochure example of how bootc can make systems more resilient without needing to move to declarative Nix for the entire system like you do in NixOS. Every "upgrade" is a container deployment, and you can roll back or forward to new images at any time. Parts of the filesystem aren't writable (which pisses off people who don't understand the benefit), but the advantages for security (isolating more stuff to user space by necessity) and stability (wedged upgrades are almost always recoverable) are totally worth it.
On the user side, I could easily see [systemd-homed](https://fedoramagazine.org/unlocking-the-future-of-user-mana...) evolving into a system that allows snapshotting/roll forward/roll back on encrypted backups of your home dir that can be mounted using systemd-homed to interface with the system for UID/GID etc.
These are just two projects that I happen to be interested in at the moment - there's a pretty big groundswell in Linux atm toward a model that resembles (and honestly even exceeds) what NixOS does in terms of recoverability on upgrade.
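If I have the bootc CLI right, the day-to-day experience is roughly this (the image reference is hypothetical):

```
sudo bootc switch ghcr.io/example/my-desktop:latest   # rebase the OS onto a container image
sudo bootc upgrade                                     # pull and stage the next image version
sudo bootc rollback                                    # boot the previous deployment again
```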
Or rather ZFS/Btrfs/bcachefs. Before doing anything big I make a snapshot; it saved me recently when a huge Immich import created a mess, zfs rollback /home/me@2026-01-12... And it's like nothing ever happened.
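Roughly the workflow being described, as a sketch (the dataset name is hypothetical; note that zfs rollback takes the dataset name rather than the mount path, and discards everything written after the snapshot):

```
zfs snapshot rpool/home/me@before-immich-import   # cheap, near-instant checkpoint
# ... big risky operation ...
zfs rollback rpool/home/me@before-immich-import   # as if it never happened
```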
There were a couple of posts here on Hacker News praising agents because, it seems, they are really good at being a sysadmin.
You don't need to be a non-technical user to be utterly fucked by AI.
Theoretically, the power drill you're using can spontaneously explode, too. It's very unlikely, but possible - and then it's much more likely you'll hurt yourself or destroy your work if you aren't being careful and didn't set your work environment right.
The key for using AI for sysadmin is the same as with operating a power drill: pay at least minimum attention, and arrange things so in the event of a problem, you can easily recover from the damage.
It's easy for people to understand that if they point the power drill into a wall, the failure modes might include drilling through a pipe or a wire, or that the power drill should not be used for food preparation or dentistry.
People, in general, have no such physical instincts for how using computer programs can go wrong.
Which is in part why rejection of anthropomorphic metaphors is a mistake this time. Treating LLM agents as gullible but extremely efficient idiot savants on a chip gives pretty good intuition for the failure modes.
I assumed we are talking about IT professionals using tools like Claude here? But even for normal people it's not really hard, if they manage to leave behind the cage in their head that is MS Windows.
My father is 77 now and only started using a computer after age 60. He never touched Windows, thanks to me, and has absolutely no problems using (and, at this point, administering) it all by himself.