I agree with the sentiment but I think for normie agents to take off in the way that you expect, you're going to have to grant them with full access. But, by granting agents full access, you immediately turn the computer into an extremely adversarial device insofar as txt files become credible threat vectors.
For all the benefits that agents offer, they can be asymmetrically harmful. This is not a solved issue. That hurts growth. I don't disagree with your general points, though.
> for normie agents to take off in the way that you expect, you're going to have to grant them with full access
At this point it's a foregone conclusion this is what users will choose. It'll be like (lack of) privacy on the internet caused by the ad industrial complex, but much worse and much more invasive.
The threats are real, but it's just a product opportunity to these companies. OpenAI and friends will sell the poison (insecure computing) and the antidote (Mythos et all) and eat from both ends.
Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.
I don't want this, I just think it's going down that route.
There was a recent Stanford study which showed that AI enthusiasts and experts and the normies had very different sentiment when it came to AI.
I think most people are going to say they dont want it. I mean, why would anyone want a tool that can screw up their bank account? What benefit does it gain them?
Theres lots of cases of great highly useful LLM tools, but the moment they scale up you get slammed by the risks that stick out all along the long tail of outcomes.
I agree, in general we are going to find that ultimately most employee end users don't want it. Assuming it actually makes you more productive. I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase? You're just giving away that value to your employer.
On the other hand, entrepreneurs and managers are going to want it for their employees (and force it on them) for the above reason.
I want. If I get 10X more productive, I can unilaterally increase my compensation 10X by doing my stuff in 1 unit of time instead of 10 it took, and splitting the remaining 9 units of time into, say, 4 units of time doing more work, securing my position and setting myself up for promotion, and 5 units of time doing whatever the fuck I want. Not all compensation shows up in a bank account - working less, or under less stress, are also valuable.
Of course, such situation is only temporary - if I can suddenly be 10X productive, then so can everyone else, and then the baseline shifts so 10X is the new 1X.
You want it, but then you closed by explaining exactly why you shouldn't want it. Plus, the new baseline isn't neutral (as in, everyone is the same again). If humans can now do 10x the work as before, the employer doesn't need the same number of humans to carry out its work. So the new baseline is actually "let's keep 1 employee and fire the other 9", unless the business can find a way to suddenly expand 10x so that it needs 10x as much work done.
> So the new baseline is actually "let's keep 1 employee and fire the other 9", unless the business can find a way to suddenly expand 10x so that it needs 10x as much work done.
If they have any surplus of money (or loans) they'll try, so those 9 employees may end up becoming team leads or middle management, trying to start new initiatives to get the 10x expansion (and 100x improvement).
The market isn't anywhere near efficient enough to directly translate productivity improvements into labor reductions. Thankfully, because everything that's nice and hopeful and human lives within the market inefficiency; a fully efficient market would be a hell worse than any writer or preacher ever imagined.
lol that has nothing to do with market efficiency.
I’ve seen a number of your posts where you talk about topics you clearly are not all that well versed in, with such confidence when you’re plain wrong.
Of course it does have to do with market efficiency, of which the inertia and surplus within companies (especially large ones) is a part.
> I’ve seen a number of your posts where you talk about topics you clearly are not all that well versed in, with such confidence when you’re plain wrong.
I'm sure it's true. However, since you brought it up, can you be more specific and name three?
Yes, but in the long run, the market expects growth and innovation, not just doing the same thing with fewer workers. Especially when every other company can just buy the exact same advantage for the same price.
Your first paragraph is so short sighted that its message didn't even make it beyond the next one. It's a race to the bottom and your "doing whatever the fuck I want" will obviously never materialize.
The typical work week today is 40 hours. Just like it was 80 years ago. The typical worker is dramatically more productive than 80 years ago yet "doing whatever the fuck I want" time has not increased. Why would it? Employers don't need to pay such that 20 hour work weeks give you the same income. Because everybody around you is ok with working 40 hours.
This won't be different with AI, no matter if the overall effect is 1.1x or 10x or 100x productivity. Because it's not a technological problem but a sociological one.
Good point. My rant assumed that "10x productivity" meant 10x output in 1x time, rather than 1x output in 0.1x time. Only one of those are actually objectionable.
> I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase? You're just giving away that value to your employer.
Those are productivity increases that got our standard of living to where it is. Fewer people doing the same amount of work has, historically speaking, freed people from their current job, allowing them to work on something else.
It's that analogy of the horse, they used to be farm animals. Now, fewer of them are 'employed' but they're much nicer jobs. I'm not sure if the same is true for us this time around though as new jobs being created have increasingly been highly skilled which means the majority can't apply.
If everyone becomes 10x more productive it won’t mean the companies cash flow 10x’s. Where value is loose there is competition, so in theory everyone should win. Unless nobody else can compete to capture that loose 10x value, in which case congratulations, you are now a unicorn.
Of course in reality in the short term what happens is companies lay off people to increase margins. Times will be tough for workers, and equity keeps gravitating towards those who already had it.
>Assuming it actually makes you more productive. I mean, who the hell wants to be 10X more productive without a commensurate 10X compensation increase?
Given sane working arrangements or at minimum presence of remote work, it would be a bit shortsighted not to want to get done with your work in a tenth amount of time. In the very least, you're competing for a promotion against less effective people, all while having more time for yourself. If not, you're building labor market skillset in an efficient way so you can hop to a better employer.
> I think most people are going to say they dont want it. I mean, why would anyone want a tool that can screw up their bank account? What benefit does it gain them?
I'm not so sure. Matter of marketing and social pressure, big time.
Consider this: "Always-on pervasive google/fb/... login? I think most people are going to say they dont want it. I mean, why would anyone want a tool that would track their every move on the internet?" That could easily have been a statement 20 years ago. And look where we are.
Their solution will be to push mandatory and nonconsensual updates to your devices which limit your device and your freedom in the name of security. Like Google is doing to Android in September. You will no longer be able to install "unverified" software on anything. To address prompt injection attacks they're probably working on an approach where your data all has to be in the cloud and subject to security scans. That's already basically the model for Google Workspace, Google Drive and Chromebooks.
The model will get full access to your data, but in the name of security, you will only be permitted to have data that is cloud-hosted; local storage will effectively just be cache.
The era of the general computer will end, and the products you purchased from these companies will be nonconsensually altered and limited.
I'm so glad I switched to Linux more than a decade ago. At least on the PC there will still be an open source ecosystem for a long time to come, it may have less features but I'm willing to accept that.
Knowing that they can change what you bought overnight with a single nonconsensual update, think very, very carefully about who you purchase all of your future technology from. Google's upcoming nonconsensual degradation of Android should be a lesson for everybody.
>Google's upcoming nonconsensual degradation of Android should be a lesson for everybody.
Google is almost certainly doing this because the iOS was not found to be a monopoly, while Andorid was. It came up in Google's appeal of the Epic case verdict, where they directly asked the judge about it. Turns out you can't be anti-competitive if you don't have [allow] any competitors.
Nope. I'm still going to blame Google for their own actions. Nice try, though. I'm old enough to remember when Google pretended to take responsibility for not being evil. Even had it as their motto.
> I'm so glad I switched to Linux more than a decade ago. At least on the PC there will still be an open source ecosystem for a long time to come, it may have less features but I'm willing to accept that.
Wait until age verification is mandatory everywhere. :)
I can already see that happening, e. g. to access financial transactions or government apps, one needs to verify the id, and that will not work without age verification that can not be tampered with. So Linux will either submit to the same or be excluded.
(That free developers will be able to run Linux fine for much longer will also be true, but I guess they only care about catching the 95%, not the 5% linux users ... and 5% is a high guesstimate).
Edit: To clarify the above, one already had to provide personal data for financial transactions, of course, so a bank knows who is who, but the recent age verification go hand in hand with the attempt to get rid of vpn, and applications now make it a new standard to query the age of users, with the claim to "help protect kids". And some people buy into that rationale too. I don't, but I have seen many non-tech savvy people submit to that justification.
There's always the zero knowledge proof tech alternative, but I don't have the feeling we are moving in that direction - it's not the most profitable business is it.
> It'll be like (lack of) privacy on the internet caused by the ad industrial complex, but much worse and much more invasive.
The concerning aspect is how others' content being scanned into systems don't have any knowledge or consent. Having private PII/files/code/emails/etc being read and/or accidentally shared by the agent online.
> Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.
Honestly, it's alright.
Just think of what we could do with computers up until this point.
We keep all those abilities.
And more, even, because the industry still keeps churning out new local LLMs.
So you even gain more capabilities than right now. Just not at the rate of the bleeding edge.
Which is just like the Linux desktop, essentially.
It's fine, really. There is no need to consume the bleeding edge. You will be fine.
Definitely agree here. Made the swap to Linux a little over a year ago and the only reason I even have nice hardware is because I like gaming. But if I was cut off from everything tomorrow, the decades of stuff I have that I have not played will keep me very happy lol
>Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence.
As a proud neo-luddite, I'm watching the AI hype with grim amusement and I'll tell you hwhat, it doesn't look like a good time. Even putting to one side the planetary scale economic crash that is incoming, all the hypers seem to be on some sort of treadmill that is out of their control and it simply doesn't look like fun.
Everyone keeps saying how essential it all is yet a few years in and I still don’t see anything like the promised future of “everyone using them every day for everything.” Everyone’s just constantly talking (or stressing) about it.
We - including the companies - don’t know what the real “billion dollar application” of them is other than the unproven claim it makes everyone more productive in some general sense. When it doesn’t work people continue to say “it’s your fault not the tool’s.” Meanwhile investors are getting skittish and not one AI company is profitable yet. Companies that laid people off for LLM’s are regretting their decisions, leadership (and educators) is dealing with unvetted writing and having to waste their time cleaning it up, the list goes on. “Slop” is still a huge and growing problem.
LLM’s are here to stay, but IMO it’ll be more relevant in the long run than 3D printers yet less revolutionary than the internet. Everyone will touch them at various points but this whole-life, every-industry-disrupted integration still seems far fetched to me. Pricing is still a huge unsolved problem - everyone is still subsidized and despite gains in using fewer resources, it’s still too much to run these locally, even small models (not even getting into tooling and knowledge required to use them in a productive way).
When we zoom out and look at the whole picture, LLM’s have mostly made everyone’s online experience worse while the VC funded companies behind them are playing municipal and state governments’ for suckers a la Amazon getting so many cities to trip over each other giving away land and tax breaks, but far worse. Those are the biggest contributions so far aside from anecdotes from coders about “1000x productivity.” Again, I think they’re here to stay. But it’s called “AI hype” for a reason.
LLM’s have mostly been a problem creator IME rather than a “disruptor.” Never really seen “revolutionary technology” quite like it.
But hey, I’ll admit it’s useful to have a meh local model when I’m writing TTRPG stuff and have writer’s block. Though then I remember how it was trained, a whole other subject I haven’t even touched, so that kind of sucks too.
I dont see companies doing that. it can be business ending. only AI bros buying mac mini in 2026 to setup slop generated Claws would do that but a company doing that will for sure expose customer data.
> For all the benefits that agents offer, they can be asymmetrically harmful. This is not a solved issue.
Strongly agreed.
I saw a few people running these things with looser permissions than I do. e.g. one non-technical friend using claude cli, no sandbox, so I set them up with a sandbox etc.
And the people who were using Cowork already were mostly blind approving all requests without reading what it was asking.
The more powerful, the more dangerous, and vice versa.
> I saw a few people running these things with looser permissions than I do. e.g. one non-technical friend using claude cli, no sandbox, so I set them up with a sandbox etc.
People have different levels of safety-consciousness, but also different tolerances and threat models.
For example, I would hesitate running a Mythos-level model in YOLO mode with full control over my computer, but right now, for personal stuff, even figuring out WTF are sandboxes in Claude Code / Gemini CLI, much less setting them up, is too much hassle. What's the worst it can do without me noticing? Format the drive and upload some private data into pastebin? Much as I hate cloud and the proliferation of 2FA in every service, that alone means it can't actually do more to me than waste few hours of my life, as I reimage my desktop and restore OneDrive (in case of destructive changes that got synced up). These models are not yet good enough to empty my bank account in few minutes I'm not looking; everything else they can do quickly is reversible or inconsequential.
Now, I do look at things closely when working with agentic AI tools. But my threat model is limited to worrying about those few hours of my life. `rm -rf / --no-preserve-root` is an annoyance, not a danger.
(I accept that different contexts give different threat modeling. I would be more worried if I were doing businessy business stuff with all kinds of secret sauces, or was processing PII of my employer's customers, or lived in a country where it's easy to have all your money stolen if your CC number or SSN gets posted online.)
How many of these threat vectors are just theoretical? Don’t use skills from random sources (just like don’t execute files from unknown sources). Don’t paste from untrusted sites (don’t click links on untrusted sites). Maybe there are fake documentation sites that the agent will search and have a prompt injected - but I haven’t heard of a single case where that happened. For now, the benefits outweigh the risk so much that I am willing to take it - and I think I have an almost complete knowledge of all the attack vectors.
Systems have been caught out that review pull requests, that’s a simple and clear one. The more obvious to me for most people is anything you do that interacts with your email without an explicit approve list of emails to read.
Yes, but none of this applies to the local codex agent that runs when I tell it to and has access to my computer. Like: „scan this folder of PDFs and create an excel file with all expenses. Then enter them into my tax software.“ This needs access to very sensitive data and involves a quite complex handling of data. But the only attack vector I see is someone injecting prompts into my invoice files.
Which applies if you were to do this to invoices submitted to you, rather than ones you created, or if you have any way of user info getting into your invoices.
i think you lack creativity. you could create a site that targets a very narrow niche, say an upper income school district. build some credibility, get highly ranked on google due to niche. post lunch menus with hidden embedded text.
Because it’s hooked up to a microphone in your kitchen & your kid is arguing with you about what lunch they want & they say “Hey [agent], what day is pizza day at [school]?”
I cannot reconcile that growth for non-technical users is going to explode, when most utility from agents is via the ability to execute arbitrary code, generally in yolo mode, with the fact that almost all corporate IT departments do not give users the ability to install anything on their machine, let alone arbitrary code. Even developers at many companies are subject to this despite the productivity impacts.
The culture of corporate IT would need to change to allow it, and I just don't see it happening.
What about setting environments for normies that mitigate this problem? I don't know that you can do it on Windows, but Linux offers various tools for isolation where you can give full rights to an LLM and still be safe from certain classes of disaster.
Maybe this kind of isolation neuters the benefit you're thinking of, but I do believe some sort of solution could be reached.
For all the benefits that agents offer, they can be asymmetrically harmful. This is not a solved issue. That hurts growth. I don't disagree with your general points, though.