If your agent can execute Bash commands, it can do anything: reading files (with cat), writing them (with sed / patch / awk / perl), grepping, finding, and everything else you might possibly need. The specialized tools are just an optimization to make things easier for the agent. They do increase performance (in the "how much can it do" sense, not the "how fast is it" sense), but they're not strictly required.
IMHO, this is one of the more significant LLM-related discoveries of 2025. You don't need a context-polluting GitHub MCP that takes 10%+ of your precious context window; all you need is the gh CLI, which the agent already knows how to use.
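To make this concrete, here's a minimal sketch of the idea: an agent whose only tool is a bash executor. The helper name and the example commands are hypothetical, just to illustrate the point:

    import subprocess

    def bash(command: str) -> str:
        """The agent's single tool: run a shell command, return its output."""
        result = subprocess.run(command, shell=True,
                                capture_output=True, text=True)
        return result.stdout + result.stderr

    # This one tool covers what the specialized tools do:
    bash("cat src/main.py")                             # read a file
    bash("grep -rn 'TODO' src/")                        # search the tree
    bash("sed -i 's/old_name/new_name/g' src/main.py")  # edit in place

    # And GitHub access via the gh CLI, no MCP server required:
    bash("gh pr list --state open")
    bash("gh pr view 123 --json title,body")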
The point of blocking rooted devices often isn't to protect your account, it's to protect other (often unsophisticated) customers of the organization against automated attacks.
Rooted devices aren't the problem; Python scripts pretending to be rooted devices are. There's just no way to distinguish between the two. The only way to disallow automated Python scripts from logging into your grandma's bank account is to also disallow you from logging into yours if your phone isn't blessed by Google.
So make a toggle in the account settings that requires a blessed phone or an authenticated visit to the branch to set. There's nothing here that requires _my_ device to be authenticated in order to protect my grandma.
I recently learned that Poland literally has a law on the books[1] (a regulation from the executive, not an act of the legislature) mandating our use of SOAP and WSDL. You're definitely right on that score. As far as I know, it's supported by some EU directive or other, no less.
AFAIK, the DMCA doesn't require infrastructure providers (ISPs, DNS resolvers, "relay" services like Cloudflare) to block entire websites. It's just for surgical removals of content (and blocking of ISP / hosting provider customers who are notorious infringers).
The US doesn't have the kind of website blocking laws that many European countries have.
Yeah, in Europe, there tends to be an association between football fans and organized crime, just as there's one between unions and organized crime in the US.
The kind of hooligans who love beating up the hooligans from the other team are also perfect for beating up the hooligans from the opposing drug cartel.
Governments just can't come to grips with how much money software engineers make.
Paying a contractor $x million? Yeah no problem, projects are projects, they cost what they cost. Does that $x million pay for 5x fewer people than it would in construction or road repair? We don't know, we don't care, this is the best bid we got for the requirements, and in line with what similar IT projects cost us before.
Paying a junior employee $100k? "We can't do that, the agency director has worked here for 40 years, and he doesn't make that much."
Variants of this story exist in practically every single country. You can make it work with lower salaries through patriotism, but software engineers in general are one of the less patriotic professions out there, so this isn't too easy to do.
The entire targeted advertising industry is basically a progressive tax.
The "social contract" is that many services are fully or partially financed by advertising. Rich people produce more ad revenue (because they spend more), but they get the same quality of service, effectively subsidizing access for the poorer part of the population, who couldn't afford it otherwise.
If this social contract breaks down, companies will still try to extract as much revenue as possible, but the only way to do that will be through feature gating, price discrimination, and generally making your life a misery unless you make a lot of money.
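A toy back-of-the-envelope calculation (every number here is invented, purely to illustrate the cross-subsidy) shows how this works:

    # Invented numbers, purely to illustrate the cross-subsidy:
    cost_per_user = 2.00   # monthly cost of serving one user, in dollars
    revenue_rich = 10.00   # monthly ad revenue from a high-spending user
    revenue_poor = 0.50    # monthly ad revenue from a low-spending user
    rich_share = 0.20      # fraction of users who are high spenders

    avg_revenue = rich_share * revenue_rich + (1 - rich_share) * revenue_poor
    print(avg_revenue)  # 2.40 > 2.00: the service is viable for everyone,
                        # even though poor users alone generate only 0.50.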
Did you actually suffer any negative consequences of these breaches?
I see so many comments about how punishments for data breaches should be increased, but not a single story about quantifiable harm that any of those commenters has suffered from them.
It is difficult to read this. On the one hand, it is true for a good chunk of the population; and yet, one knows it is absolutely not true for the individuals who will be affected.
Since I do have multiple breaches under my belt, I could offer you an anecdote, but I won't. Do you know why? Because it is not up to me to quantify the harm that was done, in the same way that I don't have to explain to a reasonable person why doxxing is not something people should have to suffer through.
I have a personal theory as to why that state of affairs persists. The quantifiable harm is small per affected individual, but large across the population, and thus underreported. Sadly, the entities that could confirm that are not exactly incentivized to admit that they are causing harm to begin with.
> the algorithms across the board found out somehow.
It's worth keeping in mind that this is basically untrue.
In most of these algorithms, there's no "is_expecting: True" field. There are just some strange vectors of mysterious numbers, which can be more or less similar to other vectors of mysterious numbers.
The algorithms have figured out that certain ad vectors are more likely to be clicked if your user vector exhibits some pattern, and that some actions (keywords, purchases, slowing down your scroll speed when you see a particular image) should make your vector go in that direction.
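A rough sketch of what "similar vectors" means in practice (hypothetical 4-dimensional vectors; real systems use hundreds of opaque dimensions):

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        """How similar two embedding vectors are, ignoring magnitudes."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Hypothetical embeddings; nothing readable like "is_expecting" exists.
    user_vector = np.array([0.12, -0.45, 0.88, 0.03])
    ad_vector = np.array([0.10, -0.40, 0.90, 0.00])

    # Ranking: serve the ads whose vectors score highest against the user's.
    score = cosine_similarity(user_vector, ad_vector)

    # Learning: an observed action (a purchase, a lingering scroll) nudges
    # the user vector toward the direction associated with that action.
    action_direction = np.array([0.20, -0.50, 0.85, 0.05])
    learning_rate = 0.1
    user_vector += learning_rate * (action_direction - user_vector)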
No, but AFAIK they pulled some shenanigans with "bundling" Gemini scraping and search engine scraping.
Almost everybody wants to appear in search, so disallowing the entirety of Google is far more costly than, e.g., disallowing OpenAI, which even differentiates between content scraped for training and content accessed to respond to a user request.
While there isn't a way to differentiate between scraping for training data and content accessed in response to a user request, I think you can block the Google-Extended token to block training access.
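For reference, the split would look something like this in robots.txt (Google-Extended is Google's documented token for generative-AI training use; GPTBot and ChatGPT-User are OpenAI's training and user-request crawlers, respectively):

    # Opt out of Gemini/Vertex AI training while staying in Google Search:
    User-agent: Google-Extended
    Disallow: /

    # OpenAI's split: block training crawls, allow user-initiated fetches:
    User-agent: GPTBot
    Disallow: /

    User-agent: ChatGPT-User
    Allow: /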