> In the digital world, you are constantly being attacked by the equivalent of a hundred armies, all the time. Hackers around the world, whether criminals or actual state-actors, are constantly trying to break into any system they can.
This is why I think cyberattacks should be seen from the "victim"'s perspective as something more like a force of nature than a crime -- they're ubiquitous and constant, they come from all over the world, and no amount of law enforcement will completely prevent them. If you build a building that can't stand up to the rain or the wind, you're not an innocent victim of the weather, you failed to design a building for the conditions you knew would be there.
(I'm not saying that we shouldn't prosecute cyber crime, just that companies shouldn't be able to get out of liability by saying "it's the criminals' fault".)
> So yes, many breaches involve some kind of software issue, but it is impossible to never make any mistake.
It's not possible to never make a mistake, no. But there's a huge spectrum between writing a SQL injection vulnerability and a complicated kernel use-after-free that becomes a zero-click RCE with an NSO-style exploit chain, and I'm much more sympathetic to the latter kind of mistake than the former.
The fact is that most exploits aren't very sophisticated -- someone used string interpolation to build an SQL query, or didn't do any bounds checking at all in their C program, or didn't update 3rd-party software on an internal server for 5 years. And for as long as these kinds of mistakes don't have consequences, there's no incentive for a company to adopt the kind of structural and procedural changes that minimize these risks.
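To make the first of those concrete, here's a minimal sketch in Python (using sqlite3 and a made-up `users` table purely for illustration) of the difference between interpolating user input into a query and passing it as a bound parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")
conn.execute("INSERT INTO users VALUES ('bob', 1)")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: string interpolation makes the user input part of the SQL
# text itself, so the stray quote changes the structure of the query.
rows = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
print(rows)  # [('alice',), ('bob',)] -- every row comes back

# Safer: a bound parameter is always treated as data, never as SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "alice' OR '1'='1"
```

The safer version costs nothing extra to write, which is part of why this class of mistake is so much harder to excuse than, say, a subtle memory-safety bug.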
In my ideal world, companies that follow good engineering practices, build systems that are secure by design, and get breached by a nation-state actor in a "this could have happened to anyone" attack should be protected, whether through legislation or insurance. But when a company cheaps out on software and develops code in a rush, without attention to security, then they shouldn't get to socialize the costs of the inevitable breach.
> If you build a building that can't stand up to the rain or the wind, you're not an innocent victim of the weather, you failed to design a building for the conditions you knew would be there.
I genuinely have no idea how liability for civil engineering works, but the evidence of my eyes is that entire Oklahoma towns built by civil engineers get wiped off the map by tornadoes all the time. Therefore I assume either we can't design a tornado-proof building, or civil engineering gets the same kind of cost-benefit analysis as security engineering and the acceptable balance is simply different. Either way, we can't be selling $10 million tornado-proof shacks, and we can't be selling $10 million bug-proof small-business applications, if either is even possible.
> If you build a building that can't stand up to the rain or the wind, you're not an innocent victim of the weather, you failed to design a building for the conditions you knew would be there.
This is why I liken it to protecting against an army. Wanting to protect a building from rain is fine - rain is a constant force that isn't adapting and "fighting back".
Find me a building that is able to keep its occupants safe from an invading army, and then we'll talk. It's impossible. That's what we built armies for.
> But there's a huge spectrum between writing a SQL injection vulnerability and a complicated kernel use-after-free that becomes a zero-click RCE with an NSO-style exploit chain, and I'm much more sympathetic to the latter kind of mistake than the former.
To be clear, I agree that there's a spectrum, and I wouldn't want to make it so that companies can get away with everything. But I'm not sure we have a good solution for "my company has 10k engineers, one of them five years ago set up a server and everyone forgot it exists, now it's exploitable". Not in the general case of having so many employees.
> The fact is that most exploits aren't very sophisticated -- someone used string interpolation to build an SQL query, or didn't do any bounds checking at all in their C program, or didn't update 3rd-party software on an internal server for 5 years. And for as long as these kinds of mistakes don't have consequences, there's no incentive for a company to adopt the kind of structural and procedural changes that minimize these risks.
I'm not a security researcher, but I'd guess that most breaches are even simpler - they don't necessarily rely on software vulnerabilities at all; they rely on phishing, social engineering, etc.
I've seen plenty of demos of people being able to "hack" many companies by just knowing the lingo and calling a few employees while pretending to be from IT.
This doesn't even include "exploits" like getting spies into a company, or just flat-out blackmailing employees. Do you think the systems you've worked on are secure from a criminal organization applying physical intimidation on IT personnel? (I won't go into details but I'm sure you can imagine worst-case scenarios here yourself.)
> But when a company cheaps out on software and develops code in a rush, without attention to security, then they shouldn't get to socialize the costs of the inevitable breach.
I agree, but there's a huge range between "builds software cheaply" and "builds software that is secure by design" (the second being basically impossible - find me a company that has never been breached if you think it's doable).
We want to make companies pay the cost when it incentivizes good behavior. That's sometimes the case, which is why I agree with you in many instances.
But security is a game of weakest links, and given thousands of adversaries of various levels of strength, from script-kiddies to state actors, every company is vulnerable on some level. Which is why, in addition to making companies liable for real negligence, we have to recognize that no company is safe, even given enormous levels of effort, and the only way to truly protect them is via some state action.
The reason your bank isn't broken into isn't just that they are amazing at security - it's that if someone breaks into your bank, the state will investigate, hunt them down, arrest them and imprison them.
Show me a company that claims it's never been breached in some way, and I'll show you a company that has no clue about security, including their prior breaches.