I think they’re owning up to their mistakes instead of dodging the issue. I still feel that if they did the right testing, they shouldn’t be blamed for everything. It’s pretty standard for IT teams to avoid auto-updates and instead review them manually—especially in critical sectors like healthcare, aviation, and government. For instance, at my workplace, we’re not allowed to auto-update VS Code.
They mentioned they ran tests which unfortunately returned false positives. While it’s true they could’ve been more thorough, the affected companies also dropped the ball by not doing their own checks.
> I still feel that if they did the right testing, they shouldn’t be blamed for everything.
This update crashed 100% of the Windows systems it was installed on, which means either their testing did not involve loading it onto real-world computers at all, or that blue screening and boot looping did not cause the test to fail. It is objectively clear that they did not do the right testing. There is no excuse for this update ever having left the earliest stages of a proper test process.
It's not like this is a case of an unexpected interaction with a configuration not found in the test lab.
> It’s pretty standard for IT teams to avoid auto-updates and instead manually review them—especially in critical sectors like healthcare, aviation, and government.
This component could not be controlled that way. Systems that were configured to delay other CrowdStrike updates still received this particular update immediately, with no ability for IT departments to control it.
> They mentioned they ran tests which unfortunately returned false positives.
Again, whatever tests they actually ran clearly didn't involve loading the update into the actual driver. Their explanation sounds like they may have validated the formatting of their update, or something like that, and then just shipped it.
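To illustrate the distinction, here's a hypothetical sketch in Python of why a format check can pass while actually loading the content crashes the consumer. The file format, field names, and functions are invented for the example—this is not CrowdStrike's actual pipeline or channel-file format:

```python
import json

def validate_format(update_bytes: bytes) -> bool:
    """Schema-level check: does the update *look* well-formed?
    This can pass even when the content would crash the consumer."""
    try:
        data = json.loads(update_bytes)
        return "rules" in data and isinstance(data["rules"], list)
    except json.JSONDecodeError:
        return False

def load_update(update_bytes: bytes) -> int:
    """Stand-in for actually loading the update into the real engine.
    A malformed entry triggers the kind of failure a format check
    alone never surfaces."""
    data = json.loads(update_bytes)
    total = 0
    for rule in data["rules"]:
        total += len(rule["pattern"])  # raises TypeError if a rule is null
    return total

# A payload with one good rule and one null entry:
update = b'{"rules": [{"pattern": "abc"}, null]}'
print(validate_format(update))  # the schema check passes
try:
    load_update(update)
except TypeError:
    print("the real consumer falls over")
```

The point is only that "our validator said it was fine" and "we loaded it where it actually runs" are different test gates, and only the second one catches this class of failure.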
> While it’s true they could’ve been more thorough, the affected companies also dropped the ball by not doing their own checks
No, they did not, because they could not. They may have dropped the ball when installing CrowdStrike in the first place, but the whole reason this was so widespread, affecting so many high-priority systems, is that it couldn't be controlled in the ways IT departments would want.
> This component was not able to be controlled in this way. Systems that were configured to be delayed on other CrowdStrike updates still got this particular update immediately with no ability for IT departments to control them.
I had to look this up because I had not heard about this. I didn't understand that this bypassed companies' protections. I take back what I said; I guess I'm used to companies like those having poor IT standards and then, once something goes wrong, pretending they had no part in it.