If a single person can make the system fail then the system has already failed.
It’s never a single person who causes a failure.
Sure, it’s the dev who is to blame, and not the clueless managers who evaluate devs based on the number of commits/reviews per day, or the CEOs who think such managers are on top of their game.
If only we had terms for environments that were meant for testing, staging, and early release before moving over to our critical servers…
I know, it’s crazy, really a new system that only I came up with (or at least I could sell it to CrowdStrike that way, it seems).
Note: Dmitry Kudryavtsev is the article author, and he argues that the real blame should go to the CrowdStrike CEO and other higher-ups.
Microsoft also started blaming the EU. It’s such a shitshow it’s ridiculous.
It’s a systemic, multi-layered problem.
The simplest, lowest-effort thing that could have prevented issues at this scale is not installing updates automatically, but waiting four days and triggering them afterwards if no issues have surfaced.
Automatically forwarding updates also means forwarding risk. The larger the impact area, the more worthwhile safeguards become.
Testing/staging or partial, successive rollouts could also have mitigated a large share of the issues, but they require more investment (rough sketch below).
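To make that concrete, here is a minimal sketch of a “bake for a few days, then roll out in waves” policy. Everything in it (the Host class, the install hook, failure_rate, the wave sizes) is hypothetical scaffolding for illustration, not anything CrowdStrike’s update channels actually expose:

```python
# Hypothetical sketch: delay an update, then roll it out in successive waves,
# halting if the already-updated hosts start failing. All names here are made up.
import time
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

BAKE_TIME = timedelta(days=4)        # let the rest of the world hit the obvious breakage first
WAVES = [0.01, 0.10, 0.50, 1.00]     # fraction of the fleet covered after each wave
MAX_FAILURE_RATE = 0.02              # halt if more than 2% of updated hosts break


@dataclass
class Host:
    name: str
    healthy: bool = True

    def install(self, update_id: str) -> None:
        # Placeholder for whatever actually applies the update on this host.
        print(f"{self.name}: installing {update_id}")


def failure_rate(hosts: list[Host]) -> float:
    # Share of hosts that have gone unhealthy since being updated.
    return sum(not h.healthy for h in hosts) / max(len(hosts), 1)


def staged_rollout(update_id: str, released_at: datetime, fleet: list[Host]) -> None:
    # 1. Bake period: do nothing until the update has been public for a while.
    if datetime.now(timezone.utc) - released_at < BAKE_TIME:
        return  # retry on the next scheduler run

    # 2. Successive waves: widen coverage only while earlier waves stay healthy.
    done = 0
    for fraction in WAVES:
        target = int(len(fleet) * fraction)
        for host in fleet[done:target]:
            host.install(update_id)
        done = target
        time.sleep(3600)  # give the wave time to fail, if it is going to
        if failure_rate(fleet[:done]) > MAX_FAILURE_RATE:
            raise RuntimeError(f"halting rollout of {update_id}: failure rate too high")
```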
The update that crashed things was an anti-malware definitions update. CrowdStrike offers no way to delay or stage these (they are downloaded automatically as soon as they are available), and there’s a good reason for not wanting to delay definition updates: doing so leaves you vulnerable to known malware for longer.
And there’s a better reason for wanting to delay definition updates: this outage.