The advice, which is specifically for virtual machines using Azure, shows that sometimes the solution to a catastrophic failure is to turn it off and on again. And again.
Microsoft’s Azure status page outlines several fixes. The first and easiest is simply to reboot affected machines over and over, which gives them multiple chances to grab CrowdStrike’s non-broken update before the bad driver can cause the BSOD. Microsoft says that some of its customers have had to reboot their systems as many as 15 times to pull down the update.
I am so confused. What’s supposed to happen on the 15th reboot?
Probably triggers some auto-rollback mechanism, I’d guess, to help escape boot loops? I’m just speculating.
Welp, Ars Technica has another theory:
https://arstechnica.com/information-technology/2024/07/crowdstrike-fixes-start-at-reboot-up-to-15-times-and-get-more-complex-from-there/
Yep. That makes more sense. Thanks!
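For anyone else landing here: there’s nothing special about the 15th reboot. Per the advice quoted above, each boot is just another chance for the machine to pull CrowdStrike’s fixed update before the broken driver loads and blue-screens it. A toy sketch of that race (the 20% odds per boot are completely made up, and the loop and names are mine, not anything from Microsoft or CrowdStrike):

```python
import random

# Toy illustration only: each reboot is a race between downloading the fixed
# CrowdStrike channel file and the faulty driver loading and causing a BSOD.
# The per-boot win probability is an invented assumption for illustration.
P_FIX_DOWNLOADS_FIRST = 0.2
MAX_REBOOTS = 15  # the figure Microsoft cited for some customers

def boot_once() -> bool:
    """Return True if this boot pulled the fixed update before crashing."""
    return random.random() < P_FIX_DOWNLOADS_FIRST

for attempt in range(1, MAX_REBOOTS + 1):
    if boot_once():
        print(f"Recovered on reboot {attempt}: fixed update applied before the crash.")
        break
else:
    print("Still blue-screening after 15 reboots; time for the more involved fixes.")
```

So rebooting 15 times isn’t a magic ritual, it’s just rolling the dice enough times that most machines eventually win the download race.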
That’s some high-quality speculation