All our servers and company laptops went down at pretty much the same time. Laptops have been bootlooping to blue screen of death. It’s all very exciting, personally, as someone not responsible for fixing it.

Apparently caused by a bad CrowdStrike update.

Edit: now being told we (who almost all generally work from home) need to come into the office Monday as they can only apply the fix in-person. We’ll see if that changes over the weekend…

  • v9CYKjLeia10dZpz88iU@programming.dev · edited · 4 months ago

    I mean - this is just a giant test of disaster recovery plans. And while there are absolutely real-world consequences to this, the fix almost seems scriptable.

    It seems like it is. I’m not responsible for any computers that had this issue, but I saw a PowerShell script posted on Reddit that could be pushed out through Group Policy.
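
    For what it’s worth, the published manual fix is basically “boot into Safe Mode and delete the bad channel file,” so a scripted version would look something like the sketch below. This is just my reading of CrowdStrike’s guidance (the C-00000291*.sys pattern comes from there), not the actual Reddit script, and it only helps on machines that can boot far enough to run it:

    ```powershell
    # Minimal sketch of the manual CrowdStrike remediation as a script.
    # Assumes the machine can reach Safe Mode (or otherwise run PowerShell);
    # a box stuck hard in the BSOD loop needs WinRE or hands-on recovery instead.
    $driverPath = Join-Path $env:SystemRoot 'System32\drivers\CrowdStrike'
    $badFiles   = Get-ChildItem -Path $driverPath -Filter 'C-00000291*.sys' -ErrorAction SilentlyContinue

    if ($badFiles) {
        # Remove the faulty channel file(s) and reboot so the sensor reloads cleanly
        $badFiles | Remove-Item -Force
        Restart-Computer -Force
    }
    ```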

    Though some systems had their own quirks; I also saw a separate set of steps for repairing an Azure VM stuck in the boot loop.
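
    The Azure route I saw (again, not something I’ve run myself) followed the usual “attach the OS disk to a repair VM” pattern, which the az vm repair extension automates. Roughly, with placeholder resource group and VM names:

    ```powershell
    # Rough shape of the repair-VM route for an Azure VM that won't boot.
    # Resource group / VM names are placeholders; Microsoft published its own
    # guidance (and a run script) for this incident, so check that for specifics.

    # 1. Spin up a repair VM with a copy of the broken VM's OS disk attached
    az vm repair create -g MyResourceGroup -n BrokenVm --repair-username repairadmin --repair-password '<strong-password>' --verbose

    # 2. On the attached disk, delete C-00000291*.sys under
    #    Windows\System32\drivers\CrowdStrike (over RDP, or via `az vm repair run`)

    # 3. Swap the fixed OS disk back onto the original VM and tear down the repair VM
    az vm repair restore -g MyResourceGroup -n BrokenVm --verbose
    ```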

    There were also people who didn’t understand how to get around BitLocker, and people on Reddit posted solutions for that too.
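
    The BitLocker snag is that the manual fix needs the recovery key to get past the recovery prompt. If the org escrows keys in Active Directory (a big assumption, and the hostname below is a placeholder), helpdesk can look them up with something like:

    ```powershell
    # Look up a machine's escrowed BitLocker recovery password(s) in Active Directory
    # so it can be read out to the user stuck at the recovery screen.
    # Assumes the RSAT ActiveDirectory module and that keys were backed up to AD.
    Import-Module ActiveDirectory

    $computer = Get-ADComputer -Identity 'LAPTOP-042'   # placeholder hostname

    Get-ADObject -SearchBase $computer.DistinguishedName -Filter 'objectClass -eq "msFVE-RecoveryInformation"' -Properties 'msFVE-RecoveryPassword' |
        Select-Object Name, 'msFVE-RecoveryPassword'
    ```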


    Though, even with all of this, I was surprised that hospitals had issues. It seems like there are other problems in those deployments, and I saw some people on YC claim it comes down to organizations ticking checkboxes for regulatory requirements: that they likely ran this software because they were worried about failing an audit. I don’t know if there’s truth to that, but I am surprised there wasn’t more redundancy in critical infrastructure.

    edit: I want to stress again that I’m not responsible for any computers that had this issue and haven’t tried any of the above solutions myself. I’ve just noticed lots of people still commenting on Reddit who don’t realize they can fix this with one of these three approaches.