Swapping entire devices takes more time and labor than this fix. So does factory resetting, and so do all the other solutions I saw other IT people roll out. They panicked.
It's a crashing OS, not a dead machine, so we do have the ability to remote access before the OS even loads, though we don't always deploy that for smaller organizations. In those cases we walk someone through booting into recovery or send a tech to the office. We had this handled in the first hours: a few hundred endpoints were affected, and the rest weren't on CrowdStrike. We almost switched to it six months ago… dodged a bullet.
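For context, the widely reported workaround was to boot the affected host into Safe Mode or the recovery environment and delete the faulty channel file (C-00000291*.sys) from the CrowdStrike drivers directory. Below is a minimal Python sketch of that cleanup step; it assumes the publicly documented path and file pattern, not this particular team's tooling, and would be run from recovery or an offline servicing session with admin rights.

```python
import glob
import os

# Hypothetical cleanup based on the widely reported workaround: after booting
# into Safe Mode / recovery, remove the faulty channel file (C-00000291*.sys)
# from the CrowdStrike drivers directory, then reboot normally.
# Path and pattern are the publicly documented ones; adjust for your environment.
CHANNEL_FILE_PATTERN = r"C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

def remove_faulty_channel_files(pattern: str = CHANNEL_FILE_PATTERN) -> int:
    """Delete channel files matching the pattern; return how many were removed."""
    removed = 0
    for path in glob.glob(pattern):
        try:
            os.remove(path)
            print(f"removed {path}")
            removed += 1
        except OSError as err:
            print(f"could not remove {path}: {err}")
    return removed

if __name__ == "__main__":
    count = remove_faulty_channel_files()
    print(f"{count} channel file(s) removed; reboot and confirm the host comes up clean.")
```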
If you have no idea how long it may take and whether the issue will return - and particularly if upper management has no idea - swapping to alternate solutions may seem like a safer bet. Non-tech people tend to treat computers with superstition, so "this software has produced an issue once" can quickly become "I don't trust anything using this - what if it happens again? We can't risk another outage!"
The tech fix may be easy, but the manglement issue can be harder. I probably don't need to tell you about the type of obstinate manager who's scared of things they don't understand and needs a nice slideshow with simple words and pretty pictures explaining why this one-off issue is fixed now and probably won't happen again.
As for the question of scale: From a quick glance we currently have something on the order of 40k “active” Office installations, which mostly map to active devices. Our client management semi-recently finished rolling out a new, uniform client configuration standard across the organisation (“special” cases aside). If we’d had CrowdStrike, I’d conservatively estimate that to be at least 30k affected devices.
Thankfully, we don't, but I know more than a few bullets were being sweated until it was confirmed to only be CrowdStrike. We're in Central Europe, so the window between the first issues and the confirmation landed right in the prime "people starting work" time.