
Cyber Defence Countermeasures for a “Building Overrun” Error

Disasters can overtake a facility so rapidly that people don’t have time to close down operations. IT practitioners everywhere should look at what just happened at the U.S. Capitol and consider how to update their own BC/DR plans.

This week has been exhausting. It was already bad enough with COVID-19 infections and deaths setting new records day after day. Then we had that violent extremist assault on the U.S. Capitol building (you might have heard something about it on the news). Between these two disasters, it’s been like watching an airplane get dismantled panel-by-panel by frenzied harpies while you’re belted in your seat and unable to influence the inevitable fiery crash … while the pilot is in on it … only it’s your democracy getting dismantled, not an airliner.

Suffice it to say, I’ve found it extraordinarily difficult to write anything coherent this week. I churned out twelve pages of draft articles over the weekend, none of which seemed appropriate for my beat. All I’ve been able to focus on is the capitol assault and its ramifications. So, rather than go down another 10,000 words worth of scathing political analysis, I figured I should ride the wave and talk about a cybersecurity idea that’s apolitical but madness-adjacent.

Ahem. Here goes:

Shortly after the insurrectionists breached the U.S. Capitol building on 6th January, some of the bolder intruders posted photos and videos on social media of the spaces they’d reached as the police fell back. One photo in particular set the security community buzzing: it was taken from the desk of a congressional staffer in a senator’s office and showed an unlocked PC, open to Microsoft Outlook:

Screencap of a Twitter post from 6th January 2021

The insurrectionist photographer who snapped this photo clearly had direct access to the logged-in user’s email and whatever file shares were mapped to the PC, making the machine a potential goldmine for opportunistic theft, alteration, or destruction of government records (not to mention a perfect opportunity to introduce malware into the government network). The users had evacuated their offices so quickly that they didn’t have time to lock their computers. 

That made sense: what was the staff’s highest operational priority in the moment? Following cyber hygiene practices and getting (potentially) lynched by a blood-crazed mob, or evacuating straightaway and sacrificing systems to save staffers’ lives? I have no way of knowing what the leader on the ground was thinking, but I feel confident that whoever it was acted appropriately: when you’re being overrun, your top priority is to save your people. Everything else comes second. These were politicians and bureaucrats, after all. Not soldiers. I’d wager that nothing in their job description required them to die defending their position.

So, sure. It made sense that a rapid assault on the facility would have left at least a few PCs unlocked and vulnerable. For me, that prompted an obvious question: why didn’t the IT department that serves the Capitol employ a centralized lock-down capability? Why didn’t IT remotely shut down all of the PCs in the building and then disconnect their network nodes as soon as they heard about the complex being overrun? This idea has been bugging me for days.

We had the capability to do this in the military: our network management systems allowed us to remotely boot, reboot, and shut down groups of PCs by unit, by floor, and by facility. In an emergency, one sysadmin had the ability to remotely deny an attacker access to our systems and networks with just a few clicks. This actually happened on my watch: we discovered an intruder had social-engineered his way into one of our buildings and was attempting to exfiltrate data across our network with stolen user credentials. We immediately cut that entire building off from the base network before we sent troopers around to secure the compromised machines for forensic analysis (and nick the criminal if we were lucky).
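The article doesn’t describe the actual tooling, and a real shop would use whatever endpoint-management platform it already runs. But as a minimal sketch of what “shut down a facility’s worth of PCs” might mean in practice, here is one way to scope an inventory by building or floor and fan out shutdown commands. The inventory layout, host names, and use of SSH are all assumptions for illustration:

```python
# Hypothetical sketch: scope an asset inventory to a building or floor,
# then fan out shutdown commands to every host in scope.
# Inventory structure, host names, and SSH transport are all assumptions.
import subprocess

INVENTORY = [
    {"host": "pc-hq-1f-01", "building": "HQ", "floor": 1},
    {"host": "pc-hq-1f-02", "building": "HQ", "floor": 1},
    {"host": "pc-hq-2f-01", "building": "HQ", "floor": 2},
    {"host": "pc-annex-1f-01", "building": "Annex", "floor": 1},
]

def select_hosts(inventory, building=None, floor=None):
    """Filter the inventory down to the scope of the emergency order."""
    return [
        m["host"] for m in inventory
        if (building is None or m["building"] == building)
        and (floor is None or m["floor"] == floor)
    ]

def emergency_shutdown(hosts, dry_run=True):
    """Issue a shutdown per host; dry_run just returns the planned commands."""
    commands = [["ssh", h, "sudo", "shutdown", "-h", "now"] for h in hosts]
    if not dry_run:
        for cmd in commands:
            # Best effort: don't let one unreachable host block the rest.
            subprocess.run(cmd, timeout=10, check=False)
    return commands
```

The `dry_run` default matters: an action this drastic should be rehearsable without consequences, which is exactly the kind of drill discussed later in the article.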

In a way, I can understand why the Capitol’s IT staff might not have been ready for this scenario. Who the heck thought something like this was possible in real life? It was a far-fetched notion, more suited to a big-budget action movie than real disaster planning. And yet, there we were.

To be clear, I’m not suggesting that anyone in the Capitol’s IT department did anything wrong; they might have had the capability to lock down the building’s machines and simply never received the order to do so. Or maybe they never got permission, since people offsite were relying on contradictory cable news coverage to gauge the scope of the problem. Or maybe they had the capability and used it, but this one PC glitched and didn’t process the shutdown order. I have no way of knowing what really happened; I don’t have any friends in this IT department.

I’m bringing this up because I believe this is an essential Business Continuity / Disaster Recovery capability that every organisation needs to design, test, and maintain. I’m arguing that a company’s Cyber Operations function should have the ability to remotely shut down and isolate entire buildings’ worth of IT kit during a rapidly unfolding emergency. I think that IT boffins must have the ability to deny intruders access to company protected information and IT resources on a facility-wide scale, on-demand. 
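The “isolate” half of that capability might, for instance, translate into bulk-disabling the access-layer switch ports that serve a given facility. A sketch of generating that configuration follows; the switch names, port map, and Cisco-style CLI syntax are all assumptions, not a description of any real network:

```python
# Hypothetical sketch: generate per-switch config that administratively
# shuts every access port serving a facility, cutting it off the network.
# Switch names, port lists, and the IOS-style syntax are assumptions.
PORT_MAP = {
    "sw-hq-1f": ["GigabitEthernet1/0/1", "GigabitEthernet1/0/2"],
    "sw-hq-2f": ["GigabitEthernet1/0/1"],
}

def isolation_config(port_map):
    """Return {switch_name: config lines} that disable each listed port."""
    configs = {}
    for switch, ports in port_map.items():
        lines = []
        for port in ports:
            lines += [f"interface {port}", " shutdown"]
        configs[switch] = lines
    return configs
```

Generating the configuration rather than applying it directly keeps the drastic step behind a human review gate, and the same output doubles as an artefact for after-action forensics.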

You might well ask “Come on, Keil. It seems very unlikely that our accounting practice in Leeds will be overrun by a mob of uncoordinated violent extremists. Why should we go to all this trouble?” That’s a fair question. Let me counter with some slightly less fantastic scenarios where all of your employees might be forced to abandon their posts on short notice:

  • Do buildings ever catch fire in your community? 
  • Do you ever experience severe weather, like tornadoes, hurricanes, or blizzards? 
  • Are any of your facilities heated by natural gas? 
  • Are any of your facilities located near an industrial site?
  • Does your community experience “mass shooter” events? 
  • Does your community experience “mail bomb” attacks?

I’m sure there are some communities out there that can cheerfully answer “no” to all of my questions. I envy y’all. Are you hiring? DM me and I’ll send you my CV …

For the rest of us, these seemingly outlandish scenarios can be very real threats, any one of which might require staff to rapidly evacuate a facility, such that there might not be time to properly shut down all company PCs. 

Here in Texas, we reluctantly experience all of these disasters on a depressingly regular basis. These events – while individually rare – might well happen again at any time. Therefore, a good BC/DR plan compels IT and Security to prepare for them. Bear in mind, most communities already require this … to a degree. Companies may be forced by law to hold regular unannounced “fire drills” and practice evacuating people safely out of their building. That was one of the lessons we took away from 9/11 … companies that forced their employees to practice evacuating the Twin Towers were more likely to have survived the attacks than companies that didn’t.

It makes sense, then, for a solid BC/DR plan to also include practicing remotely shutting down all IT kit for an entire floor, facility, or site on short notice. I suspect that the actual act of toggling the network management system isn’t the part that needs to be practiced; it’ll be the process of requesting, authenticating, and authorizing the remote shutdown action. Everyone involved on every shift will need to know exactly how the orders flow and how to verify that the actions triggered were executed correctly. 
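That order flow could be as simple as a two-person rule enforced in software before the shutdown fires: the action only proceeds once two distinct people from a pre-approved roster have signed off. A sketch, with the roster and role names invented for illustration:

```python
# Hypothetical sketch of a two-person rule: the facility shutdown only
# proceeds once two *distinct* people from the authorised roster approve.
# The roster and role names are assumptions for illustration.
AUTHORISED_APPROVERS = {"duty-officer", "it-director", "ciso"}

def authorise_shutdown(approvals, required=2):
    """Return True only if enough distinct authorised approvers signed off."""
    valid = {a for a in approvals if a in AUTHORISED_APPROVERS}
    return len(valid) >= required
```

Note that the set comprehension silently drops duplicate and unauthorised names, so neither one person approving twice nor an outsider injecting an approval can satisfy the rule – which is precisely the authentication step that drills need to exercise.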

Sure, the events that would drive this sort of emergency action will likely be rare … but, as we’ve seen, the potential consequences of failing to act in time might be catastrophic. That’s what BC/DR is all about: considering what might go wrong, then figuring out how to mitigate the worst outcomes well ahead of time so everything runs smoothly should worse come to worst.