Participants in a recent cyber-warfare exercise told Reuters that the exercise highlighted problems in leadership, communications and readiness. The two-day exercise brought together 230 government agencies, private firms and other participants. Participants were split into two groups - attackers and defenders - and each developed tactics for attacking and defending critical infrastructure systems, such as those controlling banking, telecommunications and utilities.

As is frequently the case, John Leyden gets to the absolute heart of the problem in the last paragraph. Knowing the terrain on which battles are fought is the problem.
[snip]
Attackers always have the advantage over defenders in cybersecurity and, by extension, cyber-warfare. Problems such as maintaining extended supply lines or knowing the terrain on which battles are fought really translate into the sphere of cybersecurity.
Important networks like classified DoD networks are (as you'd expect) treated very differently than your home DSL, or even corporate networks. The most important protection concept that people rely on is Red-Black Separation: the untrusted (unclassified, or Red) network is kept entirely separated from the trusted (classified, or Black) network. From an architecture perspective, this is summed up in a tongue-in-cheek saying:
An air gap solves a multitude of security sins.

Your computer isn't patched? Doesn't matter, as long as the old saying from Maine applies: can't get theah from heah.
So security architects put all the important stuff on one network that is hermetically sealed from Al Gore's Intarwebz. Cool, right? To break in, you need a real-life spy, who can break into the building at night to install his equipment. The scene in Ocean's Eleven where the computer nerd breaks into the casino computer room to install his monitoring equipment is very well done, and is precisely the threat scenario here. The proper safeguard? Armed Marine guards.
So how does the Fed.Gov's classified network get compromised, to the point that you get headlines and a multi-tens-of-billions-of-dollar program to fix it?
The biggest problem for the architect is that you're not really the architect. You don't really know what things look like, and you can't.

Bob Metcalfe is one of the pioneers of computer networking - in fact, one of the inventors of Ethernet. He is to computer networks what Paul Mauser is to rifles: there's a sharp line that divides pre-Ethernet and post-Ethernet history. These days, Metcalfe is best known for a description of why computer networks are so danged useful:
The value of a computer network increases with the square of the number of computers connected to it.

Now if Al Gore's Intarwebz just consisted of your computer, and your mom's, then that might be useful to you. Everyone else would (understandably) be less interested. The reason that Al Gore actually deserves a fair amount of credit is that he really pushed early funding of NSFnet (the National Science Foundation's network back in the Pleistocene Age), which hooked up most universities to the Internet.
A network with you and your mom? Not so useful. A network with Harvard, Stanford, MIT, and the Library of Congress? Yeah, there'll be something useful there.
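Metcalfe's observation is really just combinatorics: every machine you add can talk to every machine already there. A minimal sketch (illustrative only - the pair count n(n-1)/2 is standing in here as a crude proxy for "value"):

```python
# Metcalfe's law, caricatured: a network's usefulness grows with the
# square of the number of connected computers, because every pair of
# machines is a potential conversation.

def potential_connections(n: int) -> int:
    """Number of distinct pairs among n connected computers: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (2, 10, 100, 1000):
    print(f"{n:>5} computers -> {potential_connections(n):>7} possible connections")
```

Note the quadratic growth: ten times the computers means roughly a hundred times the connections - which is exactly why "just air-gap it" gets harder as the network gets more useful.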
And now back to security: You mean that you want to air-gap that?
Ignore the logistical problems: you need to extend the Red network all the way to the FOBs in Iraq and Afghanistan, or the troops don't get email from home. You can deal with that problem by throwing money at it.
The problem is that what you want (security) and what your users want (information on Al Gore's Intarwebz) are inherently in conflict. You can't win unless they lose, and vice versa.
And remember, you're not really the architect. These networks weren't so much designed as grown. Even the Internet itself grew by connecting networks together - a network of networks. That's where the name comes from: IP, the Internet Protocol.
So back to the classified networks: did someone connect the Red Network to the Black one? We don't know, and folks who might know won't say. My experience is that nobody knows what their networks look like, and nobody has mapped all the connections (more precisely, they have maps, but the maps do not reflect reality). But it doesn't matter, because users will bypass the air gap anyway. With this:
It's a USB thumb drive (translation: an 8 Gigabyte removable file storage device). It fits in your pocket (or, if you follow the link, in your magazine).
Remember, your users want information. It's on the Red network. They can copy it to the thumb drive, and then walk over to a computer on the Black network and upload the information.
Of course, information flows both ways - classified data can go to USB. Very few of your users will do this, because very few are spies or traitors, so data flowing from Black to Red is not the initial problem.
Malicious code going from Red to Black is the initial problem.
Now why on earth would one of your users install malicious code on a Black network computer? Same reason they install spyware on their home computer: they don't know it's malicious. They just want to watch this:
It's the Dancing Baby from the mid-1990s. This was the first example of a mass Internet video meme - it was wildly popular, and spread virally, via email from user to user as people passed the link on to each other. Remember, as the architect, you need to keep the Black network from getting to the dancing baby.
You lose.
So if you are the chief Bad Guy - Dr. Evil, head of an unfriendly government's Intelligence Service - how do you hack the Fed.Gov classified network? Give people interesting poisoned bait - an interesting or funny video that contains embedded malware that runs when the video is watched. They'll want it, because it's interesting. They'll download it from the Red network (Al Gore's Intarwebz) and take it onto the Black network, where it will spread.
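The viral mechanics are the attacker's friend: each user who takes the bait passes it along to a few more. A toy simulation makes the point (purely illustrative - the user count, contact rate, and random-mixing model are my assumptions for the sketch, not anything we know about real incidents):

```python
import random

def spread(n_users: int = 1000, contacts_per_step: int = 5,
           steps: int = 10, seed: int = 42) -> list[int]:
    """Toy model: one poisoned-bait file enters a network of n_users.
    Each step, every infected user passes the link to a few random
    colleagues (who may already have it). Returns the infected count
    after each step."""
    rng = random.Random(seed)  # fixed seed for a reproducible run
    infected = {0}             # patient zero
    history = [len(infected)]
    for _ in range(steps):
        new = set()
        for _user in infected:
            for _ in range(contacts_per_step):
                new.add(rng.randrange(n_users))
        infected |= new
        history.append(len(infected))
    return history

print(spread())
```

With even modest sharing, the infected count explodes in the first few steps and then saturates - "wildly popular, and spread virally" is the normal case, not the exception, which is exactly what Dr. Evil is counting on.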
And now your spy comes into the picture. All he has to do is pick up the classified data that's been harvested by the malware botnet army that has infested the Black network. Of course there's risk, because he does indeed have to get past the armed Marine guards, but there is a long history of this sort of thing happening.
We don't know any details of the recent breaches, but we do know this: DoD has banned USB. It's not the only time we've seen malware infections via USB, either unintentionally or on purpose.
And this is why Leyden sums up the problem so well:
Problems such as maintaining extended supply lines or knowing the terrain on which battles are fought really translate into the sphere of cybersecurity.

You don't know what your network looks like, it's evolving too quickly for you to ever know this, and even if you did know, you don't control the logistical flow of information.
Yet another couple of basic principles forgotten:
1. There is no such thing as a secure computer or network, unless it's unplugged.
2. Someone WILL plug it in.
3. You can't stop the users.
4. Since you can't stop them, you can't trust them. That doesn't mean try and lock them down everywhere; it means acknowledging that you CAN'T stop them and architecting your systems to deal with that.
5. They WILL get in, no matter what you do; so you MUST have defense in depth and you MUST have ways of dealing with compromise.
I was going to say "you wouldn't believe how many times I'd fought the battle over "air gaps" and real security"... except that you most likely WOULD.