Saturday, June 28, 2008

Patching is a pain in the bore

Anyone reading this is familiar with computer bugs, because you're using a computer (hello!). Bug fixes are called patches. Keeping your computer patched is basic computing hygiene, just like cleaning your gun after a trip to the range is basic shooting hygiene.

The similarities end there. Excessive use of bore patches on your rifle won't do any harm - in fact, if you're using crummy old commie corrosive ammo, you probably can't be excessive in your use of Hoppe's, bore snakes, brushes, patches, etc. Patching computers is a different beast.

The problem with computers is that software is not very well understood. Computer programs (and especially Operating Systems) are so huge and complicated that literally nobody really knows how they work.

True story: In my younger days (very much younger, in fact, in the 1980s) I worked for Gould Computer Systems. We made minicomputers that had a "Real Time" operating system: highly specialized so that programs could have exceptionally quick response even running on the quite modest hardware available at the time. These computers were used extensively for things like airplane flight simulators, because when you put the stick forward, the nose dropped then, not 150 milliseconds later.

The amazing thing, looking back today, is that the absolute maximum size the Operating System could be was 64 kilobytes (that's kilo, not mega). Tiny, in today's terms.

The upshot was that it was humanly possible for a single person to know just about everything about the OS.

Fast forward to today. Windows XP Service Pack 3 is hundreds of megabytes in size (I haven't bothered to look up the exact size - compared to the Gould OS, it's measured in "humungatrons"). Remember, this is not the XP operating system itself; SP3 is a set of security patches for XP. XP itself is much larger.

So who cares that nobody really understands how these programs work? The problem is that by fixing a bug (applying a patch), you change the program or OS in potentially unpredictable ways. By fixing one problem, you may introduce a new (and worse) problem.
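
To make that concrete, here's a toy sketch (every name and value in it is invented for illustration - this isn't anybody's real code) of how a perfectly sensible bug fix can break a caller that unknowingly depended on the old, buggy behavior:

    # Toy illustration (all names invented): a reasonable bug fix that
    # breaks a caller which relied on the old, buggy behavior.

    DEFAULTS = {"timeout": 30, "retries": 3}

    def load_config_old(overrides):
        # Original: silently ignores keys it doesn't recognize.
        # That's a bug - typos in a config file go unnoticed.
        config = dict(DEFAULTS)
        for key, value in overrides.items():
            if key in config:
                config[key] = value
        return config

    def load_config_patched(overrides):
        # Patched: rejects unknown keys. Clearly an improvement - except that
        # any caller with a typo'd key, which "worked fine" yesterday, now dies.
        config = dict(DEFAULTS)
        for key, value in overrides.items():
            if key not in config:
                raise ValueError("unknown config key: %r" % key)
            config[key] = value
        return config

    # A caller that has shipped for years with a harmless-looking typo:
    overrides = {"timeout": 60, "retires": 5}      # note: "retires", not "retries"
    print(load_config_old(overrides))              # works; the typo is silently ignored
    print(load_config_patched(overrides))          # raises ValueError - the patch "broke" it

Nothing about the patch is wrong. It's just that nobody knows everything the rest of the system was quietly depending on.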

A patch to fix broken functionality provides a tangible value to the person running the program - it's fixing something that doesn't work, so these are typically seen as a Good Thing. It may break something else, but things are broken already, so it's probably worth a try. Security patches are a very different thing: nothing is obviously broken to the people running the program, so applying the patch risks breaking something that "works fine". Lowering the chance of the computer getting owned is important to a set of people (like me!), but sometimes not to the ones who actually run the business.

As a result, businesses will typically test patches in general, and security patches in particular, very carefully. This is expensive, and time-consuming, and quite frankly not very much fun. In other words, patching is a pain (and boring).

Security administrators are also put in a situation where they're damned if they do and damned if they don't. If they apply a patch and it breaks an important program, everyone blames them. If they don't apply the patch, and the computer gets hacked (and, say, 20 million credit cards get stolen), everyone blames them.

There's a great article about this dilemma that really covers the "should we/shouldn't we" agony: Patch and Pray. Anyone remotely interested in this should go read it. Now. I'll wait.

Now this wouldn't be a problem if any of several criteria were met:
  1. Suppose there weren't very many security patches. This was the situation ten years ago - in 1998 there were around 300 security patches for all operating systems and programs combined. The number of patches any one system administrator had to apply was sort of manageable. This hasn't been true for a long time: last year there were over 7,000. Game over. (Note that I don't agree with CERT's number for 1998; the Bugtraq mailing list had a somewhat different count that I like better for some pretty obscure reasons.)

  2. Suppose everything was not on Al Gore's Internet thingie. Being vulnerable to attack would matter a whole lot less if not many attackers could get to your computer. Back in the 1980s when I worked at Gould, this was pretty much the situation. When the Morris Worm - the first really important Internet security incident, where everyone thought the Internet was coming to an end - hit in 1988, there were only a few tens of thousands of computers on the whole Internet. Heck, DNS was still new, and some computers used lists of addresses from (manually maintained) HOSTS files. Now, pretty much everything is on teh Intarbebs, including a bunch of stuff that shouldn't be (like electric power distribution controllers). Game over.

  3. Suppose Bad Guys didn't write exploit programs to attack vulnerable computers. Unfortunately, we've seen a progression of motivations over the last fifteen years:
     1995 - "Napoleon Dynamite" hacking: "Girls want boyfriends with skills ... Bow hunting skills ... Nunchuck skills ... Computer hacking skills."

     2001 - Bragging rights hacking: I was actually at the Infosec computer security conference in 2003 when Fluffy Bunny was marched out in handcuffs by the police. I always thought he was one of the funnier of the web site defacers.

     2006 - Hacking for Dollars: malware, spam, phishing, electronic credit card theft, and the like are now a billion-dollar industry, attracting serious talent and funding (Mafia, etc.). The Bad Guys are better funded than we are. Game over.

One last example before I get to my point. In 2000, Microsoft's Windows 2000 arrived, and it was a sea change in how companies had to manage their security vulnerabilities. Up until then, the Windows 95/98 OS just didn't have many network-facing services, so there wasn't much of a target for an attacker. Windows 2000 introduced what was essentially a server-class OS to desktop machines. As a result, I started telling my customers that they could no longer test just their servers for vulnerabilities; rather, they needed to test everything. One of my Really Smart Customers (RSC) took this to heart. I got a very interesting phone call one morning:
RSC: Hi, Ted.

Me: Hi, RSC.

RSC: Remember how you told me I need to scan everything, not just my servers? Well, we just did.

Me: Well done, you! How'd it go?

RSC: Well, we have a quarter million vulnerabilities. We kind of wish we hadn't done it, because now we think we really should do something about it, and there's NFW.

If he did it now, he'd probably have 5 million vulnerabilities. Game over.
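
For the curious, "scan everything" starts with something very unglamorous: walking every address on every subnet and seeing what answers, before you even get to the vulnerability checks. Here's a bare-bones sketch of that first step - the subnet and ports below are made up, and a real scanner (Nessus, nmap, take your pick) does vastly more than this:

    # Minimal sketch of "scan everything, not just the servers": walk a
    # (made-up) desktop subnet and note which hosts answer on a few common
    # Windows-facing ports. Real vulnerability scanners do far more; this
    # just shows why the numbers balloon once desktops are in scope.
    import socket

    SUBNET = "192.168.1"             # hypothetical office subnet
    PORTS = [135, 139, 445, 3389]    # RPC, NetBIOS, SMB, Remote Desktop

    def is_open(host, port, timeout=0.5):
        # Return True if a TCP connect to host:port succeeds.
        try:
            sock = socket.create_connection((host, port), timeout)
            sock.close()
            return True
        except socket.error:
            return False

    findings = []
    for last_octet in range(1, 255):
        host = "%s.%d" % (SUBNET, last_octet)
        for port in PORTS:
            if is_open(host, port):
                findings.append((host, port))

    print("%d exposed services found" % len(findings))

Multiply "a few open services per desktop" by a few thousand desktops, then by however many known issues each service has, and a quarter million findings stops sounding crazy.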

So, everything is vulnerable. Attackers can pretty much get anywhere they want, if they're patient and determined enough - the smart ones can, at least. The rest of us face a never-ending Hobson's choice of patch and pray.

Yikes! This is turning into the Post That Ate Sheboygan. I'll continue in part 2: "So what do we do?"
