Wednesday, November 9, 2011

Responsible disclosure

I've been working in the Internet Security industry for a long, long time.  Long enough not just to have published in the technical press, but to have those publications go out of print (all you PlayStation Network kids get off my lawn!).

I say this not to boast, but to establish my bona fides.  I've personally been involved in the process of contacting a computer vendor to tell them that we've found a vulnerability in their product.  Some vendors have been great to work with - Microsoft was very good, even in the 1990s: they'd just ask that we hold an announcement until they had a patch written and tested.  After all, it doesn't do anyone any good to announce a vulnerability if there's no fix.

Hey, all y'all are really screwed!  Aren't we the cleverest kids on the block?

We worked very hard at Responsible Disclosure, which is more or less what I described above.  Not everyone did.  Sometimes they were Black Hats, keeping their exploits to themselves ("zero day" exploits, so called because the vendor hasn't created a patch yet - the clock on your patch cycle hasn't even started running).

Sometimes, announcements went out because a vendor simply refused to make a patch, or even to acknowledge that they'd been told.  The security mailing list archives are filled with threads along the lines of "I'm announcing this because Foo, Inc. won't reply to my notification about a vulnerability in FooOS."

Protip to vendors: it's considered polite to have a "security@foo.com" address, with someone who actually reads the emails.

And so to the big security news, about Apple's App Store basically having no security at all.  Reader Joseph emails to point us to this:
A man who created a bogus stock price tracker app for the iPhone that was in fact malware has been thrown out of Apple’s developer program. That would seem uncontroversial until you discover the app was designed to highlight a security flaw rather than cause damage or steal data.

Charlie Miller was told his right to create and upload apps had been terminated “effective immediately.”

If Miller’s name seems familiar, that may be because he’s a perennial winner at the PWN2OWN competition, held at the CanSecWest security event in Vancouver each year. Contestants can ask judges to visit a URL using various combinations of hardware, operating system and browser, with the latest publicly available security updates applied. Last year was a particularly bad day for Apple with a MacBook Pro running Safari the first computer to fall (Miller being the successful attacker) and the iPhone the first smartphone hacked.
Now this is an interesting situation.  Miller did not notify Apple until after he had created his app.  Some people in the industry think that's a no-no.  Certainly, this is why Apple expelled him from their Developer program.

Me, I'm not sure that this was the right thing to do.  I personally have had vendors tell me that they don't want to fix a vulnerability I've reported to them.  There are lots of excuses they give; if they're honest, they'll say "it's too hard" - this happened with Sun Microsystems, where the bug was buried deep in the guts of RPC.  The code was 15 years old then, and nobody really knew how it worked (yes, this happens more than you'd think).  Everyone was afraid to touch the code, because the breakage they might cause in "fixing" the problem could be horrific.  OK, fair enough.

But usually the official excuse you get back is "we were unable to recreate the bug, and so view the problem as theoretical."  The proper response to this is OK, here's some exploit code, biatches.  That usually gets the appropriate level of attention.
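Just to be concrete about what "here's some exploit code" usually means: often it's nothing fancier than a throwaway script that feeds the "theoretical" service an input it can't handle and lets the crash speak for itself.  The sketch below is purely illustrative - the host, port, and oversized-field detail are all made up, not taken from any real bug report.

```python
import socket

# Hypothetical proof-of-concept sketch: connect to the supposedly theoretical
# service and hand it input far larger than it expects.  If the daemon falls
# over, the vendor can no longer claim they were "unable to recreate the bug."
HOST, PORT = "testbox.example.com", 111   # made-up lab target, not a real system
payload = b"A" * 10000                    # grossly oversized field

with socket.create_connection((HOST, PORT), timeout=5) as s:
    s.sendall(payload)
    print("Sent", len(payload), "bytes - now go see whether the service is still standing.")
```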

But how do you do this in the App Store "Walled Garden" environment?  The exploit has to be an app.  And the app has to come through the App Store - if you jailbreak your phone, there's no guarantee that the code works the same way.  Maybe it does, maybe it doesn't.

If you're talking about a serious vulnerability, you need certainty, because the cost of the fix will be measured in tens of thousands of dollars to get the patch created, and much more for everyone to get the upgrade installed.  You really, really don't want to go off half-cocked here.  Word will get around the security community.

So what was Miller supposed to do?  Yeah, he could have notified Apple, but they are infamous for their lousy attitude to security.  And that notification would put them on the lookout for interesting App Store submissions from Miller.

Infinite Loop.  n.  See Loop, Infinite.  It's in Cupertino, actually, and Apple's HQ is right there.

And so, Jeremiah Wright-like, Apple's security chickens are coming home to roost.  They've demonstrated repeated contempt for the security industry, and have repeatedly flouted expected practice.  They are presumptively in perpetual wagon-circling mode, so they can't play the victim card when mean old Charlie Miller doesn't follow the rules and basically says here's your exploit code, biatches.

And so the way that they expelled him from their App Developer program looks petty and spiteful.  It also looks pathetic.  Consider: Miller is perhaps the world's leading expert in Apple exploits.  He can feed information to any of a thousand other App Developers.

Protip to the Apple security team: Google "LBJ I'd rather have him inside the tent peeing out than outside the tent peeing in."  I even linkified it for you.

You're welcome.

And a note to Apple users:  most of you think that you have good security.  You really don't.  That will change when Apple gets as good an attitude towards security as Microsoft has.  For now, assume that you have nothing, plus or minus twenty percent.

2 comments:

Stephen said...

All I can say, as an analog guy in a digital world, is - wow....

Stuff like this makes my head hurt. Now, put a 1911 in my hand and I'm at home...

Great piece you've written here.

Anonymous said...

Good points, you make.

But my experience with microshaft has been quite different from yours. I've been the victim, multiple times, of their lousy security - and their blasé attitude about it. But that was long ago, and far away. I haven't had to use micro-crud now for about 10 years. Linux and my MacBook are my current darlings (although I am NO fan-boy).

I've got the bona fides as well. I've been in the business since '68, hardware and software (starting with Wang nixie-tube calculators, then IBM in the 360 days... then Data General, DEC, Stratus, and Motorola here in Austin before they tailed out. Then on to a couple of start-ups. Now I do the odd consulting job.) ;->

I'm no hacker, white hat or black. But when I run into a genuine security breach just getting through my day, I feel compelled to report the problem to the offending company. PARTICULARLY WHEN IT CONCERNS MY MONEY. (I am talking about a very big financial institution that rhymes with lace.)

I've written carefully worded analyses of the problems I've found and gotten back, twice now, form letters that go something like this:

"Sorry you are having trouble with our website. You could try restarting your browser. If you need additional help, please look at our faq, or call our customer service center".

Yeah, like I want to spend my valuable time on the phone waiting for some numb-skull to explain the internet to me.

So I just don't bother anymore... screw them. My money's safe.

Still, it is surprising how few site developers bothered to filter input (until recently). Just plop a big ol' JPEG into an input field and watch the site die. Or an SQL query that gets processed... Yikes!!
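For anyone wondering what the fix looks like, it's mostly just binding user input as a parameter instead of gluing it into the query string. A minimal sketch, assuming a Python sqlite3 backend - the accounts table and the lookup_account helper are invented for illustration:

```python
import sqlite3

def lookup_account(conn: sqlite3.Connection, account_name: str):
    # The user-supplied name is passed as a bound parameter, so input like
    # "x'; DROP TABLE accounts; --" is treated as data, never as SQL.
    cur = conn.execute(
        "SELECT id, balance FROM accounts WHERE name = ?",
        (account_name,),
    )
    return cur.fetchone()

# The vulnerable pattern described above, for contrast - never do this:
#   conn.execute("SELECT id, balance FROM accounts WHERE name = '" + account_name + "'")
```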

I'm really just a lowly performance analyst. So I've got immense respect for anyone who wades into that snake-pit called Security. Rarefied air, that. I'm just that weird lady in the back who 'knows math'.

Speaking of security, have you read Daemon by Daniel Suarez?

Great Fun!