Monday, September 21, 2015

An unhackable computer?

Bluesun emails to point out this development:
The software that kept the helicopter’s computer secure was at the heart of its operating system, and it could be just what the world needs to make everything from pacemakers to insulin pumps and power stations to cars immune to hacking.
“My hope is that in 10 years’ time, anything that is security critical is running on our system or some other one built on the principles we’ve established,” says Gernot Heiser from the newly formed Australian national research agency, Data61. One of its predecessors developed the crucial component of the “unhackable” operating system – its kernel.

...
Known as seL4, the kernel has a few highly secure properties: it can only do what it is designed to do; its code can’t be changed without permission; and its memory and data transfers can’t be read without permission. An earlier version of it, called OKL4, is now in millions of smartphones.
Heiser says that two features underpin seL4’s security, one of which is a new way of isolating data inside the kernel. But the key development was making the code capable of being checked mathematically. Other kernels might have these properties too, but it is impossible to know for sure without mathematical proof, says Heiser.
As devout Catholics say about miracles, that would be something.  Color me skeptical.
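For readers curious what "can only do what it is designed to do" and "can't be read without permission" actually mean mechanically: seL4 is a capability-based kernel, where a task can act on a resource only by presenting an unforgeable, kernel-issued token naming that resource and the rights granted on it. Here's a toy sketch of that model in Python. All the names (`Capability`, `Kernel`, `grant`, `access`) are invented for illustration and bear no resemblance to the real seL4 C API.

```python
# Toy sketch of capability-based access control, the model seL4 uses.
# A task can act on a resource only via a kernel-issued capability;
# fabricating a lookalike capability object gets you nothing.
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    resource: str          # the object this capability refers to
    rights: frozenset      # e.g. {"read"} or {"read", "write"}

class Kernel:
    def __init__(self):
        # The kernel, not the task, decides which capabilities exist.
        self._granted = set()

    def grant(self, cap: Capability) -> Capability:
        self._granted.add(cap)
        return cap

    def access(self, cap: Capability, resource: str, op: str) -> bool:
        # Access succeeds only with a kernel-issued capability that
        # names this resource and includes this right.
        return cap in self._granted and cap.resource == resource and op in cap.rights

kernel = Kernel()
read_log = kernel.grant(Capability("log", frozenset({"read"})))

print(kernel.access(read_log, "log", "read"))    # True: right was granted
print(kernel.access(read_log, "log", "write"))   # False: right not granted
forged = Capability("log", frozenset({"write"}))
print(kernel.access(forged, "log", "write"))     # False: never issued by kernel
```

The formal-verification claim is that properties like "no access without a capability" are proved about the real kernel's C code, not merely asserted, which is what supposedly separates seL4 from kernels that merely intend the same design.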

There has been a desire for provably secure operating systems for decades.  The first formalized efforts were captured in what was called the Orange Book, the Defense Department's Trusted Computer System Evaluation Criteria (TCSEC), which defined what was required to do this.

The Orange Book pre-dates my 30+ year career in computer security.  I've personally been involved in Orange Book evaluations of commercial products.  It was not a happy experience.  Lots of time spent, even more money spent, no return on the investment.

The problem is that what the security people want (security, duh) is not what the people budgeting for the computer want (functionality).  Sometimes the security people can impose standards that products have to meet (welcome to the Defense Department alphabet soup of FISMA, FEDRAMP, and JITC), but in the long run the security team will always lose the battle to the end users.

And this makes sense.  Just imagine if in 1950 the government established the Department of Computer Design, and all computers had to come from there.  Do you think you'd have an iPhone today?

And so my skepticism goes far beyond whether this truly is mathematically provably secure.  I don't have the skills to really address that, although there's an old saying in cryptography that everyone knows enough to design a cipher that they themselves can't crack (which doesn't mean that someone else can't).

My skepticism goes deeper: if this kernel is as secure as claimed, it can only be so because functionality is severely restricted.  If you restrict functionality, you make your end users very unhappy.  If some of those users have stars on their uniforms, you are unlikely in the extreme to be able to force them to use your system.
"My hope is that in 10 years’ time, anything that is security critical is running on our system or some other one built on the principles we’ve established"
Sorry, you don't get 10 years of reduced functionality - likely at significantly higher cost (those Computer Science PhDs don't pay for themselves).  Your users won't stand for it, or wait for you.

And so while an unhackable computer would be very nice indeed, this almost certainly isn't it.  Oh well, mustn't grumble - more work for Internet Security folks ...


9 comments:

R.K. Brumbelow said...

Gah, rainbow books, other than the authors, no one ever loved them. The only way to make something unpack able is to make it no functioning, preferably with a very large emp burst or microwaves. aside from that, you can make things effectively unhackable, but in doing so you make them disposable, as upgrading, fixing etc simply requires replacement.

many things, indeed the vast majority of things, even the vast majority of things with microprocessors, have precisely 0 need to be networked, and less of a need to ever be updated. Call me a grog are if you will, and I understand why people want a networked fridge that shops, orders, accepts, tracks and plans family meals, but come on folks, if your life is that full, eat out or get one of the services that delivers pre cooked meals to your icebox each day. you people know that is a real thing now right? a new chef prepped meal every day delivered to you for less than the cost of dining out? look it up and keep your fridge off of Al Gore's cat playground.

Jeffrey Smith said...

I am a complete doofus on security, but even I see the way to hack these systems.
its code can’t be changed without permission; and its memory and data transfers can’t be read without permission.
Forge the permissions.

To make something unhackable, as Mr. Brumbelow suggests, you have to keep it unnetworked and isolated. IOW, without all the functions people think they want.

Jake (formerly Riposte3) said...

There's only one way to make a computer completely unhackable, and it involves a liberal application of high explosives and thermite.

Somehow, I don't think that's what they mean.

Ken said...

Concur. The Internet of Things is a mug's game. Alas, we are scarcely short of mugs.

R.K. Brumbelow said...

Wow I hate spell check auto correct.

Unpack able = unhackable
no functioning = non functioning
grog are = grognard

Anonymous said...

Well, theoretically - and I emphasize theoretically - it should be possible to detect code alterations by checksumming the code using an external algorithm, assuming:
1) the code was built originally to be checksummed in a reliable manner;
2) the external algorithm used is always secure.

Since #1 is possible in many cases, but certainly not all, and #2 is fantasy (there's no "sandbox guarantee" of algorithm security beyond "it's mine, mine alone, always under my control, and I prove its integrity continually"), mass implementation code is always subject to corruption.
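The scheme the commenter describes can be sketched in a few lines: fingerprint the code image with a keyed hash whose key lives outside the system being checked, then compare later readings against the baseline. This is a minimal illustration only; the key name and the code-image bytes are invented, and keeping the key truly external is exactly the "#2" assumption the commenter calls fantasy.

```python
# Hedged sketch of alteration detection by external checksumming:
# a keyed digest (HMAC-SHA-256) of the code image, with the key held
# off the target system, so an attacker who modifies the image cannot
# recompute a matching value.
import hashlib
import hmac

EXTERNAL_KEY = b"kept-off-the-target-system"   # assumption: key stays external

def fingerprint(code_image: bytes) -> str:
    # Keyed digest of the raw image bytes.
    return hmac.new(EXTERNAL_KEY, code_image, hashlib.sha256).hexdigest()

def verify(code_image: bytes, expected: str) -> bool:
    # Constant-time comparison avoids leaking digest prefixes.
    return hmac.compare_digest(fingerprint(code_image), expected)

original = b"\x7fELF...kernel code..."
baseline = fingerprint(original)

print(verify(original, baseline))             # True: image unmodified
print(verify(original + b"\x90", baseline))   # False: one byte appended
```

Note that this only detects alteration after the fact; it does nothing against an attacker who compromises the checker itself, which is the commenter's point.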

Eagle said...

Re. TCSEC:

Um... once upon a time, I worked for DEC when they were trying to get VMS certified to Orange Book B1. Windows had hit C2, and DEC wanted to show it was *more* secure, so we spent a lot of time modifying programs and tools to acknowledge and treat labeled objects properly. Labeling was done *down to the character* in text files, and each program on the system was labeled as well.

Hence, a user with a "confidential" clearance would not be able to run programs or access DECnet nodes at a higher clearance level, and a terminal emulator or DECwrite (or any other program for display of text) for a "confidential" user would not be able to display data labeled "secret".

We hit B1 for the base VMS operating system... but much of the work our group was doing - we wanted to get to B3 or maybe A1 - was abandoned because our modified systems had become so tedious to use that the intended customer, DOD, abandoned the project.

Eric Wilner said...

Sounds like a load of rebottled Crisco to me.
"The blue smoke is insecurity leaving your computer."

Jake (formerly Riposte3) said...

Well, if there's blue smoke leaving your computer, it probably has become very secure. :P