Tuesday, December 10, 2013

Anarcho-Capitalism 2.0

One of the big tech buzz phrases lately is "the Internet of Things".  Moore's Law suggests that computing power and memory roughly double every 18 months.  It's gotten to the point where your cell phone doesn't just have more computing power than the systems that plotted the Moon landing, it has millions of times more power than that.  The trajectory is smaller, faster, cheaper, less power draw.
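
A quick back-of-the-envelope check, taking that 18-month doubling as the only input (the dates are round numbers, not measurements - this is just the rule of thumb compounded from the Apollo era to today):

    # How far does "doubling every 18 months" carry you from 1969 to 2013?
    years = 2013 - 1969
    doublings = years / 1.5                 # one doubling per 18 months
    print(round(doublings, 1), 2 ** doublings)
    # ~29.3 doublings, a factor of roughly 6.8e8 - hundreds of millions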

I myself have been playing this tech game for so long that I remember the ROACH chip - the Router On A CHip.  That's so pedestrian now as to be all quaint and retro.  A smartphone is only possible because all the myriad components (and their connecting communications busses) have long since collapsed into a single flake of silicon.

And so to the Internet Of Things.  What happens when computers appear in everything?  Camp Borepatch has six (!) heating/cooling zones, with smart thermostats that talk to each other over internal wiring.  The last owner put all that in, and the technology is likely nearly ten years old.  Your car has dozens of computers.  The IPv6 address space has 2^128 unique addresses - around 340 undecillion - so all of these can be Internet enabled, allowing them to talk to each other and work cooperatively to solve problems that nobody has thought of before, because positing a solution would have seemed absurd on its face.

Silicon Valley in general (and Cisco in particular) are all over this as the Next Big Thing.

The problem is that current Operating Systems stink.  More specifically, they were designed for the Apollo era - even Linux traces back to Unix, which got its start in the late 1960s.  The network is a marvel of redundancy and resiliency (as indeed DARPA had designed it to be, again, back in the '60s), but networks go down and we're quite a long way from applications that gracefully handle network outages.  The trouble is that error handling lives at the application level, which means that you have to write it for each of the apps on the system.  Every. Single. App.  It's like having to handle network addressing in the app, rather than in the OS.  Actually, it's worse.

The current computing paradigm is broken when you think of it scaling to billions of processors distributed randomly around the world.  Too bad for the Internet Of Things.

Or is it?  Clark at Popehat has a very interesting (and pretty technical) overview of Urbit, which shows the promise of shattering the data center into a billion shiny computing shards:
Nock programs are tree structures.

This is not unprecedented – Lisp ("The greatest single programming language ever designed.") does too.

And here – suddenly – the conceptual Legos start clicking together.

Because a Nock program is functional, it operates without caring what machine it's on, what time it is, what the phase of the moon is.

Every Nock program is a tree, or a pyramid. Every subsection of the tree is also a tree. Meaning that each subsection of a Nock program is a smaller Nock program that can operate on any machine in the world, at any time, without caring what the phase of the moon is. Meaning that a Nock program can be sliced up with a high carbon steel blade, tossed to the winds, and the partial results reassembled when they arrive back wafted on the wings of unreliable data transport.

Nock programs – and parts of programs – operate without side effects. You can calculate something a thousand times without changing the state of the world. Meaning that if you're unsure if you've got good network connectivity, you can delegate this chunk of your program not just to one other machine, but to a thousand other machines and wait for any one of them to succeed.
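
To make Clark's point a bit more concrete, here's a toy sketch of the idea in Python rather than Nock (the expression tree, the evaluate function, and the worker pool are all my own illustration, not anything from Urbit): a program as a pure expression tree, where every subtree is itself a complete program, so the subtrees can be shipped off to separate processes and the partial results reassembled.

    # A program as a pure expression tree: (op, left, right) tuples, numbers at the leaves.
    from concurrent.futures import ProcessPoolExecutor

    expr = ("+", ("*", 3, 4), ("*", 5, ("+", 1, 2)))

    def evaluate(node):
        """Pure evaluation - no state, no side effects, same answer on any machine."""
        if not isinstance(node, tuple):
            return node
        op, left, right = node
        l, r = evaluate(left), evaluate(right)
        return l + r if op == "+" else l * r

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:
            # Slice the tree apart, evaluate each subtree in its own process,
            # then reassemble the partial results.
            parts = list(pool.map(evaluate, expr[1:]))
        print(parts, "->", sum(parts))   # the root op here is "+": [12, 15] -> 27
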
Moore's Law says that all of these billions of network node devices will be smarter in 18 months - twice as smart.  As people replace (say) smart light bulbs every 5 years, that's roughly 3 generations of performance improvement per replacement cycle.  There will be 8 times the computing power available in the Internet Of Things - and Urbit/Nock lets you harness that.
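
The same rule of thumb, spelled out for the replacement cycle (just the arithmetic from the paragraph above, nothing more):

    # Five years is about three 18-month doublings, so each generation of
    # "things" arrives with roughly 2**3 = 8x the compute of the one it replaces.
    print(2 ** round(5 / 1.5))   # 8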

It actually lets anyone harness that:
Nock supports and assumes full encryption of data channels, so not only can you spread computation across the three machines in your home office, you can spread it across three thousand machines across the world.

The list goes on and on.

Envisioning and defining Nock took a stroke of genius. Implementing it, and Hoon, and Urbit, will be a long road.

But once it's all done, it will function like an amazingly solid, square, and robust foundation. All sorts of things that are hard now, because we have built our modern computational civilization on a foundation of sand, will become easy. We have vast industries based around doing really hard work fixing problems that modern computing has but a Nock infrastructure would not – Akamai, for example, pulls in $1.6 billion per year by solving the problem that modern URLs don't work like BitTorrent / Urbit URLs.

When an idea, properly implemented, can destroy multiple different ten-billion-dollar-a-year industries as a side effect it is, I assert, worth thinking about.
I imagine that some of you have been following the "Anarcho" part of all of this and wondering where the "Capitalism" part comes in.  That's it, right there.  With a billion networked computers all more powerful than the computer you're reading this on right now, computing ceases to be a scarce commodity.  This quite frankly turns the field of computer security on its head - I can't say for sure that it solves the problem of Denial of Service, but I can't say that it doesn't, either.  After all, if your computer (whatever that means in an Urbit world) is DDoS'ed, why couldn't your Nock programs just run somewhere else?
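
Here's one way to picture that "just run somewhere else" move, again as a Python sketch rather than anything Urbit actually ships (the host names, the simulated outage, and the toy task are all invented): hand the same side-effect-free job to several hosts and take whichever answer arrives first - a host that's down, or drowning in a DDoS, simply never answers and nobody cares.

    import random, time
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def run_on(host, task, arg):
        """Pretend to run a pure task on a remote host."""
        if host == "ddosed.example.net":              # simulate a host under attack
            raise TimeoutError(host + " unreachable")
        time.sleep(random.uniform(0.01, 0.05))        # simulate network latency
        return host, task(arg)

    hosts = ["ddosed.example.net", "attic-server.local", "cousin-laptop.example.org"]
    square = lambda n: n * n                          # any side-effect-free computation

    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(run_on, h, square, 12) for h in hosts]
        for f in as_completed(futures):
            try:
                host, answer = f.result()
                print("first answer from", host, "->", answer)   # e.g. 144
                break                                 # first success wins
            except TimeoutError:
                continue                              # a dead host just doesn't matter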

You can see why Cisco is pushing this so hard - the network essentially becomes the computer (as the old Sun Microsystems advert put it).  It makes Cisco's networking gear more valuable.

And now to the really subversive part.  Clark again:
Back in the early days of the internet when Usenet was cutting edge, there was a gent by the name of Timothy C May who formed the cypherpunk mailing list.
His signature block at the time read
Timothy C. May, Crypto Anarchy: encryption, digital money, anonymous networks, digital pseudonyms, zero knowledge, reputations, information markets, black markets, collapse of government.
I bring up his sig block because in list form it functions like an avalanche. The first few nouns are obvious and unimportant – a few grains of snow sliding. The next few are derived from the first in a strict syllogism-like fashion, and then the train / avalanche / whatever gains speed, and finally we've got black markets, and soon after that we've got the collapse of government. And it all started with a single snowflake landing at the beginning of the sig block.

Timothy C May saw Bitcoin. He saw Tor. He didn't know the name that Anonymous would take, and he didn't know that the Dread Pirate Roberts would run Silkroad, and he didn't know that Chelsea Manning would release those documents. …but he knew that something like that would happen. And, make no mistake, we're still only seeing small patches of hillside snow give way. Despite the ominous slippages of snowbanks, Timothy C May's real avalanche hasn't even started.

I suggest that Urbit may very well have a similar trajectory. Functional programming language. Small core. Decentralization.

First someone will rewrite Tor in it – a trivial exercise. Then some silly toy-like web browser and maybe a matching web server. They won't get much traction. Then someone will write something cool – a decentralized jukebox that leverages Urbit's privileges, delegation and neo-feudalist access control lists to give permissions to one's own friends and family and uses the built in cryptography to hide the files from the MPAA. Or maybe someone will code a MMORPG that does amazingly detailed rendering of algorithmically created dungeons by using spare cycles on the machines of game players (actually delegating the gaming firm's core servers out onto customer hardware).

Probably it will be something I haven't imagined.
Will this happen?  Who knows?  But Silicon Valley is pushing this because it (rightly) sees a paradigm shift.  The folks at the Fed.Gov are clueless, shambling dinosaurs (otherwise they'd work in Silicon Valley, duh - yes, that sounds arrogant; yes, it's true).  And so, if this happens, the Fed.Gov won't realize it until it's already happened.  Until the paradigm toothpaste has shifted out of the tube.

And the punch line?  Imagine how much metadata the NSA will have to analyze when there are 2 orders of magnitude more computers, each making 3 orders of magnitude more encrypted, randomized network connections.  They will need 100,000 times the compute and storage capacity within a decade.  And more importantly, the imagination to know how to make this work.  And they'll need a further 100,000 times the power ten years further out.
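
Spelled out (this just restates the estimate above, it isn't sourced from anywhere):

    # 10**2 more machines x 10**3 more encrypted connections each = 10**5 per decade;
    # two decades of that compounds to 10**10 - the "factor of ten billion" below.
    per_decade = 10**2 * 10**3
    print(per_decade, per_decade ** 2)   # 100000 10000000000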

Let us know how that works for ya, Ft. Meade.  There's no way that the NSA has increased their computing power by a factor of ten billion in the last 20 years.  They won't do that in the next 20, either.

The world is far less predictable, and far less controllable than anyone thinks.  It's very probably less predictable and controllable than anyone can imagine - at the very time that Progressives think that they can lock down control over the populations and institute the New Jerusalem.  Let us know how that works for ya, Progs.
It is our task, both in science and in society at large, to prove the conventional wisdom wrong and to make our unpredictable dreams come true.
- Freeman Dyson
Bootnote: Who is the man behind Urbit?  His name is Curtis Yarvin, and he works in Silicon Valley.  He also goes by the nom de blog of Mencius Moldbug.  We've seen him before here.  Clark addresses this obliquely in the comments to his post:
The neo-reactionary stuff on Urbit that seems to be decoration is not. It is the whole point.
If Yarvin (and Cisco, and Silicon Valley) can pull this off, this is Big, big stuff.  RTWT, including the comments, which are packed full of smart.

8 comments:

Old NFO said...

Absolutely correct... and the KEY point "assumes full encryption of data channels"... THAT is going to upset a whole bunch of 'watchers'...

Weetabix said...

Thanks for letting me come here for teh smarts. Not sure I could handle any more.

Re: Lisp ("The greatest single programming language ever designed.")

Shouldn't that be ("The greatest (single) programming language (ever (designed)).")?

Borepatch said...

Weetabix, LOL. My favorite acronym for LISP is Lots of Insipid Stupid Parentheses.

Weetabix said...

We called it "Lost In Stupid Parentheses" but same thought.

It was a language that, although I didn't program in it, I could take someone else's code, appropriate the bits I needed, and cobble something together that would do what I needed it to do. My results, much like my cars, were never beautiful, but they were functional.

James said...

Well ya lost me at "2.0". But by God if it is important to you guys, I support whatever the hell you said.

Unknown said...

I started reading through the docs shortly after seeing Clark's post.

It's intriguing for sure. What people seem to miss out on is that bandwidth does not grow at the same rate. Yes, Moore's law affects bandwidth too, but not in the same way. The hard part will be getting demand to pull bandwidth vendors into upgrading very large, very expensive systems - which they'll do eventually, one city at a time, and the rural folks will see it much later, after the price has been driven down.

So, if everything is distributed, and worse, to cover for possible connection errors, multiply distributed - where a section of a program may be running on 2 or 2k or 200k machines - and you only care about the first answer, then you're using bandwidth for traffic that serves no purpose. Now imagine 10 billion computers doing that - ouch.

As long as we don't go too overboard on the distributed computing part of it - sure, if you don't have the cycles to do something in a reasonable time frame, then distributed computing might be called for. But there's a problem - who decides what's reasonable? Obviously if it takes longer to request and set up a distributed process than it would to just do it locally, you'd want to avoid it.

Do I really need the government running their decryption analysis on my machine? And how do we prevent that?

Still the concept is valid and I think they're correct - we can't keep building on top of our shaky foundation. The problem is everyone wants to be in on the design of the replacement, and we all know what happens when you design by committee.... I'd rather have a coherent, small, simple design by someone who likely did things I wish he hadn't and didn't do things I wish he had than some committee's idea of compromise.

Just be very careful where your Trusted Platform Module comes from :)

Cap'n Jan said...

LISP indeed. Being a mathy sort, I was happily using Lisp far far back in the mists of time... ;-> Still it is one of my favorite languages, that and Tcl. Lots of similarities, simple, MASSIVELY POWERFUL languages. But you can't trust me. I loved FORTRAN too.

Thanks for the memories!

But speaking of the future, have you read Daniel Suarez's books? He wrote Daemon (scary too), but the scariest is Kill Decision. The guy is eminently knowledgeable about 'our' field - tech - and he is a smart futurist. Other favorites are David Brin, Vernor Vinge, and dare I mention Jerry Pournelle (on whom I have a massive intellectual crush)?

Anyways, not a futurist (well maybe), my other favorite is one you are no doubt familiar with: Larry Correia. I'll read ANYTHING that man writes!

Fair Winds, Borepatch, sorry I missed you when you lived down here in Austin, I wanted to go up to the 'shoot' in Dallas, but it was just too far at the time. If you are ever back in the area, my husband and I will be glad to treat you to some shooting up in our neck of the woods at Eagle Peak or Red's your choice, and then some BBQ again, your choice of places!

Fair Winds,

Cap'n Jan

Borepatch said...

Richard, what I was thinking is that it could be possible to put a simple app-style programming interface on top of this. If the amount of data exchanged is small (i.e. not video) then it would make it easy to 'net enable anything. IOW, this facilitates the Internet Of Things.

Cap'n Jan, I'm sorry I missed you out in Austin. I'll take you up on your kind offer if I get back out there.