Wednesday, January 24, 2018

R.I.P. Moore's Law

This seems like a pretty big deal:
The death of Moore’s law is no surprise, because the semiconductor industry has told contradictory stories for years. While it created new process nodes like clockwork, the capital requirements to develop those new devices climbed nearly exponentially. 
The laws of physics were to blame: they created a money pit into which Intel and the other companies threw tens of billions of dollars, with little to show for it. 
Physics was a tough enough opponent, but now computer science itself has joined the fight thanks to the Meltdown and Spectre design flaws first revealed here in The Register.
The two mistakes mean that branch prediction techniques, designed to further improve the performance of ever-cheaper silicon, have introduced two classes of security threats - one set (Meltdown) that can be remediated by imposing as much as a 30% performance penalty - and another set (Spectre) that at this point can’t really be remediated at all - except, possibly, by littering code with instructions that suck all the benefits out of branch prediction. 
The computer science behind microprocessor design has therefore found itself making a rapid U-turn as it learns that its optimisation techniques can be weaponised. The huge costs of Meltdown and Spectre - which no one can even guess at today - will make chip designers much more conservative in their performance innovations, as they pause to wonder if every one of those innovations could, at some future point, lead to the kind of chaos that has engulfed us all over the last weeks. 
One thing has already become clear: in the short term, performance will go backwards. The steady and reliable improvements every software engineer could rely on to make messy code performant can no longer be guaranteed. Now the opposite applies: it’s likely computers will be less performant a year from now.
Software has been increasingly bloated for about as long as I can remember, especially since Windows XP. Even Linux has seen a noticeable slowdown over the past 10-15 years, and Linux has about as pure a performance-optimization, old-school software hacker ethos as anything these days.

Mobile phones will likely be hardest hit, as iOS/Android bloat continues unabated and power draw rules out a brute-force "turn up the clock speed" approach. Slower CPUs running bloated, slower software will make for a much worse user experience.

We are seeing the passing of an age of innocence.

8 comments:

  1. I was reading in one of the trade mags that as the processes have moved down to single digit nanometers, the number of companies capable of doing it has dropped down to even smaller single digits. The author claimed there were only four companies in the world that could do 14nm. I think it was two companies that were aiming for the next shrink (7 or 5 nm, depending on who you talk to).

The cost per transistor has now gone up for the first time since, well, the first time transistors were integrated. Yet 85% of design starts and 43% of all production still use geometries larger than 65nm.

    Moore's Law is well and truly dead.

A quick check tells me the atomic diameter of silicon is about 0.225nm. That means a 5nm wide channel is only about 22 atoms across. Do you think quantum effects are going to show up? I think that's absolutely going to happen.

I wonder if they can be made to work reliably. Then there's that whole thing where nobody seems able to tell if a quantum computer is actually a quantum computer.

    ReplyDelete
  2. It began before that. Data from Wiki, so not definitive:

    Windows 3.1 installed size between 10-15MB.
    Windows 95 installed size 50-55 MB.
    Windows 98 installed size ~500 MB.
    Windows XP Installed size ~1.5 GB.

That's a bloat rate to make Moore's Law throw up its hands and surrender.

    ReplyDelete
ASM826, just to put a timeframe on it, Windows 3.1 came out in 1992, and XP came out in 2001. So in under a decade, the OS install size increased by a factor of 100. Oof.

Of course, you could say the same about Linux - I got Slackware (0.99 kernel) in 1994 on 24 floppy disks, which included source code. Lord only knows how big a distro with a modern GUI and source code would be.

    ReplyDelete
I remember the first time I installed XP on a machine, and it came in at a bit over 1.1 GB.

    Considering the first hard drive I ever bought brand new was a 1.6GB disk, I was flabbergasted.

    ReplyDelete
  5. Not to deny that the OSes have indeed swollen a lot, but if you aren't careful, you can easily sweep up a lot of not-really-OS-bloat in measures like how many bytes are included in a distribution.

    My Linux distribution includes not just an OS and basic OS-ish utilities like a linker, shell, and compiler for C (the native language of the OS itself), but many complicated pure applications like Blender (3D modeling and rendering, suitable for e.g. authoring an animated video) and a very large number of specialized programmer tools and libraries, many of which are also very complicated (e.g. ghc, an optimizing compiler for a programming language which has nothing to do with the OS, and cgal, a "computational geometry" library capable of many weird things like helping to convert raw X-ray or MRI data into a tidy gridded surface corresponding to the surface of the underlying bone). Even though these things have nothing to do with being an OS, many customers appreciate when these things arrive on the same discs as their Linux distribution, and/or are made available on the same servers as any version of Linux that should be compatible with them. It's a different kind of bloat, and not necessarily sloppy or harmful (so maybe deserving a more neutral less negative word, like "sprawl"?) because it's a fairly sensible response to how it has become so cheap to store an extra byte on a DVD, and to how much useful specialized free software is out there.

    There are also some arguably-not-bloat causes of OS growth, e.g. Linux contains drivers for many more kinds of hardware today than it did then (naturally, because it still contains drivers for a lot of the old hardware and also contains drivers for a lot of hardware that didn't exist then). And both Linux and Windows contain support for various new standard things like network protocols and encryption systems that didn't exist then. I doubt that these causes contribute nearly as much to byte count as the ship-a-bunch-of-applications sprawl does, but I expect it increases the size of both OSes by rather more than 20%.

    ReplyDelete
Windows 3.1 was small, and Windows for Workgroups 3.11 fit on a manageable number of 1.44 MB floppies (14, if I remember correctly), but neither one had any drivers to speak of, so every device you bought came with its own driver disks. I much prefer having things bundled together.

Now that storage devices are in the 32 GB, 64 GB, 128 GB and larger range, burning 2 GB of disk for the OS installation, 300-ish fonts, and drivers for lots of gear is worth the space. There isn't any incentive to squeeze the OS down to the smallest possible footprint anymore.

    ReplyDelete
Also note, regarding Moore's Law: processor clock speeds topped out at ~3.3 GHz years ago (compare a 486 at 133 MHz or a 1993 Pentium at 60 MHz), so Intel and AMD have been adding more and more cores and L2 cache to improve performance.

In 8 years, single-thread processor performance has not quite tripled, while multi-thread performance has gone up 10-fold.

    ReplyDelete
Thinking of Meltdown and Spectre, the phrase "premature optimization is the root of all evil" comes to mind.

    ReplyDelete

Remember your manners when you post. Anonymous comments are not allowed because of the plague of spam comments.