Wednesday, December 3, 2014

It seems that Moore's Law is over

And nobody noticed:
Well it turns out that Moore’s Law actually came to an end in 2012. This does not mean that there has been absolutely no progress in microprocessors, but progress has slowed down significantly. There have been bigger improvements in battery life and LCD quality, and solid-state storage has become less expensive, but raw computing power is not increasing according to Moore’s Law!

The problem is already known in the industry. In the past, improvements in computing power have come from shrinking the transistor size. But now that transistors have shrunk down to 28 nm, it’s proving difficult to make chips with smaller transistors that are as inexpensive per transistor as chips with 28 nm transistors. (This has been edited to remove inaccuracies in the original post.)
RTWT.  This has big implications for artificial intelligence.
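
For a sense of what the trend promised, here is a minimal sketch of a strict Moore's Law projection. The numbers are illustrative assumptions, not from the quoted post: the Intel 4004's roughly 2,300 transistors in 1971 as the baseline, with a doubling every two years.

    # Back-of-the-envelope Moore's Law projection: transistor count
    # doubling every two years, seeded with the Intel 4004 (1971).
    # Illustrative only; real counts vary widely by product line.

    def projected_transistors(year, base_year=1971, base_count=2300,
                              doubling_period=2.0):
        """Transistor count implied by a strict two-year doubling."""
        return base_count * 2 ** ((year - base_year) / doubling_period)

    for year in (1971, 1991, 2012, 2014):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors")

A strict doubling puts 2014 on the order of seven billion transistors per chip, which is roughly where the largest real chips landed; the quoted post's point is that cost per transistor, not raw transistor count, is where the law has broken down.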

4 comments:

  1. People don't have any idea of the scale of a nanometer. A sheet of paper is approximately 100,000 nm thick. When you're working with a chip where individual transistors are 28 nm, there's nowhere to go.

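    A quick back-of-the-envelope check of that comparison, using only the figures above:

        # How many 28 nm transistors fit edge-to-edge across the
        # thickness of one sheet of paper (~100,000 nm)?
        paper_thickness_nm = 100_000
        feature_nm = 28

        print(paper_thickness_nm // feature_nm)  # ~3,571 features
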
  2. The issue isn't one of Moore's Law per se.

    Two complementary problems are at hand here: the ability to manufacture ever-larger integrated circuits, in both physical size and transistor count, for use in devices of simultaneously increasing power and decreasing physical size (cell phones, tablets, etc.); and the *fact* that existing CISC processor architectures and operating systems don't allow for true parallel processing (SMP ain't parallel).

    I wrote an article for MIPS magazine many years ago in which I proposed that the next natural step in computer systems - the step that would lead to true parallel processing and a huge leap in processing power - would be a processor design (and the associated operating system) in which *individual instruction cycles* would be a sharable resource. Not just at a software function level, but at the atomic instruction level. One processor asking another processor to "do this for me for a while" without any measurable impact on system operation and with minimal handling by the operating system; a coarse, task-level contrast is sketched at the end of this comment.

    We ain't there yet. CPU designs, whether multi-core, pipelined, or multi-pipelined, are still based on CISC and procedural operations. We're still stuck with single-threaded interrupt handlers. Pipelined instruction handling helps mitigate repeated sequences but doesn't do much for random operations. I could go on about the limitations of current operating systems, which are all based on procedural operations that may run faster on multi-core processors but are still core-locked at a functional level.

    Hence the number of transistors per CPU isn't the only limiting factor: so is the design in which those transistors are implemented, and the size of the device in which the new processors are being used.

    Sorry for the length of my response...

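    For contrast with the instruction-level sharing proposed above, here is a minimal sketch of the coarse-grained sharing today's systems do support: an OS-scheduled pool of worker processes that a caller can hand work to. Everything here is illustrative (busy_work is a made-up stand-in); the dispatch overhead it incurs is exactly what an instruction-cycle-level design would aim to eliminate.

        # Task-level "do this for me for a while": offload chunks of work
        # to other cores through an OS-managed process pool. This is far
        # coarser than the instruction-cycle sharing described above,
        # which no mainstream CPU or operating system supports.
        from concurrent.futures import ProcessPoolExecutor

        def busy_work(n):
            """Stand-in for a chunk of computation offloaded to another core."""
            return sum(i * i for i in range(n))

        if __name__ == "__main__":
            # One worker per core by default; the OS schedules the handoff.
            with ProcessPoolExecutor() as pool:
                results = list(pool.map(busy_work, [10**6] * 8))
            print(results[0])
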
  3. I've always marveled at the ability of humans to see a trend and simply extend the trend line into the future to make predictions.

    Yes, for a period of time, Moore's Law was (sort of) holding true.

    But to assume that the trend line would continue at that brisk a pace indefinitely?

    That's the same kind of flawed thinking that has people running the 1970 to 1998 temperature trend lines into the stratosphere and claiming "catastrophic global warming."

    I hope like hell that software and hardware engineers weren't short-sighted enough to think that Moore's Law would continue, unaffected and unabated, forever, without ever once experiencing a setback.

  4. The limitation is in the photolithography used to manufacture the masks, which are in turn used to manufacture the wafers that are then cut into CPUs.

    They've hit the limit of the wavelength of the light source used, where diffraction of the light at the edges of the mask causes the printed features to come out "fuzzy".

    They've known about this for years and have researched other "light" sources. I would expect them to start using X-rays, or perhaps direct writing to the wafer surface, one of these years...

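    The "fuzziness" described above can be made concrete with the Rayleigh criterion, CD = k1 * wavelength / NA: the smallest printable feature scales with the wavelength of the light. The values below are typical textbook figures, not numbers from the post: 193 nm ArF immersion lithography (NA about 1.35, k1 about 0.3) versus the 13.5 nm EUV source the industry eventually moved to (NA about 0.33).

        # Rayleigh criterion: smallest printable half-pitch,
        #     CD = k1 * wavelength / NA
        # k1 and NA below are typical, illustrative values.

        def min_feature_nm(wavelength_nm, numerical_aperture, k1=0.3):
            """Smallest resolvable half-pitch for a given light source."""
            return k1 * wavelength_nm / numerical_aperture

        print(min_feature_nm(193.0, 1.35))   # ~43 nm: 193 nm light is maxed out
        print(min_feature_nm(13.5, 0.33))    # ~12 nm: the gain from a shorter wavelength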

Remember your manners when you post. Anonymous comments are not allowed because of the plague of spam comments.