
Several years ago, the 'next big thing' was clockless computers. The idea behind them was that without a clock, processors would run significantly faster.

That was then, this is now, and I can't find any info on how it's been coming along or whether the idea was a bust...

Anyone know?

For reference:

http://www.cs.columbia.edu/~nowick/technology-review-article-10-01.pdf
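To make the intuition concrete: a clocked chip has to set its cycle time to the worst-case path delay, while a clockless (self-timed) chip lets each operation finish in its actual delay. Here's a minimal Python sketch of that difference; the delay numbers are invented for illustration and don't come from the article:

    import random

    # Illustrative per-operation delays (ns): most operations are fast,
    # a few hit the slow worst-case path. All numbers are invented.
    random.seed(0)
    delays = [random.choice([1.0, 1.0, 1.0, 1.2, 3.0]) for _ in range(100_000)]

    # Clocked design: every cycle must be long enough for the worst case.
    clocked_time = max(delays) * len(delays)

    # Self-timed design: each operation completes in its actual delay.
    async_time = sum(delays)

    print(f"clocked:   {clocked_time:,.0f} ns")
    print(f"clockless: {async_time:,.0f} ns")
    print(f"speedup:   {clocked_time / async_time:.2f}x")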

– GeoffreyF67

2 Answers


Here's an article from a few years ago that's gung-ho on the technology, but I think the answer can be found in this quote:

Why, for example, did Intel scrap its asynchronous chip? The answer is that although the chip ran three times as fast and used half the electrical power as clocked counterparts, that wasn't enough of an improvement to justify a shift to a radical technology. An asynchronous chip in the lab might be years ahead of any synchronous design, but the design, testing and manufacturing systems that support conventional microprocessor production still have about a 20-year head start on anything that supports asynchronous production. Anyone planning to develop a clockless chip will need to find a way to short-circuit that lead.

"If you get three times the power going with an asynchronous design, but it takes you five times as long to get to the market—well, you lose," says Intel senior scientist Ken Stevens, who worked on the 1997 asynchronous project. "It's not enough to be a visionary, or to say how great this technology is. It all comes back to whether you can make it fast enough, and cheaply enough, and whether you can keep doing it year after year."

– Mark Ransom
    This. In order to make async chips you have to develop a whole new set of tools, train new design engineers, develop new working practices, and probably along the way have a few spectacular design failures. This costs a hell of a lot, compared to relying on the current process scaling. – pjc50 Aug 31 '10 at 15:36
    Eight years later I am sure more could be said on this subject; even this answer doesn't really paint the concept as a dead end as far as I can tell. – Darren Ringer Feb 27 '17 at 13:56
    @DarrenRinger as implementing Moore's Law becomes harder and harder it certainly seems like there'd be an opportunity for an alternate approach. If you have information about any recent attempts it would be worth leaving your own answer, I haven't heard of any. – Mark Ransom Feb 27 '17 at 14:44
  • Perhaps years of development of aggressive clock gating (and even power gating) to unused portions of a CPU / ALU have made the gains somewhat less dramatic than in 1997. In 1997, mainstream CPUs didn't have low-frequency idle states, they just had "asleep" and "on". Software support for deeper sleep states was a new thing back then, in early PIII and Athlon CPUs, IIRC. I'm not saying there's nothing to gain from async CPUs, just that clocked CPUs aren't leaving so much on the table, especially for idle power. – Peter Cordes Jul 21 '22 at 01:07
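To put rough numbers behind the comment above: dynamic CMOS power scales roughly as P ≈ α · C · V² · f (the standard switching-power relation), so clock gating, which drives the activity factor α toward zero in idle blocks, recovers much of the idle-power advantage an async design offered in 1997. A back-of-the-envelope sketch; every value below is invented for illustration:

    # Standard CMOS switching-power relation: P ~ alpha * C * V^2 * f.
    # All values below are invented, not measurements of any real CPU.
    def dynamic_power(alpha, capacitance_f, voltage_v, freq_hz):
        return alpha * capacitance_f * voltage_v ** 2 * freq_hz

    C, V, F = 1e-9, 1.0, 3e9  # switched capacitance (F), supply (V), clock (Hz)

    ungated = dynamic_power(alpha=1.0, capacitance_f=C, voltage_v=V, freq_hz=F)
    gated = dynamic_power(alpha=0.2, capacitance_f=C, voltage_v=V, freq_hz=F)

    print(f"ungated: {ungated:.2f} W   gated (80% of blocks idle): {gated:.2f} W")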

There's some information on this subject available both here (Asynchronous CPU) and here (History of general purpose CPUs), including a list of (some not so) recent implementations.

Looking at some of the benefits (power consumption, speed) and disadvantages (increased complexity, more difficult to design), it seems logical that in recent years development has focused on embedded designs (a small model of the handshake mechanism follows the list):

  • Epson ACT11
  • SEAforth® 40C18
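For readers wondering how a clockless design coordinates at all: stages exchange local request/acknowledge handshakes instead of sharing a global clock edge. Below is a minimal software model of that handshake; threads and events stand in for wires, and every name is invented for illustration (real asynchronous hardware uses completion-detection circuits, not threads):

    import queue
    import threading
    import time

    def producer(req, ack, channel):
        for value in range(3):
            channel.put(value)       # drive the data "wires"
            req.set()                # request: data is valid
            ack.wait()               # block until the consumer acknowledges
            ack.clear()
            print(f"producer: {value} accepted")

    def consumer(req, ack, channel):
        for _ in range(3):
            req.wait()               # block until data is valid
            req.clear()
            value = channel.get()
            time.sleep(0.01 * (value + 1))  # each item takes its own time
            ack.set()                # acknowledge: ready for the next item

    req, ack = threading.Event(), threading.Event()
    chan = queue.Queue(maxsize=1)
    threads = [threading.Thread(target=producer, args=(req, ack, chan)),
               threading.Thread(target=consumer, args=(req, ack, chan))]
    for t in threads: t.start()
    for t in threads: t.join()

Notice there is no shared clock anywhere in the loop; throughput is set entirely by how fast each side completes, which is exactly the property the question asks about.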
– Tim