THE COMPUTER BULLETIN - November 2001

Leading Edge

Beyond the limits

As the ultimate physical limits of the materials used in microelectronics appear on a horizon just 10 years away, supercomputer specialist Chris Lazou looks at new ways to continue the progress of the last half-century

At a meeting of Cray supercomputer users in 1989, BCS President Brian Oakley, formerly head of the UK government's Alvey Programme of advanced computing research, gave the keynote address, 'Supercomputing in the year 2000'.
He said, 'Though one can foresee a doubling or more of CMOS chip speeds, the end is in sight for continuing development beyond, say, the next 10 years as the top speed at which the electrons can move through silicon is reached.' In a slide he indicated a projected line to 2000 which showed this limit to be around 150 nanometres (150 millionths of a millimetre).

Some 20 years earlier Gordon Moore, co-founder of Intel, had observed that chip performance doubles roughly every 18 months, an observation that became known as Moore's Law. Intel invested $7.5bn in developing its new Itanium chip, yet to keep pace with Moore's Law a single chip will need to deliver 325Gflops (325bn floating point operations per second) in 10 years' time.
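The arithmetic behind that 325Gflops figure is worth a quick check. The short sketch below is mine, not Intel's or Dr Fukuma's: it assumes a starting point of roughly 3.2Gflops peak per chip today (an assumption on my part) and simply compounds a doubling every 18 months over 10 years.

    # Rough check of the 325 Gflops projection (the baseline is assumed, not quoted)
    baseline_gflops = 3.2          # assumed peak performance per chip today
    doubling_period_months = 18    # Moore's Law as commonly stated
    years = 10

    doublings = years * 12 / doubling_period_months      # about 6.7 doublings
    projected_gflops = baseline_gflops * 2 ** doublings

    print(f"{doublings:.1f} doublings gives about {projected_gflops:.0f} Gflops per chip")
    # Prints: 6.7 doublings gives about 325 Gflops per chip
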
At the recent NEC User Group meeting in Italy I attended a presentation by Dr Fukuma, general manager of NEC's silicon systems research laboratories, who described the latest research into, and the problems still to be overcome in, chip technologies that NEC expects to bring to market in the next 5-15 years. This article draws heavily on material from Dr Fukuma's presentation, but the opinions are mine.
VLSI (very large scale integration) development is currently driven by the larger market for consumer goods, such as mobile phones, rather than by supercomputers. Mobile units need low voltage devices, which allows the use of a thin oxide with low current leakage. Supercomputer devices also need low voltage, but there the current leakage is high. The crux of higher density VLSI development is to control leakage so that a stable device can be built.
A 150 nanometre technology is available from supercomputer suppliers, including NEC. Scaling has been the driving force for VLSI. When one looks at the CMOS road map, technology at, say, 50 nanometres is possible in 10 years' time. However, when the technology goes below 100 nanometres it hits many problems, not least current leakage.

Recent work using organic material on copper has demonstrated devices using 80 nanometre technologies. Copper is apparently a must for VLSI at these gate densities. The International Technology Roadmap for Semiconductors, agreed across the semiconductor industry for logic devices on CMOS, predicts that scaling barriers kick in from about 2008 to 2014. For example, it is generally agreed that an oxide layer just one nanometre thick is not possible.
In 1986, in my book, Supercomputers and Their Use, I stated, 'Each generation of computers since the first (in the 1940s) has had an upper echelon. The construction of these "supercomputers" has pushed back the technological frontiers of the day.'
In the past, advances were driven by conventional scaling; now new approaches are needed to develop devices: new processes using new materials, new architectures and new circuits.

With the advent of larger 10x10mm chips, one new approach is to use a fast clocking strategy (at several GHz) combined with dynamically reconfigurable logic. Reconfiguration can happen at very high speed: eight tasks can be mapped onto the same hardware device within five nanoseconds.
In addition, the trend is to put multiple processors on a chip. To overcome high stand-by power dissipation, a software controlled power management circuit is needed. So the next generation of CMOS is likely to offer equivalent scaling, with multiple parameters adjustable over a wide range to allow many tasks to be performed on the same hardware device.
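As a concrete illustration of what software controlled power management might look like, here is a minimal sketch in Python. It is my own illustration, not NEC's design: the two power states, the idle threshold and the workload pattern are all invented, and a real implementation would live in firmware or the operating system rather than a script.

    # Illustrative sketch of software controlled power management for
    # multiple processors on a chip; states and threshold are invented.
    ACTIVE, STANDBY = "active", "standby"
    IDLE_LIMIT = 3   # idle cycles tolerated before a core is powered down

    class Core:
        def __init__(self):
            self.state = ACTIVE
            self.idle_cycles = 0

        def tick(self, has_work):
            if has_work:
                self.state = ACTIVE          # wake the core when work arrives
                self.idle_cycles = 0
            else:
                self.idle_cycles += 1
                if self.idle_cycles >= IDLE_LIMIT:
                    self.state = STANDBY     # cut stand-by power dissipation

    cores = [Core() for _ in range(4)]
    workload = [0b1111, 0b0011, 0b0001, 0b0001, 0b0000]   # bitmap of busy cores per cycle
    for busy in workload:
        for i, core in enumerate(cores):
            core.tick(bool(busy & (1 << i)))
    print([core.state for core in cores])   # most cores end up in standby

The point of the sketch is simply that the power state of each processor becomes a software decision, driven by observed workload, rather than something fixed in the hardware.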
As an illustration of where research is heading, NEC has already demonstrated an EJ-MOSFET device at eight nanometres, which needs less than one volt to operate.
But how do these device speeds translate when they are incorporated into supercomputer systems?

High end supercomputing is more than a chip: it also involves memory bandwidth, heat extraction and tight communication system integration. So although device developers see a life in CMOS for possibly the next 20 years, high performance systems designers see extra barriers which bring this back to the timeframe of the International Technology Roadmap for Semiconductors.
One of the severe system constraints challenging designers is the number of pins needed to service the increased functions included on a chip. Although a high speed interface could help reduce the problem, this would depend on the system architecture, especially in high performance computers.
When one looks at the technology changes needed to deliver petaflops (a million billion floating point operations per second), the biggest challenge is how these would fit into the computer environment of today. Current research suggests that it is possible to get petaflops for a particular application - for example IBM's Blue Gene project to simulate protein folding - but very hard to deliver as general computing. It is worth noting that the processor proposed by IBM has a restricted instruction set, yet requires 1m processors linked together to reach one petaflop of peak performance. Even with 32 processors on a chip, one needs 32,000 communication connections outside the chip.
Sustained performance peters out to a relatively low baseline very quickly with so many connections.
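A rough check of that arithmetic, using only the figures quoted above, shows how quickly the numbers become awkward (the script and its rounding are mine):

    # Rough check of the petaflop arithmetic quoted above
    peak_flops = 1e15                  # one petaflop of peak performance
    processors = 1_000_000             # processors needed to reach that peak
    per_processor_gflops = peak_flops / processors / 1e9
    print(f"about {per_processor_gflops:.0f} Gflops per processor")

    processors_per_chip = 32
    chips = processors // processors_per_chip
    print(f"about {chips:,} chips to interconnect")   # close to the 32,000 quoted above

So each processor only needs to sustain about 1Gflops, but tens of thousands of chips have to be wired together, and it is the wiring rather than the processors that sets the practical limit.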
In short, beyond CMOS, from about 2010 to 2014 onwards, there will be challenges from new materials. Some of these materials are still in the experimental phase, with many reliability, design, manufacturing process and operating environment issues to be solved.

With Josephson junction devices heat dissipation is about 1,000 times lower than that incurred with silicon. Many computer manufacturers, including IBM, Control Data, and NEC, demonstrated components with logic gates switching at 10 picoseconds (10 millionths of a millionth of a second) as early as 1981. As superconductivity occurs at low temperatures, the devices have to be submerged in cryogenic liquids. This causes difficulties in interconnection and chip packaging.
Quantum computing uses the spin states of atoms (or sometimes ions) to realise its basic unit, the quantum bit, or qbit. A conventional digital computer bit holds a value of either zero or one; a qbit can hold not just these two states but also zero and one simultaneously.
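To make that idea concrete, the sketch below (my own illustration, not tied to any particular hardware) represents a single qbit as a two-component complex vector, puts it into an equal superposition of zero and one, and shows why a register of n qbits needs 2 to the power n amplitudes to describe on a conventional machine.

    # Illustrative sketch: a qbit as a two-component complex state vector
    import numpy as np

    zero = np.array([1, 0], dtype=complex)     # the state |0>
    hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    superposed = hadamard @ zero                # equal amplitudes for |0> and |1>
    print(superposed)                           # [0.707... 0.707...]

    # A register of n qbits needs 2**n complex amplitudes to describe classically,
    # which is where both the potential parallelism and the simulation cost come from.
    n = 10
    register = np.zeros(2 ** n, dtype=complex)
    register[0] = 1                             # all n qbits in the |0> state
    print(register.size)                        # 1024 amplitudes for just 10 qbits
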

However, although quantum computation is fundamentally parallel, it requires special algorithms, and quantum computers are unlikely to become general purpose machines. So far only two kinds of practical use have been discovered: database searching and factorisation. Hardware development is lagging even further behind. It is estimated that around 10,000 qbits are needed for practical factorisation, yet the latest IBM experiment using nuclear magnetic resonance (NMR) managed to use only five to seven qbits. That experiment changed atoms into ions and used the two internal spin states of each ion as a qbit, with microwave pulses used for addressing. Unfortunately, because of heat, the NMR method does not allow the generation of more than 15 qbits.

NEC has demonstrated a solid-state qbit, making a superconducting single electron box with Josephson junctions. The work won for NEC Japan's prestigious Nishina Memorial Prize for physics, but there is a long way to go before qbits can be used for realistic quantum computation. If or when the problems are solved, devices just one nanometre square could be built, putting 1bn transistors on a chip. The dice are already cast, with future systems inevitably having many processors on a chip. However, memory hierarchies, bandwidth, communication interfaces and system software compatibility are not only essential elements: they will have enormous influence on the fortunes of future products.

Christopher Lazou, a Member of the BCS, is chairman of the BCS Fortran Specialist Group and managing director of HiPerCom Consultants, a consultancy in high performance computing.


Copyright British Computer Society 2001