Anyone who's familiar with advances in computer chips, and how they've gotten progressively smaller and faster over the past decades, has heard of Moore's law. First formulated in 1965 by Gordon Moore, co-founder of Intel Corp., it states that the number of transistors per square inch on integrated circuit chips (a raw measure of computing power) doubles every 18 months. Because Moore's law has held for well-nigh 40 years, computers have morphed from the clunky, room-filling behemoths of yore into portable machines that seem as much magic wands as number-crunchers.
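Taken at face value, that doubling rate compounds dramatically. A quick back-of-envelope check (the 18-month figure is the one the article cites; Moore's own later formulation was two years) shows roughly a hundred-million-fold increase over 40 years:

```python
# Back-of-envelope: transistor-count growth under Moore's law,
# assuming a doubling every 18 months (1.5 years).
doubling_period_years = 1.5
years = 40

doublings = years / doubling_period_years   # about 26.7 doublings
growth_factor = 2 ** doublings              # roughly 1.1e8: a hundred-million-fold

print(f"{doublings:.1f} doublings -> growth factor {growth_factor:.2e}")
```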
But if Moore's law has been an irresistible force, it may soon meet an immovable object. Chipmakers use photolithography to etch electronic circuits into silicon wafers, and they are quickly approaching the physical limits of that technology. In as little as a decade, progress in making faster, more powerful silicon integrated circuits may screech to a halt.
By then, however, the successor to the silicon chip may be ready to replace it. For more than two decades, researchers in nanotechnology have been claiming that computers could be manufactured from the bottom up out of pieces cobbled together at the molecular level. Such computers need not even resemble current-day machines. The boldest designs for such machines have as much in common with heirloom clocks as they do with cutting-edge computers.
Clockwork computers might seem fanciful, but a company in Massachusetts has already built a prototype of a mechanical memory device that promises to be far more powerful than present-day random access memory chips. In spite of having millions of moving parts, this memory chip promises to be faster and more robust than its competition.
Back to the Future
The idea of computers without electronics seems comical, like something out of Gilligan's Island. But the history of computing reaches into the pre-electrical era. The essence of a computer, after all, isn't electricity. It's entering data and receiving an answer. Only recently has an electrical signal been the most useful kind of signal to receive.
Before the 20th century, a would-be computer user would be most interested in generating a number. The first proto-computers were simple adding machines, using toothed wheels similar to clockworks to tally strings of numbers. By the 19th century, the demands of mathematicians had increased, and so did the technology.
Charles Babbage's Difference Engine, conceived in 1822, was the first attempt to automate the calculation and printing of very large tables of figures, a vital function in an era when navigation, finance, and artillery batteries relied on such tables. As designed, the Difference Engine would have calculated polynomial functions via the movement of stacks of toothed wheels. Before he could complete his prototype, Babbage hit upon a more advanced design for a programmable computer that could run programs punched into paper cards. This machine, called the Analytical Engine, is often seen as the precursor to modern computers, though it, too, was never built.
Another Englishman, Thomas Fowler (who also devised an early central-heating system), built another remarkable proto-computer in 1840. Made out of wood, Fowler's calculating machine used a matrix of spindly arms moving across notched tabs to multiply or divide numbers in base-three notation. Because it relied on such a simple mechanism, the design would have been easy to scale up. Fowler died before assembling a full-scale calculating machine, but his plans called for a device that could multiply two 12-digit numbers.
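Fowler's base-three scheme is usually identified with balanced ternary, in which each digit is -1, 0, or +1, so negating a number is just flipping the signs of its digits. A minimal sketch of the conversion (the function name and the balanced-ternary reading are assumptions of mine, not details from the article):

```python
def to_balanced_ternary(n: int) -> list[int]:
    """Convert an integer to balanced-ternary digits, most significant first.

    Each digit is -1, 0, or +1; e.g. 5 = 1*9 + (-1)*3 + (-1)*1.
    """
    if n == 0:
        return [0]
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:
            # Represent 2 as (3 - 1): emit digit -1 and carry one up.
            digits.append(-1)
            n = (n + 1) // 3
        else:
            digits.append(r)
            n //= 3
    return digits[::-1]

print(to_balanced_ternary(5))   # [1, -1, -1]
```

Multiplying by three in this notation is just appending a zero digit, which hints at why a purely mechanical design could scale so simply.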
Such machines are obviously too limited to be much more than curiosities today. (A model of the Difference Engine now sits in the British National Museum of Science and Industry in London. And in 1975, two computer scientists were able to construct a tic-tac-toe-playing computer out of Tinkertoys.) But their core idea, computation via repetitive motion of moving parts, became the basis for a revolutionary rethinking of computing back in the late 1980s.
K. Eric Drexler, then a researcher at the Massachusetts Institute of Technology, wanted to demonstrate that it was possible to build control mechanisms for the molecule-size machines he had been proposing for a decade. Building electronic computers at a molecular scale is impossible using conventional lithography. The wavelength of even extreme ultraviolet light is many times larger than even a complex molecule, such as a protein. So in 1988, Drexler published a paper exploring how one could build a computer out of moving parts the size of molecules.
In place of controlling electrical pulses with transistors, Drexler proposed manipulating a two-dimensional array of minuscule rods, each knobbed at precise points along its length. When a given rod was extended a specific distance, its knob would block another rod from moving, much the way an electric current fed into a transistor can block another current in a circuit.
Shuttling a complex web of rods back and forth would process data every bit as well as routing signals through a computer chip, Drexler wrote. A set of four rods, for instance, could establish a NOT or NAND gate, the building blocks of computer logic. Arrays of such gates could create an entire computer processor smaller than a bacterium.
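The blocking mechanism maps cleanly onto Boolean logic. In this toy model (a simplification of mine, not Drexler's actual rod geometry), an output rod can slide only if no extended input rod's knob sits in its path; arranging the knobs so the output is blocked only when both inputs are extended yields a NAND gate:

```python
def rod_nand(a: bool, b: bool) -> bool:
    """Toy model of a rod-logic NAND gate.

    Inputs a, b: True means that input rod is extended, placing its knob
    in the output rod's path. The output rod is blocked (reads 0) only
    when both knobs are in place; otherwise it slides freely (reads 1).
    """
    blocked = a and b
    return not blocked

def rod_not(a: bool) -> bool:
    """NOT built as a NAND with both inputs tied together."""
    return rod_nand(a, a)

# NAND is universal: AND, OR, and the rest can all be built from it.
for a in (False, True):
    for b in (False, True):
        print(int(a), int(b), "->", int(rod_nand(a, b)))
```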
Speed wouldn't be a problem: The moving parts would have to travel just a few billionths of an inch. But other factors affecting performance are far from clear. Friction at that scale is not well understood, and so much activity in such a small space could generate substantial waste heat.
And there is, of course, another non-trivial problem: No means now exist for making such a computer. "Drexler was arguing that such a system was feasible at a point in time when we could not build those systems," said Ralph Merkle, a member of the faculty in the College of Computing at Georgia Institute of Technology. "But the argument for feasibility is sound. He wanted to show it was possible."
In the mid-1990s, the idea of electromechanical computers using Drexler's "rod logic" fell from favor among backers of nanotechnology. Although the discovery of carbon nanotubes (infinitesimal cylinders of carbon atoms) might have given Drexler's rods a boost, nanotubes were soon found to have interesting electronic properties of their own. Depending on how the atoms in the molecule were arranged, a nanotube could be either a conductor or a semiconductor. By the late 1990s, researchers had built parts of transistors from individual nanotubes, and it seemed likely that nanocomputers, whenever they were finally built, would be electronic, not mechanical.
Electromechanical computing wasn't exactly dead, though it probably will never resemble Drexler's complex cube of shifting rods. In 2000, a group of scientists at Harvard University, led by chemistry professor Charles Lieber, published a paper that showed how one could make a memory chip from two layers of nanotubes.
The idea, dreamed up by then-graduate student Thomas Rueckes, was bracing in its elegance. Each layer would be made up of a set of nanotubes running in parallel, and the two layers would be perpendicular to one another and separated by a small gap. To make a bit, you merely put opposite charges on tubes that cross one another; electrostatic attraction would bring the two tubes together. Points where the tubes touch would represent a "1" while points where they don't touch would represent a "0." Reading the bits is straightforward, since current can flow across the touching nanotubes with little resistance. In experiments involving a handful of nanotubes, the Harvard team demonstrated that the idea really worked.
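The crossbar scheme can be captured in a few lines. In this sketch (the class and method names are mine; the real device stores bits as physical contact between tubes, not booleans in software), each crossing is a "touching" flag, written electrostatically and read back as low resistance:

```python
class CrossbarMemory:
    """Toy model of a nanotube crossbar memory.

    Bit (row, col) is 1 when the two crossing tubes touch (held together
    by van der Waals attraction) and 0 when they are apart.
    """

    def __init__(self, rows: int, cols: int):
        self.touching = [[False] * cols for _ in range(rows)]

    def write(self, row: int, col: int, bit: int) -> None:
        # Opposite charges pull the crossing tubes together (1); like
        # charges push them apart (0). Here we just record the end state.
        self.touching[row][col] = bool(bit)

    def read(self, row: int, col: int) -> int:
        # Touching tubes conduct with little resistance, so they read as 1.
        return int(self.touching[row][col])

mem = CrossbarMemory(4, 4)
mem.write(2, 3, 1)
print(mem.read(2, 3), mem.read(0, 0))   # 1 0
```

Because the touching state persists without any charge applied, reading the model after a simulated "power-off" returns the same bits, which is the nonvolatility the article describes below.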
The potential was staggering. A square one centimeter on a side, about one-quarter the area of a postage stamp, could hold a trillion bits of information. That's roughly 125 gigabytes, the equivalent of nearly 200 CD-ROMs, or the contents of a very long shelf of books.
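The trillion-bit figure follows from simple geometry. Assuming crossings spaced 10 nanometers apart (the pitch is my assumption; the article doesn't give one), a one-centimeter square holds a million tubes per side, and every crossing stores one bit:

```python
# Back-of-envelope density of a nanotube crossbar memory,
# assuming a 10 nm pitch between adjacent crossings.
pitch_nm = 10
side_nm = 1e7                                # 1 cm = 10^7 nm

tubes_per_side = int(side_nm / pitch_nm)     # 1,000,000 tubes per side
bits = tubes_per_side ** 2                   # one trillion crossings
gigabytes = bits / 8 / 1e9                   # 125 GB
cdroms = bits / 8 / 650e6                    # ~192 CD-ROMs at 650 MB each

print(f"{bits:.0e} bits = {gigabytes:.0f} GB, about {cdroms:.0f} CD-ROMs")
```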
In 2001, Rueckes left Harvard and helped found Nantero, a firm in Woburn, Mass., dedicated to turning this idea into a product. The pitch to investors was simple: This technology has the chance to revolutionize computer memory. "We won't see nanotube computers any time soon," said Nantero's chief executive officer, Greg Schmergel. "But our memory chip is a step in that direction."
Nanotubes are only one nanometer wide, less than a millionth of an inch. That makes them considerably smaller than transistors on current-generation memory chips, which means that it's possible to pack many more of them into a given square inch. More bits per square inch make for more memory.
Granted, it's not quite that simple. No matter how small individual nanotubes may be, the ultimate scale of the device is limited by other factors, such as the size of wires leading in and other, non-nano parts. Manufacturing at scales that small is no mean feat, since no one has yet built a device that can pick up and move around nanotubes at assembly line speeds.
Ease of manufacturing led to important design changes, Schmergel said. First, instead of laying one set of nanotubes crosswise over another, Nantero researchers decided to lay the nanotubes over a bed of electrodes created through standard photolithography. Thanks to advances in lithography using ultraviolet wavelengths, the rows of electrodes can be much less than a micrometer apart.
Another manufacturing-related design decision revolved around how to place the tubes in the proper orientation. The goal, Schmergel said, was to find a way to do this without inventing a whole new manufacturing technology. Instead, the tubes are scattered across the top of the electrode grid and then etched away using standard lithographic techniques. Only bundles of the most precisely aligned tubes survive, and they are then cemented into place. Because of this lithographic process for aligning the tubes, Schmergel said, "Anyone who's making memory today can make our chips."
And although these chips won't approach the theoretical limits, they'll pack plenty of punch. The prototype wafer, unveiled in May, possessed 10 billion junctions for storing data. The first commercially available chips, Schmergel said, will hold several megabytes.
Memory You Never Lose
Beyond sheer density of data, the nanotube chips have another, perhaps even more important potential advantage over their electronic rivals: The memory doesn't disappear when the power goes off. The tubes may be drawn to the electrode by an electrical attraction, but they are held there by van der Waals attraction, a sort of molecular-level suction. In that way, an electromechanical memory chip will have more in common with a computer hard drive or floppy disk than with RAM.
To see why this could be a huge edge, turn on a computer: During the minutes spent booting up, the random access memory is filled with the operating system read off the hard drive. The RAM has to be filled in this way because the data in it vanishes every time the power is switched off. A computer with an electromechanical memory, in which the bits are held in place by the van der Waals force, could begin working the instant it is turned on.
"Once we can get the devices to a high enough density," Schmergel said, "these chips will be competitive with hard drives. You could have computers with just one device to handle all memory. It would be a substantial leap forward."
Memory may turn out to be the only part of the computer that is best done electromechanically. As Merkle pointed out, electrons are smaller and lighter than molecules, and any process that could fabricate the rods and framework for an electromechanical computer could be used to make nano-transistors that would work faster and more efficiently.
Last year, in fact, physicists at Cornell University in Ithaca, N.Y., fabricated a transistor in which the electrical signal flows through a single cobalt atom.
Someone may well decide to build a nanocomputer that uses rod logic, but it will be for a niche application, or as a curiosity, or in homage to Drexler.
Still, Merkle said, moving, molecular-size parts will play a critical role in the future of Moore's law. "When you ask how we are going to make ever smaller computer components, the answer is through ever more precise control over the location of atoms," Merkle said. "Fundamentally, nanotechnology is a manufacturing technology. It's a better way of arranging atoms."
It's a startling vision. And one that seems only as improbable as a gigabyte computer memory chip filled with flexing nanotubes.