Supercomputing is more commonly associated with electrical than with mechanical engineering, but that is changing. A vast range of mechanical engineering problems (optimization, friction, turbulence, combustion, manufacturing processes, events at the molecular or atomic level, phenomena that couple multiple kinds of physics, and processes that span many orders of magnitude in space and time) require advanced computational resources to simulate with high fidelity.
Researching processes from the smallest practical level to the largest requires immense calculating resources. According to Adnan Akay, director of the Division of Civil, Mechanical, and Manufacturing Innovation at the National Science Foundation, advances in understanding the fundamentals of manufacturing processes rely on large-scale computing.
"Modeling manufacturing processes, such as cutting, molding, and casting, requires multiphysics," Akay said. "It includes chemical reactions, heat transfers, deformation. All this requires supercomputers. You can optimize cutting, shape, weight, etc.-the best combination of parameters."
[Photo: The Purdue University supercomputer called Steele has more than 1,000 nodes. The first 700 were assembled in hours and were handling 1,400 research jobs that day.]
Purdue University offers several cases in point. Mechanical engineering researchers are using advanced computing resources to study problems that include failure in micro electromechanical switches, friction at the molecular level, and the identification of bacteria, which happen to look distinctly different from each other in the light of a laser.
The work at Purdue is just one instance of mechanical engineering researchers making use of the latest, greatest computational tools to investigate and solve long-standing problems. Their work provides a taste of what's happening in the field. Purdue researchers, like their counterparts elsewhere, are making discoveries that would not be possible without massive computing power.
In November, the 1,058-node Steele cluster in Purdue's Rosen Center for Advanced Computing was ranked 104th among the world's supercomputers by Top500.org, the Web site where four supercomputer experts rank the performance of advanced computers twice a year. By comparison, the No. 1 ranked supercomputer was Roadrunner at Los Alamos National Laboratory, an IBM cluster operating at 1.105 petaflops (one petaflop is one quadrillion floating-point operations per second).
Steele started with 700 Dell PowerEdge 1950 nodes, which were assembled in about four hours last May 5-a newsmaking event-and by the end of the day, 1,400 research jobs were already running on the cluster. Since then, more nodes have been added, bringing the total up to 1,058 in December.
Steele has 13,568 gigabytes of main memory. The computational power of the Steele components ranges from 3 to 50 teraflops. "The Dell 1950 server wasn't the fastest, but that was a trade-off to get more nodes," said John Campbell, associate vice president of the Rosen Center.
Steele was purchased through a "community cluster" arrangement, which pools the needs of researchers across the university. Information Technology at Purdue, the university's primary IT organization and parent of the Rosen Center, pays for facilities and support, and faculty members bring funds to the table to buy the nodes. "We tell faculty they're guaranteed to have as many nodes as they've purchased within four hours of their job submission," Campbell said. "They can also borrow unused nodes for up to four hours."
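The sharing rule Campbell describes can be sketched as a simple admission policy. This is a hypothetical illustration of the stated guarantees (owned nodes always honored, idle nodes borrowable for up to four hours), not Purdue's actual scheduler:

```python
from dataclasses import dataclass

# Hypothetical sketch of a community-cluster admission rule; the names
# and numbers are illustrative, not Purdue's real scheduling software.

@dataclass
class Request:
    group: str      # research group submitting the job
    nodes: int      # nodes requested
    hours: float    # requested wall time

def admit(req: Request, purchased: dict, idle_nodes: int) -> bool:
    """Admit a job if it fits the group's purchased share, or if the
    excess can be borrowed from idle nodes for at most four hours."""
    owned = purchased.get(req.group, 0)
    if req.nodes <= owned:
        return True                     # guaranteed share, any duration
    borrowed = req.nodes - owned
    return borrowed <= idle_nodes and req.hours <= 4.0

purchased = {"prism": 64, "tribology": 32}
print(admit(Request("prism", 64, 48.0), purchased, idle_nodes=0))      # within share
print(admit(Request("tribology", 40, 3.0), purchased, idle_nodes=10))  # borrows 8 idle nodes
print(admit(Request("tribology", 40, 12.0), purchased, idle_nodes=10)) # borrow limited to 4 h
```

A real scheduler would also queue jobs and preempt borrowers when owners submit, but the two-tier rule above is the essence of the arrangement.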
Community clusters like Steele are becoming more common across the country, according to Eduardo Misawa of the National Science Foundation. Misawa, who is program director for dynamical systems in the Division of Civil, Mechanical, and Manufacturing Innovation at NSF, said that 10 or 15 years ago, it would have been extraordinary for a university to have something like Steele.
Steele is by no means the only advanced computing resource Purdue offers its researchers. There's also a SiCortex machine, which runs on comparatively low power, with nearly 4,000 processors that operate at 500 MHz. "The SiCortex processors are a lot slower, but you can get a lot more," Campbell said. "It's very attractive because you can use a lot of processors for less power."
Steele is the seventh community cluster at Purdue, and in the spring of this year, the eighth will be added. The wide array of resources means researchers have lots of options. "One of the challenges of a computing center at a university is to maintain a lot of different machines," Campbell said.
Nano Bumps and MEMS Failure
If MEMS switches worked as advertised in civilian and military communications, not only could the radios and phones use less power and therefore be lighter and cheaper to manufacture and operate, but they could also function across a greater frequency range than they do now. "Lots of money has been spent on civilian and military MEMS, and the potential is great, but they are notoriously unreliable," said Jayathi Murthy, the Robert V. Adams Professor of Mechanical Engineering at Purdue.
The problem is that nobody understands how materials behave at the micrometer scales of microelectromechanical systems. With $17 million in funds from the National Nuclear Security Administration of the Department of Energy for five years and another $4 million from Purdue, the NNSA Center for the Prediction of Reliability, Integrity, and Survivability of Microsystems, which Murthy directs, is investigating MEMS failure.
Founded last April, the PRISM Center and its 40-some researchers, half faculty and half graduate students, have access to petascale (10¹⁵ floating-point operations per second) supercomputers at the National Laboratories as well as the Steele cluster and SiCortex computers in the Rosen Center at Purdue.
In its first year, PRISM targeted contact and fluid structure issues, which are known to be quite different at the small scale than at the large. A certain class of switch, known as contacting radio frequency capacitive MEMS, measures 300 micrometers in length, but only a few micrometers thick or high. It consists of a carrier of radio frequency signals under a dielectric material and, suspended above that, a metallic membrane. Radio frequency signals pass when the membrane is about 3 micrometers above the dielectric and capacitance is low. When a voltage above a certain threshold is applied, it causes the metallic membrane to contact the dielectric.
That contact, which is expected to work trillions of times, increases capacitance and shuts off the switch. The problem is that sometimes the contact fails to release. This happens because the metal membrane is irregular and bumpy, not smooth and regular, and only the bumpy sections of the metal touch the pad. Under repeated use, the nano-scale bumps deform and change their shape, which is why the switch fails to operate as expected.
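The switching action hinges on a large jump in capacitance between the up state (air gap) and the down state (membrane on the thin dielectric). A rough parallel-plate estimate shows the scale of that jump; the plate width, dielectric thickness, and permittivity below are assumed for illustration, since the article gives only the 300-micrometer length and 3-micrometer gap:

```python
# Parallel-plate estimate of the up/down capacitance of an RF-MEMS
# capacitive switch. Length and gap come from the article; plate width,
# dielectric thickness, and permittivity are illustrative assumptions.

EPS0 = 8.854e-12              # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance, ignoring fringing fields."""
    return eps_r * EPS0 * area_m2 / gap_m

area = 300e-6 * 100e-6        # 300 um x 100 um plate (width assumed)
c_up = plate_capacitance(area, gap_m=3e-6)                  # membrane up: 3 um air gap
c_down = plate_capacitance(area, gap_m=0.2e-6, eps_r=7.5)   # membrane down: on nitride (assumed)

print(f"up-state capacitance:   {c_up * 1e15:.1f} fF")
print(f"down-state capacitance: {c_down * 1e12:.2f} pF")
print(f"on/off capacitance ratio: {c_down / c_up:.0f}")
```

Even with these rough numbers, the down state is roughly a hundred times more capacitive than the up state, which is why the stuck membrane described above effectively freezes the switch in its blocking state.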
PRISM researchers discovered that for the bumps, size matters. At the nanometer scale, smaller bumps interact more strongly than the large ones with the dielectric pad. Furthermore, where the metal and the dielectric pad make contact, charges get trapped and prevent the membrane from resuming its previous, low-capacitance position.
PRISM researchers are working to characterize these processes and predict how they affect the performance and reliability of the micro switches. They use supercomputers to try out different scenarios. Characterizing these phenomena involves five orders of magnitude of spatial scales, nano-second to micro-second contact physics, and months of time to failure. "It also involves structural motion physics," Murthy said. "The MEMS moves in a gas, so there is fluid dynamics. There is electrostatic physics, materials issues with micron-thin materials that don't act like normal materials, so the physics are very different. Lots of different physics and length and time scales is why we need supercomputers. The computations and simulations let us look at these small scales and examine effects of the different phenomena independently. We can turn them on and off in the sim."
Although the PRISM researchers are looking at fundamentals, their findings are intended to lead to changes in manufacturing MEMS.
Friction at the Molecular Level
Ashlie Martini, an assistant professor of mechanical engineering at Purdue, is seeking to understand how friction arises fundamentally, which means she is investigating phenomena on a very small scale. Her search has practical applications in the constant push toward more efficiencies of operation. "In mechanical engineering, that means less wear, less friction, and less energy dissipation," Martini said. "The efficiencies people seek are often at the interfaces between moving objects."
One of her projects focuses on modeling atomic stick-slip friction between an atomic force microscope and a substrate. The AFM has a nano-scale pointed tip that is dragged along the substrate. "Without stick-slip friction, it would just slide across the surface," Martini said. "Instead it sticks, sticks, sticks, and slips. When you look at the periodicity of the stick-slip, it correlates with the atomic structure of the substrate."
For many materials useful to engineering applications, like metals, the stick-slip sequence creates a sawtooth pattern that reflects the force of friction. "When you go up the tooth, the friction increases," Martini said. "When you go down, friction drops. There's a big burst of energy dissipation. You have inconsistent friction and energy dissipation."
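The sawtooth Martini describes is captured by the classic Prandtl-Tomlinson picture: a tip pulled by a spring across a periodic substrate potential sticks in one lattice site, loads up the spring, and then slips to the next site. A minimal overdamped sketch, with illustrative parameters rather than values from her simulations:

```python
import math

# One-dimensional Prandtl-Tomlinson sketch of atomic stick-slip: a tip,
# pulled by a spring, dragged over a sinusoidal substrate potential.
# All parameter values are illustrative, not from Martini's work.

a = 0.25e-9               # substrate lattice spacing, m
U0 = 0.2 * 1.602e-19      # corrugation amplitude, J (0.2 eV)
k = 5.0                   # cantilever/contact stiffness, N/m
gamma = 1e-5              # damping coefficient, kg/s (overdamped limit)
v = 2e-6                  # support sliding velocity, m/s
dt = 1e-9                 # integration step, s

def simulate(steps=1_000_000):
    """Overdamped tip dynamics; returns the spring (lateral force) trace."""
    x, forces = 0.0, []
    for i in range(steps):
        support = v * dt * i
        f_spring = k * (support - x)
        f_substrate = -(2 * math.pi * U0 / a) * math.sin(2 * math.pi * x / a)
        x += dt * (f_spring + f_substrate) / gamma
        forces.append(f_spring)
    return forces

forces = simulate()
# While the tip sticks, the spring force ramps up; at each slip it drops
# sharply, producing a sawtooth whose period tracks the lattice spacing.
peaks = [i for i in range(1, len(forces) - 1)
         if forces[i - 1] < forces[i] >= forces[i + 1] and forces[i] > 4e-10]
print(f"slips over 2 nm of sliding: {len(peaks)}")
print(f"peak lateral force: {max(forces) * 1e9:.2f} nN")
```

With a stiffer spring or a weaker corrugation, the instability disappears and the tip slides smoothly, which is one reason the sawtooth's size and shape depend so sensitively on conditions.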
Martini found that the size and shape of the sawteeth are highly dependent on conditions like temperature, velocity, and load, which means that the sawtooth shape and size aren't predictable. "It's not easy to design something around an interface exhibiting that kind of behavior," she said.
Her research on this project is intended to discover what happens with the atoms in the AFM tip and the substrate: how the crystalline structure changes, how atoms transfer from the tip to the substrate and vice versa, and any other changes that occur.
To model the interactions between the AFM and the substrate, Martini needs a supercomputer. Her simulation of the stick-slip problem includes 500,000 atoms, interacting on time scales spanning nine orders of magnitude, from picoseconds to milliseconds (that's 10⁻¹² to 10⁻³ seconds), and a spatial scale that covers a couple of orders of magnitude. "Each one of those atoms can potentially interact with any of the others at any time," Martini said, "and nearly all of these interactions are being modeled."
Atoms move on the scale of nanometers, and they do it in femtoseconds. Martini's goal is "to see how that affects some engineering device in seconds. So you need 10¹² time steps for each second, and that is the reason you must have big computers to get to the scales of engineering applications while still capturing the small-scale stuff."
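These scales can be sanity-checked with simple arithmetic: at femtosecond resolution, even one millisecond of simulated time requires a trillion integration steps, and half a million atoms admit on the order of 10¹¹ potential pairwise interactions per step.

```python
# Back-of-envelope check on the scales quoted above: a femtosecond
# integration step reaching toward milliseconds, and 500,000 mutually
# interacting atoms.

timestep = 1e-15           # femtosecond step, s
span = 1e-3                # one millisecond of simulated time, s
steps = span / timestep
print(f"steps to cover one millisecond: {steps:.0e}")

n = 500_000                # atoms in the stick-slip model
pairs = n * (n - 1) // 2   # potential pairwise interactions
print(f"potential atom pairs: {pairs:.2e}")
```

Production molecular-dynamics codes cut the pair count down with neighbor lists and interaction cutoffs, but the raw combinatorics explain why this class of problem saturates a cluster.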
Shining a Laser on Germs
Outbreaks of salmonella, E. coli, and other bacteria appear in the news far too often, and by the time medical personnel identify the specific bug, it's often too late for the victims. It can take four to seven days on a DNA sequencing machine to identify a particular strain of bacteria.
With colleagues and students, Daniel Hirleman, the William E. and Florence E. Perry Head of Mechanical Engineering at Purdue, has developed and patented a way to immediately identify bacterial strains using lasers and a charge-coupled device. The method shines a laser through a Petri dish of bacteria in agar and the CCD captures the resulting image. Hirleman and his team discovered that the resulting "scattering fingerprint" for each strain of bacteria is visually distinct.
Hirleman's team is building an extensive library of these bacterial fingerprints, using a computer to process and store the images. But he'd also like to figure out why the bacteria have unique fingerprints. "The circular patterns are caused by some of the bacteria dying off," Hirleman said. In the colony, bacteria starve in a dome shape around the original bacterium, but this shows up as a circle in the two-dimensional images.
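At its simplest, matching a new scatter image against the stored library is a template-comparison problem. The sketch below scores a flattened image against each library fingerprint with a Pearson correlation and reports the best match; it is a hypothetical illustration (real forward-scatter classifiers use richer features than raw pixels), and the toy patterns are invented:

```python
import math

# Hypothetical library-lookup step for scattering fingerprints: score a
# new image against stored fingerprints by normalized correlation.

def normalized_correlation(a, b):
    """Pearson correlation of two equal-length pixel vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def identify(image, library):
    """Return the library strain whose fingerprint best matches `image`."""
    return max(library, key=lambda strain: normalized_correlation(image, library[strain]))

# Toy 3x3 "fingerprints" flattened to vectors (invented values).
library = {
    "salmonella": [9, 1, 9, 1, 5, 1, 9, 1, 9],   # ring-like pattern
    "e_coli":     [1, 9, 1, 9, 5, 9, 1, 9, 1],   # complementary pattern
}
sample = [8, 2, 9, 1, 4, 2, 9, 1, 8]             # noisy salmonella-like image
print(identify(sample, library))
```

The lookup itself is cheap; as the following paragraphs make clear, the expensive part is predicting what fingerprint a never-before-seen strain would produce.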
Hirleman would like to model the growth of the bacterial colonies and their topography and distribution. "If you have a new strain, and you want to predict how it will look, you have to understand what mutations do to the metabolism and growth pattern of the bacteria," he said. "To understand and compute all of this is the grand computational challenge. If I know what the colony structure at the nano- and micro-scales looks like, then the next question is what is its scattering fingerprint? This is the other grand computational challenge."
To do all of this, Hirleman and his team need a supercomputer to perform the calculations for the trillions of bacteria in a colony and how the position of each affects the laser beam. The project involves heat and mass transfer, but it certainly isn't traditional mechanical engineering; Hirleman says that simply shows how broad a discipline mechanical engineering is. "It impacts everything," he said.
Hirleman may not be able to perform the calculations to the extent he'd like with current supercomputing resources, not just those at Purdue, but anywhere. "The number of calculations increases exponentially as you increase the number of interactions," said Misawa. "We are limited in terms of computer power to do that."
Nevertheless, Hirleman and other researchers will make full use of the resources they have. "There are many types of projects, all very complicated, with heat transfer, energy generation, friction, and other processes that require lots of computation," he said. "This is why my department got involved with Steele. We see computational engineering and science as a very important tool in moving forward with mechanical systems, design, and technology."