Jack and Jill are having a frustrating time of it lately. They reach for levers just beyond their grasp, crawl into spaces too small to fit, and try to respond to instruments just out of their line of sight.
Sometimes it's painful. They smack their hands against cold metal while trying to remove components they cannot maneuver. They strain against loads too heavy to lift. They risk, and sometimes lose, limbs to unsafe manufacturing processes.
It's a life even a crash dummy wouldn't envy. But it's just another day at the office for Jack and Jill, the Everyman and Everywoman characters in digital human factors software created by Engineering Animation Inc. of Ames, Iowa.
Jack and Jill were created to simulate the reach, vision, strength, and movement of actual human beings. They, and other human factors software, are part of a new generation of digital creations designed to analyze how people might interact with the three-dimensional environment created by computer-aided design tools.
Aided by cheaper, more powerful computers, 3-D CAD models have grown increasingly common and complex, encompassing everything from simple components to cars and jet fighters. Over the past 15 years, developers have designed software to reuse solid CAD models in a variety of ways. Today, software programs and plug-ins use CAD data to visualize, animate, simulate, validate, manufacture, and assemble parts digitally.
In essence, they create a virtual 3-D world. By inserting digital humans into that virtual world—or stepping into it themselves—engineers have found new ways to test designs for ergonomics, manufacturability, maintainability, safety, and style.
The goal, of course, is to design better, higher-quality products faster and cheaper by getting everyone from manufacturing and quality through safety and maintenance involved in the process before settling on a design.
The approach, called concurrent engineering, has been talked about since the quality revolution in the 1980s. Yet it has proven notoriously difficult to master for companies making complex products, such as cars, airplanes, heavy machinery, and electromechanical devices.
“What often happens is that product development is much more of a serial process,” said Bill Boswell, senior director of product development at Engineering Animation. “People don’t work in parallel.”
According to Boswell, “Manufacturing engineers don’t look at designs until other organizations have signed off on them. There are horror stories of people hanging almost upside down to change a hydraulic plug because no one ever talked to the service people to see if they could perform the task. If a service person had run an ergonomics model while the part was designed, maybe she could have kept that from happening. Without the underlying tools, it’s hard to have a parallel process.”
Bob Brown, president of Deneb Robotics Inc. in Troy, Mich., has the same kind of concerns. “We’re really the enabling tech for concurrent engineering,” said Brown, whose company markets Deneb/ERGO, human motion and task analysis software that competes with Jack and Jill. Deneb is owned by Dassault Systemes, developer of CATIA CAD software. The company’s goal, Brown said, is to reduce rework by 80 to 90 percent before product launch.
“We’re selling cost avoidance,” he said. “No one likes to admit they make design mistakes, but we ask them to look at the dollar value of product and tooling change orders on their last program. If we can save even 30 to 50 percent of that by proving out their products in a digital environment, it would dramatically improve their bottom line.”
Human models are especially useful in highlighting potential problems in product assembly, use, and maintenance. These might include parts positioned where hands cannot pivot to remove them, tractor levers that short farmers cannot grasp, and production processes with safety features that workers can circumvent.
Such problems often remain hidden in blueprints or CAD drawings. They show up when engineers start crawling over physical mockups. In the past, companies would correct problems discovered on a mockup, then build a new mockup to correct issues arising from the previous set of corrections. It cost time and money.
“Car companies built anywhere from 20 to 40, or even more, custom mockups at a cost of $500,000 to $1 million each,” said John MacKrell of the consulting firm CIMdata Inc. in Ann Arbor, Mich. “The cost in time is even greater when you’re trying to speed up product cycles to take advantage of market trends.” Today, he said, engineers want to catch most design and assembly errors digitally and build mockups only at the end of the process.
Error avoidance is the objective of the Virtual Product Development Initiative at Lockheed Martin Tactical Aircraft Systems in Fort Worth, Texas. The program is now in its third year. It uses Engineering Animation’s human factors software in conjunction with CAD, visualization and animation systems, and manufacturing software.
“Our goal is to reduce cycle time and achieve as much as a 50 percent reduction in manufacturing costs through error avoidance,” said program director Mary Ann Horter. “By doing everything digitally, we hope to lower the cost of building mockups and reduce unplanned changes in the development cycle.”
A Tsunami of Changes
That means paying close attention to assembly. That’s especially important in the aircraft industry, where new fighters may have up to a million parts. A simple change, such as widening a bulkhead hole to improve access, could change the part’s load-bearing properties and set off a tsunami of other changes throughout the aircraft. That might mean changing dies or machine specifications.
“We’ve caught lots of mistakes before we released parts,” said Tactical Aircraft Systems’ manager of visualization, Robert Lynch Jr. However, he’s reluctant to divulge details about such secretive programs as the F-22 Raptor and Joint Strike Fighter, a next-generation fighter jet that Lockheed Martin is competing with Boeing to build.
Yet he does recall visualizing the assembly of JSF weapon bay doors. “We were able to look at the assembly sequences when it was years away from delivery,” Lynch said. “The engineers actually watched an animation of the way workers arranged the parts, then changed their minds about the most efficient way to build the doors.”
Assembly is also an issue at Deere & Co., the Moline, Ill., producer of farm and heavy equipment. The company used Jack and Jill to locate an access hole in a panel to confirm that any production line worker could attach the panel to an assembly.
Deere engineers started by defining a fifth percentile Jill and a 95th percentile Jack. A fifth percentile woman is a digital manikin whose anatomical traits rank among the smallest, narrowest, shortest, and weakest 5 percent of all women. A 95th percentile man has size and strength equal to or greater than 95 percent of men. By testing the size, shape, and location of the access hole against different digital populations, Deere ensured that all workers on its line could assemble the part. Deere also uses human factors software from Division Group, a San Diego unit of Parametric Technology Corp.
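The percentile arithmetic behind such manikins is straightforward when a body dimension is treated as normally distributed. A minimal sketch, using illustrative mean and standard-deviation figures for stature rather than Deere's actual anthropometric tables:

```python
from statistics import NormalDist

# Illustrative anthropometric data (mean, std dev of stature in cm).
# These are rough textbook-style figures, not Deere's data.
STATURE = {"female": (162.0, 6.5), "male": (175.5, 7.0)}

def percentile_stature(sex: str, pct: float) -> float:
    """Stature at a given percentile, assuming a normal distribution."""
    mean, sd = STATURE[sex]
    return NormalDist(mean, sd).inv_cdf(pct / 100.0)

# A "fifth percentile Jill" and a "95th percentile Jack":
jill = percentile_stature("female", 5)    # roughly 151 cm
jack = percentile_stature("male", 95)     # roughly 187 cm
```

The same inverse-CDF lookup extends to any dimension in the table (reach, grip strength, shoulder width), which is how a whole manikin at a chosen percentile is assembled.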
Digital humans also provide important insights into the design of production and assembly equipment. “Presses and robots often have pinch points where a person could be crushed,” said Deneb’s Brown. “So they install palm buttons, safety devices located away from moving parts that prevent the machine from working if the operator’s hands are not on them.
“A woman loading and unloading the machine might be able to rest her elbows on those buttons and leave her hands at risk,” he said. By simulating the task with large digital populations, safety engineers can determine before a machine goes into production whether anyone is likely to circumvent its safety features.
Complex Assembly Interactions
As more companies outsource component manufacturing, their assembly interactions grow more complicated. Autos, for example, increasingly consist of large preassembled modules delivered from various sources and put together on an automaker’s assembly line.
First-tier suppliers not only have to simulate their own assembly process but also how the component fits into the automaker’s line. “The automaker might share CAD data on car and dashboard shape, and the component manufacturer would come back and show them the best way to assemble the dashboard into a car,” Deneb’s Brown said. “Our systems show how it fits and how someone using an ergonomic assist device can snake the part through a door or frame and into its slot.”
If assembly is important, so is disassembly for maintenance. Nowhere is this more important than in military and commercial aircraft. Some stories of maintenance snafus have reached near-legendary status, such as the simple plug that took two minutes to unscrew, only after hours spent disassembling an entire surrounding subcomponent.
Such maintenance problems could prove deal-breakers for commercial airlines, which earn their profits by flying expensive aircraft nearly continuously. Anything that grounds them for prolonged periods costs too much money. On the military side, the issue is flight readiness. Convoluted maintenance procedures reduce the amount of time that high-performance fighters can stay in the air.
Boeing Co. uses its own proprietary human factors model, Boeing McDonnell Douglas Human Modeling System, or BMD-HMS, to test a variety of maintenance tasks in military and commercial aircraft. “We built it over the past 10 years, pretty much from scratch,” said Steve Rice, a principal engineer and scientist at Boeing Phantom Works in Long Beach, Calif. “We worked with optometrists, biomechanics experts, and other specialists to define geometries, how the spine moves, motion algorithms—over 100 anthropometric measures.”
In a typical application, Boeing had to determine whether a 50th percentile female or fifth percentile male could replace an inert gas bottle located under a transport’s cargo floor. The human modeling system showed that small mechanics could see and reach the fasteners. The software’s collision detection feature verified that no structural parts interfered with the operation. The simulation saved Boeing the time and cost of testing populations of workers on full-scale physical mockups.
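Collision checks of this kind typically begin with coarse bounding volumes around each part and body segment before any fine geometry is compared. A minimal sketch of the axis-aligned bounding-box test (this is an illustration of the general technique, not Boeing's BMD-HMS code; the forearm and beam boxes are hypothetical):

```python
def aabb_overlap(a, b) -> bool:
    """True if two axis-aligned boxes intersect. Each box is given as
    ((xmin, ymin, zmin), (xmax, ymax, zmax)); boxes overlap only when
    their extents overlap on all three axes."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))

# Hypothetical volumes in metres: a mechanic's forearm reach envelope
# versus a structural beam under the cargo floor.
forearm = ((0.0, 0.0, 0.0), (0.4, 0.1, 0.1))
beam    = ((0.5, -0.2, 0.0), (0.6, 0.2, 0.05))
print(aabb_overlap(forearm, beam))  # False: the reach clears the beam
```

When two boxes do overlap, a production system falls back to exact surface-to-surface tests on just those parts, which keeps whole-aircraft checks tractable.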
Using the same approach, Boeing highlighted assembly issues that would be nearly impossible to identify without physical mockups. It found, for example, that while a 95th percentile male could fit through an access hole in a wing box, the bay’s width and the distance between the upper and lower stringers would restrict his full range of motion.
What works on the ground also works in orbit. “The International Space Station has a requirement that every maintenance removal, such as a smoke detector or valve, must be able to be carried out by a fifth percentile Japanese female and a 95th percentile Western male,” said Phantom Works senior engineer and scientist Terri Graham.
By inserting manikins into a digital rendition of the space station, Boeing proved that astronauts could inspect and reach the parts. The human modeling system also created a swept volume around each tool that defined tool clearance in the hands of the astronaut. As long as the motions needed to remove the part stayed within the swept volume, the job could be done without any problem.
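One way to approximate such a swept-volume clearance test is to sweep a sphere of the tool's radius along sampled points of the motion path and confirm that no obstacle point falls inside the sweep. A rough sketch, with hypothetical coordinates standing in for station geometry:

```python
import math

def clearance_ok(path, obstacles, tool_radius):
    """Approximate swept-volume check: the tool sweeps a sphere of
    tool_radius along each sampled path point. Returns True only if
    every obstacle point stays outside the swept volume."""
    return all(math.dist(p, o) > tool_radius
               for p in path for o in obstacles)

# Hypothetical wrench path (a straight 1 m pull-out, sampled every 10 cm)
# past a single strut point near a smoke-detector housing.
path = [(0.0, 0.0, z / 10) for z in range(11)]
strut = [(0.15, 0.0, 0.5)]
print(clearance_ok(path, strut, tool_radius=0.10))  # True: 15 cm > 10 cm
```

Real systems sweep the tool's full triangle mesh rather than a sphere, but the pass/fail logic is the same: the removal motion succeeds only if it stays inside the precomputed clearance volume.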
“In the past, Houston would build water tanks where they would suspend actual astronauts and have them perform tests on physical mockups. Using BMD-HMS saved them millions,” Graham said.
NASA asked Boeing to analyze several tasks aboard the space station’s laboratory node. Because it is so expensive to launch astronauts into orbit, NASA strives to schedule their time fully. That means building lots of mockups to estimate the time and number of people needed to complete even the most mundane assignments.
Digital software makes the scheduling process easier and cheaper. “The astronauts have to translate racks that look like telephone booths with handrails on them from one part of the space station to the lab module,” Graham explained. “We ran a collision detection analysis to determine whether it took one or two crew members to balance and move the racks. We found that it took only one, and showed them where the racks were a close fit and where collisions were most likely to occur.”
Space may hold the glamour, but most human factor applications remain firmly grounded here on Earth. Some of the most interesting analyses involve brute application of digital speed and power. That’s how Battelle Research Institute of Columbus, Ohio, used Jack to locate the grip height for a weed trimmer. It simulated a population of 5,000 people of different sizes to evaluate designs for comfort, reach, and safety.
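A brute-force population study of that sort amounts to Monte Carlo sampling: draw thousands of virtual users, apply a comfort criterion to each, and count how many a candidate design accommodates. The sketch below assumes, purely for illustration, that a trimmer grip is comfortable when it sits between 42 and 50 percent of the user's stature; the criterion and population figures are invented, not Battelle's data:

```python
import random

random.seed(42)  # reproducible sampling

def accommodated(grip_height_cm, population):
    """Fraction of sampled users for whom the grip height falls in the
    assumed comfort band of 42-50 percent of stature."""
    fits = sum(0.42 * s <= grip_height_cm <= 0.50 * s for s in population)
    return fits / len(population)

# Sample 5,000 statures (cm) from a rough mixed male/female population.
population = [random.gauss(168.0, 9.0) for _ in range(5000)]

for grip in (70, 76, 82):
    print(f"{grip} cm grip fits {accommodated(grip, population):.0%} of users")
```

Comparing the accommodation fractions across candidate grip heights is the digital equivalent of putting 5,000 people of different sizes in front of the prototype.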
The U.S. Army used a more focused approach to evaluate amphibious assault vehicle hull designs because it already knew the problem. Army landing craft behave differently when cruising slowly and when skimming over the water surface at high speeds. The analysis helped the Army modify the hull so drivers could see the horizon no matter how fast they traveled.
Immersive Virtual Reality
Sometimes, though, even the most powerful human factors models are not enough. They may do a fine job of analyzing physical issues, but some decisions, such as styling, ergonomics, comfort, and—for lack of a better word—the “rightness” of a product, rest on subjective assessments of engineering information. In this realm, 3-D models displayed on large monitors simply do not convey the feeling of a design. Immersive virtual reality does.
Immersive VR does just what the name implies: It catapults the viewer into the picture. Naval captains feel as if they are sitting in the control room of a nuclear submarine. Airline executives walk down the aisles of aircraft still on the drawing boards. Automobile stylists sit inside the interior of their latest creation and look out the window (though they forgo the smell of new leather).
“Our experience has been that designers really obtain an enhanced understanding of data when they are able to see it full size and in three dimensions,” said Robert Tilove, the group manager for visualization and geometric modeling at the General Motors R&D Center in Warren, Mich.
Tilove knows because he was one of the people who helped bring the technology to GM during the early 1990s, when the entire virtual reality field became a hot topic at university research laboratories.
GM already had half a solution. It used head-mounted displays to flash images in front of each user’s eyes. Unfortunately, the headsets had poor color resolution—a red flag in style-conscious Detroit—and their 3-lb. weight quickly became annoying. Equally important in an enterprise as collaborative as auto design and construction, the headsets isolated viewers. They did not all see the same image or one another.
A Sun Microsystems researcher, Michael Deering, developed a small pair of glasses containing electronic shutters. Instead of projecting an image onto the glasses, Deering displayed it on a computer monitor. Synchronizing the shutters with alternating right-eye and left-eye images created the illusion of stereo depth. The technology is used by StereoGraphics of San Francisco.
It made inspection of CAD models as intuitive as turning or bending one’s head. “It’s a bit like a 3-D movie except it’s interactive,” said Tilove. “If you sit far to the left in a 3-D movie, you’ll see a distorted image. Here, the system tracks your position and adjusts the perspective. Many people can view the image and each other at the same time to discuss and resolve engineering integration problems.”
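The perspective adjustment Tilove describes can be sketched as an asymmetric (off-axis) viewing frustum recomputed each frame from the tracked head position. A simplified version for one flat display in the z = 0 plane; the screen size, eye positions, and near-plane distance are hypothetical:

```python
def offaxis_frustum(eye, screen_lo, screen_hi, near):
    """Asymmetric frustum extents (left, right, bottom, top at the near
    plane) for a viewer at `eye`, facing a screen rectangle in the z = 0
    plane spanning screen_lo..screen_hi in x and y. As the tracked head
    moves, the frustum skews so the image stays correct from that eye."""
    ex, ey, ez = eye                  # ez > 0: eye's distance from the screen
    scale = near / ez                 # similar triangles: screen -> near plane
    left   = (screen_lo[0] - ex) * scale
    right  = (screen_hi[0] - ex) * scale
    bottom = (screen_lo[1] - ey) * scale
    top    = (screen_hi[1] - ey) * scale
    return left, right, bottom, top

# A 2 m x 2 m wall centered at the origin, viewer standing centered:
print(offaxis_frustum((0.0, 0.0, 1.5), (-1.0, -1.0), (1.0, 1.0), near=0.1))
# Step half a metre to the left and the frustum skews to compensate:
print(offaxis_frustum((-0.5, 0.0, 1.5), (-1.0, -1.0), (1.0, 1.0), near=0.1))
```

The four extents feed directly into an asymmetric projection (OpenGL's `glFrustum` takes exactly these parameters), and a stereo system computes one such frustum per eye, offset by half the interocular distance.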
GM tested the system on a 39-inch monitor large enough to deliver full-size views of hubcaps and instrument clusters. It then moved to a power wall, which projects life-size 3-D images on a large screen from the rear.
Going into the Cave
Power walls did a good job of displaying a car’s exterior, but engineers couldn’t get inside the car. That required displays on all sides. The Electronic Visualization Laboratory at the University of Illinois at Chicago had the solution. The lab called it a CAVE, according to a 1993 paper, partly for the simile of the cave in Plato’s Republic, in which the philosopher discusses inferring reality, or ideal forms, from their shadows projected on the wall. CAVE also stands, somewhat recursively, for CAVE Automatic Virtual Environment.
The CAVE surrounds the viewer with four display walls: one in front, two on the side, and one underneath. GM built the first company-owned CAVE in 1994.
Engineers could now sit in a real seat while the computer drew the image of the car around them. By pointing a laser wand, visible to everyone in the room, they could highlight parts, push buttons, switch levers—do everything but kick the tires.
GM’s studios use the CAVE to visualize designs without building clay models. “Obviously, it saves money,” Tilove said. “More importantly, though, it saves time. Instead of an iterative process of building prototypes and evaluating them, we can build and modify computer images.
“We are able to look through the window and see if an A-pillar blocks our view,” Tilove continued. “We can check vision, obscurations, the location of controls and glove boxes. Then we can evaluate styling themes and compare A to B to C. We can see how they look relative to the competition’s entry in the market.”
The CAVE also lets engineers assess complex engineering data. In one dramatic example, GM engineers simulate a crash at 30 mph using finite element analysis. They then play back the incident in the CAVE.
“We can visualize every instant of time, what happens to the shape of the vehicle, how it buckles, how the engine moves on impact,” Tilove explained. “Visualizing it in full scale helps engineers and designers to interpret analysis results and suggest modifications to improve performance.”
How accurate are the representations? Some simulations of physical events might prove to be 80 to 90 percent accurate, while other phenomena, such as high-frequency wind noise, are extremely difficult to model, Tilove said.
The same is true of human factors software. When it comes to size, reach, vision, and strength, digital manikins may be more accurate than real human beings, according to Boeing’s Rice.
“There’s some amount of error inherent in every human model that approximates human motion,” he said. “We attempt to validate our models and report the error, which might be 2 percent for certain types of reaches, 0.5 percent for others.
“But you can measure a person in the morning and their reach will change by an inch or so by the afternoon. So there’s probably more error in individual measurements from one time of the day to another than there is in our model.”
Simulating the mechanical behavior of humans is fairly mature, Rice said. After all, modeling the size of an arm and the rotation of its elbow and wrist is not all that different than describing the movement of a metal shank with two joints.
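Rice's analogy can be made concrete with planar forward kinematics, the same arithmetic used for any two-joint linkage. A minimal sketch (the segment lengths and angle conventions here are illustrative, not drawn from any particular manikin):

```python
import math

def reach(shoulder, upper_len, fore_len, shoulder_deg, elbow_deg):
    """Planar forward kinematics for a two-joint arm: returns the hand
    position given segment lengths (m) and joint angles in degrees
    (shoulder measured from the x-axis, elbow measured from the upper
    arm). Swap in link lengths and the same code describes a two-joint
    metal shank."""
    a1 = math.radians(shoulder_deg)
    a2 = a1 + math.radians(elbow_deg)          # elbow angle is relative
    ex = shoulder[0] + upper_len * math.cos(a1)  # elbow position
    ey = shoulder[1] + upper_len * math.sin(a1)
    return (ex + fore_len * math.cos(a2), ey + fore_len * math.sin(a2))

# Arm straight out along x (both angles zero): the hand sits at the
# summed segment lengths.
print(tuple(round(v, 3) for v in reach((0.0, 0.0), 0.30, 0.25, 0, 0)))
```

A manikin layers anthropometry on top of this skeleton: joint-angle limits, percentile-scaled segment lengths, and strength data decide whether a given hand position is actually reachable in comfort.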
Simulating the Subjective
What is still missing, he said, are ways to simulate more subjective factors, such as comfort and fatigue. “The field is moving toward behavior issues,” Rice explained. “Several systems have some of these capabilities, such as a little red light that comes on and says the manikin is more uncomfortable than it was before.
“But it’s still very vague,” he continued. “One person may be perfectly comfortable in that situation while someone else is in pain. There’s very little objective data in this area. When someone decides it’s important enough to fund research to gather valid data, we’ll start seeing more capable software.”
One way to overcome behavior barriers is to put real people in simulations. Yet immersive virtual reality poses its own challenges, starting with geometric complexity. “The amount of data we’d like to manipulate is so big, even the fastest computers can’t handle them,” Tilove said. “That means you have to trade off image realism for speed of response. In virtual interiors, we don’t have enough resolution to read the instrument gauges clearly. If we did, there would be way too much data for the CAVE to respond to when you move your head around.”
Tilove has a shopping list of other features he would like. Like many members of large corporations, he’d like ways to run a single simulation in several CAVEs around the world. The problem, he said, lies not so much in visualization technology as in communication etiquette.
“It’s like a videoconference,” he explained. “It works fine if you’re reviewing structured information that everyone’s seen. But when you try to collaborate, nobody knows what the other guys are looking at or who talks next. We take a lot of cues from body language and all of that goes away.”
Better ways to manipulate virtual reality are also on Tilove’s wish list. He wants tactile as well as visual feedback, so viewers can actually feel it when they bump into something. Some devices that provide this capability are beginning to emerge, such as penlike styluses linked to pulleys and motors that simulate contact forces.
In the end, though, the field’s most important issue is simply putting human factors software into the hands of more engineers.
Today, the technology is limited to companies that do 3-D CAD modeling. That may be changing with the advent of less expensive visualization software. Visualization software does not create CAD models, but it allows users to visualize CAD data.
Division Group produces a broad range of visualization software, including dV/Manikin and dV/Safework digital humans, as well as dV/Immersion for full-scale virtual reality modeling. Product marketing director Ralph Mayer said he can provide visualization software for $4,000 to $10,000 a seat.
It’s not cheap. Yet it’s significantly less costly than it was even two or three years ago. Large companies, said Mayer, can now afford to distribute visualization software throughout the organization. They can even send models—minus their proprietary engineering data—to their smaller suppliers.
This gives engineers who ordinarily don’t have any input into design a chance to test new products and processes for human factors very early in their design. Ideally, it will lead to faster introduction of safer, more comfortable, easier-to-use products.
It’s a revolution in the making, one that suggests the frustrating days for Jack and Jill are far from over.