This paper explores the use of immersive computing, or virtual reality, throughout a product's design, assembly, and disassembly. Virtual reality creates a sense of presence for participants through devices that stimulate the senses. Immersive computing technology goes a step further by allowing the participant to interact with computer-generated models or environments rather than passively view a screen. The technology is a collection of hardware and software that lets the participant explore digitally created objects within a three-dimensional space, in sharp contrast to the two-dimensional interfaces used with computer-aided design (CAD) software. To illustrate the use of immersive computing in product design, a current research project focuses on using the technology to explore uncertainty in design decision making. Industry is realizing the benefits of increased communication and deeper understanding of complex design issues through the use of immersive computing. Experts believe that when every engineer's desktop includes immersive computing technology, the results will be better products produced more economically, along with increased national competitiveness.
Designers and engineers use computer-aided design software to aid decision making throughout the entire design process, from conceptual design through assembly process planning and analysis to packaging and shipping. Increasingly, disassembly is becoming an important factor in product design, whether for repair in the middle of a product's service life or for recovering valuable components and materials at the end of it. Disassembly is often not a simple matter of performing the assembly steps in reverse order, and some complications may not become apparent until after the product has been fabricated.
One way to discover such issues is through the use of virtual reality. Using VR, an engineer could virtually disassemble a product, potentially discovering more optimal disassembly sequences or even the need for a product redesign. Learning such lessons in the design phase, before a single piece has been fabricated, could result in great savings.
We're at the start of a virtual reality revolution, both in the workplace and in the home, and it's going to affect us all. I expect to see virtual reality tools on every engineer's desktop one day, mirroring the emergence of personal computers in the 1980s. The results will be better products made at less expense, and possibly increased national competitiveness.
Currently, engineers use computer-aided design programs to represent products in three dimensions, but we view, modify, and interact with CAD models using two-dimensional interfaces such as the monitor, keyboard, and mouse. Wouldn’t it be more effective if we used our ability to reach out, pick up objects, and manipulate them to help us make decisions throughout the design process?
Virtual reality, or immersive computing, creates a sense of presence for participants through devices that stimulate the senses. For instance, IMAX theaters tap into our visual and auditory senses through the use of large projection screens, 3-D glasses, and surround sound, which give theatergoers the sense of being present in the action of a movie.
Immersive computing technology goes a step further by allowing the participant to interact with computer-generated models or environments rather than to passively view a screen. The technology is a collection of hardware and software that lets the participant explore digitally created objects within a three-dimensional space.
Immersive computing presents a sharp contrast to existing two-dimensional computer interfaces used with CAD software programs. Currently, all digital objects, no matter the actual size, are scaled to the size of the computer monitor. Interacting with these objects is done through short-cut keystrokes and mouse interaction. Certain functions of the keyboard and mouse, for instance, can be combined to rotate a model.
But using immersive computing methods to rotate a model, you can reach out with your hand, select the model, and move it as if you held a real object. The difference is significant. The physiological response of using your body to interact with the object is inherently different from using the mouse and the keyboard.
The underlying technology includes electrical, mechanical, or optical sensors that record a person's position and relay that information to a computer, along with projection surfaces that support 3-D images. Position sensors detect actions that can then be simulated: immersive computing technologies map a person's movement in the real environment to corresponding movement in the computer environment. For example, using position sensors, a person can walk around the computer image of a table, reach down and pick up a computer-generated part, place it on the table, and then assemble it with other parts. Usually the person holds some sort of input device to select objects in the virtual environment. All of this is done in the computer world, with movement sensed from the user's actions in the real world, in much the same way that the Wii remote control senses and simulates a player's movements.
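The mapping from a tracked real-world position to the virtual environment can be sketched in a few lines. This is a minimal illustration, not any particular system's API; the coordinate convention (y up, rotation about the vertical axis) and the `yaw_deg` and `scale` parameters are assumptions for the sake of the example.

```python
import math

def tracked_to_virtual(sensor_pos, origin, yaw_deg, scale=1.0):
    """Map a position reported by a tracking sensor (real-world metres)
    into virtual-environment coordinates: rotate about the vertical axis,
    scale, then translate to the virtual origin."""
    x, y, z = sensor_pos
    th = math.radians(yaw_deg)
    # Rotate in the horizontal (x, z) plane; y is the vertical axis.
    xr = x * math.cos(th) + z * math.sin(th)
    zr = -x * math.sin(th) + z * math.cos(th)
    ox, oy, oz = origin
    return (ox + scale * xr, oy + scale * y, oz + scale * zr)
```

A real system applies a calibrated transform of this kind to every sensor reading, many times per second, so that a step taken in the room produces a matching step around the virtual table.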
Projection of the images can be implemented on large wall projection screens, rooms equipped with projection walls, head-mounted displays or helmets, or mobile devices. Augmented reality supports overlaying a computer image on a video of a real scene, thereby combining both the real and the virtual images.
Another kind of sensory input comes from haptic devices. Haptic, or force feedback, devices can be as large as a person or as small as the head of a pin. They can be attached to the floor, mounted on a person, or can be moved within a workspace. As users interact with objects in the virtual environment, they feel forces generated by the haptic devices.
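A common way to render those contact forces is a virtual spring-damper: when the user's proxy point penetrates a surface, a spring force pushes it back out while a damper removes energy. The sketch below is a simplified one-dimensional version with invented stiffness and damping values, not the control law of any specific haptic device.

```python
def haptic_force(penetration_depth, velocity, k=800.0, b=2.5):
    """Penalty-style force for a haptic device: a virtual spring (stiffness k)
    pushes the user's proxy point back out of a surface while a damper
    (coefficient b) dissipates energy based on penetration velocity.
    Returns 0 when the proxy is outside the object (no contact)."""
    if penetration_depth <= 0.0:
        return 0.0
    force = k * penetration_depth - b * velocity
    return max(force, 0.0)  # never pull the hand into the surface
```

This update must run at a high rate (commonly cited as around 1 kHz) for the contact to feel solid rather than spongy, which is one reason force feedback for complex geometry is computationally demanding.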
In the end, multiple technologies are appropriate at different stages in the product development cycle. They include low-cost mobile and desktop solutions, medium-cost body tracking and projection systems, and high-cost multi-screen projection systems. Each has its own capabilities, drawbacks, and uses.
To illustrate the use of immersive computing in product design, one of our current research projects focuses on using the technology to explore uncertainty in making design decisions. The questions we’re trying to answer are: How does our ability to interact with the digital object in three dimensions influence our design decisions? Can we use this new technology to help us improve our decision making and avoid mistakes in product design?
At present, together with Deborah Thurston, professor of industrial and enterprise engineering at the University of Illinois, Urbana-Champaign, and her students, we’re exploring the use of immersive computing as a means to overcome cognitive biases, or errors in human decision-making.
These errors result from heuristics or “rules of thumb” that we all use in everyday life. For example, when estimating the probability of damaging a component during disassembly, we use the experiences or knowledge available to us to recall past incidents of damage.
That's perfectly reasonable and logical. Unfortunately, our brains recall incidents with extreme outcomes much more readily than incidents with less extreme outcomes.
The problem is that our conception of the probability of the event is overly influenced by our memory of the magnitude of its impact.
If we easily recall incidents of damage to expensive components during disassembly, we tend to overdesign in order to prevent that damage.
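This availability bias can be illustrated with a toy simulation. Everything here is hypothetical: the incident data, the `recall_bias` parameter, and the idea of weighting memories by cost are inventions to make the effect concrete, not a model from the research described above.

```python
import random

def biased_probability_estimate(incidents, recall_bias=0.02, trials=10000, seed=1):
    """incidents: list of (damaged, cost) pairs from past disassemblies,
    where damaged is 1 if the component was damaged, else 0.
    The unbiased estimate is the plain damage frequency. The 'availability'
    estimate samples memories with probability proportional to
    1 + recall_bias * cost, so expensive failures are recalled more often."""
    rng = random.Random(seed)
    unbiased = sum(d for d, _ in incidents) / len(incidents)
    weights = [1 + recall_bias * cost for _, cost in incidents]
    recalled = rng.choices(incidents, weights=weights, k=trials)
    biased = sum(d for d, _ in recalled) / trials
    return unbiased, biased
```

With, say, 95 cheap undamaged outcomes and 5 expensive damaged ones, the true damage rate is 5 percent, but the cost-weighted recall inflates the perceived probability severalfold, which is exactly the overdesign pressure described above.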
So the major contribution of immersive computing in product design is in increasing mechanical engineers’ ability to test a design very early in the process, before the geometry and functionality are finalized. Designers can explore various configurations quickly to aid the decision-making process. And they can look at more than design tolerances and assemblies. They can also use immersive computing to look at how humans handle the parts during the production process and how the product is used after it's purchased.
It's the potential for intimate interaction with the product while it is still in the digital form that will provide the greatest savings.
One prototype application we’re working on would allow engineers to virtually disassemble a digital product to explore how design revisions could be made in order to aid recovery of a valuable internal component for remanufacture at the end of a product's life.
As the engineer virtually disassembles the product with an eye toward end-of-life recovery, he or she experiences the disassembly while also viewing the disassembly tree and the disassembly path that's already been identified.
In the process of disassembly, the engineer might discover that the product needs to be refixtured or rotated in order to successfully remove the next part. He or she might also discover that the tooling available is insufficient or that more room is needed in the workstation to accommodate comfortable ergonomics during the disassembly operation.
This feedback would result in the selection of another disassembly sequence that wouldn’t exactly match the sequence originally identified, or it might even result in a product redesign.
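The disassembly tree mentioned above can be thought of as a set of precedence constraints: some parts block access to others, so any valid sequence must remove the blockers first. The following is a minimal sketch of that idea, with made-up part names; a real planner would search this space with cost heuristics and geometric reasoning rather than brute-force enumeration.

```python
from itertools import permutations

def feasible_sequences(parts, precedence):
    """precedence maps each part to the set of parts that must be removed
    before it (e.g. a cover blocks access to the board beneath it).
    Enumerates every removal order that respects those constraints."""
    result = []
    for order in permutations(parts):
        removed = set()
        ok = True
        for p in order:
            if not precedence.get(p, set()) <= removed:
                ok = False  # a blocking part is still in place
                break
            removed.add(p)
        if ok:
            result.append(order)
    return result
```

Feedback from the virtual disassembly (a needed refixturing, an inaccessible fastener) effectively adds or changes constraints in this structure, which is why the final sequence may differ from the one originally identified.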
Virtual reality opens up numerous possibilities for augmenting the decision-making process. On the one hand, if we want to gather the same information when interacting in virtual reality as we would when interacting with parts in the real world, we need to make the simulation as true to life as possible. On the other hand, the availability of the computer in this process allows us to simulate operations that bring additional information to the user, such as numerical simulation results, that can't easily be integrated in the real world.
When the ability to manipulate a part using immersive computing is very close to the way we would manipulate the part in the real world, we will have opened up a wide array of applications.
The existing challenges to natural part interaction using immersive computing lie in simulating part-to-part interaction for complex CAD geometry. Our goal is to get to natural part interaction with as little pre-processing of CAD parts as possible, which will reduce the preparation time and cost of immersive computer simulations.
If we had natural part interaction in immersive computing, we'd be confident in our evaluation of that disassembly exercise, for example, and in the resulting decision. If the cost of employing immersive computing in this decision-making process were low, overall cost savings could be realized. We could expand this simulation from disassembly to maintenance planning; design for recycling, reuse, and remanufacture; assembly planning; and training.
Immersive computing is not intended to replace CAD, however. There are tasks where CAD is especially suited and tasks where immersive computing is more appropriate. The question to ask is: Am I concerned with the part itself or am I concerned with how a human will interact with the part? If the answer is the latter, then immersive computing is a potential development environment.
For example, using current CAD technology, parts can be designed to extreme precision. Parts can be designed to given tolerances; assemblies can be checked for part interference; fixtures can be designed to accommodate parts; and factories can be designed for production.
Though assembly and disassembly paths can be mathematically determined with current CAD software, if the simulation doesn’t account for the physiology and function of the human hand, the designer may have a design that's not physically practical.
Physically based modeling alone is insufficient because common CAD model representations consist of approximated surfaces, not exact surfaces. If exact surfaces are used, the computational cost of collision detection and force feedback for complex CAD models becomes too high for a real-time immersive environment. Manipulating objects in free space is straightforward; it's when those objects collide that issues arise.
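One standard way to tame that cost is a broad-phase check: wrap each part in an axis-aligned bounding box and only run the expensive surface-level collision test when the boxes overlap. This sketch is a generic illustration of the technique, not the specific pipeline used in the work described here.

```python
def aabb_overlap(a, b):
    """Axis-aligned bounding boxes given as ((minx, miny, minz),
    (maxx, maxy, maxz)). A cheap broad-phase test: if the boxes are
    disjoint on any axis, the parts cannot be colliding, and the
    expensive exact-surface test can be skipped entirely."""
    (amin, amax), (bmin, bmax) = a, b
    return all(amin[i] <= bmax[i] and bmin[i] <= amax[i] for i in range(3))
```

Hierarchies of such boxes let a simulation rule out most part pairs in microseconds, reserving exact geometry checks for the few parts that are actually close to contact.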
To this end, we have been exploring approaches that combine techniques derived from CAD software with physically based modeling.
We take a hybrid approach where we use physically based modeling—where contact and interaction forces are simulated analytically—until we’re close to assembly. Then we switch to a constraint-based approach, which employs geometrically defined constraints between two parts.
So far, we have successfully used this method to approximate natural interaction of placing a pin in a hole. Our next step is to incorporate force feedback into this method.
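The switch from physically based to constraint-based simulation can be sketched for the pin-in-hole case. The details of the authors' actual method are not given, so this is only a minimal illustration of the idea: far from the hole, the pin moves freely under simulated contact forces; once its tip comes within a threshold distance of the hole axis, the simulation snaps it onto the axis as a geometric constraint. The `snap_dist` threshold and the vector conventions are assumptions.

```python
import math

def pin_step(pin_pos, hole_axis_point, hole_axis_dir, snap_dist=0.005):
    """One step of a hybrid assembly simulation for a pin-in-hole mate.
    hole_axis_dir must be a unit vector. Returns the (possibly constrained)
    pin position and a flag saying whether the constraint engaged."""
    # Vector from a point on the axis to the pin, split into axial
    # and radial components.
    d = [p - q for p, q in zip(pin_pos, hole_axis_point)]
    along = sum(di * ai for di, ai in zip(d, hole_axis_dir))
    radial = [di - along * ai for di, ai in zip(d, hole_axis_dir)]
    r = math.sqrt(sum(c * c for c in radial))
    if r < snap_dist:
        # Constraint-based phase: project the pin onto the hole axis.
        constrained = [q + along * ai for q, ai in zip(hole_axis_point, hole_axis_dir)]
        return constrained, True
    return pin_pos, False  # free phase: penalty forces would act here
```

The appeal of the hybrid is that the expensive, numerically touchy contact simulation only has to be accurate until the parts are nearly mated; after that, the geometric constraint guarantees a clean fit.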
In recent years, a wide range of low-cost consumer devices has emerged to provide immersive computing.
You can buy stereo glasses to interface with stereo-enabled televisions so you can bring 3-D movies into the living room. The rumble pad of the Nintendo controller and the Wii remote provide haptic feedback to the game player.
The position tracking of the Wii game system supported game development that got people off the couch to compete in virtual sports. The Kinect position-tracking device performs fast image recognition so users don’t have to wear any additional hardware to track body positions.
All of these devices can be considered forms of immersive computing. While the devices don’t have the resolution or computing ability to do some of the more demanding tasks associated with immersive computing for product design, many people are exploring how we might use such devices to support future immersive computing applications.
Industry has often been a cautious adopter of immersive computing. Early pioneers such as General Motors, Caterpillar, Boeing, and NASA led the way in the early 1990s.
Immersive computing has greatly expanded over the past decade with the emergence of system design and installation companies and commercial immersive computing software.
Industry is realizing the benefits of increased communication and deeper understanding of complex design issues through the use of immersive computing.
Traditional design review brings people with expertise from different areas together to discuss potential issues. Each person brings his or her own perspective and understanding of the form and function of the design to the meeting.
With immersive computing, instead of having one person at a computer console manipulate the design, any member of the review team can put on the position-tracked glasses and guide the team to the area of concern. This type of natural interaction supports increased understanding and, therefore, better decisions.
This technology is not magic, but an extension of 2-D computer interface technology into three dimensions. When every engineer's desktop includes immersive computing technology, the results will be better products produced more economically and increased national competitiveness.