In the past few years, there have been some significant advances in consumer virtual reality (VR) devices. Devices such as the Oculus Rift, HTC Vive, Leap Motion™ Controller, and Microsoft Kinect® are bringing immersive VR experiences into the homes of consumers with much lower cost and space requirements than previous generations of VR hardware. These new devices are also lowering the barrier to entry for VR engineering applications. Past research has suggested that there are significant opportunities for using VR during design tasks to improve results and reduce development time. This work reviews the latest generation of VR hardware and surveys research studying VR in the design process. Additionally, this work extracts the major themes from these reviews and discusses how the latest technology and research may affect the engineering design process. We conclude that these new devices have the potential to significantly improve portions of the design process.
Introduction
Virtual reality (VR) hardware has existed since at least the 1960s [1,2], and more widespread research into applications of VR technology was underway by the late 1980s. By the early 2000s, much of the research had fallen by the wayside, and general interest in VR technology waned. The VR hardware of the time was expensive, bulky, heavy, low resolution, and required specialized computing hardware [1,3–6]. However, in the last five years, a new generation of hardware has emerged. This new hardware is much more affordable and accessible than previous generations have been, which is enabling research into applications that were previously resource prohibitive. This paper will provide an overview of the specifications of the current generation of hardware, as well as areas of the engineering design process that could benefit from the application of this technology. Section 2 will discuss the definition of VR as it pertains to this work. Section 3 will discuss current and upcoming hardware for VR. Section 4 provides a focused review of the research that has been performed in applying VR to the design process. Section 5 will provide a discussion of how the current generation of VR devices may affect research going forward as well as trends seen from the review of the literature. Section 5 will also provide some suggestions for research directions based on the concepts reviewed here.
Definition of Virtual Reality
As discussed by Steuer, the term virtual reality traditionally referred to a hardware setup consisting of items such as a stereoscopic display, computers, headphones, speakers, and 3D input devices [7]. More recently, the term has been broadly used to describe any program that includes a 3D component, regardless of the hardware it utilizes [8]. Given this wide variation, it is pertinent to clarify and scope the term virtual reality.
Steuer also proposes that the definition of VR should not be a black-and-white distinction, since such a binary definition does not allow for comparisons between VR systems [7]. Based on this idea, we consider a VR system in the light of the VR experience it provides. A very basic definition of a VR experience is the replacing of one or more physical senses with virtual senses. A simple example of this is people listening to music on noise-canceling headphones; they have replaced the sounds of the physical world with sounds from the virtual world. This VR experience can be rated on two orthogonal scales of immersivity and fidelity (see Fig. 1). Immersivity refers to how much of the physical world is replaced with the virtual world, while fidelity refers to how realistic the inputs are. Returning to the previous example, this scale would rate the headphones as low–medium immersivity since only the hearing sense is affected, but high fidelity since the audio matches what we might expect to hear in the physical world.
The contrast with augmented reality (AR) should also be noted when discussing VR. While VR seeks to replace physical senses with virtual ones, AR adds virtual information to the physical senses [9]. Continuing the earlier VR example of music on noise-canceling headphones, listening to music from a stereo would be an example of AR. In this case, the virtual sense (music) is added to the physical sense (sounds from the physical world such as cars). Although there is some overlap between AR and VR technologies and applications, we consider here only technologies for VR, and we will focus our discussion of applications on VR. For those interested in AR, Kress and Starner [10] provide a good reference for requirements and headset designs.
We also mention here the concept of mixed reality (MR). Current VR technologies are not able to produce high-fidelity outputs for all senses. Bordegoni et al. discuss the concept of MR as a solution to this issue. MR combines VR with custom-made physical implements to provide a higher fidelity experience [11]. One example is an application to prototype the interaction with an appliance. In this case, a user could see the prototype design in VR, and at the same time a simple physical prototype would have buttons and knobs to provide the touch interaction with the prototype. In this paper, we focus our discussion of applications and technologies on pure VR, and as such we will discuss MR only in passing. In addition to the work mentioned previously, Ferrise et al. provide some additional information about MR [12].
In the context of the definition presented, we consider VR experiences that—at a minimum—are high enough fidelity to present stereoscopic images to the viewer's eyes and are able to track a user's viewpoint through a virtual environment as they move in physical space. They must also be immersive enough to fully replace the user's sense of sight. The gray area of Fig. 1 shows the area under discussion in this paper.
Virtual Reality Hardware
Various types of hardware are used to provide an immersive, high-fidelity VR experience for users. Given the relative importance the sense of sight has in our interaction with the world, we consider a display system that presents images in such a way that the user perceives them to be 3D (as opposed to seeing a 2D projection of a 3D scene on a common TV or computer screen) in combination with a head tracking system to be the minimum set of requirements for a highly immersive VR experience [1]. This type of hardware was found in almost all VR applications we reviewed, for example, Refs. [1], [3], [6], and [13–22]. This requirement is noted in Fig. 2 as the core capabilities for a VR experience. Usually, some additional features are also included to enhance the experience [7]. These additional features may include motion-capture input, 3D controller input, haptic feedback, voice control input, olfactory displays, gustatory displays, facial tracking, 3D-audio output, and/or audio recording. Figure 2 lists these features as the peripheral capabilities. To understand how core and peripheral capabilities can be used together to create a more compelling experience, consider a VR experience intended to test the ease of a product's assembly. A VR experience with only the core VR capabilities might involve watching an assembly simulation from various angles. However, if haptic feedback and 3D input devices are added to the experience, the experience could now be interactive and the user could attempt to assemble the product themselves in VR while feeling collisions and interferences. On the other hand, adding an olfactory display to produce virtual smells would likely do little to enhance this particular experience. Hence, these peripheral capabilities are optional to a highly immersive VR experience and may be included based on the goals and needs of the experience. Figure 2 lists these core and peripheral capabilities, respectively, in the inner and outer circles. Devices for providing these various core and peripheral capabilities will be discussed in Secs. 3.1–3.3.
Displays.
The display is usually the heart of a VR experience and the first choice to be made when designing a VR application. VR displays differ from standard displays in that they can present a different image to each eye [1]. This ability allows for presenting slightly offset images to each eye, similar to how we view the physical world [23]. When the virtual world is presented this way, the user has the impression of seeing a true 3D scene. While the technology to do this has existed since at least the 1960s, it has traditionally been prohibitively expensive, unwieldy, or a low-quality experience [1,6,24]. VR displays usually fall into one of two groups: cave automatic virtual environments (CAVEs) or head mounted displays (HMDs).
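Regardless of the display type, the underlying stereo rendering is the same: the scene is rendered twice from two horizontally offset eye positions. The sketch below is a minimal illustration of this idea, not the implementation of any particular system; the function names and the 64 mm default interpupillary distance (IPD) are our illustrative assumptions.

```python
import numpy as np

def look_at(eye, target, up):
    """Build a right-handed view matrix from an eye position, a look-at
    target, and an approximate up vector."""
    f = target - eye
    f = f / np.linalg.norm(f)            # forward
    r = np.cross(f, up)
    r = r / np.linalg.norm(r)            # right
    u = np.cross(r, f)                   # true up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = r, u, -f
    m[:3, 3] = -m[:3, :3] @ eye          # translate world into eye space
    return m

def stereo_view_matrices(eye, target, up, ipd=0.064):
    """Return (left, right) view matrices whose eye positions are offset
    by half the IPD along the camera's right axis; rendering the scene
    once with each matrix yields the offset image pair for the two eyes."""
    f = (target - eye) / np.linalg.norm(target - eye)
    right = np.cross(f, up)
    right = right / np.linalg.norm(right)
    half = 0.5 * ipd * right
    return (look_at(eye - half, target - half, up),
            look_at(eye + half, target + half, up))
```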
CAVE systems typically consist of two or more large projector screens forming a pseudoroom. The participant also wears a special set of glasses that work with the system to track the participant's head position and also to present separate images to each eye. On the other hand, HMDs are devices that are worn on the user's head and typically use half a screen to present an image to each eye. Due to the close proximity of the screen to the eye, these HMDs also typically include some specialized optics to allow the user's eye to better focus on the screen [10,25]. Sections 3.1.1 and 3.1.2 will discuss each of these displays in more detail.
Cave Automatic Virtual Environment.
CAVE technology appears to have been first researched in the Electronic Visualization Lab at the University of Illinois [26]. In its full implementation, the CAVE consists of a room where all four walls, the ceiling, and the floor are projector screens; a special set of glasses that sync with the projectors to provide stereoscopic images; a system to sense and report the location and gaze of the viewer; and a specialized computer to calculate and render the scenes and drive the projectors [4]. When first revealed, CAVE technology was positioned as superior in most aspects to other available stereoscopic displays [27]. Included in these claims were larger field-of-view (FOV), higher visual acuity, and better support for collaboration [27]. While many of these claims were true at the time, HMDs are approaching and rivaling the capabilities of CAVE technology.
The claim about collaboration deserves special consideration. In their paper first introducing CAVE technology, Cruz-Neira et al. state, “One of the most important aspects of visualization is communication. For virtual reality to become an effective and complete visualization tool, it must permit more than one user in the same environment” [27]. CAVE technology is presented as meeting this requirement; however, certain caveats make it less than ideal for many scenarios. The first is occlusion. As people move about the CAVE, they can block each other's view of the screen. In general, this type of occlusion is inconvenient but not a serious issue as long as the occluded parts of the scene lie beyond the other participant in virtual space. However, when the object being occluded is supposed to be between the viewer and someone else (in virtual space), the stereoscopic view collapses along with the usefulness of the simulation [4]. A second issue with collaboration in a CAVE is distortion. Since only a single viewer is tracked in the classic setup, all other viewers in the CAVE see the stereo image as if they were at that location. Because two people cannot occupy the same physical space and hence cannot stand at the same location, every viewer aside from the tracked viewer experiences some distortion, and the amount of distortion grows with the viewer's distance from the tracked viewer [22]. The proposed solution to this issue is to track all the viewers and calculate stereoscopic images for each person. While this has been shown to work in the two-viewer use case [22], commercial hardware with refresh rates fast enough to handle more than two or three viewers does not yet exist.
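To see what per-viewer tracking must compute, consider that a distortion-free CAVE image is an off-axis projection recalculated every frame from the tracked head position relative to each screen. The sketch below follows the well-known generalized perspective projection formulation described by Kooima; the variable names are ours, and only the frustum extents are computed (the accompanying screen-alignment rotation and eye translation are omitted for brevity).

```python
import numpy as np

def offaxis_frustum(pa, pb, pc, pe, near):
    """Frustum extents (l, r, b, t) at the near plane for a tracked eye.

    pa, pb, pc: screen lower-left, lower-right, upper-left corners (world)
    pe: tracked eye position; near: near-plane distance
    An untracked viewer standing elsewhere sees an image computed for pe,
    which is exactly the distortion discussed above.
    """
    vr = (pb - pa) / np.linalg.norm(pb - pa)   # screen right axis
    vu = (pc - pa) / np.linalg.norm(pc - pa)   # screen up axis
    vn = np.cross(vr, vu)
    vn = vn / np.linalg.norm(vn)               # screen normal, toward viewer
    va, vb, vc = pa - pe, pb - pe, pc - pe     # eye-to-corner vectors
    d = -np.dot(va, vn)                        # eye-to-screen distance
    l = np.dot(vr, va) * near / d
    r = np.dot(vr, vb) * near / d
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    return l, r, b, t
```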
A more scalable option for eliminating the distortion associated with too many people in the CAVE is to use multiple networked CAVE systems. Information from each individual CAVE can be passed to the others in the network to build a cohesive virtual experience for each participant. This type of approach was demonstrated by the DDRIVE project which was a collaboration between HRL Laboratories and General Motors Research and Development [18]. The downside to this approach is the additional cost and space requirements associated with additional CAVE systems. Each system is typically custom built, and prices can range from hundreds of thousands to millions of dollars [28,29]. In 2005, Miller et al. published research on a low-cost, portable CAVE system [30]. Their cost of $30,000 is much more affordable than typical systems, but can still be a significant investment when multiple CAVEs are involved.
Head Mounted Display.
As discussed previously, HMDs are a type of VR display worn by the user on his or her head. Example HMDs are shown in Fig. 3. These devices typically consist of one or two small flat panel screens placed a few inches from the eyes. The left screen (or left half of the screen) presents an image to the left eye, and the right screen (or right half of the screen) presents an image to the right eye. Because of the difficulty the human eye has focusing on objects so close, there are typically some optics placed between the screen and eye that allow the eye to focus better. These optics typically introduce some distortion around the edges, which is corrected in software by inversely distorting the rendered images so that they appear undistorted through the optics. These same optics also magnify the screen, making the pixels and the space between pixels larger and more apparent to the user. This effect is referred to as the “screen-door” effect [31–33].
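The software pre-distortion step mentioned above is commonly modeled as a polynomial function of radial distance from the lens center. The sketch below is a simplified illustration of that idea, not any vendor's shipped correction; the coefficient values are placeholders, as real headsets use device-specific calibration data.

```python
import numpy as np

def predistort(uv, k1=0.22, k2=0.24):
    """Radially pre-distort normalized, lens-centered image coordinates.

    uv: (N, 2) array of coordinates relative to the lens center.
    Modeling the lens magnification as g(r) = 1 + k1*r^2 + k2*r^4
    (a pincushion-like expansion), rendering with the inverse gain
    ("barrel" warp) cancels the lens distortion.  k1 and k2 here are
    illustrative placeholders.
    """
    r2 = np.sum(uv ** 2, axis=1, keepdims=True)      # squared radius
    scale = 1.0 / (1.0 + k1 * r2 + k2 * r2 ** 2)     # inverse lens gain
    return uv * scale
```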
In addition to displaying separate images for each eye, these displays typically also track the orientation of the device and consequently the user's head. The orientation of the user's head can then be used as an input to the VR application, allowing the user to look around the virtual environment simply by turning his or her head. This orientation tracking is generally accomplished with an inertial measurement unit (IMU), which generally consists of a three-axis accelerometer and a three-axis gyroscope.
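As a simple illustration of how such an IMU is commonly fused into an orientation estimate, the sketch below implements a basic complementary filter: the gyroscope is integrated for smooth short-term response, while the accelerometer's gravity reading corrects long-term drift in pitch and roll (yaw drift requires a magnetometer or external tracking and is ignored here). This is a generic textbook technique under one common axis convention, not the filter used by any specific headset.

```python
import numpy as np

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One update step of a basic complementary filter.

    pitch, roll: previous estimates (rad)
    gyro: angular rates (rad/s) about the pitch and roll axes
    accel: (ax, ay, az) accelerometer reading; gravity provides an
    absolute tilt reference that corrects slow gyroscope drift.
    """
    # Integrate the gyroscope (smooth, but drifts over time)
    pitch_g = pitch + gyro[0] * dt
    roll_g = roll + gyro[1] * dt
    # Absolute tilt from gravity (noisy, but drift-free)
    ax, ay, az = accel
    pitch_a = np.arctan2(ay, np.sqrt(ax**2 + az**2))
    roll_a = np.arctan2(-ax, az)
    # Blend: trust the gyro short-term, the accelerometer long-term
    return (alpha * pitch_g + (1 - alpha) * pitch_a,
            alpha * roll_g + (1 - alpha) * roll_a)
```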
Shortcomings of this type of display can include: incompatibility with corrective eye-wear (although some devices provide adjustments to help mitigate this problem) [34], blurry images due to slow screen-refresh rates and image persistence [35], latency between user movements and screen redraws [36], the fact that the user must generally be tethered to a computer, which can reduce the immersivity of a simulation [37], and the hindrance it can pose to collocated communication [20]. The major advantages of this type of display are: its significantly cheaper cost compared to CAVE technology, its ability to be driven by a standard computer, its much smaller space requirements, its ease of setup and take-down (allowing for temporary installations and uses), and its compatibility with many readily available software tools and development environments. Table 1 compares the specifications of several discrete consumer HMDs discussed more fully below.
| Device | Field-of-view (deg) | Resolution per eye | Weight | Max. display refresh rate | Cost |
| --- | --- | --- | --- | --- | --- |
| Oculus Rift CV1 | 110 [38,39] | 1080 × 1200 [38,39] | 440 g [39] | 90 Hz [38,39] | $599 [38] |
| Avegant Glyph | 40 [40] | 1280 × 720 [40] | 434 g [40] | 120 Hz [41] | $699 [42] |
| HTC Vive | 110 [39,43] | 1200 × 1080 [39,43] | 550 g [39] | 90 Hz [39,43] | $799 [39,43] |
| Google Cardboard | Dependent on smart-phone used | | | | $15 [44] |
| Samsung Gear VR | Dependent on smart-phone used | | | | $99 [45] |
| OSVR Hacker DK2 | 110 [25] | 1200 × 1080 [25] | Dependent on configuration | 90 Hz [25] | $399.99 [25] |
| Sony Playstation® VR | 100 [46] | 960 × 1080 [46] | 610 g [46] | 120 Hz, 90 Hz [46] | $399.99 [47] |
| Dlodlo Glass H1 | Dependent on smart-phone used | | | | Unspecified |
| Dlodlo V1 | 105 [48] | 1200 × 1200 [48] | 88 g [48] | 90 Hz [48] | Expected $559 [49] |
| FOVE HMD | 90–100 [50] | 1280 × 1440 [50] | 520 g [50] | 70 Hz [50] | $399 [51] |
| StarVR | 210 [52] | 2560 × 1440 [52] | 380 g [52] | 90 Hz [53] | Unspecified |
| Vrvana Totem | 120 [54] | 1280 × 1440 [54] | Unspecified | Unspecified | Unspecified |
| Sulon HMD | Unspecified | Unspecified | Unspecified | Unspecified | $499 [55] |
| ImmersiON VRelia Go | Dependent on smart-phone used | | | | $139.99 [56] |
| visusVR | Dependent on smart-phone used | | | | $149 [57] |
| GameFace Labs HMD | 140 [58] | 1280 × 1440 [58] | 450 g [59] | 75 Hz [59] | $500 [59] |
As discussed in Sec. 3.1.1, the ability to communicate effectively is an important consideration for VR technology. Current iterations of VR HMDs obscure the user's face and especially the eyes. As discussed by Smith [20], this can create a communication barrier for users who are in close proximity, a barrier which does not exist in a CAVE. It should be noted that this difference applies only to situations in which the collaborators are in the same room. If the collaborators are in different locations, HMDs and CAVE systems are on equal footing as far as communication is concerned. One method for addressing this issue is to instead use AR HMDs, which allow users to see their collaborators. Billinghurst et al. have published some research in this area [60,61]. A second method is to take the entire interaction into VR. Movie producers have used facial recognition and motion capture technology to animate computer-generated characters with an actor's facial expressions and movements. This same technology has been applied to VR to animate a virtual avatar. Li et al. have presented research supported by Oculus that demonstrates using facial capture to animate virtual avatars [62], and HMDs with varying levels of facial tracking have already been announced and demonstrated [50,63].
Oculus Rift CV1: The Oculus Rift Development Kit (DK) 1 was the first of the current generation of HMD devices; it renewed hope for a low-cost, high-fidelity VR experience and sparked new interest in VR research, applications, and consumer experiences. The DK1 was first released in 2012, with the second generation (DK2) released in 2014 and the first consumer version of the Oculus Rift (CV1) released in early 2016. To track head orientation, the Rift uses a six-degree-of-freedom (DOF) IMU along with an external camera. The camera is mounted facing the user to help improve tracking accuracy. Since these devices use flat screens with optics to expand the field-of-view (FOV), they do show the screen-door effect, but it becomes less noticeable as resolution increases.
Steam VR/HTC Vive: The HTC Vive HMD is the result of a collaboration between HTC and Valve (the company behind the Steam platform) to develop a VR system directly intended for gaming. The actual HMD is similar in design to the Oculus Rift. The difference, though, is that the HMD is only part of the full system. The system also includes a controller for each hand and two sensor stations that are used to track the absolute position of the HMD and the controllers in a roughly 4.5 m × 4.5 m (15 ft × 15 ft) space. These additional features can make the Steam VR system a good choice when the application requires the user to move around a physical room to explore the virtual world.
Avegant Glyph: The Avegant Glyph is primarily designed to allow the user to experience media such as movies in a personal theater. As such, it includes a set of built-in headphones and an audio-only mode in which it is worn like a traditional set of headphones. However, built into the headband is a stereoscopic display that can be positioned over the eyes, allowing the user to view media on a simulated theater screen. Despite this primary purpose, the Avegant Glyph also supports true VR experiences. Its most distinctive feature is that, instead of using a screen like the previously discussed HMDs, the Glyph uses a set of mirrors and miniature projectors to project the image directly onto the user's retina. This does away with pixels in the traditional sense and allows the Glyph to avoid the screen-door problem that plagues other HMDs. The downsides of the Glyph, however, are its lower resolution and much smaller FOV. The Glyph also includes a 9DOF IMU to track head orientation.
Google Cardboard: Google Cardboard is a different approach to VR than any of the previously discussed devices. Google Cardboard was designed to be as low cost as possible while still allowing people to experience VR. Google Cardboard is folded and fastened together from a cardboard template by the user. Once the cardboard template has been assembled, the user's smart-phone is inserted into the headset and acts as the screen via apps that are specifically designed for Google Cardboard. Since the device uses a smart-phone as the display, it can also use any IMU or other sensors built into the phone. The biggest advantage of Google Cardboard is its affordability, since it is only a fraction of the cost of the previously mentioned devices. However, to achieve this low cost, design choices have been made that make this a temporary, prototype-level device not well suited to everyday use. The other interesting feature of this HMD is that, since all processing is done on the phone, no cords are needed to connect the HMD to a computer, allowing for extra mobility.
Samsung Gear VR: Like Google Cardboard, the Samsung Gear VR is designed to turn a Samsung smartphone (compatible only with certain models) into a VR HMD. The major difference between the two is cost and quality. The Gear VR is designed by Oculus, and once the phone is attached, the experience is similar to an Oculus Rift. Unlike many other HMDs, the Gear VR includes a control pad and buttons built into the side of the HMD that can be used as an interaction/navigation method for the VR application. Also like the Google Cardboard, the Gear VR has no cable attaching it to a computer, allowing more freedom of movement.
OSVR Hacker DK2: The Open-Source VR project (OSVR) is an attempt by Razer® to develop a modular HMD that users can modify or upgrade as well as software libraries to accompany the device. The shipping configuration of the OSVR Hacker DK2 is very similar to the Oculus Rift CV1. The notable differences are that OSVR uses a 9DOF IMU, and the optics use a dual lens design and diffusion film to reduce distortion and the screen-door effect.
Others: Along with the HMDs mentioned above, there are several other consumer-grade HMDs suitable for VR that are available now or will be in the near future. These include the Sony Playstation® VR, which is similar to the Oculus Rift but driven from a PlayStation gaming console [64]; the Dlodlo Glass H1, which is similar to the Samsung Gear VR but is compatible with more than just Samsung phones and includes a built-in low-latency 9-axis IMU [65]; the Dlodlo V1, which is somewhat like the Oculus Rift but designed to look like a pair of glasses for the more fashion-conscious VR user and is also significantly lighter [48]; the FOVE HMD, which again is similar to the Oculus Rift but offers eye tracking to provide more engaging VR experiences [50]; the StarVR HMD, which is similar to the Oculus Rift with the notable difference of a significantly expanded FOV and consequently a larger device [52]; the Vrvana Totem, which is like the Oculus Rift but includes built-in pass-through cameras to provide the possibility of AR as well as VR [54]; the Sulon HMD, which, like the Vrvana Totem, includes cameras for AR but can also use the cameras for 3D mapping of the user's physical environment [66]; the ImmersiON VRelia Go, which is similar to the Samsung Gear VR but is compatible with more than just Samsung phones [56]; the visusVR, an interesting cross between the Samsung Gear VR and the Oculus Rift that uses a smartphone for the screen but a computer for the actual processing and rendering to provide a fully wireless HMD [57]; and the GameFace Labs HMD, another cross between the Samsung Gear VR and the Oculus Rift, which has all the processing and power built into the HMD and runs Android OS [58].
Recent Research in Stereoscopic Displays.
While currently available and soon-to-be available commercial technologies have been discussed so far, research is ongoing in both HMD and CAVE hardware. Some pertinent research will be highlighted here.
Light-field HMDs: In the physical world, humans use a variety of depth cues to gauge object size and location as discussed by Cruz-Neira et al. [4]. Of the eight cues discussed, only the accommodation cue is not reproducible by current commercial technologies. Accommodation is the term used to describe how our eyes change their shape to be able to focus on an object of interest. Since, with current technologies, users view a flat screen that remains approximately the same distance away, the user's eyes do not change focus regardless of the distance to the virtual object [4]. Research by Akeley et al. prototyped special displays to produce a directional light field [67]. These light-field displays are designed to support the accommodation depth cue by allowing the eye to focus as if the objects were a realistic distance from the user instead of pictures on a screen inches from the eyes. More recent research by Huang et al. has developed light-field displays that are suitable for use in HMDs [2].
Television-based CAVEs: Currently, CAVEs use rear-projection technology. This means that for a standard size 3 m × 3 m × 3 m CAVE, a room approximately 10 m × 10 m × 10 m is needed to house the CAVE and projector equipment [24]. Rooms this size must be custom built for the purpose of housing a CAVE, limiting the available locations and adding to the cost of installation. To reduce the amount of space needed to house a CAVE, some researchers have been exploring CAVEs built with a matrix of television panels instead [24]. These panel-based CAVEs have the advantage of being deployable in more typically sized spaces.
Cybersickness: Aside from the more obvious historical barriers of cost and space to VR adoption, another challenge is cybersickness [68]. The symptoms of cybersickness are similar to motion sickness, but the root causes of cybersickness are not yet well understood [6]. Symptoms of cybersickness range from headache and sweating to disorientation, vertigo, nausea, and vomiting [69]. Researchers are still identifying the root causes, but it seems to be a combination of technological and physiological causes [70]. In some cases, symptoms can become so acute that participants must discontinue the experience to avoid becoming physically ill. It also appears that the severity of the symptoms can be correlated to characteristics of the VR experience, but no definite system for identifying or measuring these factors has been developed to date [6].
Input.
The method of user input must be carefully considered in an interactive VR system. Standard devices such as a keyboard and mouse are difficult to use in a highly immersive VR experience [37]. Given the need for alternative methods of interaction, many different methods and devices have been developed and tested. Past methods include wands [71,72], sensor-gloves [16,73,74], force-balls [75] and joysticks [16,37], voice command [37,76], and marked/markerless IR camera systems [77–80]. More recently, the markerless IR camera systems have been shrunk into consumer products such as the Leap Motion™ Controller and Microsoft Kinect®. Sections 3.2.1 and 3.2.2 will discuss the various devices used to provide input in a virtual environment. We divide the input devices into two categories: those that are primarily intended to digitize human motion, and those that provide other styles of input.
Motion Capture.
Motion capture systems record and digitize movement, human or otherwise. These systems have found applications ranging from medical research and sports training [81,82] to animation [83] and interactive art installations [84]. Here, we are interested in the use of motion capture specifically as an input to a virtual experience. In VR, motion capture is typically used to digitize the user's position and movements. This movement data can then be used directly to animate a virtual avatar of the user, allowing users to see themselves in the virtual environment. The movement data can also be analyzed for gestures that can then be treated as special inputs to the system. For instance, the designer may decide to use a fist as a special gesture that brings up a menu. Then, any time the motion capture system recognizes a fist gesture, a menu is displayed for the user. While these systems in the past have been large, expensive, and difficult to set up and maintain, in the past five years a new generation of motion capture devices has been released that is opening up potential new applications. Short descriptions of these devices follow.
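As a minimal illustration of gesture detection layered on top of motion capture data, the sketch below classifies a fist by testing whether every tracked fingertip lies close to the palm. The frame fields, the `show_menu` callback, and the 5 cm threshold are hypothetical, chosen only to make the fist-menu example above concrete.

```python
import numpy as np

def is_fist(fingertips, palm_center, threshold=0.05):
    """Classify a fist when every tracked fingertip is within `threshold`
    meters of the palm center.  `fingertips` is a list of 3D points from
    the motion capture system; the threshold is an illustrative value."""
    palm = np.asarray(palm_center)
    return all(np.linalg.norm(np.asarray(tip) - palm) < threshold
               for tip in fingertips)

# Hypothetical per-frame use: open a menu while the gesture is held.
# if is_fist(frame.fingertips, frame.palm_center):
#     show_menu()
```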
Leap Motion™ Controller: The Leap Motion™ Controller is an IR camera device, approximately 2 in × 1 in × 0.5 in, that is intended for capturing hand, finger, and wrist motion data. The device is small enough that it can either be set on a desk or table in front of the user or mounted to the front of an HMD. Since the device is camera based, it can only track what it can see. This constraint affects the device's capabilities in two important ways. First, the view area of the camera is limited to approximately an 8 ft³ (0.23 m³) volume roughly in the shape of a compressed octahedron, depicted in Fig. 4. For some applications, this volume is limiting. Second, the device loses tracking capability when its view of the tracked object becomes blocked. This commonly occurs when the fingers are pointed away from the device and the back of the hand blocks the camera's view. Weichert et al. [86] and Guna et al. [87] have performed analyses of the accuracy of the Leap Motion™ Controller. Taken together, these analyses show the Leap Motion™ Controller is reliable and accurate for tracking static points, and adequate for gesture-based human–computer interaction [87]. However, Guna et al. also note issues with the stability of the device's tracking [87], which can cause frustration or errors for users. Thompson notes, however, that the manufacturer frequently updates the software with performance improvements [31], and major updates have been released since these analyses were performed.
Microsoft Kinect®: The Microsoft Kinect® is also an IR camera device; however, in contrast to the Leap Motion™ Controller, this device is made for tracking the entire skeleton. In addition to the IR depth camera, the Kinect® has some additional features. It includes a standard color camera that can be used with the IR camera to produce full-color, depth-mapped images. It also includes a three-axis accelerometer that allows the device to sense which direction is down, and hence its current orientation. Finally, it includes a tilt motor that lets the software make occasional adjustments to the camera tilt to optimize the view area. The limitations of the Kinect® are similar to those of the Leap Motion™ Controller: it can only track what it has a clear view of, and its tracking volume is limited. The tracking volume is approximately a truncated elliptical cone with a horizontal angle of 57 deg and a vertical angle of 47 deg [88]. The truncated cone starts at approximately 4 ft from the camera and extends to approximately 11.5 ft from the camera. For skeletal tracking, the Kinect® is further limited in that it can track only two full skeletons at a time, the users must be facing the device, and its supplied libraries cannot track finer details such as fingers. Khoshelham and Elberink [89] and Dutta [90] evaluated the accuracy of the first version of the Kinect® and found it promising but limited. In 2013, Microsoft released an updated Kinect® sensor, which Wang et al. noted had improved skeletal tracking that could be further improved by the use of statistical models [91].
Intel® RealSense™ Camera: The Intel® RealSense™ is also an IR camera device and can be viewed as a hybrid of the Kinect® and the Leap Motion™ Controller. It offers the full-color pictures of the Kinect® with the hand tracking of the Leap Motion™ Controller. It is also important to note that the RealSense™ camera comes in two models: short-range and long-range. The long-range camera (R200) is intended more for depth mapping of medium to large objects and environments. The short-range camera (F200) is intended for indoor capture of hands, fingers, and face. One unique feature the short-range RealSense™ offers is the ability to read facial expressions. However, the RealSense™ cameras have issues similar to the previous two devices, including difficulty dealing with occlusion and a limited capture volume.
Noitom Perception Neuron®: While the previously discussed motion capture devices all work with IR cameras and image processing, the Perception Neuron® is a very different system. It consists of a group of up to 32 IMUs referred to as Neurons. The IMUs are mounted to the user's body and support various configurations for tailoring the resolution of various areas of the body. The motion capture system streams all the data from the IMUs back to a computer for processing. This data stream can be sent via a WiFi network or a universal serial bus (USB) cable. Compared to the camera-based systems, the Perception Neuron system does not suffer from occlusion issues, and it has a relatively large capture area (limited by the length of the USB cable or the strength of the WiFi signal). However, the system is not without its own weaknesses. The most prominent are cost and the user's need to wear a “suit” of sensors. The Perception Neuron system costs $1000–$1500 depending on the configuration; in contrast, the IR cameras cost $100–$200. Past research has mentioned hardware intrusion as a barrier because of the extra effort to put on and calibrate the hardware, in this case the suit of sensors [20]. An additional weakness is sensitivity to magnetic interference. Since some of the data collected is oriented relative to Earth's magnetic field, local magnetic fields such as those generated by computers, electric motors, speakers, and headphones can introduce significant noise when Neurons are too close [92].
Controllers.
In contrast to the motion capture devices discussed above, controllers are not primarily intended to capture a user's body movements, but instead they generally allow the user to interact with the virtual world through some 3D embodiment (like a 3D mouse pointer). Many times, the controller supports a variety of actions much like a standard computer mouse provides either a left click or right click. A complete treatment of these controllers is outside the scope of this paper, and the reader is referred to Jayaram et al. [37] and Hayward et al. [93] for more discussion on various input devices. Chapter two of Virtual Reality Technology [94] also covers the underlying technologies used in many controllers.
Recently, the companies behind Oculus Rift and Vive have announced variants of the wand style controller that blur the line between controller and motion capture [43,95]. These controllers both track hand position and provide buttons for input. The Vive controllers are especially interesting as they work within Vive's room-scale tracking system allowing users to move through an approximately 4.5 m × 4.5 m (15 ft × 15 ft) physical space.
Additional Technologies.
Sections 3.1 and 3.2 have discussed viewing and interacting with the virtual world. However, the physical world provides more sensory input than sight. In this section, we will briefly discuss technologies for experiencing the virtual environment through other senses. Given that these areas are entire research fields unto themselves, a thorough treatment of these topics is beyond the scope of this paper, and readers are directed to the works cited for more information.
Haptics.
Haptic display technology, sometimes referred to as force-feedback, allows a user to “feel” the virtual environment. This has been achieved in a wide variety of ways. Often, haptic feedback motors are added to the input device so that, as the user moves the controller through the virtual space, the user experiences resistance upon encountering virtual objects [96]. Other methods use vibration to provide feedback on surface texture [97], to indicate collisions or physical interaction [98], or to notify the user of certain events, as with modern cell phones and console controllers. Other haptics research has explored tension strings [99], exoskeleton suits [100,101], ultrasonics [101], and even electrical muscle stimulation [102].
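Many of these force-feedback schemes reduce, at their core, to penalty-based haptic rendering: when the haptic probe penetrates a virtual surface, a spring force proportional to the penetration depth pushes it back out. The sketch below illustrates that basic idea; the stiffness value is a placeholder, and a real device would run this in a carefully tuned, kilohertz-rate control loop.

```python
import numpy as np

def penalty_force(probe_pos, surface_point, surface_normal, k=800.0):
    """Simple penalty-based haptic force for a point probe.

    If the probe has penetrated the surface (normal points out of the
    surface), push back along the normal with a spring force
    proportional to penetration depth.  k (N/m) is an illustrative
    stiffness; real values are tuned per device.
    """
    n = np.asarray(surface_normal)
    depth = np.dot(np.asarray(surface_point) - np.asarray(probe_pos), n)
    if depth <= 0.0:
        return np.zeros(3)          # no contact, no force
    return k * depth * n            # restoring force sent to the device
```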
Currently, however, commercially available devices are somewhat limited in their diversity and capability. Xia notes that currently available devices are high-precision, high-resolution devices with small back-drivable forces (i.e., the force a user is required to apply to move the device), but for many product design applications they are lacking in workspace size, maximum feedback force, and mechanism flexibility and dexterity, and could benefit from improved back-driveability [103]. For additional information on currently available haptic devices and on haptics research in product design and product assembly, we refer the reader to works by Xia et al. [103–105]. For more general information on haptics in VR, we refer the reader to a study by Burdea [106].
Audio.
In addition to localizing objects by sight and touch, humans also have the ability to localize objects through sound [107]. Some of our ability to localize audio sources can be explained by differences in the time of arrival and level of the signal at our ears [108]. However, when sound sources are directly in front of or behind us, these effects are essentially nonexistent. Even so, we are still generally able to pinpoint these sound sources due to the sound scattering effects of our bodies and particularly our ears. These scattering effects leave a “fingerprint” on the sounds that varies by sound source position and frequency giving our brains additional clues about the location of the source. This fingerprint can be mathematically defined and is termed the head-related transfer function (HRTF) [109].
One option for recreating virtual sounds is to use a surround-sound speaker system. This style of sound system uses multiple speakers distributed around the physical space. Using this type of system, virtual sounds are played from the speaker(s) that best approximate the location of the virtual source. Since the sound is produced external to the user, all cues for sound localization are produced naturally. However, when this system was implemented in early CAVE environments, it was found that sound localization was compromised by reflections (echoes) off the projector screens (walls) [4].
A second option, which does not suffer from the echo issue of the surround-sound system, is to use specialized audio processing in conjunction with headphones for each user. Since headphones produce sound directly at the ear, all localization cues must be reproduced virtually. While most of the cues are relatively generic, the HRTF is unique to each person, and using a poorly matched HRTF to reproduce the localization cues can make it difficult for participants to localize sounds [109–111]. Thus, an accurate HRTF is critical for accurately creating virtual sounds. The standard method for measuring the HRTF of an individual is to place the person in an anechoic chamber with small microphones in their ears (where headphones would normally be placed), and then play a known waveform, one location at a time, from various points around the room while recording the signal at the ear [112]. This process is time consuming and unfortunately not very scalable for widespread use [111]; however, research into this area is ongoing. One study has suggested that it may be possible to pick a close-enough HRTF from a database of known HRTFs based on a picture of the user's outer ear [110,113]. Another research group has been studying the inverse of the standard method, whereby speakers are placed in the user's ears and microphones are placed at various locations around the room. This has the advantage that the HRTF can be characterized for all locations at once, significantly reducing measurement time [114]. Greg Wilkes of VisiSonics, who has licensed this technology, hopes to deploy it to booths in stores such as Best Buy where users can have their individual HRTF measured in seconds [115].
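Once an HRTF has been measured for a source direction, applying it is conceptually straightforward: convolve the mono source signal with the left- and right-ear head-related impulse responses (HRIRs, the time-domain form of the HRTF). The sketch below assumes a measured HRIR pair is already available; it illustrates the standard signal-processing step rather than any particular product's pipeline.

```python
import numpy as np
from scipy.signal import fftconvolve

def binaural_render(mono, hrir_left, hrir_right):
    """Spatialize a mono signal for headphones by convolving it with the
    left- and right-ear head-related impulse responses measured for the
    desired source direction.  Returns a (samples, 2) stereo array."""
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)
```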
Olfactory and Gustatory Displays.
While the senses of taste and smell have not received the same amount of research attention as sight, touch, and hearing, a patent granted to Morton Heilig in 1962 describes a mechanical device for creating a VR experience that engaged the senses of sight, sound, touch, and smell [116]. In more recent years, prototype olfactory displays have been developed by Matsukura et al. [117] and Ariyakul and Nakamoto [118]. In experiments with olfactory displays, Bordegoni and Carulli showed that an olfactory display could improve the level of presence a user perceives in a VR experience [119]. Additionally, Miyaura et al. suggest that olfactory displays could be used to improve concentration levels [120]. The olfactory displays discussed here generally work by storing a liquid smell and aerosolizing the liquid on command. Some additionally contain a fan or similar device to help direct the smell to the user's nose. Taste has received even less research attention than smell; however, research by Narumi et al. showed that by combining a visual AR display with an olfactory display they were able to change the perceived taste of a cookie between chocolate, almond, strawberry, maple, and lemon [121].
Applications of VR in the Design Process
Although different perspectives, domains, and industries may use different terminology, the engineering design process will typically include steps or stages called opportunity development, ideation, and conceptual design, followed by preliminary and detailed design phases [122]. Often the overall design process will include analysis after, during, or mixed in with the various design stages, followed by manufacturing, implementation, operations, and even retirement or disposal [123]. Furthermore, the particular application of a design process takes on various frameworks, such as the classical “waterfall” approach [124], “spiral” model [125], or “Vee” model [126], among others [127]. Each model has its own role in clarifying the design stages and guiding the engineers, designers, and other individuals within the process to realize the end product. As designs become more integrated and complex, the individuals traditionally assigned to the different stages or roles in the design process are required to collaborate in new ways and often concurrently. This, in turn, increases the need for design and communication tools that can meet the requirements of the ever-advancing design process.
Finally, while some will consider the formal design stages complete when manufacturing has begun, a high-level, holistic view of the overall design process from “cradle-to-grave” [128] is most comprehensive and allows the most expansive view for identifying future VR applications. Figure 5 shows a summary of the design process described, along with a listing of the applications discussed hereafter. Sections 4.1–4.5 summarize current applications and briefly suggest additional applications for VR technology. The purpose of Sec. 4 is not to provide a comprehensive review of all of the research in this area, but to present a limited overview that frames the discussion of how new VR technology can impact the overall design process.
Opportunity Development.
It is widely accepted that in order to create successful, user-centered products, designers need to develop empathy for the end user of the product [129]. This empathy is crucial for gaining a clear understanding of the user's needs, and it motivates the designer to design according to those needs [130]. While designers can often develop empathy simply by virtue of shared experiences, there are many situations where this approach breaks down, such as a group of young designers working on a product for elderly persons, or a team of male designers designing for pregnant women.
Virtual reality has the potential to provide a novel and effective way of helping designers develop empathy. Recent research has shown that virtual reality can be a powerful tool for creating empathy and even modifying behavior and attitudes. This research has shown that individuals in a virtual environment who are represented by avatars, or virtual representations of themselves, come to have the illusion of ownership over the virtual body by which they are represented [131]. In one experiment, light-skinned participants were shown to exhibit significantly less racial bias against dark-skinned people after the participants were embodied as dark-skinned avatars in virtual reality [132]. A similar study showed that users who were embodied as an avatar with superpowers were more likely to exhibit prosocial behavior after the experiment ended [133].
By leveraging the power of virtual reality, designers could almost literally step into the shoes of those they are designing for and experience the world through their eyes. A simple application that employs only VR displays and VR videos filmed from the perspective of end users could be sufficient to allow designers to better understand the perspective of those for whom they are designing. Employing haptics and/or advanced controllers also has great potential to enhance the experience. The addition of advanced controllers that allow the designer to control a first-person avatar in a more natural way improves immersion and the illusion of ownership over a virtual body [134]. Beyond this, the use of advanced controllers would allow the designer to have basic interactions with a virtual environment using an avatar that represents a specific population such as young children or elderly persons. The anatomy and abilities of the virtual avatar and environment can be manipulated to simulate these conditions while maintaining a strong illusion of ownership [135–137], thereby giving designers a powerful tool to develop empathy. As in most applications involving human–computer interaction, employing haptics would allow for more powerful interactions with the virtual environment and could likely be used to better simulate many conditions and scenarios. This technology would have the potential to simulate a wide range of user conditions including physical disabilities, human variability, and cultural differences. Beyond this, designers could conceivably simulate situations or environments, such as zero gravity, that would be impossible or impractical for them to experience otherwise.
Ideation and Conceptual Design.
In the early stages of design, designers and engineers draw upon a diversity of sources for inspiration [138], and indeed all new ideas are synthesized from previous knowledge and experiences [139]. This inspiration comes from both closely related and distantly related or even unrelated sources [140], and it is well understood that both the quality and quantity of ideas generated are positively impacted when designers take time to seek out inspiration [141]. One excellent example of this phenomenon is bio-inspired or biomimetic designs, wherein designs are inspired by mechanisms and patterns found in nature, such as the design of flapping micro-air vehicles that mimic flapping patterns of birds [142] or the design of adhesion surfaces patterned after gecko feet [143].
Recent research has shown that technology can facilitate this inspiration process by using computer generated collections of images and concepts that are both closely and distantly related to the subject [144]. Introducing virtual reality to this process has the potential to further facilitate inspiration by giving designers an immersive experience in which they can examine and interact with a huge variety of artifacts. Because these objects exist in a virtual environment, the cost of interacting with these objects is greatly reduced, and the quantity of artifacts that designers have access to is dramatically increased. Furthermore, the juxtaposition of artifacts and environments that would not be found together naturally has the potential to provide creative environments that can be superior to existing methods of design inspiration.
Because visual stimulation alone is sufficient to provide significant inspiration to designers [144], an effective VR application targeted at providing design inspiration could be implemented using only low-cost VR displays, reducing both cost and complexity of implementation. The addition of haptics and advanced controllers would likely provide a more interactive experience, allowing designers to touch and handle objects, and would likely aid inspiration. The potential of such an application is supported by recent research that studied the effectiveness of digital mood boards for industrial designers, showing that VR can be used in early stage design to elicit strong emotional responses from designers and facilitate the creative process [145].
Preliminary and Detailed Design.
Computer-Aided Drafting Design.
Performing geometric computer-aided drafting (CAD) design in a virtual environment has the potential to make 3D modeling more effective and more intuitive for both new and experienced users. Understanding 3D objects represented on a 2D interface requires enhanced spatial reasoning [146]. Conversely, visualizing 3D models in virtual reality makes them considerably easier to understand and is less demanding in terms of spatial reasoning skills [147], which could significantly reduce the learning curve of 3D modeling applications. By the same reasoning, using virtual reality for model demonstrations to non-CAD users such as management and clients could dramatically increase the effectiveness of such meetings. It should also be noted that there are many user-interface-related challenges to creating an effective VR CAD system that may be alleviated by the use of advanced controllers in addition to a VR display.
A considerable quantity of research has been and continues to be conducted in the realm of virtual reality CAD. A 1997 paper by Volkswagen describes various methods that were implemented for CAD data visualization and manipulation, including the integration of the CAD geometry kernel ACIS with VR, allowing for basic operations directly on the native CAD data [148]. A similar kernel-based VR modeler called SpaceDesign, intended for freeform curve and surface modeling for industrial design applications, was implemented by Fiorentino et al. in 2002 [149]. Krause et al. developed a system for conceptual design in virtual reality that uses advanced controllers to simulate clay modeling [150]. In 2012 and 2013, De Araujo et al. developed and tested a system that provides users with a stereoscopically rendered CAD environment supporting input both on and above a surface; preliminary user testing of this environment shows favorable results for this interaction scheme [151,152]. Other researchers have further expanded this field by leveraging haptics to allow designers to physically interact with and feel aspects of their design. In 2010, Bordegoni implemented a system based on a haptic strip that conforms to a curve, thereby allowing the designer to feel and analyze critical aspects of a design by physically touching them [153]. Kim et al. also showed that haptics can be used to improve upon traditional modeling workflows by using haptically guided selection intent or freeform modeling based on material properties that the user can feel [154].
Much of the research that has been done in this area in the past was limited in application due to the high costs of the VR systems of the 1990s and 2000s. The recent advent of high-quality, low-cost VR technology opens the door for VR CAD to be used in everyday settings by engineers and designers. A recent study that uses an Oculus Rift and the Unity game development engine to visualize engineering models demonstrates the feasibility of such applications [155]; however, research in VR CAD needs to expand into this area in order to make the use of low-cost VR technology a reality for day-to-day design tasks.
Analysis.
In the same vein as geometric CAD design, virtual reality has the potential to make 3D analysis easier to perform and the results easier to understand, especially for nonanalysts [156]. By making the geometry easier to understand, VR can facilitate preprocessing steps that require spatial reasoning, such as mesh repair and refinement. VR can also facilitate understanding and interpretation of analysis results, not only by providing a more natural 3D environment in which to view the results but also by providing new ways of interacting with them.
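A simple example of this kind of result visualization is mapping a scalar field, such as per-node von Mises stress, onto vertex colors that a VR viewer can then render like any other mesh. The sketch below illustrates the idea; the blue-to-red color ramp is an arbitrary illustrative choice.

```python
import numpy as np

def stress_to_colors(von_mises, low_color=(0, 0, 1), high_color=(1, 0, 0)):
    """Map per-node von Mises stress onto (N, 3) RGB vertex colors,
    linearly interpolating from blue (low stress) to red (high stress)."""
    s = np.asarray(von_mises, dtype=float)
    rng = s.max() - s.min()
    rng = rng if rng > 0 else 1.0                 # avoid divide-by-zero
    t = (s - s.min()) / rng                       # normalize to [0, 1]
    low, high = np.asarray(low_color), np.asarray(high_color)
    return (1 - t)[:, None] * low + t[:, None] * high
```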
Significant progress has been made in this field in the last 25 years, and researchers have explored a range of applications, from simple 3D viewers to haptically enabled environments that provide new ways of exploring the data. A few early studies showed that VR could be used to simulate a wind tunnel while viewing computational fluid dynamics (CFD) results [157,158]. Bruno et al. also showed that similar techniques can be used to overlay and view analysis results on physical objects using augmented reality [159]. In 2009, Fiorentino et al. expanded on this by creating an augmented reality system that allowed users to deform a physical cantilever beam and see the stress/strain distribution overlaid on the deformed beam in real-time [160,161]. A 2007 study details the methodology and implementation of a VR analysis tool for finite element models that allows users to view and interact with finite element analysis (FEA) results [162]. Another study uses neural nets for surrogate modeling to explore deformation changes in an FEA model in real-time [163]. Similar research from Iowa State University uses NURBS-based freeform deformation, sensitivity analysis, and collision detection to create an interactive environment to view and modify geometry and evaluate stresses in a tractor arm. Ryken and Vance applied this system to an industry problem and found that it allowed the designer to discover a unique solution to the problem [164].
Significant research has also been performed in applying haptic devices and techniques to enhance interaction with results from various types of engineering analyses. Several studies have shown that simple haptic systems can be used to interact with CFD data and provide feedback to the user based on the force gradients [165,166]. Ferrise et al. developed a haptic finite element analysis environment, intended to enhance the learning of the mechanical behavior of materials, that allows users to feel how different structures behave in real-time. They also showed that learners using their system were able to understand the principles significantly faster and with fewer errors [167,168]. In 2006, Kim et al. developed a similar system that allows users to explore a limited structural model using high degree-of-freedom haptics [154].
One trend that we can observe from the research in this field is that it has focused on high-level applications of VR to analysis, such as viewing results and low-fidelity interactive analysis. This type of application makes sense in the context of the expensive VR systems that have existed in the past; however, with the advent of modern inexpensive VR headsets, lower-level applications that focus on the day-to-day tasks of analysis become feasible, opening a new direction for research.
Data Visualization.
The notion of using virtual reality as a platform for raw data visualization has been a topic of interest since the early days of VR. Research has shown that virtual reality significantly enhances spatial understanding of 3D data [169]. Furthermore, just as it is possible to visualize 3D data in 2D, virtual reality can make interfacing with higher-dimensional data more meaningful. A 1999 study out of Iowa State shows that VR provides significant advantages over 2D displays for viewing higher dimensionality data [170]. A more recent study found that virtual reality provides a platform for viewing higher-dimensional data and gives “better perception of datascape geometry, more intuitive data understanding, and a better retention of the perceived relationships in the data” [171]. Similar to how analysis results can be explored in virtual reality, haptics and advanced controllers can be used to explore the data in novel ways [147,172]. Brooks et al. demonstrated this as early as 1990 with a system that allowed users to explore molecular geometries and their associated force fields, helping chemists better understand receptor sites for new drugs [173].
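The basic technique underlying this line of work [170,171] is dimension mapping: three data dimensions drive spatial position, while the remaining dimensions drive visual channels such as color and glyph size. The sketch below illustrates the mapping with matplotlib as a desktop stand-in for an immersive renderer; the dataset is synthetic.

```python
# Sketch of higher-dimensional data mapping: a 5D dataset rendered as
# 3D position (dims 0-2) plus color (dim 3) and glyph size (dim 4).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 5))          # 500 samples, 5 dimensions

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(data[:, 0], data[:, 1], data[:, 2],
           c=data[:, 3],                   # 4th dimension -> color
           s=20 + 10 * np.abs(data[:, 4])) # 5th dimension -> size
plt.show()
```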
Design Reviews.
Design reviews are a highly valued step in the design process. Many of the vital decisions that determine the final outcome of a product are made in a design review setting. For this reason, they have been and continue to be an attractive application for virtual reality in the design process and are one of the most common applications of VR to engineering design [174]. Two particularly compelling ways in which virtual reality can enhance design reviews are by introducing improved communication paradigms for distributed teams and enhanced engineering data visualization. In this way, most VR design review applications are extensions of collaborative virtual environments (CVEs). CVEs are distributed virtual systems that offer a “graphically realised, potentially infinite, digital landscape” within which “individuals can share information through interaction with each other and through individual and collaborative interaction with data representations” [175].
A number of different architectures have been suggested for improving collaboration through virtual design reviews [174,176,177]. Beyond this, various parties have researched many of the issues surrounding virtual design reviews. A system developed in the late 1990s called MASSIVE allows distributed users to interact with digital representations (avatars) of each other in a virtual environment [178,179]. A joint project between the National Center for Supercomputing Applications, the German National Research Center for Information Technology, and Caterpillar produced a VR design review system that allows distributed team members to meet and view virtual prototypes [180]. A later project in 2001 also allows users to view engineering models while representing distributed team members with avatars [181].
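At their core, these CVE architectures [175,178–181] rest on the continual exchange of avatar and model state among peers so that each client can render remote participants. A minimal sketch of this exchange pattern follows; the peer address and JSON-over-UDP message format are hypothetical, and real systems add interest management, reliability, and model synchronization.

```python
# Sketch of the state-exchange pattern underlying CVEs: each client
# periodically broadcasts its avatar pose so peers can render it.
import json
import socket
import time

PEER = ("192.0.2.10", 9000)               # hypothetical peer address
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def broadcast_pose(user_id, position, orientation):
    msg = {"id": user_id, "pos": position, "rot": orientation, "t": time.time()}
    sock.sendto(json.dumps(msg).encode(), PEER)

# Called every frame (or at a fixed tick) by the VR render loop:
broadcast_pose("designer_a", [1.0, 1.6, 0.0], [0.0, 0.0, 0.0, 1.0])
```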
It should be noted that considerable effort has also been expended in exploring the potential for leveraging virtual and augmented reality technology to enhance design reviews for collocated teams. In 1998, Szalavári et al. developed an augmented reality system for collocated design meetings that allows users to view a shared model and individually control both the view of the model and different data layers [182]. In 2007, Santos et al. expanded this space by proposing and validating a design review application that can be carried out across multiple VR and AR devices [174]. A more recent study in 2013 compared the immersivity and effectiveness of two different CAVE systems for collocated virtual architectural design reviews [183]. Yet other research has shown that multiple design tools, such as interactive structural analysis, can be integrated directly into the design review environment [161].
While design reviews are a common and popular application of virtual reality to collaborative engineering, the techniques discussed above could be applied to enhance engineering collaboration between distributed team members in many situations, including both formal and informal meetings.
Virtual Reality Prototype (VRP).
One of the primary activities in engineering, and in design in general, is evaluating the merit of a given design and identifying weak points that need to be refined. Engineers and designers use a wide array of tools to accomplish this task, including mathematical models, finite element models, and prototypes.
Another technique that has been the subject of considerable research since the advent of modern computer-aided engineering tools is virtual prototyping [184]. The term virtual prototype has been used in the literature to mean a staggering number of different things; however, Wang defines a virtual prototype as “a computer simulation of a physical product that can be presented, analyzed, and tested from concerned product life-cycle aspects such as design/engineering, manufacturing, service, and recycling as if on a real physical model” [185]. Many have also used the term virtual prototype to imply the involvement of virtual reality technologies, but in an effort to promote specificity and clarity, we propose a new term: virtual reality prototype (VRP), which refers to virtual prototypes for which virtual reality is an enabling technology. VRPs are an especially compelling branch of virtual prototypes (VPs) because they proffer a set of tools that lend themselves to creating rich human interaction with virtual models, namely, stereoscopic viewing, real-time interaction, naturalistic input devices, and haptics. In cases where VRPs are specifically used in conjunction with haptics or advanced controllers in order to prototype the human interaction with the virtual models, Ferrise et al. have proposed the term interactive virtual prototype (iVP), which we employ here to define this subset of VRPs [186].
Aesthetic evaluation: Because virtual reality enables both stereoscopic viewing of 3D models and an immersive environment in which to view them, VRPs can provide a much more realistic and effective platform for aesthetic evaluation of a design. Not only does VR allow models to be rendered in 3D, but it also allows them to be viewed in a virtual environment similar to one in which the product would be used, thereby giving better context to the model. Furthermore, VR can enable users to view the model at whatever scale is most beneficial, whether viewing small models at a large scale to inspect details or viewing large models at a one-to-one scale for increased realism. Research at General Motors has found that viewing 3D models of car bodies and interiors at full scale provides a more accurate understanding of the car's true shape than looking at small-scale physical prototypes [20]. Another study at Volvo showed that using VR to view car bodies at full scale was a more effective method for evaluating the aesthetic impact of body panel tolerances than traditional viewing methods [187].
Usability and ergonomics: The unique input methods and haptic controllers proffered by VR technology provide an ideal platform for simulating and evaluating product–user interactions in a virtual environment. By using haptics and advanced input devices, these iVPs can be used to evaluate the usability and ergonomics of a design. iVPs can enable users to pick up, handle, and operate a virtual model. Based on the evaluation of the iVP, changes can be made to the model and the iVP can be re-evaluated, allowing iteration on a design far more quickly than physical prototypes permit. In 2006, Bordegoni et al. showed that haptic input devices could be used to evaluate the ergonomics of physical control boards [188]. In 2013, Bordegoni et al. extended this research by further defining iVPs and presenting a methodology for designing interaction with these iVPs. In these papers, they also presented several user-based case studies showing that iVPs can simulate physical prototypes to an acceptable degree of realism [186,189]. In 2010, Bruno and Muzzupappa corroborated these findings by showing that advanced input devices can be an effective method of evaluating and improving the usability of physical user interfaces represented through VRPs [190]. As mentioned in Sec. 2, another interactive virtual prototyping technique that has shown potential is mixed prototyping. A mixed prototype is an integrated and colocated mix of generally low-fidelity physical and high-fidelity virtual components that allows users to interact with simple physical objects that are digitally overlaid or replaced with virtual representations [11,191–195]. This mix of physical and virtual components can allow for rapid and low-cost evaluation of concepts with good visual fidelity.
Early stage VRPs: Due to the high cost of building detailed physical prototypes, they are often not used in the early stages of design, such as concept selection and early in the detailed design phase. The cost of creating VRPs, however, can be much lower because they can be based on CAD geometry of any fidelity. Furthermore, parametric CAD models can be used to quickly explore a wide range of concepts and variations using a single model. Once the CAD geometry has been created, a variety of techniques, including those described above, can be used to evaluate the model. Consequently, VRPs can enable a more complete evaluation of concepts and models earlier in the design process. In keeping with this concept, Noon et al. created a system that allows designers to quickly create and evaluate concepts in virtual reality [196].
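To make the parametric-exploration idea concrete, the sketch below enumerates design variants from a single parameterized model. The parameter names and the build_geometry callback are hypothetical stand-ins for a CAD system's regeneration and export API.

```python
# Sketch of parametric concept exploration: one parameterized model
# generates a family of variants for early-stage VRP evaluation.
import itertools

param_ranges = {
    "wall_thickness_mm": [1.5, 2.0, 2.5],
    "fillet_radius_mm": [0.5, 1.0],
    "rib_count": [2, 3, 4],
}

def build_geometry(params):
    # In a real pipeline, this would drive the CAD kernel and export a mesh.
    return f"variant({params})"

variants = [
    build_geometry(dict(zip(param_ranges, combo)))
    for combo in itertools.product(*param_ranges.values())
]
print(len(variants), "variants generated")   # 3 * 2 * 3 = 18
```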
Market testing: By putting VRPs in the hands of a market surrogate, all of the benefits that VRPs provide could be realized for market testing, including reduced cost, increased flexibility, and the ability to test earlier in the design process. Additionally, VRPs can enable novel approaches to market testing. For example, leveraging parametric CAD models could allow market surrogates to evaluate a large number of design variations rather than a single prototype. Alternatively, users could be given a series of VRPs that vary incrementally from a nominal model. After examining and/or interacting with each model, the user could toss the VRP to the right or to the left based on whether they felt it was better or worse than the last one they were presented with. In this way, VRPs could be used to perform a human-guided optimization on design aspects that are difficult to quantify, such as aesthetics or ergonomics.
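The human-guided optimization described above amounts to a preference-driven local search: perturb the current design, present the variant, and keep it only when the user prefers it. A minimal sketch follows; present_vrp and user_prefers_new are hypothetical stand-ins for the VR front end and the user's toss.

```python
# Sketch of preference-guided search over a design's parameters:
# each iteration presents a perturbed variant and keeps it only if
# the user "tosses it right" (prefers it over the current design).
import random

def present_vrp(params):
    print("presenting VRP with", params)

def user_prefers_new():
    return random.random() < 0.5        # stand-in for the user's judgment

current = {"handle_angle_deg": 30.0, "grip_diameter_mm": 32.0}
for _ in range(20):
    candidate = {k: v + random.gauss(0.0, 1.0) for k, v in current.items()}
    present_vrp(candidate)
    if user_prefers_new():              # tossed right: keep the candidate
        current = candidate
print("preferred design:", current)
```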
Producibility Refinement.
In an effort to continually reduce costs and time to market, engineers and designers have put increased focus on design for manufacturing and design for assembly, integrating these activities earlier and earlier in the design process. Knowing this, it comes as no surprise that leveraging virtual reality for these processes has been an area of considerable research over the last 25 years. One of the greatest strengths of virtual manufacturing and assembly is that it is well suited to analyzing the human factors in manufacturing and assembly. Through VR, designers can closely simulate the manufacturing and assembly steps required for a product using iVPs, and therefore quickly iterate to refine the manufacturability of the design. In this sense, haptics and advanced controllers are well suited to virtual manufacturing and assembly, as they allow for more natural interaction with virtual geometry.
Research in this field has ranged from early systems that used positional constraints to verify assembly plans [197] and VR-based training for manufacturing equipment [198] to the exploration of integrated design, manufacturing and assembly in a virtual environment [199], and haptically enabled virtual assembly environments [200].
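The positional-constraint verification in early systems such as [197] reduces to a proximity test: a part snaps into its target pose once the user moves it within a tolerance. A minimal sketch follows; orientation checks are omitted for brevity, though real systems also verify rotational alignment.

```python
# Sketch of positional-constraint snapping for virtual assembly: a part
# snaps to its mating pose when moved within tolerance of the target.
import numpy as np

def try_snap(part_pos, target_pos, tol=0.01):
    """Return the target pose if the part is close enough, else its own."""
    if np.linalg.norm(np.asarray(part_pos) - np.asarray(target_pos)) < tol:
        return np.asarray(target_pos)    # constraint satisfied: snap
    return np.asarray(part_pos)

print(try_snap([0.102, 0.500, 0.251], [0.100, 0.500, 0.250]))  # snaps
```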
In the same vein, researchers have explored the extension of virtual reality techniques to design for disassembly and recycling [201,202]. Using many of the same techniques, designers can evaluate the ease of disassembly of a product early in the design process, and therefore more easily design ecofriendly products.
As mentioned above, this field of research is extensive, and treatment of its full breadth and depth is beyond the scope of this paper. For a more complete exploration of this topic, we refer the reader to Seth et al. [203] for a recent survey of virtual assembly and Choi et al. [204] for a recent survey of virtual manufacturing.
Post-Release Support and Repair/Maintenance.
As systems grow larger, more complex, and more expensive, maintainability becomes a serious concern, and design for maintainability becomes more and more difficult [205]. One of the issues that exacerbates this difficulty is that serious analysis of the maintainability of a design cannot be performed until high-fidelity prototypes have been created [206]. One way in which designers have attempted to address this issue is through simulated maintenance verification using CAD tools [207]. This approach, however, is limited by the considerable time required to perform the analysis, and the lack of fidelity when using simulated human models.
As with design for assembly, the use of VR has the potential to allow designers to perform detailed maintainability and serviceability studies earlier in the design process. Haptics and advanced controllers allow designers to simulate maintenance scenarios and interact with geometric assembly models in a natural way, and thereby evaluate and refine the serviceability of the design.
Many researchers have explored the application of virtual reality to design for maintainability. In 1999, de Sa and Zachmann suggested a combined VR assembly and maintenance verification tool [208]. In 2004, Borro et al. implemented a compelling system for maintenance verification of jet turbine engines using virtual reality and haptic controllers [209]. Peng et al. implemented a system that allows product designers and maintainability technicians to collaborate and evaluate maintenance tasks in a virtual environment [210].
Discussion
From the foregoing review, the authors identify a few key themes that should be underscored as potential avenues for further developing and implementing VR across the stages of the design process. First, until recently, VR technology has been applied primarily to "high-cost" activities: events, meetings, and other situations where a key set or large group of decision makers gathers for investment decisions or decisions about committing significant project resources. This focus arose, in part, from the need to justify the high expense of legacy VR systems; with the corresponding lower cost of current VR technology, lower-cost design activities are now feasible targets. A second theme is the evidence of realized and potential impacts that VR can have on the design process. VR is being applied to many smaller tasks in the design process, and the initial studies explored in Sec. 4 suggest a high probability that this technology will continue to expand and yield benefits. A third theme is the narrowing trade-off between VR capability and cost: at one-tenth the cost or less, current-generation VR systems are approaching the experience, resolution, and benefit of larger, more complex systems, and this capability gap may shrink further as costs continue to fall. The following paragraphs further discuss these themes in the context of improving the design process using VR technology.
For years, CAVE systems have been considered the gold standard for VR applications. However, because of the capital investment required to build and maintain a CAVE installation, companies rarely have more than one CAVE, if any. This significantly limits access to these systems, and their use must be prioritized for only select activities. This situation could be considered analogous to the mainframe computers of the 1960s and 1970s. While these mainframes improved the engineering design process and enabled new and improved designs, it was not until the advent of the personal computer (PC) that computing was able to impact day-to-day engineering activities and make previously unimagined applications commonplace. In a similar fashion, we suggest that this new generation of low-cost, high-quality VR technology has the potential to bring the power of VR to day-to-day engineering activities. Much like early PCs, we can expect initial implementations and applications to be somewhat crude and unwieldy while the technology matures and better practices emerge, but the ultimate impact is likely limited only by the imagination of engineers and developers.
As noted previously, current VR systems in industry are unavailable for all but the highest-priority tasks. This limits both the potential applications and benefits of a CAVE system and worsens its cost-to-benefit ratio. HMDs are currently undergoing significant improvements and are fast approaching CAVE systems in the fidelity and immersivity of the experience they can provide. Additionally, even if several HMDs are required to serve an equivalent number of participants, the capital investment is a small fraction of what a CAVE system requires. This means that HMD systems have the potential to provide a much better cost-to-benefit ratio even when used only for the applications traditionally requiring a CAVE. Continuing the mainframe/PC analogy, however, the PC's impact on communication was not apparent at its introduction: email was of limited use until a sufficient number of people had consistent access to it on their PCs. Now, the vast majority of communication happens digitally, and communication technology has evolved beyond email to social media; none of this would have been possible with only limited-access mainframe computers. Similarly, when a larger number of people have access to VR tools for their daily tasks, new use-cases can be explored that are currently unimagined. While the benefits of these yet-unimagined use-cases cannot be quantified, they will further improve the cost-to-benefit ratio of VR HMDs.
Another trend that we observe from the review of the research that has been performed to date is that the majority of what has been done focuses on the mid to later stages of the design process, and that very little has been done to enhance the very early stages of design (e.g., Opportunity Development and Ideation). This trend can clearly be observed in Fig. 5. While the applications of VR technologies to the early stages of design are perhaps less obvious, there is certainly room for the research to expand in this direction, as is indicated by the potential applications described in Secs. 4.1 and 4.2.
Finally, in examining the research that has been performed in this field, we observe that a significant portion of the research merely presents a methodology or details the implementation of a novel or improved application of virtual reality to the design process. Only a minority of the studies considered performed some form of validation showing that the application developed was better than existing tools, and only a small minority included a rigorous analysis of the application developed. While this is understandable to some degree, since VR applications, and particularly VR applications for design, currently possess a large "wow" factor, we posit that there will be a shift toward more rigorous analysis of applications of VR technology in the design process. This is especially true in light of the enormous potential that VR technology, particularly low-cost current-generation hardware, has to enhance the design process.
Conclusion
In the past few years, VR has come back into the public's awareness with the release of a new generation of VR products targeted at the general consumer. The low-cost, high-quality VR experiences these devices are capable of creating could prove a key enabler for VR to enter the engineering process in a more ubiquitous manner. When VR becomes a tool available to the average engineer, the tools discussed above, as well as many more not yet imagined, could become everyday realities. As shown above, the applications span the entire development process, from the initial early design phase through detailed design, product release, and even into the rest of the product life-cycle. This could significantly change the way engineering design work is done and allow new and innovative solutions to a wide variety of issues that today's engineers face. Widespread adoption of VR technology in engineering has the potential to be as pivotal a change to engineering as the introduction of the computer.
Acknowledgment
The authors were supported in part by the National Science Foundation, Award No. 1067940 “IUCRC BYU v-CAx Research Site for the Center for e-Design I/UCRC”.