In the past few years, there have been significant advances in consumer virtual reality (VR) devices. Devices such as the Oculus Rift, HTC Vive, Leap Motion™ Controller, and Microsoft Kinect® are bringing immersive VR experiences into the homes of consumers with much lower cost and space requirements than previous generations of VR hardware. These new devices are also lowering the barrier to entry for VR engineering applications. Past research has suggested that there are significant opportunities for using VR during design tasks to improve results and reduce development time. This work surveys the latest generation of VR hardware and reviews research studying VR in the design process. Additionally, it extracts the major themes from these reviews and discusses how the latest technology and research may affect the engineering design process. We conclude that these new devices have the potential to significantly improve portions of the design process.

Introduction

Virtual reality (VR) hardware has existed since at least the 1960s [1,2], and more widespread research into applications of VR technology was underway by the late 1980s. By the early 2000s, much of that research had fallen by the wayside, and general interest in VR technology waned. The VR hardware of the time was expensive, bulky, heavy, and low-resolution, and it required specialized computing hardware [1,36]. In the last five years, however, a new generation of hardware has emerged. This new hardware is much more affordable and accessible than previous generations, which is enabling research into applications that were previously resource prohibitive. This paper provides an overview of the specifications of the current generation of hardware, as well as areas of the engineering design process that could benefit from the application of this technology. Section 2 discusses the definition of VR as it pertains to this work. Section 3 discusses current and upcoming VR hardware. Section 4 provides a focused review of the research that has been performed in applying VR to the design process. Section 5 discusses how the current generation of VR devices may affect research going forward, examines trends seen in the reviewed literature, and suggests research directions based on the concepts reviewed here.

Definition of Virtual Reality

As discussed by Steuer, the term virtual reality traditionally referred to a hardware setup consisting of items such as a stereoscopic display, computers, headphones, speakers, and 3D input devices [7]. More recently, the term has been used broadly to describe any program that includes a 3D component, regardless of the hardware it utilizes [8]. Given this wide variation, it is pertinent to clarify and scope the term virtual reality.

Steuer also proposes that the definition of VR should not be a black-and-white distinction, since such a binary definition does not allow for comparisons between VR systems [7]. Based on this idea, we consider a VR system in the light of the VR experience it provides. A very basic definition of a VR experience is the replacement of one or more physical senses with virtual ones. A simple example is a person listening to music on noise-canceling headphones; the sounds of the physical world have been replaced with sounds from the virtual world. A VR experience can be rated on two orthogonal scales, immersivity and fidelity (see Fig. 1). Immersivity refers to how much of the physical world is replaced with the virtual world, while fidelity refers to how realistic the inputs are. Returning to the previous example, these scales would rate the headphones as low-to-medium immersivity, since only the sense of hearing is affected, but high fidelity, since the audio matches what we might expect to hear in the physical world.

Fig. 1
Fidelity versus immersivity. The shaded portion represents the portion of the VR spectrum under discussion in this paper.

The contrast with augmented reality (AR) should also be noted when discussing VR. While VR seeks to replace physical senses with virtual ones, AR adds virtual information to the physical senses [9]. Continuing the earlier example of music on noise-canceling headphones, listening to music from a stereo would be an example of AR: the virtual sense (music) is added to the physical sense (sounds from the physical world such as cars). Although there is some overlap between AR and VR technologies and applications, we consider here only VR technologies, and we will focus our discussion of applications on VR. For those interested in AR, Kress and Starner [10] provide a good reference for requirements and headset designs.

We also mention here the concept of mixed reality (MR). Current VR technologies are not able to produce high-fidelity outputs for all senses. Bordegoni et al. discuss MR as a solution to this issue: MR combines VR with custom-made physical implements to provide a higher-fidelity experience [11]. One example is an application for prototyping the interaction with an appliance. In this case, a user could see the prototype design in VR while a simple physical prototype provides the buttons and knobs for touch interaction. In this paper, we focus our discussion on applications and technologies for pure VR, and as such we will discuss MR only in passing. In addition to the work mentioned previously, Ferrise et al. provide some additional information about MR [12].

In the context of the definition presented, we consider VR experiences that—at a minimum—are high enough fidelity to present stereoscopic images to the viewer's eyes and are able to track a user's viewpoint through a virtual environment as they move in physical space. They must also be immersive enough to fully replace the user's sense of sight. The gray area of Fig. 1 shows the area under discussion in this paper.

Virtual Reality Hardware

Various types of hardware are used to provide an immersive, high-fidelity VR experience for users. Given the relative importance the sense of sight has in our interaction with the world, we consider a display system that presents images in such a way that the user perceives them to be 3D (as opposed to seeing a 2D projection of a 3D scene on a common TV or computer screen) in combination with a head tracking system to be the minimum set of requirements for a highly immersive VR experience [1]. This type of hardware was found in almost all VR applications we reviewed, for example, Refs. [1], [3], [6], and [13–22]. This requirement is noted in Fig. 2 as the core capabilities for a VR experience. Usually, some additional features are also included to enhance the experience [7]. These additional features may include motion-capture input, 3D controller input, haptic feedback, voice control input, olfactory displays, gustatory displays, facial tracking, 3D-audio output, and/or audio recording. Figure 2 lists these features as the peripheral capabilities. To understand how core and peripheral capabilities can be used together to create a more compelling experience, consider a VR experience intended to test the ease of a product's assembly. A VR experience with only the core VR capabilities might involve watching an assembly simulation from various angles. However, if haptic feedback and 3D input devices are added to the experience, the experience could now be interactive and the user could attempt to assemble the product themselves in VR while feeling collisions and interferences. On the other hand, adding an olfactory display to produce virtual smells would likely do little to enhance this particular experience. Hence, these peripheral capabilities are optional to a highly immersive VR experience and may be included based on the goals and needs of the experience. Figure 2 lists these core and peripheral capabilities, respectively, in the inner and outer circles. Devices for providing these various core and peripheral capabilities will be discussed in Secs. 3.1–3.3.

Fig. 2
Typical components of a VR experience. Inner components must be included; outer components are optional depending on the goal of the application.

Displays.

The display is usually the heart of a VR experience and the first choice to be made when designing a VR application. VR displays differ from standard displays in that they can present a different image to each eye [1], allowing slightly offset images to be presented to the two eyes, similar to how we view the physical world [23]. When the virtual world is presented this way, the user has the impression of seeing a true 3D scene. While the technology to do this has existed since at least the 1960s, it has traditionally been either prohibitively expensive, unwieldy, or a low-quality experience [1,6,24]. VR displays usually fall into one of two groups: cave automatic virtual environment (CAVE) systems or head-mounted displays (HMDs).
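
To make this per-eye offset concrete, the following sketch shows how a renderer might derive two eye poses from a single tracked head pose. This is a minimal illustration under stated assumptions, not any particular device's SDK: the 64 mm interpupillary distance, the 4 × 4 homogeneous-transform convention, and the choice of the head's local x-axis for the offset are all ours.

```python
import numpy as np

IPD = 0.064  # assumed average interpupillary distance, in meters


def eye_poses(head_to_world: np.ndarray):
    """Return (left, right) eye-to-world transforms from a tracked head pose.

    Each eye sits half the IPD from the head center along the head's local
    x-axis; the renderer draws the scene once from each pose to produce the
    offset image pair. Invert these transforms to obtain view matrices.
    """
    def offset(dx: float) -> np.ndarray:
        t = np.eye(4)
        t[0, 3] = dx  # translate along the head's local x-axis
        return t

    left = head_to_world @ offset(-IPD / 2.0)
    right = head_to_world @ offset(+IPD / 2.0)
    return left, right
```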

CAVE systems typically consist of two or more large projector screens forming a pseudo-room. The participant also wears a special set of glasses that work with the system to track the participant's head position and to present separate images to each eye. HMDs, on the other hand, are devices worn on the user's head that typically use a dedicated screen, or half of a shared screen, to present an image to each eye. Due to the close proximity of the screen to the eye, these HMDs also typically include some specialized optics to allow the user's eye to better focus on the screen [10,25]. Sections 3.1.1 and 3.1.2 will discuss each of these displays in more detail.

Cave Automatic Virtual Environment.

CAVE technology appears to have been first researched in the Electronic Visualization Lab at the University of Illinois [26]. In its full implementation, the CAVE consists of a room where all four walls, the ceiling, and the floor are projector screens; a special set of glasses that sync with the projectors to provide stereoscopic images; a system to sense and report the location and gaze of the viewer; and a specialized computer to calculate and render the scenes and drive the projectors [4]. When first revealed, CAVE technology was positioned as superior in most aspects to other available stereoscopic displays [27]. Included in these claims were larger field-of-view (FOV), higher visual acuity, and better support for collaboration [27]. While many of these claims were true at the time, HMDs are now approaching and, in some cases, rivaling the capabilities of CAVE technology.

The claim about collaboration deserves special consideration. In their paper first introducing CAVE technology, Cruz-Neira et al. state, "One of the most important aspects of visualization is communication. For virtual reality to become an effective and complete visualization tool, it must permit more than one user in the same environment" [27]. CAVE technology is presented as meeting this requirement; however, certain caveats make it less than ideal for many scenarios. The first is occlusion. As people move about the CAVE, they can block each other's view of the screens. This type of occlusion is merely inconvenient when the occluded portion of the scene lies beyond the other participant in virtual space. However, when the occluded object is supposed to be between the viewer and someone else (in virtual space), the stereoscopic view collapses, along with the usefulness of the simulation [4]. A second issue with collaboration in a CAVE is distortion. Since only a single viewer is tracked in the classic setup, all other viewers in the CAVE see the stereo image as if they were at the tracked viewer's location. Because two people cannot occupy the same physical space, all viewers aside from the tracked viewer experience some distortion, and the amount of distortion grows with the viewer's distance from the tracked viewer [22]. The proposed solution to this issue is to track all the viewers and calculate stereoscopic images for each person. While this has been shown to work in the two-viewer use case [22], commercial hardware with refresh rates fast enough to handle more than two or three viewers does not yet exist.

A more scalable option for eliminating the distortion associated with too many people in the CAVE is to use multiple networked CAVE systems. Information from each individual CAVE can be passed to the others in the network to build a cohesive virtual experience for each participant. This type of approach was demonstrated by the DDRIVE project, a collaboration between HRL Laboratories and General Motors Research and Development [18]. The downside to this approach is the additional cost and space requirements associated with additional CAVE systems. Each system is typically custom built, and prices can range from hundreds of thousands to millions of dollars [28,29]. In 2005, Miller et al. published research on a low-cost, portable CAVE system [30]. Its cost of $30,000 is much more affordable than typical systems but can still represent a significant investment when multiple CAVEs are involved.

Head Mounted Display.

As discussed previously, HMDs are a type of VR display worn by the user on his or her head. Example HMDs are shown in Fig. 3. These devices typically consist of one or two small flat-panel screens placed a few inches from the eyes. The left screen (or left half of the screen) presents an image to the left eye, and the right screen (or right half of the screen) presents an image to the right eye. Because of the difficulty the human eye has focusing on objects so close, some optics are typically placed between the screen and the eye to allow the eye to focus. These optics typically introduce some distortion around the edges, which is corrected in software by pre-distorting the rendered images so that they appear undistorted when viewed through the optics. These same optics also magnify the screen, making the pixels and the space between pixels larger and more apparent to the user; this is referred to as the "screen-door" effect [31–33].
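
A simple way to picture the software correction described above is the polynomial radial-distortion model sketched below. The warp direction and the k1/k2 coefficients are illustrative assumptions; a real headset calibrates them to its specific optics.

```python
import numpy as np


def predistort(uv: np.ndarray, k1: float = 0.22, k2: float = 0.24) -> np.ndarray:
    """Radially warp normalized, lens-centered coordinates (shape N x 2).

    Rendering through this inverse warp cancels the distortion the lens
    introduces, so the image appears undistorted when viewed through it.
    """
    r2 = np.sum(uv * uv, axis=-1, keepdims=True)  # squared radius from lens center
    return uv * (1.0 + k1 * r2 + k2 * r2 * r2)    # polynomial radial model
```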

In addition to displaying separate images for each eye, these displays typically also track the orientation of the device and, consequently, the user's head. The orientation of the user's head can then be used as an input to the VR application, allowing the user to look around the virtual environment simply by turning his or her head. This orientation tracking is generally accomplished with an inertial measurement unit (IMU), which typically consists of a three-axis accelerometer and a three-axis gyroscope.
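
As a rough sketch of how such an IMU yields a stable orientation, the complementary filter below blends fast-but-drifting gyroscope integration with the slow-but-drift-free gravity direction from the accelerometer. The single-axis simplification and the 0.98 blend factor are our assumptions; production trackers fuse all three axes, typically with quaternions.

```python
def fused_pitch(prev_pitch: float, gyro_rate: float, accel_pitch: float,
                dt: float, alpha: float = 0.98) -> float:
    """One complementary-filter step for head pitch, in radians.

    gyro_rate is the angular velocity about the pitch axis (rad/s), and
    accel_pitch is the pitch implied by the measured gravity vector.
    """
    integrated = prev_pitch + gyro_rate * dt                  # fast, but drifts
    return alpha * integrated + (1.0 - alpha) * accel_pitch   # drift correction
```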

Shortcomings of this type of display can include: incompatibility with corrective eyewear (although some devices provide adjustments to help mitigate this problem) [34], blurry images due to slow screen-refresh rates and image persistence [35], latency between user movements and screen redraws [36], the fact that the user must generally be tethered to a computer, which can reduce the immersivity of a simulation [37], and the hindrance to collocated communication they can cause [20]. The major advantages of this type of display are: its significantly lower cost compared to CAVE technology, its ability to be driven by a standard computer, its much smaller space requirements, its ease of setup and take-down (allowing for temporary installations and uses), and its compatibility with many readily available software tools and development environments. Table 1 compares the specifications of several discrete consumer HMDs discussed more fully below.

Table 1

Discrete consumer HMD specs (prices as of 2016)

Device | Field-of-view (deg) | Resolution per eye | Weight | Max. display refresh rate | Cost
Oculus Rift CV1 | 110 [38,39] | 1080 × 1200 [38,39] | 440 g [39] | 90 Hz [38,39] | $599 [38]
Avegant Glyph | 40 [40] | 1280 × 720 [40] | 434 g [40] | 120 Hz [41] | $699 [42]
HTC Vive | 110 [39,43] | 1200 × 1080 [39,43] | 550 g [39] | 90 Hz [39,43] | $799 [39,43]
Google Cardboard | Phone-dependent | Phone-dependent | Phone-dependent | Phone-dependent | $15 [44]
Samsung Gear VR | Phone-dependent | Phone-dependent | Phone-dependent | Phone-dependent | $99 [45]
OSVR Hacker DK2 | 110 [25] | 1200 × 1080 [25] | Dependent on configuration | 90 Hz [25] | $399.99 [25]
Sony PlayStation® VR | 100 [46] | 960 × 1080 [46] | 610 g [46] | 120 Hz, 90 Hz [46] | $399.99 [47]
Dlodlo Glass H1 | Phone-dependent | Phone-dependent | Phone-dependent | Phone-dependent | Unspecified
Dlodlo V1 | 105 [48] | 1200 × 1200 [48] | 88 g [48] | 90 Hz [48] | Expected $559 [49]
FOVE HMD | 90–100 [50] | 1280 × 1440 [50] | 520 g [50] | 70 Hz [50] | $399 [51]
StarVR | 210 [52] | 2560 × 1440 [52] | 380 g [52] | 90 Hz [53] | Unspecified
Vrvana Totem | 120 [54] | 1280 × 1440 [54] | Unspecified | Unspecified | Unspecified
Sulon HMD | Unspecified | Unspecified | Unspecified | Unspecified | $499 [55]
ImmersiON VRelia Go | Phone-dependent | Phone-dependent | Phone-dependent | Phone-dependent | $139.99 [56]
visusVR | Phone-dependent | Phone-dependent | Phone-dependent | Phone-dependent | $149 [57]
GameFace Labs HMD | 140 [58] | 1280 × 1440 [58] | 450 g [59] | 75 Hz [59] | $500 [59]

As discussed in Sec. 3.1.1, the ability to communicate effectively is an important consideration for VR technology. Current iterations of VR HMDs obscure the user's face and especially the eyes. As discussed by Smith [20], this can create a communication barrier for users who are in close physical proximity, a barrier which does not exist in a CAVE. It should be noted that this difference applies only to situations in which the collaborators are in the same room. If the collaborators are in different locations, HMDs and CAVE systems are on equal footing as far as communication is concerned. One method for attempting to solve this issue with HMDs is to instead use AR HMDs, which allow users to see their collaborators. Billinghurst et al. have published research in this area [60,61]. A second method is to take the entire interaction into VR. Movie producers have used facial recognition and motion capture technology to animate computer-generated imagery characters with an actor's facial expressions and movements. This same technology can be, and has been, applied to VR to animate a virtual avatar. Li et al. have presented research supported by Oculus that demonstrates using facial capture to animate virtual avatars [62], and HMDs with varying levels of facial tracking have already been announced and demonstrated [50,63].

Oculus Rift CV1: The Oculus Rift Development Kit (DK) 1 was the first of the current generation of HMD devices; it renewed hope for a low-cost, high-fidelity VR experience and sparked new interest in VR research, applications, and consumer experiences. The DK1 was first released in 2012, the second generation (DK2) was released in 2014, and the first consumer version of the Oculus Rift (CV1) was released in early 2016. To track head orientation, the Rift uses a six-degree-of-freedom (DOF) IMU along with an external camera. The camera is mounted facing the user to help improve tracking accuracy. Since these devices use flat screens with optics to expand the FOV, they do show the screen-door effect, but it becomes less noticeable as resolution increases.

Steam VR/HTC Vive: The HTC Vive HMD is the result of a collaboration between HTC and Valve (the company behind Steam) to develop a VR system directly intended for gaming. The HMD itself is similar in design to the Oculus Rift. The difference is that the HMD is only part of the full system, which also includes a controller for each hand and two sensor stations that are used to track the absolute position of the HMD and the controllers in a roughly 4.5 m × 4.5 m (15 ft × 15 ft) space. These additional features can make the Steam VR system a good choice when the application requires the user to move around a physical room to explore the virtual world.

Avegant Glyph: The Avegant Glyph is primarily designed to allow the user to experience media such as movies in a personal theater. As such, it includes a set of built-in headphones and an audio-only mode in which it is worn like a traditional set of headphones. Built into the headband, however, is a stereoscopic display that can be positioned over the eyes, allowing the user to view media on a simulated theater screen. Despite this primary purpose, the Avegant Glyph also supports true VR experiences. Its truly unique feature is that, instead of using a screen like the previously discussed HMDs, the Glyph uses a set of mirrors and miniature projectors to project the image directly onto the user's retina. This does away with pixels in the traditional sense and allows the Glyph to avoid the screen-door problem that plagues other HMDs. The downsides of the Glyph, however, are its lower resolution and much smaller FOV. The Glyph also includes a 9DOF IMU to track head orientation.

Google Cardboard: Google Cardboard takes a different approach to VR than any of the previously discussed devices: it was designed to be as low cost as possible while still allowing people to experience VR. The headset is folded and fastened together from a cardboard template by the user. Once the template has been assembled, the user's smartphone is inserted into the headset and acts as the screen via apps that are specifically designed for Google Cardboard. Since the device uses a smartphone as the display, it can also use any IMU or other sensors built into the phone. The biggest advantage of Google Cardboard is its affordability, since it costs only a fraction of the previously mentioned devices. However, to achieve this low cost, design choices have been made that make this a temporary, prototype-level device not well suited to everyday use. The other interesting feature of this HMD is that, since all processing is done on the phone, no cords are needed to connect the HMD to a computer, allowing for extra mobility.

Samsung Gear VR: Like Google Cardboard, the Samsung Gear VR is designed to turn a Samsung smartphone (compatible only with certain models) into a VR HMD. The major differences between the two are cost and quality. The Gear VR was designed by Oculus, and once the phone is attached, it is similar to an Oculus Rift. Unlike many other HMDs, the Gear VR includes a control pad and buttons built into the side of the headset that can be used as an interaction/navigation method for the VR application. Also like Google Cardboard, the Gear VR has no cable attaching it to a computer, allowing more freedom of movement.

OSVR Hacker DK2: The Open-Source VR project (OSVR) is an attempt by Razer® to develop a modular HMD that users can modify or upgrade, along with software libraries to accompany the device. The shipping configuration of the OSVR Hacker DK2 is very similar to the Oculus Rift CV1. The notable differences are that the OSVR uses a 9DOF IMU, and its optics use a dual-lens design and a diffusion film to reduce distortion and the screen-door effect.

Others: Along with the HMDs mentioned above, there are several other consumer-grade HMDs suitable for VR that are available now or in the near future. These include the Sony PlayStation® VR, which is similar to the Oculus Rift but driven from a PlayStation gaming console [64]; the Dlodlo Glass H1, which is similar to the Samsung Gear VR but compatible with more than just Samsung phones and includes a built-in low-latency 9-axis IMU [65]; the Dlodlo V1, which is somewhat like the Oculus Rift but designed to look like a pair of glasses for the more fashion-conscious VR user, and is also significantly lighter [48]; the FOVE HMD, which again is similar to the Oculus Rift but offers eye tracking to provide more engaging VR experiences [50]; the StarVR HMD, which is similar to the Oculus Rift with the notable difference of a significantly expanded FOV and, consequently, a larger device [52]; the Vrvana Totem, which is like the Oculus Rift but includes built-in pass-through cameras to provide the possibility of AR as well as VR [54]; the Sulon HMD, which, like the Vrvana Totem, includes cameras for AR but can also use the cameras for 3D mapping of the user's physical environment [66]; the ImmersiON VRelia Go, which is similar to the Samsung Gear VR but compatible with more than just Samsung phones [56]; the visusVR, an interesting cross between the Samsung Gear VR and the Oculus Rift that uses a smartphone for the screen but a computer for the actual processing and rendering, providing a fully wireless HMD [57]; and the GameFace Labs HMD, another cross between the Samsung Gear VR and the Oculus Rift, which has all the processing and power built into the HMD and runs the Android OS [58].

Recent Research in Stereoscopic Displays.

While currently available and soon-to-be available commercial technologies have been discussed so far, research is ongoing in both HMD and CAVE hardware. Some pertinent research will be highlighted here.

Light-field HMDs: In the physical world, humans use a variety of depth cues to gauge object size and location, as discussed by Cruz-Neira et al. [4]. Of the eight cues discussed, only the accommodation cue is not reproducible by current commercial technologies. Accommodation describes how our eyes change shape to focus on an object of interest. Since, with current technologies, users view a flat screen that remains approximately the same distance away, the user's eyes do not change focus regardless of the distance to the virtual object [4]. Akeley et al. prototyped special displays that produce a directional light field [67]. These light-field displays are designed to support the accommodation depth cue by allowing the eye to focus as if the objects were a realistic distance from the user instead of pictures on a screen inches from the eyes. More recent research by Huang et al. has developed light-field displays that are suitable for use in HMDs [2] (Fig. 3).

Fig. 3
Images of various HMDs discussed in Sec. 3.1.2. Top left to right: Oculus Rift, Steam VR/HTC Vive, and Avegant Glyph. Bottom left to right: Google Cardboard, Samsung Gear VR by Oculus, and OSVR HDK. (Images courtesy of Oculus, HTC, Avegant, and OSVR).

Television-based CAVEs: Currently, CAVEs use rear-projection technology. This means that for a standard-size 3 m × 3 m × 3 m CAVE, a room approximately 10 m × 10 m × 10 m is needed to house the CAVE and projector equipment [24]. Rooms this size must be custom built for the purpose, which limits the available locations for a CAVE and adds to the cost of installation. To reduce the amount of space needed, some researchers have been exploring CAVEs built with a matrix of television panels instead [24]. These panel-based CAVEs have the advantage of being deployable in more typically sized spaces.

Cybersickness: Aside from the more obvious historical barriers to VR adoption of cost and space, another challenge is cybersickness [68]. The symptoms of cybersickness are similar to those of motion sickness, ranging from headache and sweating to disorientation, vertigo, nausea, and vomiting [69]. The root causes are not yet well understood [6], but they appear to be a combination of technological and physiological factors [70]. In some cases, symptoms can become so acute that participants must discontinue the experience to avoid becoming physically ill. It also appears that the severity of the symptoms correlates with characteristics of the VR experience, but no definite system for identifying or measuring these factors has been developed to date [6].

Input.

The method of user input must be carefully considered in an interactive VR system. Standard devices such as a keyboard and mouse are difficult to use in a highly immersive VR experience [37]. Given the need for alternative methods of interaction, many different methods and devices have been developed and tested. Past methods include wands [71,72], sensor-gloves [16,73,74], force-balls [75] and joysticks [16,37], voice command [37,76], and marked/markerless IR camera systems [7780]. More recently, the markerless IR camera systems have been shrunk into consumer products such as the Leap Motion™ Controller and Microsoft Kinect®. Sections 3.2.1 and 3.2.2 will discuss the various devices used to provide input in a virtual environment. We divide the input devices into two categories: those that are primarily intended to digitize human motion, and those that provide other styles of input.

Motion Capture.

Motion capture systems record and digitize movement, human or otherwise. These systems have found applications ranging from medical research and sports training [81,82] to animation [83] and interactive art installations [84]. Here, we are interested in the use of motion capture specifically as an input to a virtual experience. In VR, motion capture is typically used to digitize the user's position and movements. This movement data can be used directly to animate a virtual avatar of the user, allowing users to see themselves in the virtual environment. The movement data can also be analyzed for gestures that can be treated as special inputs to the system; for instance, a designer may decide to use a fist as a special gesture that brings up a menu, so that any time the motion capture system recognizes a fist, a menu is displayed for the user (see the sketch below). While these systems have historically been large, expensive, and difficult to set up and maintain, in the past five years a new generation of motion capture devices has been released that is opening up potential new applications.
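
The sketch below illustrates the fist-to-menu pattern just described. The Hand structure and its finger_extension values are hypothetical stand-ins for whatever a motion capture SDK actually reports; the point is the pattern of classifying a pose each frame and firing the command only on the gesture's rising edge.

```python
from dataclasses import dataclass, field


def show_menu():
    print("menu opened")  # placeholder for the application's real menu logic


@dataclass
class Hand:
    # 0.0 = fully curled, 1.0 = fully extended; hypothetical SDK output
    finger_extension: list = field(default_factory=lambda: [1.0] * 5)


def is_fist(hand: Hand, threshold: float = 0.2) -> bool:
    """Classify the pose as a fist when every finger is nearly curled."""
    return all(ext < threshold for ext in hand.finger_extension)


class GestureInput:
    """Fires the menu command on the rising edge of the fist gesture."""

    def __init__(self):
        self.was_fist = False

    def on_frame(self, hand: Hand):
        fist = is_fist(hand)
        if fist and not self.was_fist:  # gesture just started this frame
            show_menu()
        self.was_fist = fist
```

Short descriptions of these devices follow.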

Leap Motion™ Controller: The Leap Motion™ Controller is an IR camera device, approximately 2 in × 1 in × 0.5 in, intended for capturing hand, finger, and wrist motion data. The device is small enough that it can either be set on a desk or table in front of the user or mounted to the front of an HMD. Since the device is camera based, it can only track what it can see. This constraint affects the device's capabilities in two important ways. First, the view area of the camera is limited to approximately an 8 ft³ (0.23 m³) volume roughly in the shape of a compressed octahedron, depicted in Fig. 4; for some applications, this volume is limiting. Second, the device loses tracking when its view of the tracked object becomes blocked, which commonly occurs when the fingers are pointed away from the device and the back of the hand blocks the camera's view. Weichert et al. [86] and Guna et al. [87] have performed analyses of the accuracy of the Leap Motion™ Controller. Taken together, these analyses show the Leap Motion™ Controller is reliable and accurate for tracking static points, and adequate for gesture-based human–computer interaction [87]. However, Guna et al. also note issues with the stability of the device's tracking [87], which can cause frustration or errors for users. Thompson notes, however, that the manufacturer frequently updates the software with performance improvements [31], and major updates have been released since these analyses were performed.

Fig. 4
Leap Motion™ Controller capture area. Note that newer software has expanded the tracking volume to 2.6 ft (80 cm) above the controller. Figure from the Leap Motion™ Blog [85].

Microsoft Kinect®: The Microsoft Kinect® is also an IR camera device; however, in contrast to the Leap Motion™ Controller, this device is made for tracking the entire skeleton. In addition to the IR depth camera, the Kinect® has some additional features. It includes a standard color camera that can be used with the IR camera to produce full-color, depth-mapped images. It also includes a three-axis accelerometer that allows the device to sense which direction is down, and hence its current orientation. Finally, it includes a tilt motor that lets software make occasional adjustments to the camera tilt to optimize the view area. The limitations of the Kinect® are similar to those of the Leap Motion™ Controller: it can only track what it has a clear view of, and its tracking volume is limited. The tracking volume is approximately a truncated elliptical cone with a horizontal angle of 57 deg and a vertical angle of 47 deg [88], starting approximately 4 ft from the camera and extending to approximately 11.5 ft from the camera. For skeletal tracking, the Kinect® has the further limitations that it can track only two full skeletons at a time, the users must be facing the device, and its supplied libraries cannot track finer details such as fingers. Khoshelham and Elberink [89] and Dutta [90] evaluated the accuracy of the first version of the Kinect® and found it promising but limited. In 2013, Microsoft released an updated Kinect® sensor, which Wang et al. noted had improved skeletal tracking that could be further improved by the use of statistical models [91].
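
A back-of-the-envelope calculation makes the cited tracking volume concrete. The sketch below models the volume as a truncated rectangular pyramid built from the published angles and range; the actual elliptical cross section would yield a somewhat smaller figure, so treat the result as an upper bound.

```python
import math

H_FOV, V_FOV = math.radians(57), math.radians(47)  # cited view angles
NEAR, FAR = 4.0, 11.5                              # cited tracking range, in feet


def pyramid_volume(depth: float) -> float:
    """Volume of the rectangular view pyramid out to the given depth."""
    width = 2.0 * depth * math.tan(H_FOV / 2.0)
    height = 2.0 * depth * math.tan(V_FOV / 2.0)
    return width * height * depth / 3.0


volume = pyramid_volume(FAR) - pyramid_volume(NEAR)
print(f"approximate tracking volume: {volume:.0f} cubic feet")  # roughly 460 ft^3
```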

Intel® RealSense™ Camera: The Intel® RealSense™ is also an IR camera device and can be viewed as a hybrid of the Kinect® and the Leap Motion™ Controller: it offers the full-color pictures of the Kinect® with the hand tracking of the Leap Motion™ Controller. It is also important to note that the RealSense™ camera comes in two models: short-range and long-range. The long-range camera (R200) is intended more for depth mapping of medium to large objects and environments, while the short-range camera (F200) is intended for indoor capture of hands, fingers, and the face. One unique feature the short-range RealSense™ offers is the ability to read facial expressions. However, the RealSense™ cameras have issues similar to those of the previous two devices, including difficulty dealing with occlusion and a limited capture volume.

Noitom Perception Neuron®: While the previously discussed motion capture devices all work with IR cameras and image processing, the Perception Neuron® is a very different system. It consists of a group of up to 32 IMUs, referred to as Neurons, which are mounted to the user's body and support various configurations for tailoring the resolution of various areas of the body. The motion capture system streams all the data from the IMUs back to a computer for processing, either over a WiFi network or a universal serial bus (USB) cable. Compared to the camera-based systems, the Perception Neuron® system does not suffer from occlusion issues, and it has a relatively large capture area (limited by the length of the USB cable or the strength of the WiFi signal). However, the system is not without its own weaknesses, the most prominent being cost and the user's need to wear a "suit" of sensors. The Perception Neuron® system costs $1000–$1500 depending on the configuration; in contrast, the IR cameras cost $100–$200. Past research has mentioned hardware intrusion as a barrier because of the extra effort required to put on and calibrate the hardware, in this case the suit of sensors [20]. An additional weakness is the system's sensitivity to magnetic interference. Since some of the data collected is oriented relative to Earth's magnetic field, local magnetic fields such as those generated by computers, electric motors, speakers, and headphones can introduce significant noise when Neurons are too close [92].

Controllers.

In contrast to the motion capture devices discussed above, controllers are not primarily intended to capture a user's body movements; instead, they generally allow the user to interact with the virtual world through some 3D embodiment (like a 3D mouse pointer). Often, the controller supports a variety of actions, much as a standard computer mouse provides both a left click and a right click. A complete treatment of these controllers is outside the scope of this paper, and the reader is referred to Jayaram et al. [37] and Hayward et al. [93] for more discussion of various input devices. Chapter two of Virtual Reality Technology [94] also covers the underlying technologies used in many controllers.

Recently, the companies behind the Oculus Rift and the Vive have announced variants of the wand-style controller that blur the line between controller and motion capture [43,95]. These controllers both track hand position and provide buttons for input. The Vive controllers are especially interesting, as they work within the Vive's room-scale tracking system, allowing users to move through an approximately 4.5 m × 4.5 m (15 ft × 15 ft) physical space.

Additional Technologies.

Sections 3.1 and 3.2 have discussed viewing and interacting with the virtual world. However, the physical world provides more sensory input than sight. In this section, we will briefly discuss technologies for experiencing the virtual environment through other senses. Given that these areas are entire research fields unto themselves, a thorough treatment of these topics is beyond the scope of this paper, and readers are directed to the works cited for more information.

Haptics.

Haptic display technology, sometimes referred to as force feedback, allows a user to "feel" the virtual environment. This has been achieved in a wide variety of ways. Often, haptic feedback motors are added to the input device so that, as the user moves the controller through the virtual space, he or she experiences resistance upon encountering virtual objects [96]. Other methods use vibration to provide feedback on surface texture [97], to indicate collisions or physical interaction [98], or to notify the user of certain events, as with modern cell phones and console controllers. Other haptics research has explored tension strings [99], exoskeleton suits [100,101], ultrasonics [101], and even electrical muscle stimulation [102].
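
The simplest of these schemes, penalty-based force rendering, can be sketched in a few lines: when the haptic cursor penetrates a virtual surface, the device motors push back with a spring force proportional to penetration depth. The plane representation and the 500 N/m stiffness are illustrative assumptions; real systems tune stiffness to the specific device.

```python
import numpy as np

STIFFNESS = 500.0  # N/m; an assumed value, tuned per device in practice


def contact_force(cursor, plane_point, plane_normal):
    """Penalty (spring) force on a haptic cursor against a virtual plane."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)  # ensure a unit surface normal
    depth = float(np.dot(np.asarray(plane_point) - np.asarray(cursor), n))
    if depth <= 0.0:
        return np.zeros(3)        # no contact: render free space
    return STIFFNESS * depth * n  # push the cursor back out along the normal
```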

Currently, however, commercially available devices are somewhat limited in their diversity and capability. Xia notes that currently available devices are high-precision, high-resolution devices with small back-drivable forces (i.e., the force a user is required to apply to move the device), but that for many product design applications they are lacking in workspace size, maximum feedback force, and mechanism flexibility and dexterity, and could use improved back-driveability [103]. For additional information on currently available haptic devices, haptics research in product design, and haptics research in product assembly, we refer the reader to works by Xia et al. [103–105]. For more general information on haptics in VR, we refer the reader to a study by Burdea [106].

Audio.

In addition to localizing objects by sight and touch, humans also have the ability to localize objects through sound [107]. Some of our ability to localize audio sources can be explained by differences in the time of arrival and level of the signal at our ears [108]. However, when sound sources are directly in front of or behind us, these effects are essentially nonexistent. Even so, we are still generally able to pinpoint these sound sources due to the sound scattering effects of our bodies and particularly our ears. These scattering effects leave a “fingerprint” on the sounds that varies by sound source position and frequency giving our brains additional clues about the location of the source. This fingerprint can be mathematically defined and is termed the head-related transfer function (HRTF) [109].
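
In practice, the HRTF is applied as a convolution. The sketch below renders a mono source for headphones using a pair of head-related impulse responses (HRIRs, the time-domain form of the HRTF); the HRIR arrays are placeholders for per-listener, per-direction measurements and are assumed to be equal in length.

```python
import numpy as np
from scipy.signal import fftconvolve


def binauralize(mono: np.ndarray, hrir_left: np.ndarray,
                hrir_right: np.ndarray) -> np.ndarray:
    """Convolve a mono signal with left/right HRIRs; returns an (N, 2) array.

    The convolution imprints the timing, level, and spectral cues described
    above, so the source appears to come from the HRIRs' measured direction.
    """
    left = fftconvolve(mono, hrir_left)
    right = fftconvolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)
```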

One option for recreating virtual sounds is to use a surround-sound speaker system. This style of sound system uses multiple speakers distributed around the physical space. With this type of system, the virtual sounds are played from the speaker(s) that best approximate the location of the virtual source. Since the sound is produced externally to the user, all cues for sound localization are produced naturally. However, when this approach was implemented in early CAVE environments, it was found that sound localization was compromised by reflections (echoes) off the projector screens (walls) [4].

A second option that does not suffer from the echo issue of the surround-sound system is to use specialized audio processing in conjunction with headphones for each user. Since headphones produce sound directly at the ear, all localization cues must be reproduced virtually. While most of the cues are relatively generic, the HRTF is unique to each person, and using a poorly matched HRTF to reproduce the localization cues can make it difficult for participants to localize sounds [109–111]. Thus, an accurate HRTF is critical for accurately creating virtual sounds. The standard method for measuring the HRTF of an individual is to place the person in an anechoic chamber with small microphones in their ears (where headphones would normally be placed) and then play a known waveform, one location at a time, from various points around the room, recording the signal at the ear [112]. This process is time consuming and unfortunately not very scalable for widespread use [111]; however, research into this area is ongoing. One study has suggested that it may be possible to pick a sufficiently close HRTF from a database of known HRTFs based on a picture of the user's outer ear [110,113]. Another research group has been studying the inverse of the standard method, whereby speakers are placed in the user's ears and microphones are placed at various locations around the room. This has the advantage that the HRTF can be characterized for all locations at once, significantly reducing measurement time [114]. Greg Wilkes of VisiSonics, which has licensed this technology, hopes to deploy it to booths in stores such as Best Buy, where users could have their individual HRTF measured in seconds [115].

Olfactory and Gustatory Displays.

While the senses of taste and smell have not received the same research attention as sight, touch, and hearing, a patent granted to Morton Heilig in 1962 describes a mechanical device for creating a VR experience that engaged the senses of sight, sound, touch, and smell [116]. In more recent years, prototype olfactory displays have been developed by Matsukura et al. [117] and Ariyakul and Nakamoto [118]. In experiments with olfactory displays, Bordegoni and Carulli showed that an olfactory display can improve the level of presence a user perceives in a VR experience [119]. Additionally, Miyaura et al. suggest that olfactory displays could be used to improve concentration levels [120]. The olfactory displays discussed here generally work by storing a liquid scent and aerosolizing it on command; some additionally contain a fan or similar device to help direct the scent to the user's nose. Taste has received even less research attention than smell; however, Narumi et al. showed that by combining a visual AR display with an olfactory display, they were able to change the perceived taste of a cookie among chocolate, almond, strawberry, maple, and lemon [121].

Applications of VR in the Design Process

Although different perspectives, domains, and industries may use different terminology, the engineering design process typically includes steps or stages called opportunity development, ideation, and conceptual design, followed by preliminary and detailed design phases [122]. Often the overall design process includes analysis after, during, or mixed in with the various design stages, followed by manufacturing, implementation, operations, and even retirement or disposal [123]. Furthermore, a particular application of a design process takes on one of various frameworks, such as the classical "waterfall" approach [124], the "spiral" model [125], or the "Vee" model [126], among others [127]. Each model has its own role in clarifying the design stages and guiding the engineers, designers, and other individuals within the process to realize the end product. As designs become more integrated and complex, the individuals traditionally assigned to the different stages or roles in the design process are required to collaborate in new ways and often concurrently. This, in turn, increases the need for design and communication tools that can meet the requirements of the ever-advancing design process.

Finally, while some will consider the formal design stages complete when manufacturing has begun, a high-level, holistic view of the overall design process from "cradle-to-grave" [128] is most comprehensive and allows the most expansive view for identifying future VR applications. Figure 5 shows a summary of the design process described, along with a listing of the applications discussed hereafter. Sections 4.1–4.5 summarize current applications and briefly suggest additional applications for VR technology. The purpose of Sec. 4 is not to provide a comprehensive review of all of the research in this area, but to present a limited overview that frames the discussion of how new VR technology can impact the overall design process.

Fig. 5
Overview of the design process with applications of VR previously explored. Applications in italics represent proposed application rather than existing research.

Opportunity Development.

It is widely accepted that in order to create successful, user-centered products, designers need to develop empathy for the end user of the product [129]. This empathy is crucial for gaining a clear understanding of the user's needs, and it motivates the designer to design according to those needs [130]. While designers can often develop empathy simply by virtue of shared experiences, there are many situations where this approach breaks down, such as a group of young designers working on a product for elderly persons, or a team of male designers designing for pregnant women.

Virtual reality has the potential to provide a novel and effective way of helping designers develop empathy. Recent research has shown that virtual reality can be a powerful tool for creating empathy and even modifying behavior and attitudes. This research has shown that individuals in a virtual environment who are represented by avatars (virtual representations of themselves) come to have an illusion of ownership over the virtual body [131]. In one experiment, light-skinned participants were shown to exhibit significantly less racial bias against dark-skinned people after the participants were embodied as dark-skinned avatars in virtual reality [132]. A similar study showed that users who were embodied as an avatar with superpowers were more likely to exhibit prosocial behavior after the experiment ended [133].

By leveraging the power of virtual reality, designers could almost literally step into the shoes of those they are designing for and experience the world through their eyes. A simple application that employs only VR displays and VR videos filmed from the perspective of end users could be sufficient to allow designers to better understand the perspective of those for whom they are designing. Employing haptics and/or advanced controllers also has great potential to enhance the experience. The addition of advanced controllers that allow the designer to control a first-person avatar in a more natural way improves immersion and the illusion of ownership over a virtual body [134]. Beyond this, the use of advanced controllers would allow the designer to have basic interactions with a virtual environment using an avatar that represents a specific population such as young children or elderly persons. The anatomy and abilities of the virtual avatar and environment can be manipulated to simulate these conditions while maintaining a strong illusion of ownership [135–137], thereby giving designers a powerful tool to develop empathy. As in most applications involving human–computer interaction, employing haptics would allow for more powerful interactions with the virtual environment and could likely be used to better simulate many conditions and scenarios. This technology would have the potential to simulate a wide range of user conditions including physical disabilities, human variability, and cultural differences. Beyond this, designers could conceivably simulate situations or environments, such as zero gravity, that would be impossible or impractical for them to experience otherwise.

Ideation and Conceptual Design.

In the early stages of design, designers and engineers draw upon a diversity of sources for inspiration [138], and indeed all new ideas are synthesized from previous knowledge and experiences [139]. This inspiration comes from both closely related and distantly related or even unrelated sources [140], and it is well understood that both the quality and quantity of ideas generated are positively impacted when designers take time to seek out inspiration [141]. One excellent example of this phenomenon is bio-inspired or biomimetic designs, wherein designs are inspired by mechanisms and patterns found in nature, such as the design of flapping micro-air vehicles that mimic flapping patterns of birds [142] or the design of adhesion surfaces patterned after gecko feet [143].

Recent research has shown that technology can facilitate this inspiration process by using computer generated collections of images and concepts that are both closely and distantly related to the subject [144]. Introducing virtual reality to this process has the potential to further facilitate inspiration by giving designers an immersive experience in which they can examine and interact with a huge variety of artifacts. Because these objects exist in a virtual environment, the cost of interacting with these objects is greatly reduced, and the quantity of artifacts that designers have access to is dramatically increased. Furthermore, the juxtaposition of artifacts and environments that would not be found together naturally has the potential to provide creative environments that can be superior to existing methods of design inspiration.

Because visual stimulation alone is sufficient to provide significant inspiration to designers [144], an effective VR application targeted at providing design inspiration could be implemented using only low-cost VR displays, reducing both the cost and the complexity of implementation. The addition of haptics and advanced controllers would provide a more interactive experience, allowing designers to touch and handle objects, which would likely further aid inspiration. The potential of such an application is supported by recent research studying the effectiveness of digital mood boards for industrial designers, which showed that VR can be used in early-stage design to elicit strong emotional responses from designers and facilitate the creative process [145].

Preliminary and Detailed Design

Computer-Aided Drafting Design.

Performing geometric computer-aided drafting (CAD) design in a virtual environment has the potential to make 3D modeling more effective and more intuitive for new and experienced users alike. Understanding 3D objects represented on a 2D interface requires enhanced spatial reasoning [146]. Conversely, visualizing 3D models in virtual reality makes them considerably easier to understand and is less demanding in terms of spatial reasoning skills [147], and it could significantly reduce the learning curve of 3D modeling applications. By the same reasoning, using virtual reality for model demonstrations to non-CAD users such as management and clients could dramatically increase the effectiveness of such meetings. It should also be noted that there are many user-interface-related challenges to creating an effective VR CAD system that may be alleviated by the use of advanced controllers in addition to a VR display.

A considerable quantity of research has been and continues to be conducted in the realm of virtual reality CAD. A 1997 paper from Volkswagen describes various methods that were implemented for CAD data visualization and manipulation, including the integration of the ACIS CAD geometry kernel with VR, allowing basic operations directly on the native CAD data [148]. A similar kernel-based VR modeler, called SpaceDesign, was implemented by Fiorentino et al. in 2002 [149], intended for freeform curve and surface modeling for industrial design applications. Krause et al. developed a system for conceptual design in virtual reality that uses advanced controllers to simulate clay modeling [150]. In 2012 and 2013, De Araujo et al. developed and evaluated a system that provides users with a stereoscopically rendered CAD environment supporting input both on and above a surface; preliminary user testing of this environment showed favorable results for this interaction scheme [151,152]. Other researchers have further expanded this field by leveraging haptics to allow designers to physically interact with and feel aspects of their designs. In 2010, Bordegoni implemented a system based on a haptic strip that conforms to a curve, thereby allowing the designer to feel and analyze critical aspects of a design by physically touching them [153]. Kim et al. also showed that haptics can be used to improve upon traditional modeling workflows by using haptically guided selection intent or freeform modeling based on material properties that the user can feel [154].

Much of the research done in this area in the past was limited in application due to the high costs of the VR systems of the 1990s and 2000s. The recent advent of high-quality, low-cost VR technology opens the door for VR CAD to be used in everyday settings by engineers and designers. A recent study that uses an Oculus Rift and the Unity game development engine to visualize engineering models demonstrates the feasibility of such applications [155]; however, research in VR CAD needs to expand into this area in order to make the use of low-cost VR technology a reality for day-to-day design tasks.

Analysis.

In the same vein as geometric CAD design, virtual reality has the potential to make 3D analysis easier to perform and the results easier to understand, especially for nonanalysts [156]. By making the geometry easier to understand, VR can facilitate preprocessing steps that require spatial reasoning, such as mesh repair and refinement. VR can also facilitate the understanding and interpretation of analysis results, not only by providing a more natural 3D environment in which to view the results but also by providing new ways of interacting with them.

Significant progress has been made in this field in the last 25 years, and researchers have explored a range of applications, from simple 3D viewers to haptically enabled environments that provide new ways of exploring the data. A few early studies showed that VR could be used to simulate a wind tunnel while viewing computational fluid dynamics (CFD) results [157,158]. Bruno et al. also showed that similar techniques can be used to overlay and view analysis results on physical objects using augmented reality [159]. In 2009, Fiorentino et al. expanded on this by creating an augmented reality system that allowed users to deform a physical cantilever beam and see the stress/strain distribution overlaid on the deformed beam in real-time [160,161]. A 2007 study details the methodology and implementation of a VR analysis tool for finite element models that allows users to view and interact with finite element analysis (FEA) results [162]. Another study uses neural nets for surrogate modeling to explore deformation changes in an FEA model in real-time [163]. Similar research from Iowa State University uses NURBS-based freeform deformation, sensitivity analysis, and collision detection to create an interactive environment in which to view and modify geometry and evaluate stresses in a tractor arm; Ryken and Vance applied this system to an industry problem and found that it allowed the designer to discover a unique solution to the problem [164].

Significant research has also been performed in applying haptic devices and techniques to enhance interaction with results from various types of engineering analyses. Several studies have shown that simple haptic systems can be used to interact with CFD data and provide feedback to the user based on the force gradients [165,166]. Ferrise et al. developed a haptic finite element analysis environment that lets users feel in real-time how different structures behave, with the aim of enhancing the learning of the mechanical behavior of materials. They also showed that learners using their system understood the principles significantly faster and with fewer errors [167,168]. In 2006, Kim et al. developed a similar system that allows users to explore a limited structural model using high degree-of-freedom haptics [154].

One trend that we can observe from the research in this field is that it has focused on high-level applications of VR to analysis, such as viewing results and low-fidelity interactive analysis. This type of application makes sense in the context of the expensive VR systems that have existed in the past; however, with the advent of modern inexpensive VR headsets, lower-level applications that focus on the day-to-day tasks of analysis become feasible, opening a new direction for research.

Data Visualization.

The notion of using virtual reality as a platform for raw data visualization has been a topic of interest since the early days of VR. Research has shown that virtual reality significantly enhances spatial understanding of 3D data [169]. Furthermore, just as it is possible to visualize 3D data in 2D, virtual reality can make interfacing with higher-dimensional data more meaningful. A 1999 study out of Iowa State shows that VR provides significant advantages over 2D displays for viewing higher-dimensionality data [170]. A more recent study found that virtual reality provides a platform for viewing higher-dimensional data and gives “better perception of datascape geometry, more intuitive data understanding, and a better retention of the perceived relationships in the data” [171]. Similar to how analysis results can be explored in virtual reality, haptics and advanced controllers can be used to explore the data in novel ways [147,172]. Brooks et al. demonstrated this as early as 1990 with a system for exploring molecular geometries and their associated force fields, which allowed chemists to better understand receptor sites for new drugs [173].
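
The core idea of mapping extra data dimensions onto visual channels can be shown in a few lines. The Python sketch below encodes a synthetic 5D dataset as 3D position plus color and marker size; an immersive viewer would render the same mapping stereoscopically, with a desktop 3D scatter plot standing in here.

```python
# Sketch of higher-dimensional data mapping: 5D data -> 3D position plus
# color (dim 4) and marker size (dim 5). Data are synthetic.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
data = rng.normal(size=(300, 5))                 # 300 samples, 5 dimensions

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(data[:, 0], data[:, 1], data[:, 2],   # dims 1-3 -> position
           c=data[:, 3], cmap="viridis",         # dim 4 -> color
           s=20 + 40 * (data[:, 4] - data[:, 4].min()))  # dim 5 -> size
ax.set_xlabel("dim 1"); ax.set_ylabel("dim 2"); ax.set_zlabel("dim 3")
plt.show()
```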

Design Reviews.

Design reviews are a highly valued step in the design process. Many of the vital decisions that determine the final outcome of a product are made in a design review setting. For this reason, they have been and continue to be an attractive application for virtual reality in the design process and are one of the most common applications of VR to engineering design [174]. Two particularly compelling ways in which virtual reality can enhance design reviews are by introducing the possibility for improved communication paradigms for distributed teams and enhanced engineering data visualization. In this way, most VR design review applications are extensions of collaborative virtual environments (CVEs). CVEs are distributed virtual systems that offer a “graphically realised, potentially infinite, digital landscape” within which “individuals can share information through interaction with each other and through individual and collaborative interaction with data representations” [175].

A number of different architectures have been suggested for improving collaboration through virtual design reviews [174,176,177]. Beyond this, various parties have researched many of the issues surrounding virtual design reviews. A system developed in the late 1990s called MASSIVE allows distributed users to interact with digital representations (avatars) of each other in a virtual environment [178,179]. A joint project between the National Center for Supercomputing Applications, the German National Research Center for Information Technology, and Caterpillar produced a VR design review system that allows distributed team members to meet and view virtual prototypes [180]. A later project in 2001 likewise allowed users to view engineering models while representing distributed team members with avatars [181].

It should be noted that considerable effort has also been expended in exploring the potential for leveraging virtual and augmented reality technology to enhance design reviews for collocated teams. In 1998, Szalavári et al. developed an augmented reality system for collocated design meetings that allows users to view a shared model and individually control the view of the model as well as different data layers [182]. More recent research has examined the use of CAVE systems for collocated virtual design reviews, including a 2013 study comparing the immersivity and effectiveness of two different CAVE systems for virtual architectural design reviews [183]. In 2007, Santos et al. expanded this space by proposing and validating an application for design reviews that can be carried out across multiple VR and AR devices [174]. Yet other research has shown that multiple design tools, such as interactive structural analysis, can be integrated directly into the design review environment [161].

While design reviews are a common and popular application of virtual reality to collaborative engineering, the techniques discussed above could be applied to enhance engineering collaboration between distributed team members in many situations, including both formal and informal meetings.

Virtual Reality Prototype (VRP).

One of the primary activities in engineering, and in design in general, is evaluating the merit of a given design and identifying weak points that need to be refined. Engineers and designers use a wide array of tools to accomplish this task, including mathematical models, finite element models, and prototypes.

Another technique that has been the subject of considerable research since the advent of modern computer-aided engineering tools is virtual prototyping [184]. The term virtual prototype has been used in the literature to mean a staggering number of different things; however, Wang defines a virtual prototype as “a computer simulation of a physical product that can be presented, analyzed, and tested from concerned product life-cycle aspects such as design/engineering, manufacturing, service, and recycling as if on a real physical model” [185]. Many have also used the term virtual prototype to imply the involvement of virtual reality technologies, but in an effort to promote specificity and clarity, we propose a new term: virtual reality prototype (VRP), which refers to virtual prototypes for which virtual reality is an enabling technology. VRPs are an especially compelling branch of virtual prototypes (VPs) because they proffer a set of tools that lend themselves to creating rich human interaction with virtual models, namely, stereoscopic viewing, real-time interaction, naturalistic input devices, and haptics. In cases where VRPs are specifically used in conjunction with haptics or advanced controllers in order to prototype the human interaction with the virtual models, Ferrise et al. have proposed the term interactive virtual prototype (iVP), which we employ here to define this subset of VRPs [186].

Aesthetic evaluation: Because virtual reality enables both stereoscopic viewing of 3D models and an immersive environment in which to view them, using VRPs can provide a much more realistic and effective platform for aesthetic evaluation of a design. Not only does VR allow models to be rendered in 3D, but it also allows them to be viewed in a virtual environment similar to one in which the product would be used, thereby giving better context to the model. Furthermore, VR can enable users to view the model at whatever scale is most beneficial, whether it be viewing small models at a large scale to inspect details, or viewing large models at a one-to-one scale for increased realism. Research at General Motors has found that viewing 3D models of car bodies and interiors at full scale provides a more accurate understanding of the car's true shape than looking at small-scale physical prototypes [20]. Another study at Volvo showed that using VR to view car bodies at full scale was a more effective method for evaluating the aesthetic impact of body panel tolerances than using traditional viewing methods [187].
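
Two rendering ingredients underpin this kind of aesthetic evaluation: one camera per eye, offset by the interpupillary distance (IPD), and uniform model scaling to move between detail inspection and one-to-one viewing. The Python sketch below illustrates both; the head position, IPD, and vertex values are illustrative only.

```python
# Sketch of stereoscopic eye placement and model scaling for a VRP viewer.
import numpy as np

def eye_positions(head_pos, right_dir, ipd=0.064):
    """Offset each eye half the IPD along the viewer's right vector."""
    right = np.asarray(right_dir) / np.linalg.norm(right_dir)
    head = np.asarray(head_pos)
    return head - 0.5 * ipd * right, head + 0.5 * ipd * right  # (left, right)

def scale_model(vertices, factor):
    """Uniformly scale model vertices about their centroid."""
    v = np.asarray(vertices)
    c = v.mean(axis=0)
    return c + factor * (v - c)

# Standing viewer at 1.7 m eye height; world x is the viewer's right.
left_eye, right_eye = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])

# View a 1 cm detail feature at 10x scale for inspection.
small_part = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.0, 0.01, 0.0]])
print(left_eye, right_eye)
print(scale_model(small_part, 10.0))
```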

Usability and ergonomics: The unique input methods and haptic controllers proffered by VR technology provide an ideal platform for simulating and evaluating product–user interactions in a virtual environment. By using haptics and advanced input devices, these iVPs can be used to evaluate the usability and ergonomics of a design. iVPs can enable users to pick up, handle, and operate a virtual model. Based on the evaluation of the iVP, changes can be made to the model and the iVP can be re-evaluated to iterate on a design far more quickly than physical prototypes permit. In 2006, Bordegoni et al. showed that haptic input devices could be used to evaluate the ergonomics of physical control boards [188]. In 2013, Bordegoni et al. extended this research by further defining iVPs and presenting a methodology for designing interaction with these iVPs. In these papers they also presented several user-based case studies showing that iVPs can simulate physical prototypes to an acceptable degree of realism [186,189]. In 2010, Bruno and Muzzupappa corroborated these findings by showing that advanced input devices can be an effective method of evaluating and improving the usability of physical user interfaces represented through VRPs [190]. As mentioned in Sec. 2, another interactive virtual prototyping technique that has shown potential is mixed prototyping. A mixed prototype is an integrated and collocated mix of generally low-fidelity physical and high-fidelity virtual components that allows users to interact with simple physical objects that are digitally overlaid or replaced with virtual representations [11,191–195]. This mix of physical and virtual components can allow for rapid and low-cost evaluation of concepts that have good visual fidelity.

Early stage VRPs: Due to the high cost of building detailed physical prototypes, they are often not used in the early stages of design, such as concept selection and early in the detailed design phase. The cost of creating VRPs, however, can be much lower because they can be based on CAD geometry of any fidelity. Furthermore, parametric CAD models can be used to quickly explore a wide range of concepts and variations using a single model. Once the CAD geometry has been created, a variety of techniques, including those described above, can be used to evaluate the model. Consequently, VRPs can enable a more complete evaluation of concepts and models earlier in the design process. In keeping with this concept, Noon et al. created a system that allows designers to quickly create and evaluate concepts in virtual reality [196].

Market testing: By putting VRPs in the hands of a market surrogate, all of the benefits that VRPs provide could be realized for market testing, including reduced cost, increased flexibility, and the ability to test earlier in the design process. Additionally, VRPs can enable novel approaches to market testing. For example, leveraging parametric CAD models could allow market surrogates to evaluate a large number of design variations rather than a single prototype. Alternatively, users could be given a series of VRPs that vary incrementally from a nominal model. After examining and/or interacting with each model, the user could toss the VRP to the right or to the left based on whether they felt it was better or worse than the last one they were presented with. In this way, VRPs could be used to perform a human-guided optimization on design aspects that are difficult to quantify, such as aesthetics or ergonomics.
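
Framed as an algorithm, this toss-left/toss-right scheme is a stochastic search driven by binary human judgments. The Python sketch below simulates it under the assumption of a hidden ideal design; the preference function is a purely hypothetical stand-in for a person comparing two VRPs.

```python
# Sketch of human-guided optimization from better/worse judgments:
# perturb the current design, keep it if judged better, otherwise
# narrow the search. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def simulated_user_prefers(new, old):
    """Hypothetical stand-in for a human judging two presented VRPs."""
    target = np.array([0.3, 0.8])            # the user's (unknown) ideal
    return np.linalg.norm(new - target) < np.linalg.norm(old - target)

design = np.array([0.9, 0.1])                # e.g., two shape parameters
step = 0.2
for _ in range(60):
    candidate = design + rng.normal(scale=step, size=design.shape)
    if simulated_user_prefers(candidate, design):
        design = candidate                   # "tossed right": keep it
    else:
        step *= 0.97                         # "tossed left": narrow search
print("converged design parameters:", design)
```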

Producibility Refinement.

In an effort to continually reduce costs and time to market, engineers and designers have put increased focus on design for manufacturing and design for assembly, and on integrating these activities earlier and earlier in the design process. Knowing this, it comes as no surprise that leveraging virtual reality for these processes has been an area of considerable research over the last 25 years. One of the greatest strengths of virtual manufacturing and assembly is that it is well suited to analyzing the human factors in manufacturing and assembly. Through VR, designers can closely simulate the manufacturing and assembly steps required for a product using iVPs, and therefore quickly iterate to refine the manufacturability of the design. In this sense, haptics and advanced controllers are well suited to virtual manufacturing and assembly, as they allow for more natural interaction with virtual geometry.

Research in this field has ranged from early systems that used positional constraints to verify assembly plans [197] and VR-based training for manufacturing equipment [198] to the exploration of integrated design, manufacturing and assembly in a virtual environment [199], and haptically enabled virtual assembly environments [200].

In the same vein, researchers have explored the extension of virtual reality techniques to design for disassembly and recycling [201,202]. Using many of the same techniques, designers can evaluate the ease of disassembly of a product early in the design process, and therefore more easily design ecofriendly products.

As mentioned above, this field of research is extensive, and treatment of its full breadth and depth is beyond the scope of this paper. For a more complete exploration of this topic, we refer the reader to Seth et al. [203] for a recent survey of virtual assembly and Choi et al. [204] for a recent survey of virtual manufacturing.

Post Release Support, Repair/Maintenance.

As systems grow larger, more complex, and more expensive, maintainability becomes a serious concern, and design for maintainability becomes more and more difficult [205]. One of the issues that exacerbates this difficulty is that serious analysis of the maintainability of a design cannot be performed until high-fidelity prototypes have been created [206]. One way in which designers have attempted to address this issue is through simulated maintenance verification using CAD tools [207]. This approach, however, is limited by the considerable time required to perform the analysis, and the lack of fidelity when using simulated human models.

As with design for assembly, the use of VR has the potential to allow designers to perform detailed maintainability and serviceability studies earlier in the design process. Haptics and advanced controllers let designers simulate maintenance scenarios and interact with geometric assembly models in a natural way, and thereby evaluate and refine the serviceability of a design.
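
One low-level check that such a virtual maintenance study automates is tool clearance: whether a tool can travel an extraction path without hitting surrounding components. A minimal Python sketch of this test appears below, with the neighboring component modeled as an axis-aligned bounding box; all geometry values are illustrative.

```python
# Sketch of a clearance test: can a tool of radius `radius` move along a
# straight extraction path without touching a neighboring component
# (modeled here as an axis-aligned bounding box)?
import numpy as np

def path_clears_box(p0, p1, box_min, box_max, radius, samples=200):
    """Sample the path; fail if any point is within `radius` of the box."""
    for t in np.linspace(0.0, 1.0, samples):
        p = (1 - t) * np.asarray(p0) + t * np.asarray(p1)
        nearest = np.clip(p, box_min, box_max)   # closest point on the AABB
        if np.linalg.norm(p - nearest) < radius:
            return False
    return True

housing = (np.array([0.2, 0.0, 0.0]), np.array([0.5, 0.4, 0.3]))
ok = path_clears_box([0.0, 0.2, 0.15], [1.0, 0.2, 0.15], *housing, radius=0.02)
print("extraction path clear:", ok)  # False: this path passes through the box
```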

Many researchers have explored the application of virtual reality to design for maintainability. In 1999, de Sa and Zachmann suggested a combined VR assembly and maintenance verification tool [208]. In 2004, Borro et al. implemented a compelling system for maintenance verification of jet turbine engines using virtual reality and haptic controllers [209]. Peng et al. implemented a system that allows product designers and maintainability technicians to collaborate and evaluate maintenance tasks in a virtual environment [210].

Discussion

From the foregoing explorations, the authors identify a few key themes that should be underscored and recognized as potential avenues to further develop and implement VR in the various stages of the overall design process. Until recently, VR technology has been applied to "high-cost" activities, defined here as events, meetings, and other situations where a key set or large group of decision makers gathers for investment decisions and/or decisions about committing significant project resources. This focus arose, in part, to justify the high expense of legacy VR systems. With the corresponding drop in the cost of VR technology, lower-cost design activities are now feasible as well. Another theme is the evidence of realizable and potential impacts that VR can have on the design process. VR is being applied to many smaller tasks in the design process, and initial studies, as explored in Sec. 4, suggest a high probability of continuing and expanding this technology to reap the benefits. A third theme is the potential to leverage the trade-off between current VR capability and cost. At one-tenth the cost, or even lower, current-generation VR systems are approaching the experience, resolution, and benefit of the larger, more complex systems. In the future, this capability gap may further shrink while associated costs continue to decrease. The following paragraphs will further discuss and highlight these themes in the context of improving the design process using VR technology.

For years, CAVE systems have been considered the gold standard for VR applications. However, because of the capital investment required to build and maintain a CAVE installation, companies rarely have more than one CAVE, if any. This significantly limits access to these systems, and their use must be prioritized for only select activities. This situation could be considered analogous to the mainframe computers of the 1960s and 1970s. While these mainframes improved the engineering design process and enabled new and improved designs, it was not until the advent of the personal computer (PC) that computing was able to impact day-to-day engineering activities and make previously unimagined applications commonplace. In a similar fashion, we suggest that this new generation of low-cost, high-quality VR technology has the potential to bring the power of VR to day-to-day engineering activities. Much like early PCs, we can expect initial implementations and applications to be somewhat crude and unwieldy while the technology continues to mature and better practices emerge, but the ultimate impact is likely limited only by the imagination of engineers and developers.

As noted previously, current VR systems in industry are unavailable for all but the highest-priority tasks. This limits both the potential applications and benefits of the CAVE system and worsens a CAVE's cost-to-benefit ratio. HMDs are currently undergoing significant improvements and are fast approaching CAVE systems in terms of the fidelity and immersivity of the experience they can provide. Additionally, even if a few HMDs are required to serve an equivalent number of participants, the capital investment is a small fraction of what is required for a CAVE system. This means that HMD systems have the potential to provide a much better cost-to-benefit ratio even when used only for the same applications traditionally requiring a CAVE. However, to continue the mainframe/PC analogy, PCs were not initially adopted to enable better communication; email was of limited use until a sufficient number of people had consistent access to it on their PCs. Now, the vast majority of communication happens digitally, and communication technology is evolving beyond email to social media. None of this would have been possible with only limited-access mainframe computers. Similarly, when a larger number of people have access to VR tools for their daily tasks, new use-cases can be explored that are currently unimagined. While the benefits of these yet-unimagined use-cases cannot be quantified, they will further improve the cost-to-benefit ratio of VR HMDs.

Another trend that we observe from the review of the research that has been performed to date is that the majority of what has been done focuses on the mid to later stages of the design process, and that very little has been done to enhance the very early stages of design (e.g., Opportunity Development and Ideation). This trend can clearly be observed in Fig. 5. While the applications of VR technologies to the early stages of design are perhaps less obvious, there is certainly room for the research to expand in this direction, as is indicated by the potential applications described in Secs. 4.1 and 4.2.

Finally, in examining the research that has been performed in this field, we observe that a significant portion of it merely presents a methodology or details the implementation of a novel or improved application of virtual reality to the design process. Only a minority of the studies considered performed some form of validation that the application developed was better than existing tools, and only a small minority included a rigorous analysis of the application developed. While this is understandable to some degree, since VR applications, and particularly VR applications for design, currently possess a large "wow" factor, we posit that there will be a shift toward more rigorous analysis of applications of VR technology in the design process. This is especially true in light of the enormous potential that VR technology, and particularly low-cost current-generation VR technology, has to enhance the design process.

Conclusion

In the past few years, VR has come back into the public's awareness with the release of a new generation of VR products targeted at the general consumer. The low-cost, high-quality VR experiences these devices are capable of creating could prove a key enabler for VR to enter the engineering process in a more ubiquitous manner. When VR becomes a tool available to the average engineer, the tools discussed above, as well as many more not yet imagined, could become everyday realities. As shown above, the applications span the entire development process, from the initial early design phase through detailed design, product release, and even the rest of the product life-cycle. This could significantly change the way engineering design work is done and enable new and innovative solutions to a wide variety of issues that today's engineers face. Widespread adoption of VR technology in engineering has the potential to be as pivotal a change to engineering as the introduction of the computer.

Acknowledgment

The authors were supported in part by the National Science Foundation, Award No. 1067940 “IUCRC BYU v-CAx Research Site for the Center for e-Design I/UCRC”.

References

1.
Sutherland
,
I. E.
,
1968
, “
A Head-Mounted Three Dimensional Display
,”
Fall Joint Computer Conference—Part I
(
AFIPS (Fall—Part I)
), San Francisco, CA, Dec. 9–11, pp.
757
764
.
2.
Huang
,
F.-C.
,
Chen
,
K.
, and
Wetzstein
,
G.
,
2015
, “
The Light Field Stereoscope: Immersive Computer Graphics Via Factored Near-Eye Light Field Displays With Focus Cues
,”
ACM Trans. Graphics
,
34
(
4
), pp.
60:1
60:12
.
3.
Bochenek
,
G. M.
, and
Ragusa
,
J. M.
,
2004
, “
Improving Integrated Project Team Interaction Through Virtual (3D) Collaboration
,”
Eng. Manage. J.
,
16
(
2
), pp.
3
12
.
4.
Cruz-Neira
,
C.
,
Sandin
,
D. J.
, and
DeFanti
,
T. A.
,
1993
, “
Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the Cave
,”
20th Annual Conference on Computer Graphics and Interactive Techniques
(
SIGGRAPH
), Anaheim, CA, Aug. 2–6, pp.
135
142
.
5.
Arthur
,
J. J.
,
Bailey
,
R. E.
,
Williams
,
S. P.
,
Prinzel
,
L. J.
,
Shelton
,
K. J.
,
Jones
,
D. R.
, and
Houston
,
V.
,
2015
, “
A Review of Head-Worn Display Research at Nasa Langley Research Center
,”
Proc. SPIE
,
9470
, p.
94700W
.
6.
Davis
,
S.
,
Nesbitt
,
K.
, and
Nalivaiko
,
E.
,
2015
, “
Comparing the Onset of Cybersickness Using the Oculus Rift and Two Virtual Roller Coasters
,”
11th Australasian Conference on Interactive Entertainment
(
IE
),
Y.
Pisan
,
K.
Nesbitt
, and
K.
Blackmore
, eds., Sydney, Australia, Jan. 27–30, Vol.
167
, pp.
3
14
.http://crpit.com/confpapers/CRPITV167Davis.pdf
7.
Steuer
,
J.
,
1992
, “
Defining Virtual Reality: Dimensions Determining Telepresence
,”
J. Commun.
,
42
(
4
), pp.
73
93
.
8.
Valin
,
S.
,
Francu
,
A.
,
Trefftz
,
H.
, and
Marsic
,
I.
,
2001
, “
Sharing Viewpoints in Collaborative Virtual Environments
,”
34th Annual Hawaii International Conference on System Sciences
(
HICSS
), Maui, HI, Jan. 3–6, pp. 12–21.
9.
Starner
,
T.
,
Mann
,
S.
,
Rhodes
,
B.
,
Levine
,
J.
,
Healey
,
J.
,
Kirsch
,
D.
,
Picard
,
R. W.
, and
Pentland
,
A.
,
1997
, “
Augmented Reality Through Wearable Computing
,”
Presence: Teleoperators Virtual Environ.
,
6
(
4
), pp.
386
398
.
10.
Kress
,
B.
, and
Starner
,
T.
,
2013
, “
A Review of Head-Mounted Displays (HMD) Technologies and Applications for Consumer Electronics
,”
Proc. SPIE
,
8720
, p.
87200A
.
11.
Bordegoni
,
M.
,
Cugini
,
U.
,
Caruso
,
G.
, and
Polistina
,
S.
,
2009
, “
Mixed Prototyping for Product Assessment: A Reference Framework
,”
Int. J. Interact. Des. Manuf.
,
3
(
3
), pp.
177
187
.
12.
Ferrise
,
F.
,
Graziosi
,
S.
, and
Bordegoni
,
M.
,
2015
, “
Prototyping Strategies for Multisensory Product Experience Engineering
,”
J. Intell. Manuf.
, epub.
13.
Banerjee
,
P.
,
Bochenek
,
G. M.
, and
Ragusa
,
J. M.
,
2002
, “
Analyzing the Relationship of Presence and Immersive Tendencies on the Conceptual Design Review Process
,”
ASME J. Comput. Inf. Sci. Eng.
,
2
(
1
), pp.
59
64
.
14.
Banerjee
,
P.
, and
Basu-Mallick
,
D.
,
2003
, “
Measuring the Effectiveness of Presence and Immersive Tendencies on the Conceptual Design Review Process
,”
ASME J. Comput. Inf. Sci. Eng.
,
3
(
2
), pp.
166
169
.
15.
Benford
,
S.
,
Greenhalgh
,
C.
,
Rodden
,
T.
, and
Pycock
,
J.
,
2001
, “
Collaborative Virtual Environments
,”
Commun. ACM
,
44
(
7
), pp.
79
85
.
16.
Satter
,
K.
, and
Butler
,
A.
,
2015
, “
Competitive Usability Analysis of Immersive Virtual Environments in Engineering Design Review
,”
ASME J. Comput. Inf. Sci. Eng.
,
15
(
3
), p.
031001
.
17.
Dorozhkin
,
D. V.
,
Vance
,
J. M.
,
Rehn
,
G. D.
, and
Lemessi
,
M.
,
2012
, “
Coupling of Interactive Manufacturing Operations Simulation and Immersive Virtual Reality
,”
Virtual Reality
,
16
(
1
), pp.
15
23
.
18.
Daily
,
M.
,
Howard
,
M.
,
Jerald
,
J.
,
Lee
,
C.
,
Martin
,
K.
,
McInnes
,
D.
, and
Tinker
,
P.
,
2000
, “
Distributed Design Review in Virtual Environments
,”
Third International Conference on Collaborative Virtual Environments
(
CVE
), San Francisco, CA, pp.
57
63
.
19.
Conti
,
G.
,
Ucelli
,
G.
, and
Petric
,
J.
,
2002
, “
JCAD-VR: A Collaborative Design Tool for Architects
,”
Fourth International Conference on Collaborative Virtual Environments
(
CVE
), Bonn, Germany, Sept. 30–Oct. 2, pp.
153
154
.
20.
Smith
,
R. C.
,
2001
, “
Shared Vision
,”
Commun. ACM
,
44
(
12
), pp.
45
48
.
21.
Cecil
,
J.
, and
Kanchanapiboon
,
A.
,
2007
, “
Virtual Engineering Approaches in Product and Process Design
,”
Int. J. Adv. Manuf. Technol.
,
31
(
9
), pp.
846
856
.
22.
Agrawala
,
M.
,
Beers
,
A. C.
,
McDowall
,
I.
,
Fröhlich
,
B.
,
Bolas
,
M.
, and
Hanrahan
,
P.
,
1997
, “
The Two-User Responsive Workbench: Support for Collaboration Through Individual Views of a Shared Space
,”
24th Annual Conference on Computer Graphics and Interactive Techniques
(
SIGGRAPH
), Los Angeles, CA, Aug. 3–8, pp.
327
332
.
23.
Cakmakci
,
O.
, and
Rolland
,
J.
,
2006
, “
Head-Worn Displays: A Review
,”
J. Disp. Technol.
,
2
(
3
), pp.
199
216
.
24.
DeFanti
,
T. A.
,
Acevedo
,
D.
,
Ainsworth
,
R. A.
,
Brown
,
M. D.
,
Cutchin
,
S.
,
Dawe
,
G.
,
Doerr
,
K.-U.
,
Johnson
,
A.
,
Knox
,
C.
,
Kooima
,
R.
,
Kuester
,
F.
,
Leigh
,
J.
,
Long
,
L.
,
Otto
,
P.
,
Petrovic
,
V.
,
Ponto
,
K.
,
Prudhomme
,
A.
,
Rao
,
R.
,
Renambot
,
L.
,
Sandin
,
D. J.
,
Schulze
,
J. P.
,
Smarr
,
L.
,
Srinivasan
,
M.
,
Weber
,
P.
, and
Wickham
,
G.
,
2011
, “
The Future of the Cave
,”
Cent. Eur. J. Eng.
,
1
(
1
), pp.
16
37
.
25.
OSVR
,
2016
, “
Open Source Headed-Mounted Display for OSVR
,” Open Source Virtual Reality, accessed June 9, 2017, http://www.osvr.org/hdk2.html
26.
Creagh
,
H.
,
2003
, “
Cave Automatic Virtual Environment
,”
IEEE
Electrical Insulation Conference and Electrical Manufacturing and Coil Winding Technology Conference
, Indianapolis, IN, Sept. 25, pp.
499
504
.
27.
Cruz-Neira
,
C.
,
Sandin
,
D. J.
,
DeFanti
,
T. A.
,
Kenyon
,
R. V.
, and
Hart
,
J. C.
,
1992
, “
The Cave: Audio Visual Experience Automatic Virtual Environment
,”
Commun. ACM
,
35
(
6
), pp.
64
72
.
28.
Katz
,
A.
,
2015
, “
Brown University Unveils 3D Virtual-Reality Room
,” The Boston Globe, Boston, MA, accessed June 9, 2017, https://www.bostonglobe.com/lifestyle/style/2015/06/19/brown-university-unveils-virtual-reality-room/QoTOOp66NpPZeGMF0bapjO/story.html
29.
Ramsey
,
D.
,
2008
, “
3D Virtual Reality Environment Developed at UC San Diego Helps Scientists Innovate
,” California Institute For Telecommunications and Information Technology, La Jolla, CA, accessed May 25, 2016, http://www.calit2.net/newsroom/release.php?id=1383
30.
Miller
,
S. A.
,
Misch
,
N. J.
, and
Dalton
,
A. J.
,
2005
, “
Low-Cost, Portable, Multi-Wall Virtual Reality
,”
Eurographics Symposium on Virtual Environments
(
EGVE
),
E.
Kjems
, and
R.
Blach
, eds., Aalborg, Denmark, Oct. 6–7, pp. 9–14.
31.
Thompson
,
F. V.
, III
,
2014
, “
Evaluation of a Commodity VR Interaction Device for Gestural Object Manipulation in a Three Dimensional Work Environment
,”
Ph.D. thesis
, Iowa State University, Ames, IA.
32.
Desai
,
P. R.
,
Desai
,
P. N.
,
Ajmera
,
K. D.
, and
Mehta
,
K.
,
2014
, “
A Review Paper on Oculus Rift—A Virtual Reality Headset
,”
Int. J. Eng. Trends Technol.
,
13
(
4
), pp.
175
179
.
33.
Goradia
,
I.
,
Doshi
,
J.
, and
Kurup
,
L.
,
2014
, “
A Review Paper on Oculus Rift & Project Morpheus
,”
Int. J. Curr. Eng. Technol.
,
4
(
5
), pp.
3196
3200
.
34.
Mitroff
,
S.
,
2016
, “
How to Wear an Oculus Rift and HTC Vive With Glasses
,” CNET, San Francisco, CA, accessed July 15, 2016, http://www.cnet.com/how-to/how-to-wear-an-oculus-rift-and-htc-vive-with-glasses/
35.
Mitchell
,
N.
,
2014
, “
Palmer Luckey and Nate Mitchell Interview: Low Persistence ‘Fundamentally Changes the Oculus Rift Experience’
,” Road to VR, accessed July 15, 2016, http://www.roadtovr.com/ces-2014-oculus-rift-crystal-cove-prototype-palmer-luckey-nate-mitchell-low-persistence-positional-tracking-interview-video/
36.
Blog
,
O. R.
,
2016
, “
John Carmack's Delivers Some Home Truths on Latency
,” Oculus VR, Menlo Park, CA, accessed July 15, 2016, http://oculusrift-blog.com/john-carmacks-message-of-latency/682/
37.
Jayaram
,
S.
,
Vance
,
J.
,
Gadh
,
R.
,
Jayaram
,
U.
, and
Srinivasan
,
H.
,
2001
, “
Assessment of VR Technology and Its Applications to Engineering Problems
,”
ASME J. Comput. Inf. Sci. Eng.
,
1
(
1
), pp.
72
83
.
38.
Digital Trends Staff
,
2016
, “
Spec Comparison: The Rift Is Less Expensive Than the Vive, But Is It a Better Value?
,” Digital Trends, Portland, OR, accessed June 22, 2016, https://www.f3nws.com/news/spec-comparison-does-the-rift-s-touch-update-make-it-a-true-vive-competitor-GW2mJJ
39.
Orland
,
K.
,
2016
, “
The Ars VR Headset Showdown–Oculus Rift versus. HTC Vive
,” Ars Technica, New York, accessed June 22, 2016, http://arstechnica.com/gaming/2016/04/the-ars-vr-headset-showdown-oculus-rift-vs-htc-vive/
40.
Avegant
,
2016
, “
Avegant Glyph—Product
,” Avegant, Belmont, CA, accessed June 22, 2016, https://www.avegant.com/product
41.
Avegant
,
2015
, “
Avegant Glyph Virtual Reality Headset
,” Avegant, Belmont, CA, accessed June 22, 2016, http://www.vrs.org.uk/virtual-reality-gear/head-mounted-displays/avegant-glyph.html
42.
Avegant
,
2016
, “
Avegant Online Store
,” Avegant, Belmont, CA, accessed June 22, 2016, https://shop.avegant.com/
43.
HTC
,
2016
, “
Vive—Product Hardware
,” HTC, Taoyuan, Taiwan, accessed June 21, 2016, https://www.htcvive.com/us/product/
44.
Google
,
2016
, “
Google Cardboard
,” Google, Mountain View, CA, accessed June 21, 2016, https://vr.google.com/cardboard/
45.
Samsung
,
2016
, “
Samsung Gear VR
,” Samsung, Seoul, South Korea, accessed June 21, 2016, http://www.samsung.com/global/galaxy/gear-vr/
46.
Sony
,
2016
, “
Playstation VR Full Specs
,” Sony, Tokyo, Japan, accessed July 15, 2016, https://www.playstation.com/en-us/explore/playstation-vr/
47.
Bakalar
,
J.
,
2016
, “
Sony Playstation VR Review
,” CNET, San Francisco, CA, accessed July 15, 2016, https://www.cnet.com/products/sony-playstation-vr-review/
48.
Dlodlo
,
2016
, “
Dlodlo V1
,” Dlodlo, Shenzhen, China, accessed July 15, 2016, http://www.dlodlo.com/en/v-one.html
49.
Krol
,
J.
,
2016
, “
Dlodlo's V1, Promises VR in a Slim Form Factor
,” CNET, San Francisco, CA, accessed July 15, 2016, https://www.cnet.com/products/dlodlo-glass-v1/preview/
50.
FOVE
,
2016
, “FOVE Eye Tracking Virtual Reality Headset,” Fove, San Mateo, CA, accessed July 15, 2016, http://www.getfove.com/
51.
Plafke
,
J.
,
2015
, “
Fove Eye-Tracking VR Headset Makes Virtual Reality More Like Reality
,” Geek.com, Temecula, CA, accessed June 15, 2016, http://www.geek.com/games/fove-is-fixing-virtual-reality-with-the-only-eye-tracking-vr-headset-on-the-market-1623166/
52.
Starbreeze Studios
,
2016
, “StarVR-Panoramic Virtual Reality Headset,” StarVR, Stockholm, Sweden, accessed July 15, 2016, http://www.starvr.com/
53.
Martindale
,
J.
,
2015
, “
StarVR Specs Make the Oculus Rift Look Like a Kid's Toy
,” Digital Trends, Portland, OR, accessed July 15, 2016, http://www.digitaltrends.com/computing/starvr-vr-headset-5120-1440-resolution/
54.
VRVANA
,
2016
, “
Vrvana—Technology to Augment Our Reality
,” Vrvana, Montreal, QC, Canada, accessed July 15, 2016, https://www.vrvana.com/
55.
Prasuethsut
,
L.
,
2015
, “
Hands On: Sulon Cortex Review
,” Future US, Inc., San Francisco, CA, accessed July 15, 2016, http://www.techradar.com/reviews/gaming/sulon-cortex-1288470/review
56.
Immersion-Vrelia
,
2016
, “
GO
,” Immersion-Vrelia, Redwood City, CA, accessed July 15, 2016, http://immersionvrelia.com/go/
57.
VisusVR
,
2016
, “
VisusVR
,” VisusVR, Pasadena, CA, accessed July 15, 2016, http://www.visusvr.com/
58.
Labs
,
G.
,
2016
, “
GameFace Labs—The World's First Wireless Virtual Reality Head Mounted Console
,” Gameface Labs, San Francisco, CA, accessed July 15, 2016, http://gamefacelabs.com/
59.
Tyrrel
,
B.
,
2015
, “
Hands-On With GameFace Labs' VR Headset
,” IGN Entertainment, San Francisco, CA, accessed July 15, 2016, http://www.ign.com/articles/2015/07/01/hands-on-with-gameface-labs-vr-headset
60.
Billinghurst
,
M.
,
Weghorst
,
S.
, and
Furness
,
T.
,
1998
, “
Shared Space: An Augmented Reality Approach for Computer Supported Collaborative Work
,”
Virtual Reality
,
3
(
1
), pp.
25
36
.
61.
Billinghurst
,
M.
,
Poupyrev
,
I.
,
Kato
,
H.
, and
May
,
R.
,
2000
, “
Mixing Realities in Shared Space: An Augmented Reality Interface for Collaborative Computing
,”
IEEE International Conference on Multimedia and Expo
(
ICME
), New York, July 30–Aug. 2, pp.
1641
1644
.
62.
Li
,
H.
,
Trutoiu
,
L.
,
Olszewski
,
K.
,
Wei
,
L.
,
Trutna
,
T.
,
Hsieh
,
P.-L.
,
Nicholls
,
A.
, and
Ma
,
C.
,
2015
, “
Facial Performance Sensing Head-Mounted Display
,”
ACM Trans. Graphics
,
34
(
4
), pp.
47:1
47:9
.
63.
Matney
,
L.
,
2016
, “
Veeso Wants to Share Your Smiles and Eye-Rolls in VR
,” Techchrunch, San Francisco, CA, accessed July 15, 2016, https://techcrunch.com/2016/07/25/veeso-wants-to-share-your-smiles-and-eye-rolls-in-vr/
64.
Sony
,
2016
, “
Playstation VR
,” Sony, Tokyo, Japan, accessed July 15, 2016, https://www.playstation.com/en-us/explore/playstation-vr/
65.
Dlodlo
,
2016
, “
Dlodlo Glass H1
,” Dlodlo, Shenzhen, China, accessed July 15, 2016, http://www.dlodlo.com/en/h-one.html
66.
Sulon Technologies,
2016
, “Sulon,” Sulon Technologies, Markahm, ON, Canada, accessed July 15, 2016, http://www.sulon.com/
67.
Akeley
,
K.
,
Watt
,
S. J.
,
Girshick
,
A. R.
, and
Banks
,
M. S.
,
2004
, “
A Stereo Display Prototype With Multiple Focal Distances
,”
ACM Trans. Graphics
,
23
(
3
), pp.
804
813
.
68.
Stanney
,
K. M.
,
Kennedy
,
R. S.
, and
Drexler
,
J. M.
,
1997
, “
Cybersickness Is Not Simulator Sickness
,”
Proc. Hum. Factors Ergon. Soc. Annu. Meet.
,
41
(
2
), pp.
1138
1142
.
69.
LaViola
,
J. J.
, Jr.
,
2000
, “
A Discussion of Cybersickness in Virtual Environments
,”
SIGCHI Bull.
,
32
(
1
), pp.
47
56
.
70.
Rebenitsch
,
L.
, and
Owen
,
C.
,
2014
, “
Individual Variation in Susceptibility to Cybersickness
,” 27th Annual ACM Symposium on User Interface Software and Technology (
UIST
), Honolulu, HI, Oct. 5–8, pp.
309
317
.
71.
Pavlik
,
R. A.
, and
Vance
,
J. M.
,
2015
, “
Interacting With Grasped Objects in Expanded Haptic Workspaces Using the Bubble Technique
,”
ASME J. Comput. Inf. Sci. Eng.
,
15
(
4
), p.
041006
.
72.
Deering
,
M. F.
,
1995
, “
Holosketch: A Virtual Reality Sketching/Animation Tool
,”
ACM Trans. Comput. Hum. Interact.
,
2
(
3
), pp.
220
238
.
73.
Vélaz
,
Y.
,
Rodríguez Arce
,
J.
,
Gutiérrez
,
T.
,
Lozano-Rodero
,
A.
, and
Suescun
,
A.
,
2014
, “
The Influence of Interaction Technology on the Learning of Assembly Tasks Using Virtual Reality
,”
ASME J. Comput. Inf. Sci. Eng.
,
14
(
4
), p.
041007
.
74.
Holt
,
P.
,
Ritchie
,
J.
,
Day
,
P.
,
Simmons
,
J.
,
Robinson
,
G.
,
Russell
,
G.
, and
Ng
,
F.
,
2004
, “
Immersive Virtual Reality in Cable and Pipe Routing: Design Metaphors and Cognitive Ergonomics
,”
ASME J. Comput. Inf. Sci. Eng.
,
4
(
3
), pp.
161
170
.
75.
Massie
,
T. H.
, and
Salisbury
,
J. K.
,
1994
, “
The Phantom Haptic Interface: A Device for Probing Virtual Objects
,”
ASME
Dynamic Systems and Control Conference.
76.
Satter
,
K. M.
, and
Butler
,
A. C.
,
2012
, “
Finding the Value of Immersive, Virtual Environments Using Competitive Usability Analysis
,”
ASME J. Comput. Inf. Sci. Eng.
,
12
(
2
), p.
024504
.
77.
Sato
,
Y.
,
Kobayashi
,
Y.
, and
Koike
,
H.
,
2000
, “
Fast Tracking of Hands and Fingertips in Infrared Images for Augmented Desk Interface
,” Fourth
IEEE
International Conference on Automatic Face and Gesture Recognition
, Grenoble, France, Mar. 28–30, pp.
462
467
.
78.
Ribo
,
M.
,
Pinz
,
A.
, and
Fuhrmann
,
A. L.
,
2001
, “
A New Optical Tracking System for Virtual and Augmented Reality Applications
,”
18th IEEE Instrumentation and Measurement Technology Conference
(
IMTC
), Budapest, Hungary, May 21–23, Vol.
3
, pp.
1932
1936
.
79.
Pintaric
,
T.
, and
Kaufmann
,
H.
,
2007
, “
Affordable Infrared-Optical Pose-Tracking for Virtual and Augmented Reality
,”
IEEE
VR Workshop on Trends and Issues in Tracking for Virtual Environments
,
G.
Zachmann
, ed., Charlotte, NC, Mar. 14–17, pp.
44
51
.
80.
Corazza
,
S.
,
Mündermann
,
L.
,
Chaudhari
,
A. M.
,
Demattio
,
T.
,
Cobelli
,
C.
, and
Andriacchi
,
T. P.
,
2006
, “
A Markerless Motion Capture System to Study Musculoskeletal Biomechanics: Visual Hull and Simulated Annealing Approach
,”
Ann. Biomed. Eng.
,
34
(
6
), pp.
1019
1029
.
81.
Fitzgerald
,
D.
,
Foody
,
J.
,
Kelly
,
D.
,
Ward
,
T.
,
Markham
,
C.
,
McDonald
,
J.
, and
Caulfield
,
B.
,
2007
, “
Development of a Wearable Motion Capture Suit and Virtual Reality Biofeedback System for the Instruction and Analysis of Sports Rehabilitation Exercises
,” 29th Annual International Conference of the
IEEE
Engineering in Medicine and Biology Society
, Lyon, France, Aug. 23–26, pp.
4870
4874
.
82.
Loy
,
G.
,
Eriksson
,
M.
,
Sullivan
,
J.
, and
Carlsson
,
S.
,
2004
, “
Monocular 3D Reconstruction of Human Motion in Long Action Sequences
,”
Computer Vision: Eighth European Conference on Computer Vision
(
ECCV
)—Part IV,
T.
Pajdla
, and
J.
Matas
, eds., Prague, Czech Republic, May 11–14, pp.
442
455
.
83.
Pullen
,
K.
, and
Bregler
,
C.
,
2002
, “
Motion Capture Assisted Animation: Texturing and Synthesis
,”
ACM Trans. Graphics
,
21
(
3
), pp.
501
508
.
84.
Edmonds
,
E.
,
Turner
,
G.
, and
Candy
,
L.
,
2004
, “
Approaches to Interactive Art Systems
,”
Second International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia
(
GRAPHITE
), Singapore, June 15–18, pp.
113
117
.
85.
Colgan
,
A.
,
2014
, “
How Does the Leap Motion Controller Work?
,” Leap Motion, San Francisco, CA, accessed July 15, 2016, http://blog.leapmotion.com/hardware-to-software-how-does-the-leap-motion-controller-work/
86.
Weichert
,
F.
,
Bachmann
,
D.
,
Rudak
,
B.
, and
Fisseler
,
D.
,
2013
, “
Analysis of the Accuracy and Robustness of the Leap Motion Controller
,”
Sensors
,
13
(
5
), pp.
6380
6393
.
87.
Guna
,
J.
,
Jakus
,
G.
,
Pogačnik
,
M.
,
Tomažič
,
S.
, and
Sodnik
,
J.
,
2014
, “
An Analysis of the Precision and Reliability of the Leap Motion Sensor and Its Suitability for Static and Dynamic Tracking
,”
Sensors
,
14
(
2
), pp.
3702
3720
.
88.
Microsoft
,
2016
, “
Kinect for Windows SDK 1.8: Skeletal Tracking
,” Microsoft, Redmond, WA, accessed July 15, 2016, https://msdn.microsoft.com/en-us/library/hh973074.aspx
89.
Khoshelham
,
K.
, and
Elberink
,
S. O.
,
2012
, “
Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications
,”
Sensors
,
12
(
2
), p.
1437
.
90.
Dutta
,
T.
,
2012
, “
Evaluation of the Kinect™ Sensor for 3-D Kinematic Measurement in the Workplace
,”
Appl. Ergon.
,
43
(
4
), pp.
645
649
.
91.
Wang
,
Q.
,
Kurillo
,
G.
,
Ofli
,
F.
, and
Bajcsy
,
R.
,
2015
, “
Evaluation of Pose Tracking Accuracy in the First and Second Generations of Microsoft Kinect
,”
International Conference on Healthcare Informatics
(
ICHI
), Dallas, TX, Oct. 21–23, pp.
380
389
.
92.
Noitom
,
2016
, “
Perception Neuron Care & Caution
,” Noitom Ltd., Beijing, China, accessed July 13, 2016, https://neuronmocap.com/support/care-caution
93.
Hayward
,
V.
,
Astley
,
O. R.
,
Cruz-Hernandez
,
M.
,
Grant
,
D.
, and
Robles-De-La-Torre
,
G.
,
2004
, “
Haptic Interfaces and Devices
,”
Sens. Rev.
,
24
(
1
), pp.
16
29
.
94.
Burdea
,
G. C.
, and
Coiffet
,
P.
,
2003
,
Virtual Reality Technology
, Vol.
1
,
Wiley
, Hoboken, NJ.
95.
Oculus
,
2016
, “
Oculus Touch
,” Oculus VR, Menlo Park, CA, accessed July 15, 2016, https://www.oculus.com/en-us/touch/
96.
Volkov
,
S.
, and
Vance
,
J. M.
,
2001
, “
Effectiveness of Haptic Sensation for the Evaluation of Virtual Prototypes
,”
ASME J. Comput. Inf. Sci. Eng.
,
1
(
2
), pp.
123
128
.
97.
Burdea
,
G. C.
,
1999
, “
Haptic Feedback for Virtual Reality
,” Virtual Reality and Prototyping Workshop, Citeseer, New York, pp. 17–29.
98.
Spanlang
,
B.
,
Normand
,
J.-M.
,
Giannopoulos
,
E.
, and
Slater
,
M.
,
2010
, “
A First Person Avatar System With Haptic Feedback
,”
17th ACM Symposium on Virtual Reality Software and Technology
(
VRST
), Hong Kong, China, Nov. 22–24, pp.
47
50
.
99.
Kim
,
S.
,
Hasegawa
,
S.
,
Koike
,
Y.
, and
Sato
,
M.
,
2002
, “
Tension Based 7-DOF Force Feedback Device: SPIDAR-G
,” IEEE Virtual Reality (
VR
), Orlando, FL, Mar. 24–28, pp.
283
284
.
100.
Frisoli
,
A.
,
Salsedo
,
F.
,
Bergamasco
,
M.
,
Rossi
,
B.
, and
Carboncini
,
M. C.
,
2009
, “
A Force-Feedback Exoskeleton for Upper-Limb Rehabilitation in Virtual Reality
,”
Appl. Bionics Biomech.
,
6
(
2
), pp.
115
126
.
101.
Fisch
,
A.
,
Mavroidis
,
C.
,
Melli-Huber
,
J.
, and
Bar-Cohen
,
Y.
,
2003
,
Haptic Devices for Virtual Reality, Telepresence, and Human-Assistive Robots
,
The International Society for Optical Engineering
, Bellingham, WA, Chap. 4.
102.
Kruijff
,
E.
,
Schmalstieg
,
D.
, and
Beckhaus
,
S.
,
2006
, “
Using Neuromuscular Electrical Stimulation for Pseudo-Haptic Feedback
,” ACM Symposium on Virtual Reality Software and Technology (
VRST
), Limassol, Cyprus, Nov. 1–3, pp.
316
319
.
103.
Xia
,
P.
,
2016
, “
Haptics for Product Design and Manufacturing Simulation
,”
IEEE Trans. Haptics
,
9
(
3
), pp.
358
375
.
104.
Xia
,
P.
,
Mendes Lopes
,
A.
, and
Restivo
,
M. T.
,
2013
, “
A Review of Virtual Reality and Haptics for Product Assembly: From Rigid Parts to Soft Cables
,”
Assem. Autom.
,
33
(
2
), pp.
157
164
.
105.
Xia
,
P.
,
Lopes
,
A. M.
, and
Restivo
,
M. T.
,
2013
, “
A Review of Virtual Reality and Haptics for Product Assembly—Part 1: Rigid Parts
,”
Assem. Autom.
,
33
(
1
), pp.
68
77
.
106.
Burdea
,
G. C.
,
2000
, “
Haptics Issues in Virtual Environments
,”
Computer Graphics International 2000
, pp.
295–302
.
107.
Hartmann
,
W. M.
,
1999
, “
How We Localize Sound
,”
Phys. Today
,
52
(
11
), pp.
24
29
.
108.
Brungart
,
D. S.
,
1998
, “
Near-Field Auditory Localization
,”
Ph.D. thesis
, Massachusetts Institute of Technology, Cambridge, MA.
109.
Møller
,
H.
,
Sørensen
,
M. F.
,
Hammershøi
,
D.
, and
Jensen
,
C. B.
,
1995
, “
Head-Related Transfer Functions of Human Subjects
,”
J. Audio Eng. Soc.
,
43
(
5
), pp.
300
321
.
110.
Geronazzo
,
M.
,
Spagnol
,
S.
,
Bedin
,
A.
, and
Avanzini
,
F.
,
2014
, “
Enhancing Vertical Localization With Image-Guided Selection of Non-Individual Head-Related Transfer Functions
,”
IEEE International Conference on Acoustics, Speech and Signal Processing
(
ICASSP
), Florence, Italy, May 4–9, pp.
4463
4467
.
111.
Wenzel
,
E. M.
,
Arruda
,
M.
,
Kistler
,
D. J.
, and
Wightman
,
F. L.
,
1993
, “
Localization Using Nonindividualized Head-Related Transfer Functions
,”
J. Acoust. Soc. Am.
,
94
(
1
), pp.
111
123
.
112.
Brungart
,
D. S.
, and
Rabinowitz
,
W. M.
,
1999
, “
Auditory Localization of Nearby Sources. Head-Related Transfer Functions
,”
J. Acoust. Soc. Am.
,
106
(
3 Pt. 1
), pp.
1465
1479
.
113.
Zotkin
,
D. N.
,
Duraiswami
,
R.
, and
Davis
,
L. S.
,
2004
, “
Rendering Localized Spatial Audio in a Virtual Auditory Space
,”
IEEE Trans. Multimedia
,
6
(
4
), pp.
553
564
.
114.
Zotkin
,
D. N.
,
Duraiswami
,
R.
,
Grassi
,
E.
, and
Gumerov
,
N. A.
,
2006
, “
Fast Head-Related Transfer Function Measurement Via Reciprocity
,”
J. Acoust. Soc. Am.
,
120
(
4
), pp.
2202
2215
.
115.
Lalwani
,
M.
,
2016
, “
For VR to Be Truly Immersive, It Needs Convincing Sound to Match
,” Engadget, San Francisco, CA, accessed July 15, 2016, https://www.engadget.com/2016/01/22/vr-needs-3d-audio/
116.
Heilig
,
M. L.
,
1962
, “
Sensorama Simulator
,” U.S. Patent No.
US 3050870 A
.
117.
Matsukura
,
H.
,
Nihei
,
T.
, and
Ishida
,
H.
,
2011
, “
Multi-Sensorial Field Display: Presenting Spatial Distribution of Airflow and Odor
,”
IEEE Virtual Reality Conference
(
VR
), Singapore, Mar. 19–23, pp.
119
122
.
118.
Ariyakul
,
Y.
, and
Nakamoto
,
T.
,
2011
, “
Olfactory Display Using a Miniaturized Pump and a Saw Atomizer for Presenting Low-Volatile Scents
,”
IEEE Virtual Reality Conference
(
VR
), Singapore, Mar. 19–23, pp.
193
194
.
119.
Bordegoni
,
M.
, and
Carulli
,
M.
,
2016
, “
Evaluating Industrial Products in an Innovative Visual-Olfactory Environment
,”
ASME J. Comput. Inf. Sci. Eng.
,
16
(
3
), p.
030904
.
120.
Miyaura
,
M.
,
Narumi
,
T.
,
Nishimura
,
K.
,
Tanikawa
,
T.
, and
Hirose
,
M.
,
2011
, “
Olfactory Feedback System to Improve the Concentration Level Based on Biological Information
,”
IEEE Virtual Reality Conference
(
VR
), Singapore, Mar. 19–23, pp.
139
142
.
121.
Narumi
,
T.
,
Kajinami
,
T.
,
Nishizaka
,
S.
,
Tanikawa
,
T.
, and
Hirose
,
M.
,
2011
, “
Pseudo-Gustatory Display System Based on Cross-Modal Integration of Vision, Olfaction and Gustation
,”
IEEE Virtual Reality Conference
(
VR
), Singapore, Mar. 19–23, pp.
127
130
.
122.
Dieter
,
G. E.
, and
Schmidt
,
L. C.
,
2013
,
Engineering Design
, Vol.
3
,
McGraw-Hill
,
New York
.
123.
Haskins
,
C.
,
Forsberg
,
K.
,
Krueger
,
M.
,
Walden
,
D.
, and
Hamelin
,
D.
,
2006
,
Systems Engineering Handbook
,
INCOSE
, San Diego, CA.
124.
Royce
,
W. W.
,
1970
, “
Managing the Development of Large Software Systems
,”
Proc. IEEE WESCON
,
26
(
8
), pp.
328
338
.
125.
Evans
,
J. H.
,
1959
, “
Basic Design Concepts
,”
J. Am. Soc. Nav. Eng.
,
71
(
4
), pp.
671
678
.
126.
Forsberg
,
K.
, and
Mooz
,
H.
,
1991
, “
The Relationship of System Engineering to the Project Cycle
,”
INCOSE International Symposium
, Chattanooga, TN, Oct. 20–23, Vol.
1
, pp.
57
65
.
127.
Mooz
,
H.
, and
Forsberg
,
K.
,
2001
, “
4.4.3 A Visual Explanation of Development Methods and Strategies Including the Waterfall, Spiral, Vee, Vee+, and Vee++ Models
,”
INCOSE International Symposium
, Melbourne, Australia, July 1–5, Vol.
11
, pp.
610
617
.
128.
Sheard
,
S. A.
,
1996
, “
Twelve Systems Engineering Roles
,”
INCOSE International Symposium
, Boston, MA, July 7–11, Vol.
6
, pp.
478
485
.
129.
Fritsch
,
J.
,
Judice
,
A.
,
Soini
,
K.
, and
Tretten
,
P.
,
2009
, “
Storytelling and Repetitive Narratives for Design Empathy: Case Suomenlinna
,” Nordes Nordic Design Research Conference, Stockholm, Sweden, May 27–30, Vol.
2
.
130.
Kouprie
,
M.
, and
Visser
,
F. S.
,
2009
, “
A Framework for Empathy in Design: Stepping Into and Out of the User's Life
,”
J. Eng. Des.
,
20
(
5
), pp.
437
448
.
131.
Slater
,
M.
,
Spanlang
,
B.
,
Sanchez-Vives
,
M. V.
, and
Blanke
,
O.
,
2010
, “
First Person Experience of Body Transfer in Virtual Reality
,”
PLoS One
,
5
(
5
), p. e10564.
132.
Peck
,
T. C.
,
Seinfeld
,
S.
,
Aglioti
,
S. M.
, and
Slater
,
M.
,
2013
, “
Putting Yourself in the Skin of a Black Avatar Reduces Implicit Racial Bias
,”
Conscious. Cognition
,
22
(
3
), pp.
779
787
.
133.
Rosenberg
,
R. S.
,
Baughman
,
S. L.
, and
Bailenson
,
J. N.
,
2013
, “
Virtual Superheroes: Using Superpowers in Virtual Reality to Encourage Prosocial Behavior
,”
PLoS One
,
8
(
1
), p. e55003.
134.
Kilteni
,
K.
,
Groten
,
R.
, and
Slater
,
M.
,
2012
, “
The Sense of Embodiment in Virtual Reality
,”
Presence
,
21
(
4
), pp.
373
387
.
135.
Kilteni
,
K.
,
Normand
,
J.-M.
,
Sanchez-Vives
,
M. V.
, and
Slater
,
M.
,
2012
, “
Extending Body Space in Immersive Virtual Reality: A Very Long Arm Illusion
,”
PLoS One
,
7
(
7
), p. e40867.
136.
Normand
,
J.-M.
,
Giannopoulos
,
E.
,
Spanlang
,
B.
, and
Slater
,
M.
,
2011
, “
Multisensory Stimulation Can Induce an Illusion of Larger Belly Size in Immersive Virtual Reality
,”
PLoS One
,
6
(
1
), pp.
1
11
.
137.
van der Hoort
,
B.
,
Guterstam
,
A.
, and
Ehrsson
,
H. H.
,
2011
, “
Being Barbie: The Size of One's Own Body Determines the Perceived Size of the World
,”
PLoS One
,
6
(
5
), p. e20195.
138.
Gonçalves
,
M.
,
Cardoso
,
C.
, and
Badke-Schaub
,
P.
,
2014
, “
What Inspires Designers? Preferences on Inspirational Approaches During Idea Generation
,”
Des. Stud.
,
35
(
1
), pp.
29
53
.
139.
Ward
,
T.
,
1994
, “
Structured Imagination: The Role of Category Structure in Exemplar Generation
,”
Cognit. Psychol.
,
27
(
1
), pp.
1
40
.
140.
Chan
,
J.
,
Dow
,
S. P.
, and
Schunn
,
C. D.
,
2015
, “
Do the Best Design Ideas (Really) Come From Conceptually Distant Sources of Inspiration?
,”
Des. Stud.
,
36
, pp.
31
58
.
141.
Eckert
,
C.
, and
Stacey
,
M.
,
1998
, “
Fortune Favours Only the Prepared Mind: Why Sources of Inspiration Are Essential for Continuing Creativity
,”
Creativity Innovation Manage.
,
7
(
1
), pp.
9
16
.
142.
Jones
,
K.
,
Bradshaw
,
C.
,
Papadopoulos
,
J.
, and
Platzer
,
M.
,
2005
, “
Bio-Inspired Design of Flapping-Wing Micro Air Vehicles
,”
Aeronaut. J.
,
109
(
1098
), pp.
385
394
.
143.
Spolenak
,
R.
,
Gorb
,
S.
, and
Arzt
,
E.
,
2005
, “
Adhesion Design Maps for Bio-Inspired Attachment Systems
,”
Acta Biomater.
,
1
(
1
), pp.
5
13
.
144.
Setchi
,
R.
, and
Bouchard
,
C.
,
2010
, “
In Search of Design Inspiration: A Semantic-Based Approach
,”
ASME J. Comput. Inf. Sci. Eng.
,
10
(
3
), p.
031006
.
145.
Rieuf
,
V.
,
Bouchard
,
C.
, and
Aoussat
,
A.
,
2015
, “
Immersive Moodboards, a Comparative Study of Industrial Design Inspiration Material
,”
J. Des. Res.
,
13
(
1
), pp.
78
106
.
146.
Hsi
,
S.
,
Linn
,
M. C.
, and
Bell
,
J. E.
,
1997
, “
The Role of Spatial Reasoning in Engineering and the Design of Spatial Instruction
,”
J. Eng. Educ.
,
86
(
2
), pp.
151
158
.
147.
Bryson
,
S.
,
1993
, “
Virtual Reality in Scientific Visualization
,”
Comput. Graphics
,
17
(
6
), pp.
679
685
.
148.
Purschke
,
F.
,
Schulze
,
M.
, and
Zimmermann
,
P.
,
1998
, “
Virtual Reality-New Methods for Improving and Accelerating the Development Process in Vehicle Styling and Design
,”
IEEE
Computer Graphics International, Hannover, Germany, June 26, pp.
789–797
.
149.
Fiorentino
,
M.
,
de Amicis
,
R.
,
Monno
,
G.
, and
Stork
,
A.
,
2002
, “
Spacedesign: A Mixed Reality Workspace for Aesthetic Industrial Design
,”
IEEE First International Symposium on Mixed and Augmented Reality
(
ISMAR
), Darmstadt, Germany, Oct. 1, pp. 86–94.
150.
Krause
,
F.-L.
,
Göbel
,
M.
,
Wesche
,
G.
, and
Biahmou
,
T.
,
2004
, “
A Three-Stage Conceptual Design Process Using Virtual Environments
,”
12th International Conference in Central Europe on Computer Graphics
, Visualization and Computer Vision (
WSCG
), Plzen, Czech Republic, Feb. 2–6, pp. 81–84.
151.
De Araùjo
,
B. R.
,
Casiez
,
G.
, and
Jorge
,
J. A.
,
2012
, “
Mockup Builder: Direct 3D Modeling on and Above the Surface in a Continuous Interaction Space
,” Graphics Interface (
GI
), Toronto, ON, Canada, May 28–30, pp.
173
180
.
152.
De Araújo
,
B. R.
,
Casiez
,
G.
,
Jorge
,
J. A.
, and
Hachet
,
M.
,
2013
, “
Mockup Builder: 3D Modeling on and Above the Surface
,”
Comput. Graphics
,
37
(
3
), pp.
165
178
.
153.
Bordegoni
,
M.
,
2010
, “
Haptic and Sound Interface for Shape Rendering
,”
Presence
,
19
(
4
), pp.
341
363
.
154.
Kim
,
S.
,
Berkley
,
J. J.
, and
Sato
,
M.
,
2003
, “
A Novel Seven Degree of Freedom Haptic Device for Engineering Design
,”
Virtual Reality
,
6
(
4
), pp.
217
228
.
155.
Marks
,
S.
,
Estevez
,
J. E.
, and
Connor
,
A. M.
,
2016
, “
Towards the Holodeck: Fully Immersive Virtual Reality Visualisation of Scientific and Engineering Data
,” e-print
arXiv:1604.05797
.
156.
Connell
,
M.
, and
Tullberg
,
O.
,
2002
, “
A Framework for Immersive FEM Visualisation Using Transparent Object Communication in a Distributed Network Environment
,”
Adv. Eng. Software
,
33
(
7–10
), pp.
453
459
.
157.
Bryson
,
S.
, and
Levit
,
C.
,
1992
, “
The Virtual Wind Tunnel
,”
IEEE Comput. Graphics Appl.
,
12
(
4
), pp.
25
34
.
158.
Kuester
,
F.
,
Bruckschen
,
R.
,
Hamann
,
B.
, and
Joy
,
K. I.
,
2001
, “
Visualization of Particle Traces in Virtual Environments
,”
ACM Symposium on Virtual Reality Software and Technology
(
VRST
), Baniff, AB, Canada, Nov. 15–17, pp.
151
157
.
159.
Bruno
,
F.
,
Caruso
,
F.
,
nDe Napoli
,
L.
, and
Muzzupappa
,
M.
,
2006
, “
Visualization of Industrial Engineering Data in Augmented Reality
,”
J. Visualization
,
9
(
3
), pp.
319
329
.
160. Fiorentino, M., Monno, G., and Uva, A., 2009, "Interactive 'Touch and See' FEM Simulation Using Augmented Reality," Int. J. Eng. Educ., 25(6), pp. 1124–1128.
161. Uva, A., Cristiano, S., Fiorentino, M., and Monno, G., 2010, "Distributed Design Review Using Tangible Augmented Technical Drawings," Comput.-Aided Des., 42(5), pp. 364–372.
162. Lee, E.-J., and El-Tawil, S., 2008, "FEMvrml: An Interactive Virtual Environment for Visualization of Finite Element Simulation Results," Adv. Eng. Software, 39(9), pp. 737–742.
163. Hambli, R., Chamekh, A., and Salah, H. B. H., 2006, "Real-Time Deformation of Structure Using Finite Element and Neural Networks in Virtual Reality Applications," Finite Elem. Anal. Des., 42(11), pp. 985–991.
164. Ryken, M. J., and Vance, J. M., 2000, "Applying Virtual Reality Techniques to the Interactive Stress Analysis of a Tractor Lift Arm," Finite Elem. Anal. Des., 35(2), pp. 141–155.
165. Lundin, K. E., Sillen, M., Cooper, M. D., and Ynnerman, A., 2005, "Haptic Visualization of Computational Fluid Dynamics Data Using Reactive Forces," Proc. SPIE, 5669, pp. 31–41.
166. Lawrence, D. A., Lee, C. D., Pao, L. Y., and Novoselov, R. Y., 2000, "Shock and Vortex Visualization Using a Combined Visual/Haptic Interface," Visualization (VIS), Salt Lake City, UT, Oct. 8–13, pp. 131–137.
167. Ferrise, F., Bordegoni, M., Marseglia, L., Fiorentino, M., and Uva, A. E., 2015, "Can Interactive Finite Element Analysis Improve the Learning of Mechanical Behavior of Materials? A Case Study," Comput.-Aided Des. Appl., 12(1), pp. 45–51.
168. Ferrise, F., Bordegoni, M., Fiorentino, M., and Uva, A. E., 2013, "Integration of Realtime Finite Element Analysis and Haptic Feedback for Hands-On Learning of the Mechanical Behavior of Materials," ASME Paper No. DETC2013-12924.
169. Ware, C., and Franck, G., 1996, "Evaluating Stereo and Motion Cues for Visualizing Information Nets in Three Dimensions," ACM Trans. Graphics, 15(2), pp. 121–140.
170. Nelson, L., Cook, D., and Cruz-Neira, C., 1999, "XGobi versus the C2: Results of an Experiment Comparing Data Visualization in a 3-D Immersive Virtual Reality Environment With a 2-D Workstation Display," Comput. Stat., 14(1), pp. 39–51.
171. Donalek, C., Djorgovski, S. G., Cioc, A., Wang, A., Zhang, J., Lawler, E., Yeh, S., Mahabal, A., Graham, M., Drake, A., Davidoff, S., Norris, J. S., and Longo, G., 2014, "Immersive and Collaborative Data Visualization Using Virtual Reality Platforms," IEEE International Conference on Big Data (Big Data), Washington, DC, Oct. 27–30, pp. 609–614.
172. Bryson, S., and Levit, C., 1991, "The Virtual Windtunnel: An Environment for the Exploration of Three-Dimensional Unsteady Flows," 1991 IEEE Conference on Visualization (Visualization), San Diego, CA, Oct. 22–25, pp. 17–24.
173. Brooks, F. P., Jr., Ouh-Young, M., Batter, J. J., and Jerome Kilpatrick, P., 1990, "Project GROPE: Haptic Displays for Scientific Visualization," SIGGRAPH Comput. Graphics, 24(4), pp. 177–185.
174. Santos, P., Stork, A., Gierlinger, T., Pagani, A., Paloc, C., Barandarian, I., Conti, G., de Amicis, R., Witzel, M., Machui, O., Jiménez, J. M., Araujo, B., Jorge, J., and Bodammer, G., 2007, "Improve: An Innovative Application for Collaborative Mobile Mixed Reality Design Review," Int. J. Interact. Des. Manuf., 1(2), pp. 115–126.
175. Churchill, E. F., and Snowdon, D., 1998, "Collaborative Virtual Environments: An Introductory Review of Issues and Systems," Virtual Reality, 3(1), pp. 3–15.
176. Chryssolouris, G., Mavrikios, D., Pappas, M., Xanthakis, E., and Smparounis, K., 2009, A Web and Virtual Reality-Based Platform for Collaborative Product Review and Customisation, Springer, London, pp. 137–152.
177. Hren, G., and Jezernik, A., 2009, "A Framework for Collaborative Product Review," Int. J. Adv. Manuf. Technol., 42(7), pp. 822–830.
178. Greenhalgh, C., and Benford, S., 1995, "Massive: A Distributed Virtual Reality System Incorporating Spatial Trading," IEEE 15th International Conference on Distributed Computing Systems, Vancouver, BC, Canada, May 30–June 2, pp. 27–34.
179. Greenhalgh, C. M., 1997, "Evaluating the Network and Usability Characteristics of Virtual Reality Conferencing," BT Technol. J., 15(4), pp. 101–119.
180. Lehner, V. D., and DeFanti, T. A., 1997, "Distributed Virtual Reality: Supporting Remote Collaboration in Vehicle Design," IEEE Comput. Graphics Appl., 17(2), pp. 13–17.
181. Kan, H., Duffy, V. G., and Su, C.-J., 2001, "An Internet Virtual Reality Collaborative Environment for Effective Product Design," Comput. Ind., 45(2), pp. 197–213.
182. Szalavári, Z., Schmalstieg, D., Fuhrmann, A., and Gervautz, M., 1998, "'Studierstube': An Environment for Collaboration in Augmented Reality," Virtual Reality, 3(1), pp. 37–48.
183. Castronovo, F., Nikolic, D., Liu, Y., and Messner, J., 2013, "An Evaluation of Immersive Virtual Reality Systems for Design Reviews," 13th International Conference on Construction Applications of Virtual Reality (CONVR), London, Oct. 30–31, pp. 22–29.
184. Zorriassatine, F., Wykes, C., Parkin, R., and Gindy, N., 2003, "A Survey of Virtual Prototyping Techniques for Mechanical Product Development," Proc. Inst. Mech. Eng., Part B, 217(4), pp. 513–530.
185. Wang, G. G., 2003, "Definition and Review of Virtual Prototyping," ASME J. Comput. Inf. Sci. Eng., 2(3), pp. 232–236.
186. Ferrise, F., Bordegoni, M., and Cugini, U., 2013, "Interactive Virtual Prototypes for Testing the Interaction With New Products," Comput.-Aided Des. Appl., 10(3), pp. 515–525.
187. Wickman, C., and Söderberg, R., 2003, "Increased Concurrency Between Industrial and Engineering Design Using CAT Technology Combined With Virtual Reality," Concurrent Eng., 11(1), pp. 7–15.
188. Bordegoni, M., Colombo, G., and Formentini, L., 2006, "Haptic Technologies for the Conceptual and Validation Phases of Product Design," Comput. Graphics, 30(3), pp. 377–390.
189. Bordegoni, M., and Ferrise, F., 2013, "Designing Interaction With Consumer Products in a Multisensory Virtual Reality Environment," Virtual Phys. Prototyping, 8(1), pp. 51–64.
190. Bruno, F., and Muzzupappa, M., 2010, "Product Interface Design: A Participatory Approach Based on Virtual Reality," Int. J. Hum.-Comput. Stud., 68(5), pp. 254–269.
191. Kanai, S., Horiuchi, S., Kikuta, Y., Yokoyama, A., and Shiroma, Y., 2007, "An Integrated Environment for Testing and Assessing the Usability of Information Appliances Using Digital and Physical Mock-Ups," Second International Conference on Virtual Reality (ICVR), R. Shumaker, ed., Beijing, China, July 22–27, pp. 478–487.
192. Kimishima, Y., and Aoyama, H., 2006, "Evaluation Method of Style Design by Using Mixed Reality Technology," The Japan Society for Precision Engineering (JSPE), pp. 25–26.
193. Verlinden, J., and Horváth, I., 2006, "Framework for Testing and Validating Interactive Augmented Prototyping as a Design Means in Industrial Practice," Virtual Concept, Playa Del Carmen, Mexico, Nov. 26–Dec. 1, pp. 1–10.
194. Tideman, M., van der Voort, M. C., and van Houten, F. J. A. M., 2008, "A New Product Design Method Based on Virtual Reality, Gaming and Scenarios," Int. J. Interact. Des. Manuf., 2(4), pp. 195–205.
195. Nam, T.-J., and Lee, W., 2003, "Integrating Hardware and Software: Augmented Reality Based Prototyping Method for Digital Products," Extended Abstracts on Human Factors in Computing Systems (CHI EA), Ft. Lauderdale, FL, Apr. 5–10, pp. 956–957.
196. Noon, C., Zhang, R., Winer, E., Oliver, J., Gilmore, B., and Duncan, J., 2012, "A System for Rapid Creation and Assessment of Conceptual Large Vehicle Designs Using Immersive Virtual Reality," Comput. Ind., 63(5), pp. 500–512.
197. Kuehne, R., and Oliver, J., 1995, "A Virtual Environment for Interactive Assembly Planning and Evaluation," ASME Design Automation Conference.
198. Xiaoling, W., Peng, Z., Zhifang, W., Yan, S., Bin, L., and Yangchun, L., 2004, "Development an Interactive VR Training for CNC Machining," ACM SIGGRAPH International Conference on Virtual Reality Continuum and Its Applications in Industry (VRCAI), Singapore, June 16–18, pp. 131–133.
199. Angster, S. R., and Jayaram, S., 1996, "VEDAM: Virtual Environments for Design and Manufacturing," Ph.D. thesis, Washington State University, Pullman, WA.
200. Lim, T., Thin, A., Ritchie, J., Sung, R., Liu, Y., and Kosmadoudi, Z., 2010, Haptic Virtual Reality Assembly-Moving Towards Real Engineering Applications, INTECH, Rijeka, Croatia.
201. Coutee, A. S., McDermott, S. D., and Bras, B., 2001, "A Haptic Assembly and Disassembly Simulation Environment and Associated Computational Load Optimization Techniques," ASME J. Comput. Inf. Sci. Eng., 1(2), pp. 113–122.
202. Coutee, A. S., and Bras, B., 2002, "Collision Detection for Virtual Objects in a Haptic Assembly and Disassembly Simulation Environment," ASME Paper No. DETC2002/CIE-34385.
203. Seth, A., Vance, J. M., and Oliver, J. H., 2011, "Virtual Reality for Assembly Methods Prototyping: A Review," Virtual Reality, 15(1), pp. 5–20.
204. Choi, S., Jung, K., and Noh, S. D., 2015, "Virtual Reality Applications in Manufacturing Industries: Past Research, Present Findings, and Future Directions," Concurrent Eng., 23(1), pp. 40–63.
205. Eubanks, C. F., and Ishii, K., 1993, "AI Methods for Life-Cycle Serviceability Design of Mechanical Systems," Artif. Intell. Eng., 8(2), pp. 127–140.
206. Ishii, K., Eubanks, C., and Marks, M., 1993, "Evaluation Methodology for Post-Manufacturing Issues in Life-Cycle Design," Concurrent Eng., 1(1), pp. 61–68.
207. Gynn, M., and Steele, J., 2015, "Virtual Automotive Maintenance and Service Confirmation," SAE Paper No. 2015-01-0498.
208. de Sá, A. G., and Zachmann, G., 1999, "Virtual Reality as a Tool for Verification of Assembly and Maintenance Processes," Comput. Graphics, 23(3), pp. 389–403.
209. Borro, D., Savall, J., Amundarain, A., Gil, J. J., Garcia-Alonso, A., and Matey, L., 2004, "A Large Haptic Device for Aircraft Engine Maintainability," IEEE Comput. Graphics Appl., 24(6), pp. 70–74.
210. Peng, G., Hou, X., Gao, J., and Cheng, D., 2012, "A Visualization System for Integrating Maintainability Design and Evaluation at Product Design Stage," Int. J. Adv. Manuf. Technol., 61(1), pp. 269–284.