Search results 1-20 of 46 for "Virtual reality"
Journal Articles
Accepted Manuscript
Daniel Gracia De Luna, Roel Tijernia, Alley Butler, Emmett Tomai, Douglas Timmer, Dumitru I. Caruntu
Article Type: Research Papers
J. Comput. Inf. Sci. Eng.
Paper No: JCISE-20-1262
Published Online: April 2, 2021
Abstract
This paper reports on an experiment in human subject balance and coordination using an HTC Vive head-mounted display to create a virtual environment. For the experiment, 30 male and 30 female human subjects of college age were asked to navigate along a clear path in a virtual world using a controller in their dominant hand while balancing a virtual ball on a virtual plate using the other controller in the non-dominant hand. The test subjects moved along a clearly marked path, with three surprise obstacles occurring: a large rock landing near the path, an explosion near the path, and a flock of birds coming across the path. Data included six-degree-of-freedom trajectories for the head and both hands, as well as data gathered by the computer system on ball location and velocity, plate location and velocity, and ball status. Likert-scale questionnaires were answered by the test subjects relative to video game experience, sense of presence, and ease of managing the ball movement. Statistics showed that the male students dropped the ball less frequently (p = 0.0254 and p = 0.0036). In contrast, female students were aware of their performance, with correlation levels of 0.632 and 0.588.
Journal Articles
Article Type: Guest Editorial
J. Comput. Inf. Sci. Eng. October 2020, 20(5): 050301.
Paper No: JCISE-20-1225
Published Online: September 14, 2020
Journal Articles
Article Type: Research Papers
J. Comput. Inf. Sci. Eng. October 2020, 20(5): 051005.
Paper No: JCISE-19-1276
Published Online: June 3, 2020
Abstract
This work presents a deep reinforcement learning (DRL) approach for procedural content generation (PCG) to automatically generate three-dimensional (3D) virtual environments that users can interact with. The primary objective of PCG methods is to algorithmically generate new content in order to improve user experience. Researchers have started exploring the use of machine learning (ML) methods to generate content. However, these approaches frequently implement supervised ML algorithms that require initial datasets to train their generative models. In contrast, RL algorithms do not require training data to be collected a priori since they take advantage of simulation to train their models. Considering the advantages of RL algorithms, this work presents a method that generates new 3D virtual environments by training an RL agent using a 3D simulation platform. This work extends the authors’ previous work and presents the results of a case study that supports the capability of the proposed method to generate new 3D virtual environments. The ability to automatically generate new content has the potential to maintain users’ engagement in a wide variety of applications such as virtual reality applications for education and training, and engineering conceptual design.
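The RL-for-PCG idea in the abstract above can be illustrated with a deliberately tiny sketch. Everything here is an assumption for illustration — the 1D level representation, the "variety" reward, and the epsilon-greedy tabular learner stand in for, and are much simpler than, the paper's actual 3D environment and DRL agent:

```python
import random

# Toy setup (assumed, not from the paper): an agent builds a 1D "level"
# of N tiles, choosing obstacle (1) or gap (0) at each position.  The
# per-step reward favours variety: +1 whenever a tile differs from the
# previous one.
N = 8
ACTIONS = (0, 1)

def variety(level):
    """Total reward of a finished level: number of adjacent tile changes."""
    return sum(1 for a, b in zip(level, level[1:]) if a != b)

def train(episodes=2000, eps=0.2, alpha=0.5, seed=0):
    """Epsilon-greedy tabular learner over (position, previous-tile) states;
    returns the greedy level after training."""
    rng = random.Random(seed)
    Q = {}  # (pos, prev, action) -> estimated immediate reward

    def greedy(pos, prev):
        return max(ACTIONS, key=lambda a: Q.get((pos, prev, a), 0.0))

    for _ in range(episodes):
        prev = -1  # no previous tile yet
        for pos in range(N):
            a = rng.choice(ACTIONS) if rng.random() < eps else greedy(pos, prev)
            r = 1.0 if prev in ACTIONS and a != prev else 0.0
            key = (pos, prev, a)
            Q[key] = Q.get(key, 0.0) + alpha * (r - Q.get(key, 0.0))
            prev = a

    # Greedy rollout of the learned policy generates the final content
    level, prev = [], -1
    for pos in range(N):
        a = greedy(pos, prev)
        level.append(a)
        prev = a
    return level
```

Because the agent learns from simulated episodes rather than a pre-collected dataset, no training data is needed a priori — the property the abstract highlights as RL's advantage over supervised generative models. After training, the greedy rollout should produce a level that alternates tiles, maximizing the variety score.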
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. September 2019, 19(3): 031015.
Paper No: JCISE-18-1263
Published Online: July 30, 2019
Abstract
The paper describes the design of an innovative virtual reality (VR) system, based on a combination of an olfactory display and a visual display, to be used for investigating the directionality of the sense of olfaction. In particular, it describes the design of an experimental setup for understanding and determining to what extent the sense of olfaction is directional, and whether the sense of vision prevails over the sense of smell when determining the direction of an odor. The experimental setup is based on low-cost VR technologies: a custom directional olfactory display (OD), a head-mounted display (HMD) to deliver both visual and olfactory cues, and an input device to register subjects' answers. The paper reports the design of the olfactory interface as well as its integration with the overall system.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. December 2018, 18(4): 041008.
Paper No: JCISE-17-1299
Published Online: July 13, 2018
Abstract
Augmented reality (AR) has experienced a breakthrough in many areas of application thanks to cheaper hardware and a strong industry commitment. In the field of urban facility management, this technology allows virtual access to, and interaction with, hidden underground elements. This paper presents a new approach to enabling AR on mobile devices such as Google Tango, which has specific capabilities for outdoor use. The first objective is to provide full functionality in the life-cycle management of subsoil infrastructures through this technology. This implies not only visualization, interaction, and free navigation, but also editing, deleting, and inserting elements ubiquitously. For this, a topological data model for three-dimensional (3D) data has been designed. Another important contribution of the paper is obtaining an exact location and orientation in only a few minutes, using no additional markers or hardware. This accuracy in the initial positioning, together with the device sensing, avoids the usual errors during the navigation process in AR. Similar functionality has also been implemented in a nonubiquitous way so that it can be supported by any other device through virtual reality (VR). The tests have been performed using real data from the city of Jaén (Spain).
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. September 2018, 18(3): 031006.
Paper No: JCISE-17-1230
Published Online: June 12, 2018
Abstract
Modern color and depth (RGB-D) sensing systems are capable of reconstructing convincing virtual representations of real world environments. These virtual reconstructions can be used as the foundation for virtual reality (VR) and augmented reality environments due to their high-quality visualizations. However, a main limitation of modern virtual reconstruction methods is the time it takes to incorporate new data and update the virtual reconstruction. This delay prevents the reconstruction from accurately rendering dynamic objects or portions of the environment (such as an engineer performing an inspection of machinery or a laboratory space). The authors propose a multisensor method to dynamically capture objects in an indoor environment. The method automatically aligns the sensors using modern image homography techniques, leverages graphics processing units (GPUs) to process the large number of independent RGB-D data points, and renders them in real time. Incorporating and aligning multiple sensors allows a larger area to be captured from multiple angles, providing a more complete virtual representation of the physical space. Performing processing on GPUs leverages the large number of processing cores available to minimize the delay between data capture and rendering. A case study using commodity RGB-D sensors, computing hardware, and standard transmission control protocol internet connections is presented to demonstrate the viability of the proposed method.
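As one concrete reading of "image homography techniques" for sensor alignment, a planar homography between two overlapping views can be estimated from four or more point correspondences with the direct linear transform (DLT). This is a generic textbook sketch, not the authors' pipeline:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src -> dst with the direct
    linear transform: each correspondence contributes two linear equations
    in the nine entries of H, and the solution is the null vector of the
    stacked system (smallest singular vector)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1.0, 0.0, 0.0, 0.0, u * x, u * y, u])
        A.append([0.0, 0.0, 0.0, -x, -y, -1.0, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]   # fix the projective scale
```

Four correspondences (no three collinear) determine the homography exactly; with more, the same SVD gives a least-squares estimate. In a real multi-camera setup the correspondences would come from a feature matcher rather than being hand-picked.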
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. September 2017, 17(3): 031013.
Paper No: JCISE-16-2027
Published Online: July 18, 2017
Abstract
In the past few years, there have been some significant advances in consumer virtual reality (VR) devices. Devices such as the Oculus Rift, HTC Vive, Leap Motion™ Controller, and Microsoft Kinect® are bringing immersive VR experiences into the homes of consumers with much lower cost and space requirements than previous generations of VR hardware. These new devices are also lowering the barrier to entry for VR engineering applications. Past research has suggested that there are significant opportunities for using VR during design tasks to improve results and reduce development time. This work reviews the latest generation of VR hardware and reviews research studying VR in the design process. Additionally, this work extracts the major themes from the reviews and discusses how the latest technology and research may affect the engineering design process. We conclude that these new devices have the potential to significantly improve portions of the design process.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. December 2017, 17(4): 041009.
Paper No: JCISE-16-2045
Published Online: June 15, 2017
Abstract
Existing techniques for motion imitation often suffer a certain level of latency due to their computational overhead or a large set of correspondence samples to search. To achieve real-time imitation with small latency, we present a framework in this paper to reconstruct motion on humanoids based on sparsely sampled correspondence. The imitation problem is formulated as finding the projection of a point from the configuration space of a human's poses into the configuration space of a humanoid. An optimal projection is defined as the one that minimizes a back-projected deviation among a group of candidates, which can be determined in a very efficient way. Benefiting from this formulation, effective projections can be obtained by using sparsely sampled correspondence, whose generation scheme is also introduced in this paper. Our method is evaluated by applying the human's motion captured by an RGB-depth (RGB-D) sensor to a humanoid in real time. Continuous motion can be realized and used in the example application of teleoperation.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. September 2017, 17(3): 031010.
Paper No: JCISE-16-2084
Published Online: February 16, 2017
Abstract
With design teams becoming more distributed, the sharing and interpreting of complex data about design concepts/prototypes and environments have become increasingly challenging. The size and quality of data that can be captured and shared directly affects the ability of receivers of that data to collaborate and provide meaningful feedback. To mitigate these challenges, the authors of this work propose the real-time translation of physical objects into an immersive virtual reality environment using readily available red, green, blue, and depth (RGB-D) sensing systems and standard networking connections. The emergence of commercial, off-the-shelf RGB-D sensing systems, such as the Microsoft Kinect, has enabled the rapid three-dimensional (3D) reconstruction of physical environments. The authors present a method that employs 3D mesh reconstruction algorithms and real-time rendering techniques to capture physical objects in the real world and represent their 3D reconstruction in an immersive virtual reality environment with which the user can then interact. Providing these features allows distributed design teams to share and interpret complex 3D data in a natural manner. The method reduces the processing requirements of the data capture system while enabling it to be portable. The method also provides an immersive environment in which designers can view and interpret the data remotely. A case study involving a commodity RGB-D sensor and multiple computers connected through standard TCP internet connections is presented to demonstrate the viability of the proposed method.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. March 2017, 17(1): 011003.
Paper No: JCISE-15-1319
Published Online: November 7, 2016
Abstract
Tracking refers to a set of techniques that allows one to calculate the position and orientation of an object with respect to a global reference coordinate system in real time. A common method for tracking with point clouds is the iterative closest point (ICP) algorithm, which relies on the continuous matching of sequential sampled point clouds with a reference point cloud. Modern commodity range cameras provide point cloud data that can be used for that purpose. However, this point cloud data is generally considered as low-fidelity and insufficient for accurate object tracking. Mesh reconstruction algorithms can improve the fidelity of the point cloud by reconstructing the overall shape of the object. This paper explores the potential for point cloud fidelity improvement via the Poisson mesh reconstruction (PMR) algorithm and compares the accuracy with a common ICP-based tracking technique and a local mesh reconstruction operator. The results of an offline simulation are promising.
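A minimal point-to-point ICP sketch helps make the match-then-align loop concrete. This is the textbook algorithm (brute-force nearest-neighbour correspondence plus a Kabsch/SVD alignment step), not the paper's implementation or its Poisson-enhanced variant:

```python
import numpy as np

def icp_iteration(source, target):
    """One ICP iteration: match each source point to its nearest target
    point, then solve for the optimal rigid transform (Kabsch/SVD)."""
    # Brute-force nearest-neighbour correspondence (O(n*m), for clarity)
    dists = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(dists, axis=1)]

    # Optimal rotation/translation between the matched sets
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t

def icp(source, target, iters=20):
    """Iterate match/align steps; returns the cumulative (R, t)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = source.copy()
    for _ in range(iters):
        R, t = icp_iteration(cur, target)
        cur = cur @ R.T + t
        # Compose: x -> R(R_total x + t_total) + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

The quality of the nearest-neighbour matches is exactly where sensor noise hurts, which is why the abstract's mesh-reconstruction step (smoothing the point cloud before matching) can improve tracking accuracy.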
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. March 2017, 17(1): 011001.
Paper No: JCISE-15-1200
Published Online: November 7, 2016
Abstract
The research presented here describes an industry case study of the use of immersive virtual reality (VR) as a general design tool with a focus on the decision making process. A group of design and manufacturing engineers, who were involved in an active new product development project, were invited to participate in three design reviews in an immersive environment. Observations, interviews, and focus groups were conducted to evaluate the effect of using this interface on decision making in early product design. Because the team members were actively engaged in a current product design task, they were motivated to use the immersive technology to address specific challenges they needed to solve to move forward with detailed product design. This case study takes the approach of asking not only what can users do from a technology standpoint but also how their actions in the virtual environment influence decision making. The results clearly show that the team identified design issues and potential solutions that were not identified or verified using traditional computer tools. The design changes that were the outcome of the experience were implemented in the final product design. Another result was that software familiarity played a significant role in the comfort level and subsequent effectiveness of the team discussions. Finally, participants commented on how the immersive VR environment encouraged an increased sense of team engagement that led to better discussions and fuller participation of the team members in the decision process.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. September 2016, 16(3): 030904.
Paper No: JCISE-16-1012
Published Online: June 30, 2016
Abstract
The sense of smell has great importance in our daily life. Recently, smells have been used for marketing purposes, for improving people's mood and for communicating information about products such as household cleaners and food. However, the scent design discipline can be used for creating a "scent identity" for products not traditionally associated with a specific smell, in order to communicate their features to customers. In the area of virtual reality (VR), several research efforts have concerned the integration of smells into virtual environments. The research questions addressed in this paper concern whether virtual prototypes (VP) including smell simulation can be used for evaluating products as effectively as studies performed in real environments, and whether smells can enhance the users' sense of presence in virtual environments. For this purpose, a VR experimental framework including a prototype of a wearable olfactory display (wOD) has been set up, and experimental tests have been carried out.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. December 2015, 15(4): 041006.
Paper No: JCISE-13-1229
Published Online: October 29, 2015
Abstract
Haptic force-feedback can provide useful cues to users of virtual environments. Body-based haptic devices are portable but the more commonly used ground-based devices have workspaces that are limited by their physical grounding to a single base position and their operation as purely position-control devices. The “bubble technique” has recently been presented as one method of expanding a user's haptic workspace. The bubble technique is a hybrid position-rate control system in which a volume, or “bubble,” is defined entirely within the physical workspace of the haptic device. When the device's end effector is within this bubble, interaction is through position control. When the end effector moves outside this volume, an elastic restoring force is rendered, and a rate is applied that moves the virtual accessible workspace. Publications have described the use of the bubble technique for point-based touching tasks. However, when this technique is applied to simulations where the user is grasping virtual objects with part-to-part collision detection, unforeseen interaction problems surface. Methods of addressing these challenges are introduced, along with discussion of their implementation and an informal investigation.
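The hybrid position-rate behavior of the bubble technique described above can be sketched in a few lines. The constants and the linear force/rate laws below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Illustrative constants (hypothetical, not from the paper)
BUBBLE_RADIUS = 0.08   # m: position-control volume inside the device workspace
STIFFNESS = 200.0      # N/m: elastic restoring force outside the bubble
RATE_GAIN = 0.5        # maps penetration depth to workspace drift velocity

def bubble_step(end_effector, workspace_offset, dt):
    """One control step of the hybrid position-rate 'bubble' technique.

    Inside the bubble the device maps directly onto the virtual cursor
    (position control).  Outside, an elastic force pushes the hand back
    toward the bubble while the accessible virtual workspace drifts in
    the push direction (rate control)."""
    dist = np.linalg.norm(end_effector)
    if dist <= BUBBLE_RADIUS:
        force = np.zeros(3)                               # pure position control
    else:
        direction = end_effector / dist
        penetration = dist - BUBBLE_RADIUS
        force = -STIFFNESS * penetration * direction      # restoring force
        workspace_offset = workspace_offset + RATE_GAIN * penetration * direction * dt

    cursor = workspace_offset + end_effector              # virtual cursor position
    return force, workspace_offset, cursor
```

The grasping problems the abstract mentions arise in exactly the rate-control branch: while the workspace drifts, a grasped object and the colliding geometry around it move relative to the hand, which point-based touching tasks never exercise.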
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. September 2015, 15(3): 031001.
Paper No: JCISE-12-1005
Published Online: September 1, 2015
Abstract
A competitive usability study was employed to measure user performance and user preference for immersive virtual environments (VEs) with multimodal gestural interfaces when compared directly with nonstereoscopic traditional CAD interfaces that use keyboard and mouse. The immersive interfaces included a wand and a data glove with a voice interface, rendered to an 86 in. stereoscopic screen; the traditional CAD interfaces included a 19 in. workstation with keyboard and mouse and an 86 in. nonstereoscopic display with keyboard and mouse. The context for this study was a set of "real world" engineering design scenarios: benchmark 1—navigation, benchmark 2—error finding and repair, and benchmark 3—spatial awareness. For this study, two populations of users were employed, novice (n = 15) and experienced (n = 15). All users completed three successive trials to quantify the effects of limited learning. Statistically based comparisons were made using both parametric and nonparametric methods. Conclusions included that the improved capability of, and user preference for, immersive VEs and their interfaces were statistically significant for navigation and error finding/repair.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. December 2014, 14(4): 041007.
Paper No: JCISE-12-1121
Published Online: October 7, 2014
Abstract
This paper focuses on the use of virtual reality (VR) systems for teaching industrial assembly tasks and studies the influence of the interaction technology on the learning process. The experiment conducted follows a between-subjects design with 60 participants distributed across five groups. Four groups were trained on the target assembly task with a VR system, but each group used a different interaction technology: mouse-based, Phantom Omni® haptic, and two configurations of the Markerless Motion Capture (Mmocap) system (with 2D or 3D tracking of hands). The fifth group was trained with a video tutorial. A post-training test carried out the day after evaluated performance on the real task. The experiment studies the efficiency and effectiveness of each interaction technology for learning the task, taking into consideration both quantitative measures (such as training time, real-task performance, and the evolution from the virtual task to the real one) and qualitative data (user feedback from a questionnaire). Results show that there were no significant differences in final performance among the five groups. However, users trained under the mouse and 2D-tracking Mmocap systems took significantly less training time than the rest of the virtual modalities. This brings out two main outcomes: (1) the perception of collisions using haptics does not increase the learning transfer of procedural tasks demanding low motor skills, and (2) Mmocap-based interactions can be valid for training tasks of this kind.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. September 2013, 13(3): 031003.
Paper No: JCISE-13-1018
Published Online: May 14, 2013
Abstract
When digitally realized, virtual environments (VEs) do not perfectly match the physical environments they are supposed to emulate. This paper deals with energy aspects of such a mismatch, i.e., artificial energy leaks. A methodology is developed that employs smooth correction (SC) and leak dissipation (LD) to achieve a stable interconnection of the VE with the haptic device. The SC-LD naturally blends with the original laws for rendering the VE and gives rise to modified force feedback laws. These laws can be regarded as energy-consistent discretizations of their continuous-time counterparts. For some fundamental examples including virtual springs and masses, these laws are analytically reduced to simple closed-form equations. The methodology is then generalized to the multivariable case. Several experiments are conducted including a 2-DOF coupled nonlinear VE example, and a scenario leading to a sequence of contacts with a virtual object. Besides the conceptual advantage, simulation and experimental results demonstrate some other advantages of the SC-LD over well-known time-domain passivity methods. These advantages include improved fidelity, simpler implementation, and less susceptibility to produce impulsive/chattering response.
Journal Articles
Article Type: Research-Article
J. Comput. Inf. Sci. Eng. December 2012, 12(4): 041001.
Published Online: September 18, 2012
Abstract
In the field of minimally invasive surgery (MIS), trainers based on virtual reality provide a very useful, nondegradable, realistic training environment. Building this new type of trainer requires the development of new tools. In this paper, we describe a set of new measures that allow calculating the optimal position and orientation of haptic devices with respect to the virtual workspace of the application. We illustrate the use of these new tools by applying them to a practical application.
Journal Articles
Article Type: Technical Briefs
J. Comput. Inf. Sci. Eng. June 2012, 12(2): 024504.
Published Online: May 14, 2012
Abstract
Competitive usability studies are employed to provide empirical results in a design evaluation and review context. Populations of novice and experienced users are tested against benchmarks. Benchmark 1 is used to evaluate error identification and correction. Benchmark 2 is employed to evaluate the user's ability to understand spatial relationships. Both benchmarks 1 and 2 compare individual performance with the performance of teams. Benchmark 3 measures the quantity of errors found in a 4 min time frame. For benchmark 1, there is a statistically significant difference, but for benchmark 2, there is no statistical difference. For benchmark 3, there is a statistically significant increase in errors found. This increase is evaluated for impact as cost avoidance. It is concluded that the cost avoidance achieved by using a cave automatic virtual environment (CAVE) immersive virtual environment easily justifies the CAVE system.
Journal Articles
Article Type: Research Papers
J. Comput. Inf. Sci. Eng. September 2011, 11(3): 031005.
Published Online: August 10, 2011
Abstract
The role of virtual environments (VEs) is crucial in the efficient design and operation of unmanned vehicles. VEs are extensively used in operator training for tele-operation, in planning using programming by demonstration, and in hardware and software design. A VE for unmanned sea surface vehicles (USSVs) requires a 6 degree-of-freedom dynamics simulation in the time domain. In order to be interactive, the VE requires real-time performance of the underlying dynamics simulator. In general, the dynamics simulation of USSVs involves four main operations: (1) computation of the dynamic pressure head due to fluid flow around the hull under the ocean wave, (2) computation of the wet surface, (3) computation of the surface integral of the dynamic pressure head over the wet surface, and (4) solution of the rigid body dynamics equation. The first three operations depend upon the complexity of the boat geometry and need to be performed at each time step, making the simulation run very slowly. In this paper, we address the problem of physics-preserving model simplification for a real-time potential-flow-based simulator for a USSV in the time domain, with an arbitrary hull geometry. This paper reports model simplification algorithms based on clustering, temporal coherence, and hardware acceleration using parallel computing on multiple cores to obtain real-time simulation performance for the developed VE.
Journal Articles
Article Type: Editorial
J. Comput. Inf. Sci. Eng. September 2009, 9(3): 030201.
Published Online: September 2, 2009