## 1 Introduction

Increasingly, robots are assigned to complete real-world tasks where treating task surfaces as simple primitives (cylinders, squares, etc.) is unrealistic. An illustrative and motivating example is the dismantling of complex piping, as shown in Fig. 1, bottom [1].

Fig. 1

Virtual fixtures (VFs) [2] are designed to simplify the operator’s job. VFs are virtual constraints imposed on a task’s workspace, much like jigs used in machining or computer-aided design (CAD) assembly constraints. VFs can provide motion guidance (a Guidance VF) or an exclusion volume (a Forbidden Region VF). Semi-autonomous behaviors are easily defined using VFs. Artificial end effector (EEF) motion guidance/constraints have been shown to decrease operator mental load and increase efficiency [3]. Examples include obstacle avoidance and collision detection in multiple domains such as mobile manipulation [4,5], surgical [6,7], and nuclear [8,9] research areas. Environments studied also include cluttered home environments [5] and variable human anatomy [7]. However, robotic systems using VFs lack the flexibility to assist operators with process adjustments for complex, unique geometries [8,10,11].

Volumetric primitive VFs, constructed from system and sensor data (Fig. 1, top left and right), were previously utilized to simplify inspection tasks by removing spatial transform management from the operator’s mental load [12,13]. Other work generated point cloud Guidance VFs based on a set of parametric surfaces and task parameters [14]. Volumetric primitive and parametric surface VFs (Fig. 2) are useful in some cases but cannot be adapted to complex geometries from common data sources (CAD, polygonal meshes, sensor data, etc.). Simple superposition of primitives is not viable, as it creates irreconcilable conflicts during VF generation and degrades operator interface interpretability. This research expands VF generation to include complex geometries from all common data sources. It provides operators with more expressive, intuitive VFs to assist with any task.

Fig. 2

In addition to generating VFs, it is necessary not to increase the burden on the robotic operator. With primitives (Figs. 2(a), (b), (d), and (h)), guidance is often simple (left, right, up, down, in, and out), but VFs generated from complex objects cannot be easily mapped to such directions, whose meaning can be ambiguous and change with perspective. Guidance tools must also be generated automatically and be easy to use. Robotic systems are operated by trained technicians with little or no intervention from robotics experts. To make the system usable, the interface must address the correspondence problem and context switching, where differences between operator and manipulator kinematics complicate motion planning. For example, a user commanding a robot’s joints using a keyboard for a drum inspection (Fig. 1, top left) must internally manage every corresponding cause and effect (finger presses key, key turns joint, joint motion changes gripper location relative to the wall, etc.). Complex VFs would unnecessarily add to an operator’s mental load. Therefore, a VF interaction method must be part and parcel of VF generation.

Section 2 summarizes related research efforts and confirms the need for the developed tools in the robotics domain. Section 3 presents the VF construction pipeline, including algorithms for mesh interpolation, verification, point-normal calculations, and VF layer generation. Section 4 outlines a visualization and reachability analysis tool using VFs generated for general complex geometries. Section 5 discusses VF generation verification from a variety of data sources. Section 6 summarizes feedback and performance data from operator experiments; user evaluation is critical to ensure more complex VFs reduce the operator’s burden instead of increasing it. Section 7 presents conclusions and outlines future work.

## 2 Review of Virtual Fixture Generation and Usage

Similar efforts to this work were identified in other domains, including computational geometry [15–17], sensor analysis [18,19], and 3D reconstruction [20–22]. Many of these solve portions of the problem, and some common techniques constitute aspects of the generation pipeline documented below. However, the conversion of multiple layers of point cloud-based Guidance VFs into a graph structure, paired with a Forbidden Region VF for use with a 3D robot control interface, is a unique approach to reducing an operator’s mental load and is the focus of this article.

In the robotics domain, Bowyer et al. [23] provide a recent and comprehensive review of VFs and active constraints covering 120+ publications, the majority of which pertain to teleoperated or hands-on master-slave systems with haptic feedback. Various VF generation techniques are presented (Fig. 2), with computer-assisted surgery accounting for a high percentage of the publications. In several cases, the variability of the human body led researchers to create polygonal mesh Forbidden Region VFs from sensor data [6,7]. This procedure constrains the surgical tool location and protects sensitive tissue near the surgery site. For patient protection and other high-value operations, automated system failure requires personnel to complete the task manually [8], repair the equipment [24], or both. Bowyer et al. draw several conclusions:

• Flexible structures (Fig. 2: meshes, point clouds) are more expressive and, therefore, of greater use.

• Effectively generating constraint geometries is the most significant current challenge.

• Future constraint research will proceed toward a more complete working environment representation.

There are numerous examples in the literature using primitive-based VFs. For instance, DeJong et al. [24] applied VFs to facility piping size reduction with a saw. A planar Forbidden Region VF perpendicular to the pipe restricted EEF motion to reduce cutting blade stresses (Fig. 1, top left) and minimize blade damage. Another robotic system was designed to clean spherical vessel interiors and utilized VFs to improve teleoperation and enable semi-autonomous behaviors that reduce operator burden [25]. Khan and Hilton [10,11] discuss the benefits of noncontact fiber optic laser systems for volume reduction decommissioning tasks (Fig. 1, top right). This system lacked the intelligence to flexibly adjust to complex objects or piping arrangements and depended on linear cutting motions to accomplish tasks. Hilton et al. [11] noted that future nuclear facility decommissioning will require noncontact remote cutting techniques that can intelligently adapt to complex geometries. Kruusamäe and Pryor [26] used primitive Forbidden Region VFs to augment a hand gesture robotic teleoperation interface. Operator evaluation showed the interface enabled high-precision task completion with performance similar to robot operating system (ROS) based [27] interactive markers. Another approach by Bi and Lang [28] presents an analysis of a robotic coating process. This approach has similar goals but relies on the coating cone concept describing the volume between a robot with a coating tool, a paint gun, and the task surface.

Researchers [12] generated more complex, multilayer point cloud Guidance VFs from geometric primitives and task parameters to enable directional command steps (“Left,” “Right,” “Up,” “Down,” “In,” and “Out”). The system was demonstrated on a container inspection task. Multilayer point cloud Guidance VFs and the navigation interface were also combined with motion planning methods to demonstrate size-reduction tasks [29]. Additional control buttons (“Add Pt,” “Remove Pt,” “Plan,” “Execute,” and “Clear”) provided operators the ability to construct size-reduction paths (Fig. 1, top right, for example). The controller reduced the operator’s mental burden, but the authors noted that geometric primitives have insufficient expressiveness for most real-world applications (Fig. 1, bottom). Other investigations [14] employed parametric surfaces instead of geometric primitives to generate multilayer point cloud Guidance VFs. Results demonstrated correlations between surface concavity/offset distance and the need to control layer point cloud growth through cloud density analysis.

Broad applicability for the tasks and domains identified requires VF generation from parametric surfaces, polygonal mesh models, CAD files, and RGB-depth sensor data. The computational geometry literature offers many tools to analyze and alter these file types. Also, the robotics literature demonstrates a clear need for general and flexible VF generation methods to provide broad and expressive environmental representations. Finally, since multilayer VFs generated from complex shapes have not been evaluated, it is necessary to ensure their increased complexity does not itself unduly burden potential users. The usability study developed is based on the principles outlined by Nielsen and Landauer [30], which suggest a series of smaller user studies for newly developed tools or tools currently under development.

## 3 Virtual Fixture Generation Algorithms

Many processes require a specific tool or sensor offset and orientation relative to the task’s surface. When a specific tool-to-surface transformation is required, the normal vector, $\hat{n}$, at a task surface vertex, $v$ (Eq. (1)), provides a “depth” direction. EEF motion can, therefore, be restricted to an SE(3) offset using a Guidance VF defined from a general task surface. Varying the distance along the task surface point’s normal vector defines different layers of Guidance VFs. These layers are constructed at task-defined distances to maximize the VF’s usefulness to operators. Point cloud-based Guidance VFs allow the operator to choose from precalculated poses instead of teleoperating without 6DOF EEF location information. The layers of point cloud-based Guidance VFs are then combined into a novel bidirectional graph VF structure. When the bidirectional Guidance VF graph is combined with the Forbidden Region VF, it becomes an innovative Task Virtual Fixture. A Task VF restricts and simplifies the task execution volume, analogous to the planar EEF restriction in [24]. DeJong et al. enabled motion in a plane while forbidding translations/rotations outside of that plane (Fig. 1, left). Here, Task VFs enable motion at one or multiple task-defined distances while forbidding motions that violate task execution requirements. Examples include distances along the current operating point normal that would result in unwanted surface damage, collisions, or being too far away to complete the task. This approach is compatible with VFs generated for complex geometries and scalable to virtually any task.
$P_{surf}(i) \cdot v = [v_x\ v_y\ v_z] \qquad P_{surf}(i) \cdot \hat{n} = [n_x\ n_y\ n_z]$
(1)

VF generation is conceptualized as a data pipeline traversing multiple algorithms (Fig. 3). The required data inputs are a task surface and task parameters. Guidance VF elements are calculated based on the minimum and maximum distance (dmin, dmax) and intralayer and interlayer distance (dintra, dinter) task parameters (Fig. 4). The interlayer distance, dinter, defines the distance between Guidance VF layers, while the intralayer resolution, dintra, determines the distance between points within a Guidance VF layer. The pipeline output is a graph structure containing layers of vectors, each the inverse of the task or object normal at a surface location (Fig. 4, right).
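For illustration, the four task parameters might be grouped in a small container; the sketch below is hypothetical (the names and class are not from the authors' implementation), and with the parameter values used later in Sec. 5.1 it yields the three Guidance VF layers described there.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TaskParams:
    """Hypothetical container for the four Guidance VF task parameters."""
    d_min: float    # closest Guidance VF layer offset from the surface (m)
    d_max: float    # farthest layer offset (m)
    d_intra: float  # target spacing between points within one layer (m)
    d_inter: float  # spacing between successive layers (m)

    def num_layers(self) -> int:
        # layers are placed at d_min, d_min + d_inter, ... up to d_max
        return int((self.d_max - self.d_min) // self.d_inter) + 1

params = TaskParams(d_min=0.05, d_max=1.05, d_intra=0.5, d_inter=0.5)
```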

Fig. 3
Fig. 4

The role of each element in the pipeline is as follows:

• Input: STL input and sources are detailed in Sec. 3.1.

• Interpolate: Input interpolation and conversion is discussed in Sec. 3.2.

• Extend and Interpolate: The creation of VF layers by extending and interpolating the data is described in Sec. 3.3. If sensor data or reconstructed point clouds are used, this is where they enter the pipeline.

• Convert: VF layer conversion into a graph structure is detailed in Sec. 3.4.

The elements, input, output, and intermediary data structures are detailed here. While some actions utilize referenced, established algorithms, several new components were developed and are discussed in more depth.

### 3.1 Task Surface Pipeline Input.

Surface input data can be formatted as either spline-based models, which are popular in CAD software packages [31,32], or polygonal meshes, which consist of vertices defining polygons and a polygonal face normal; the developed algorithm pipeline must be compatible with both inputs. Converting from spline-based models to polygonal meshes is a straightforward surface sampling process. Conversely, converting polygonal input into a spline-based format requires surface reconstruction methods that need operator input to achieve acceptable meshes [15,31,33]. Therefore, developing Task VF generation algorithms based on polygonal mesh input allows for spline-based file integration and maintains data input flexibility. Available polygonal meshes include millions of downloadable models from non-spline sources such as online model repositories, e.g., Refs. [34,35]. The binary STL format is widely available [31,32,36] and represents data as a list of triangles T(i), where i = 1, …, N. Each triangle, T(i), includes its normal vector, $\hat{n}$, and three vertices, v(j) (Eq. (2)).
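As a concrete illustration of the binary STL triangle list described above, the following minimal reader is a sketch (not the authors' implementation): it skips the 80-byte header and the per-triangle attribute bytes and returns the (normal, vertices) pairs of Eq. (2).

```python
import struct

def read_binary_stl(path):
    """Parse a binary STL into a list of (normal, (v1, v2, v3)) tuples.

    Binary STL layout: 80-byte header, uint32 triangle count, then per
    triangle 12 little-endian float32s (normal followed by 3 vertices)
    and a uint16 attribute byte count.
    """
    tris = []
    with open(path, "rb") as f:
        f.read(80)                                   # header (ignored)
        (count,) = struct.unpack("<I", f.read(4))
        for _ in range(count):
            vals = struct.unpack("<12fH", f.read(50))
            n = vals[0:3]
            verts = (vals[3:6], vals[6:9], vals[9:12])
            tris.append((n, verts))
    return tris
```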
$T_{surf}(i) \cdot \hat{n} = [n_x\ n_y\ n_z] \qquad T_{surf}(i) \cdot v(j) = [v(j)_x\ v(j)_y\ v(j)_z], \quad j = 1, 2, 3$
(2)

### 3.2 Mesh Interpolation and Point-Normal Calculations.

The STL triangle side lengths of a valid mesh Tsurf are compared to the desired task intralayer distance, dintra. Triangles with sides exceeding dintra are subdivided iteratively until appropriately sized to have sufficient fidelity for task execution. The triangles are divided by defining points at each side’s midpoint, which divides the triangle into four geometrically similar triangles. This primal approximating dissection method was chosen to maintain original surface points, retain fine surface features, and increase point dispersion over simpler triangle bisection methods. More complex methods for mesh subdivision are available, including parameterization-based or surface-oriented remeshing techniques [15] and nonlinear subdivision [37,38], but were deemed unnecessary.

Point normals (Eq. (1)) are calculated using the normalized weighted average of $\hat{n}$ over the triangles in which the point is a vertex. Each triangle normal is weighted by the ratio of the triangle’s area to the total area of all triangles that share the point (Algorithm 1, Fig. 5). Thus, the Delaunay neighborhood is utilized similarly to Ref. [16], but the STL file provides the normal information. The integration of triangle area weighting better accommodates triangle size variations than previous methods, as normals are shifted toward the largest triangle in the neighborhood (Fig. 5). It also avoids recalculating surface information, as plane fitting approaches do. Plane fitting algorithms function best with consistent point distributions [16], which may be lacking in the surface mesh, and may require additional information such as a viewpoint [18]. If future algorithm development requires additional surface information, such as curvature, integration of a plane fitting method [17,19] into the Task VF generation pipeline is an option.

Fig. 5

#### Point-normal calculation.

Algorithm 1

1: $warnings = 0$, $percentage = 0$
2: for $P_{surf}(i);\ i = 1, \ldots, I$, where $I$ is the total number of points, do
3:  $n_{sum} = [0\ 0\ 0]$, $num_{tri} = 0$, $sum_A = 0$
4:  for $T_{surf}(j);\ j = 1, \ldots, J$, where $J$ is the total number of triangles, do
5:   if $P_{surf}(i) \cdot v \in T_{surf}(j) \cdot v$ then
6:    $n_{sum} \mathrel{+}= T_{surf}(j) \cdot \hat{n} * T_{surf}(j)_A$
7:    $num_{tri} \mathrel{+}= 1$, $sum_A \mathrel{+}= T_{surf}(j)_A$
8:   end if
9:  end for
10: $\bar{n}_{sum} = n_{sum} / (num_{tri} * sum_A)$
11: if $|\bar{n}_{sum}| < \varepsilon_2$ then
12:  $warnings = warnings + 1$
13: end if
14: $P_{surf}(i) \cdot \hat{n} = \bar{n}_{sum} / |\bar{n}_{sum}|$
15: end for
16: $percentage = warnings / I$; report the percentage of warnings to the operator.
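A compact Python sketch of Algorithm 1 follows, assuming each triangle is a (normal, vertices) pair with hashable vertex tuples (a data layout chosen here for illustration, not taken from the authors' implementation). The scalar division in line 10 of Algorithm 1 is absorbed by the final unit-length normalization, so only the magnitude check needs the accumulated sum.

```python
import math
from collections import defaultdict

def point_normals(triangles, eps2=1e-8):
    """Per-vertex normals as the area-weighted average of the incident
    triangles' face normals; returns (normals dict, fraction of points
    flagged as suspect, i.e., near-cancelling normals)."""
    acc = defaultdict(lambda: [0.0, 0.0, 0.0])
    for n, verts in triangles:
        a, b, c = verts
        # triangle area from the cross product of two edge vectors
        u = [b[i] - a[i] for i in range(3)]
        w = [c[i] - a[i] for i in range(3)]
        cross = [u[1]*w[2] - u[2]*w[1],
                 u[2]*w[0] - u[0]*w[2],
                 u[0]*w[1] - u[1]*w[0]]
        area = 0.5 * math.sqrt(sum(x * x for x in cross))
        for v in verts:
            for i in range(3):
                acc[v][i] += n[i] * area
    normals, warnings = {}, 0
    for v, s in acc.items():
        mag = math.sqrt(sum(x * x for x in s))
        if mag < eps2:   # cancelling normals: likely nonmanifold region
            warnings += 1
            normals[v] = (0.0, 0.0, 0.0)
        else:
            normals[v] = tuple(x / mag for x in s)
    return normals, warnings / len(acc)
```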

During preliminary investigations, nonmanifold inputs were found to disrupt the Task VF generation pipeline. These nonmanifold inputs took the form of inverted $T_{surf}(i) \cdot \hat{n}$ regions from the Visualization Toolkit (VTK) [39] and duplicated polygons with inverted surface normals from 3DWarehouse [35] and Sketchup [40]. Two straightforward checks were implemented to warn operators of defective meshes. The first check computes the area-weighted average of $T_{surf}(i) \cdot \hat{n}$ over the entire mesh, which should be approximately zero. If mesh regions are inconsistent, the average $\hat{n}$ will exceed the threshold, $\varepsilon_1$, and the operator is warned to examine the mesh. A secondary check compares each calculated $|P_{surf} \cdot \hat{n}|$ to another threshold, $\varepsilon_2$, which is set to a small value (Algorithm 1). If $|P_{surf} \cdot \hat{n}|$ is less than the threshold, the mesh is likely nonmanifold. If inconsistencies are detected, the operator is warned, and the mesh can be reconstructed using common techniques [15]; reconstruction, however, is outside the scope of this work.

Once the validation checks, interpolation, and conversion to a point cloud with normals (PCN) are completed, the task surface PCN is voxelized. Voxel filtering discretizes space into boxes of a provided size, and all points within the same box are approximated by a point located at their centroid [18]. The discretization box is a cube with side lengths based on a small percentage of the task dintra to maintain high surface density without duplicate points.
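A centroid voxel filter along these lines can be sketched as follows (the voxel size is passed explicitly; names are hypothetical and normals are omitted for brevity):

```python
from collections import defaultdict

def voxel_filter(points, voxel):
    """Centroid voxel filter sketch: snap each point to a cubic cell of
    side `voxel`, then replace every cell's points by their centroid."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)   # integer cell index
        cells[key].append(p)
    out = []
    for pts in cells.values():
        n = len(pts)
        out.append(tuple(sum(c[i] for c in pts) / n for i in range(3)))
    return out
```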

### 3.3 Virtual Fixture Layer Construction.

VF offset surfaces are calculated from the converted task surface PCN, $P_{surf}(i)$, and task parameters (Fig. 4). This is the Task VF generation stage where sensor information and techniques such as sensor analysis [18,19] and 3D reconstruction [20–22] could provide future expansions. However, this effort focused on describing the Task VF concept and creating a generation pipeline. VF PCN layers are constructed between dmin and dmax at intervals of dinter to eliminate unintended interactions (Eq. (3)). This approach is similar to, but more expressive than, surface coating cones [28], which are limited to a single layer. For complex, real-world tasks, the potentially complicating geometries are too numerous to address categorically. Therefore, each VF layer is calculated from the task surface PCN, $P_{surf}(i)$, instead of from neighboring VF layers (Eq. (3)).
$d_{layer} = d_{min} + d_{inter} * layer, \quad layer = 0, \ldots, k$
$P_{vf}(i) \cdot v = P_{surf}(i) \cdot v + P_{surf}(i) \cdot \hat{n} * d_{layer}$
$P_{vf}(i) \cdot \hat{n} = -P_{surf}(i) \cdot \hat{n} = [-n_x\ {-n_y}\ {-n_z}]$
(3)
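Eq. (3) translates directly into code; the following sketch assumes the surface PCN is a list of (position, unit normal) tuples, with all names hypothetical:

```python
def build_layers(pcn, d_min, d_inter, k):
    """Offset each surface point along its normal to form Guidance VF
    layers 0..k (Eq. (3)), inverting the stored normal so it points
    back toward the task surface. `pcn` is a list of (v, n) with v a
    position tuple and n a unit normal tuple."""
    layers = []
    for layer in range(k + 1):
        d = d_min + d_inter * layer
        cloud = []
        for v, n in pcn:
            v_off = tuple(v[i] + n[i] * d for i in range(3))
            n_inv = tuple(-c for c in n)   # faces the task surface
            cloud.append((v_off, n_inv))
        layers.append(cloud)
    return layers
```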

The objective is to maintain the direction and distance to fine surface features and create a PCN of extended surface features. It is important to note that VF PCN layers are not required to form manifold surfaces, which is one of the advantages of this VF generation methodology. This distinction mainly applies to points extended from interior features. Figure 6 illustrates possibilities where internal features composed of noncurved, discontinuous surfaces create ambiguities in how layers should be generated. At close distances to the surface, the layers still look similar to the task surface (Fig. 6, layers 1 and 2). However, as the distance increases, the calculated PCN clusters in the center of the interior feature (Fig. 6, layer 3). Eventually, the VF PCN layer will approach the opposite side of the feature. In such cases, surface reconstruction methods would smooth away important information (Fig. 6, bottom black circle) or result in a nonmanifold geometry.

Fig. 6

Complications can arise as dlayer increases and points are extended along $P_{surf}(i) \cdot \hat{n}$, due to curvature-related convergence/divergence phenomena, discontinuities in convex external features, or the presence of internal features. Convex regions have lower point densities for VF layers further from the surface and thus require interpolated points to maintain VF intralayer resolution. After the points are extended, point neighborhood information is used for VF layer interpolation. To limit extraneous point creation, and thus extended computation times, a minimum interpolation radius, rmin, is defined. The interpolation radius, rradius, is increased proportionally to dlayer, yielding rprop. Thus, a percentage of the VF layer surface area is maintained as distance increases while avoiding interpolating through thin surfaces. Distance-proportional rprop neighborhoods provide a more generic neighborhood selection process than hand-tuning knearest for each model. Once the local neighborhood is selected, the distance between the current point and each neighbor is calculated. If the Cartesian distance is between rmin and rprop, a new Pvf is linearly interpolated and assigned the average of the two points’ $\hat{n}$.
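A minimal sketch of the pairwise interpolation rule, assuming midpoint insertion and omitting the normal averaging for brevity (names are hypothetical, and a real implementation would use a spatial index rather than the all-pairs loop shown here):

```python
import math

def interpolate_layer(points, r_min, r_prop):
    """For each pair of layer points whose Cartesian separation lies
    between r_min and r_prop, insert a new point at their midpoint."""
    new_pts = []
    for i, p in enumerate(points):
        for q in points[i + 1:]:
            d = math.dist(p, q)
            if r_min < d <= r_prop:
                new_pts.append(tuple((a + b) / 2.0 for a, b in zip(p, q)))
    return points + new_pts
```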

Concave and interior surface features will contain regions of converging $T_{surf} \cdot \hat{n}$. For example, if the concave region is an interior slot, surface normals will intersect (Fig. 6). As with the task PCN, voxel filtering is one way to correct VF layer resolution. In this case, VF layer resolution needs to be limited, so the discretization box is a cube with side lengths proportional to dintra. Interpolation and voxelization are executed only once per VF layer, instead of iteratively, to limit surface distortions in the resulting PCN.

Once constructed, the number of neighbors within a dintra-based rradius volume of Pvf(i) is calculated to check VF layer resolution. The number of neighbors is averaged over an entire VF layer to ensure the VF layer resolution is maintained as dlayer increases. The VF PCN layer can be used as a Guidance VF or Forbidden Region VF. For a Guidance VF, normal vectors are determined by inverting each $P_{surf}(i) \cdot \hat{n}$ to face toward the task surface (Eq. (3)). Utilizing Guidance VFs generated from PCN is a previously uninvestigated approach due to complexities related to data storage, visualization, and utilization. For a Forbidden Region VF, the PCN layer is converted into a polygonal mesh, or an exclusion volume is placed around each point, similar to previous approaches [41,42]. This process allows VF PCN layers to define a Forbidden Region VF offset from a complex geometry (Fig. 7, left). This implementation is particularly useful for noncontact tasks where dmin is small. Forbidden Region VFs also handle cases where dmax is greater than an interior space by removing such regions altogether. As previously mentioned, VF PCN layer generation can result in nonmanifold surfaces, so Forbidden Region VF conversion is only recommended when the ratio of task surface feature size to layer distance is high.

Fig. 7

### 3.4 Task Virtual Fixture Bidirectional Graph Storage.

Locations in a global coordinate frame (or poses) in a Guidance VF layer must be traversable in the sense that a robot or other agent must be able to move from one to another. We note that a pose generally includes both a position and an orientation. Simple commands for relative motion concepts (“up,” “right,” etc.) are insufficient to guide a robot or other agent. Thus, the information is stored in a bidirectional graph structure, which also enables graph search tools such as Breadth First, Depth First, and Uniform Cost Search, in addition to Dijkstra’s Shortest Paths and other path algorithms (Fig. 7). Task VFs include both the Forbidden Region VF task surface model and the Guidance VF bidirectional graph, which together enable tasks on complex geometries (Fig. 1, top left).

The Task VF graph is constructed by converting each Guidance VF layer’s PCN into graph vertices. Each vertex contains its unique id, Guidance VF layer, and pose in the Task VF frame, but could include other data such as curvature or local neighborhood information. Vertices are linked together using an adjacency list. Edge weights record Cartesian distance information but could employ a weighting of multiple metrics, such as Cartesian and angular distances. Vertices are linked to all others in a Guidance VF layer but only to the Cartesian-closest vertex in neighboring Guidance VF layers (Fig. 7). Linking between Guidance VF layers can result in more than one interlayer edge; for example, in Fig. 6, converging $\hat{n}$ would cause multiple vertices in the blue layer to link to the same vertex in the green layer, or vice versa. These linking methods simplify visualization, operator intralayer and interlayer navigation, and future automated path planning.
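The linking rules can be sketched as below, with poses reduced to positions and Euclidean edge weights for brevity (all names are hypothetical; the authors' vertices also carry orientation and layer metadata):

```python
import math
from collections import defaultdict

def build_task_vf_graph(layers):
    """Bidirectional Task VF graph sketch: vertices are fully connected
    within a layer; each vertex also links to its Cartesian-nearest
    neighbor in the next layer. `layers` is a list of point lists."""
    verts = []               # vertex id -> (layer index, position)
    adj = defaultdict(dict)  # adjacency list: id -> {neighbor id: weight}
    ids = []
    for li, layer in enumerate(layers):
        row = []
        for p in layer:
            vid = len(verts)
            verts.append((li, p))
            row.append(vid)
        ids.append(row)

    def link(a, b):
        w = math.dist(verts[a][1], verts[b][1])
        adj[a][b] = w
        adj[b][a] = w            # bidirectional edge

    for row in ids:              # intralayer: complete graph
        for i, a in enumerate(row):
            for b in row[i + 1:]:
                link(a, b)
    for li in range(len(ids) - 1):   # interlayer: nearest vertex only
        for a in ids[li]:
            b = min(ids[li + 1],
                    key=lambda v: math.dist(verts[a][1], verts[v][1]))
            link(a, b)
    return verts, adj
```

Because the structure is an ordinary weighted adjacency list, standard BFS, Uniform Cost Search, or Dijkstra routines can be run on `adj` without modification.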

## 4 Manipulator to Task Transform Tool

Once the Task VF has been generated as described earlier, the intention of conceptually directional actions (“Left,” “Up,” “Out,” etc.) is ambiguous. Even for a simple spherical model, commanding motion over the “top” (itself an ambiguous notion) yields different relative action concepts and a different final viewpoint than moving around the “side.” Therefore, to visualize and test operator interpretability of the Task VF generation pipeline output, a new control interface was constructed. Its design followed an analysis of 8 of the 15 DARPA Robotics Challenge Trial teams, which concluded that integrating sensor/input windows and more autonomy led to better performance [43]. Our interface uses ROS’s 3D visualization environment, RViz. Separate directional buttons [29] were replaced with interactive markers in the same RViz window presenting sensor and VF data. A single interface displays the task Forbidden Region VF surface and the Guidance VF layers and allows graph navigation. The interface maintains and displays the operator’s current Task VF vertex and neighboring vertices, including those in other Guidance VF layers.

The task surface and every Task VF vertex are represented as interactive markers. This allows a user to move the task frame (red: X, green: Y, blue: Z) around the virtual workspace (Fig. 8, bottom right) using the arrows. As the user moves the task surface around the workspace, they can also interact with specific Task VF vertices, shown as the colored ellipsoid markers in Fig. 8. Each Task VF interactive marker includes a list of menu-accessible options:

• Make Current Pose: Make the selected Task VF vertex the current Task VF vertex and update the nearest neighbor graph and interlayer linking markers.

• Update Pose: Update the Task VF vertex information with the current interactive marker pose.

• Move to Pose: Plan and execute EEF motion to the current interactive marker pose.

• Add Pose to Path: Add the current interactive marker vertex to a graph for path motion.

• Remove Pose From Path: Remove the current interactive marker vertex from a graph for path motion.

• Test Path: Pass the path graph to Descartes planning and execution package [44].

• Clear Path: Clear the path graph.

Fig. 8

The user’s current Task VF vertex is displayed as a white marker, which changes when the operator clicks Make current pose. All Task VF interactive markers have a fixed location in the task frame but can be rotated around their $\hat{n}$ (Fig. 8, white circle bottom left, bottom middle). The default orientation is updated by clicking Update pose in the menu. Neighboring vertices representing interlayer links are displayed with blue markers, and moving between layers updates the Guidance VF layer visualization (Fig. 8).

The developed control and visualization interface was first considered for assistance with spatially discrete noncontact tasks such as visual inspection. Therefore, the first required control was EEF movement to the desired Task VF pose. When the operator commands a Move to pose, the motion will be successful if the pose has an IK solution and the manipulator is unobstructed in the planning scene. The tedium of manually testing poses led to the integration of reachability analysis into the Task VF visualization interface. Reachability analysis utilizes well-established IK analytical tools available as part of MoveIt! [45,46] to verify a collision-free motion plan from the current location to the Task VF pose. Task VF vertices are colored green (reachable) or red (not reachable), as shown in Fig. 8, for a given robot or other kinematically constrained device. The interface also aggregates reachability analysis information and displays the number and percentage of reachable Task VF vertices along with the current task frame pose (Fig. 8, top). Each visualization is controlled through the RViz window (Fig. 8, top left side). These simple but necessary analytical tools are encapsulated in the manipulator to task transform tool (MTTT), allowing operators to evaluate possible task locations quickly. The MTTT is independent of the Task VF generation pipeline and, therefore, can utilize a bidirectional graph of poses from any source.

## 5 Task Virtual Fixture Input Evaluation

Task VF generation pipeline inputs include surface models from multiple sources, including parametric surfaces, meshes, and point cloud data. Thus, a broad range of input data sets must be evaluated to verify that the VF pipeline is robust for general, real-world, complex geometries. Therefore, a large variety of each input data type was tested to confirm that incorrect surface $\hat{n}$ are identified, VF layers are generated, resolutions are reasonable, and the Forbidden Region VF and Guidance VF layers are successfully converted into a Task VF graph.

### 5.1 Parametric Surface Task Virtual Fixture Generation.

Superellipsoids (Eq. (4)) and supertoroids (Eq. (5)) are mathematically generated parametric surfaces with multiple continuous parameters, Nxy, Nz ∈ [0, ∞), producing a multi-manifold continuum of 3D objects (Figs. 9 and 10). They provide a generalizable, repeatable data set for testing and validation, inclusive of edge cases. The constant parameters are set equal to one, rx, ry, rz, r0, r1 = 1, resulting in models approximately 2 m across. The superellipsoids and supertoroids were generated in the parameter space 0 ≤ Nz, Nxy ≤ 4 at intervals of 2, providing nine of each super solid for Task VF generation pipeline evaluation. Model STLs (Figs. 9 and 10) were created using VTK [39].
$x = r_x * \cos^{N_z}\theta * \cos^{N_{xy}}\beta$
$y = r_y * \cos^{N_z}\theta * \sin^{N_{xy}}\beta$
$z = r_z * \sin^{N_z}\theta$
$-\pi/2 \le \theta \le \pi/2, \quad -\pi \le \beta \le \pi, \quad 0 \le N_{xy}, N_z < \infty$
(4)
$x = \cos^{N_{xy}}\theta * (r_0 + r_1 * \cos^{N_z}\phi)$
$y = \sin^{N_{xy}}\theta * (r_0 + r_1 * \cos^{N_z}\phi)$
$z = \sin^{N_z}\phi$
$0 \le \theta \le 2\pi, \quad 0 \le \phi \le 2\pi, \quad 0 \le N_{xy}, N_z < \infty$
(5)
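A sketch of the superellipsoid point of Eq. (4) in Python, using the sign-preserving power convention common for superquadrics (an assumption here; the paper's models were generated with VTK, and the function name is hypothetical):

```python
import math

def signed_pow(x, n):
    # sign-preserving power, the usual convention for superquadric exponents
    return math.copysign(abs(x) ** n, x)

def superellipsoid_point(theta, beta, n_xy, n_z, rx=1.0, ry=1.0, rz=1.0):
    """One surface point of Eq. (4); sweeping theta over [-pi/2, pi/2]
    and beta over [-pi, pi] traces the full surface."""
    x = rx * signed_pow(math.cos(theta), n_z) * signed_pow(math.cos(beta), n_xy)
    y = ry * signed_pow(math.cos(theta), n_z) * signed_pow(math.sin(beta), n_xy)
    z = rz * signed_pow(math.sin(theta), n_z)
    return (x, y, z)
```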
Fig. 9
Fig. 10

The Task VF generation pipeline mesh $\hat{n}$ and point $\hat{n}$ thresholds were set to $\varepsilon_1$ = 1e−5 and $\varepsilon_2$ = 1e−8, respectively. At these thresholds, a bowl model [47], known to be nonmanifold, warned the operator that 100% of points failed the threshold check (Algorithm 1). None of the superellipsoids generated operator warnings, as expected. Seven of the eight nonmanifold supertoroids generated operator warnings to evaluate the mesh. The eighth model was nonmanifold but symmetric, so it passed the test. Thus, the supertoroid surfaces should be reconstructed to contain correct surface information before being used as Task VF generation pipeline input, but testing did validate the check for inverted $\hat{n}$. The pipeline does not (and should not) correct ill-formed inputs, but it correctly informs users when they are present.

Previously, Task VF generation parameters were set to dmin = 0.05 m, dmax = 1.05 m, dintra = 0.5 m, and dinter = 0.5 m to provide three Guidance VF layers. Repeating the Task VF tests provides a comparison with previous pipeline algorithms. VF layer calculations reinforce the expectation that point cloud growth is proportional to surface concavity (Fig. 11). Convex models, Nxy = Nz = 0 (Fig. 9, top left), show significant growth in the number of vertices with increasing distance, but the concave models, Nxy = Nz = 4 (Fig. 9, bottom left), show less growth (Fig. 11). Supertoroids exhibit similar trends but are more erratic due to inverted $\hat{n}$ (Fig. 11).

Fig. 11

Checking the selected superellipsoid and supertoroid VF layer resolutions provides several important observations (Fig. 12). Superellipsoid resolution varies between models but is maintained between VF layers. The erratic supertoroid results highlight the effect of inverted $\hat{n}$. Superellipsoid and supertoroid VF layer sizes are smaller than previous results [14], but layer resolution is maintained.

Fig. 12
Task VF graph structure construction exhibits high connectivity within VF layers and sparse connectivity between VF layers. As with the previous approach evaluations, examining intralayer and interlayer edge growth displayed the expected data trends. The number of intralayer edges (Fig. 13) increases with the size of the VF layer (Eq. (6)). Interlayer edges increase more slowly and are approximately equal in number to the size of the larger of the two linked VF layers (Fig. 14). Thus, the generated VFs are intuitively maintained at reasonable resolutions for properly formed input data.
$E_{\mathrm{intralayer}} = \left(N_{\mathrm{layer\,vertices}}^2 - N_{\mathrm{layer\,vertices}}\right)/2$
(6)
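Eq. (6) is simply the complete-graph edge count for a fully connected layer; a minimal sketch of the resulting quadratic intralayer growth (the function name is an assumption):

```python
def intralayer_edges(n_vertices: int) -> int:
    """Edges in a fully connected VF layer, Eq. (6): (N^2 - N) / 2."""
    return (n_vertices * n_vertices - n_vertices) // 2

# Intralayer edges grow quadratically with layer size, which is why
# interlayer connectivity (roughly linear in layer size) stays sparse
# by comparison.
for n in (10, 100, 1000):
    print(n, intralayer_edges(n))
# 10 45
# 100 4950
# 1000 499500
```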
Fig. 13
Fig. 14

### 5.2 Polygonal Mesh Task Virtual Fixture Generation.

Crowdsource-selected complex polygonal meshes from Thingiverse [34] provided additional Task VF generation pipeline input for evaluation. Eleven individuals each provided two to four models (32 total). Seven were eliminated as unsuitable because they contained multiple meshes or were impractically large. While Task VF generation completes for large STL files, the resulting graph structure computationally strained MTTT's RViz environment. The remaining 25 models (Table 1) represent a large variety of items, including cable holders (70549), desk figurines (906951), tools (1187995), and phone stands (2120591).

Table 1

Thingiverse (www.thingiverse.com) model numbers

| 17314 | 38840 | 70549 | 906951 | 908062 |
|---|---|---|---|---|
| 1014845 | 1187995 | 1677784 | 2120591 | 3101067 |
| 3106129 | 3108035 | 3108554 | 3119494 | 3110862 |
| 3114718 | 3118241 | 3118847 | 3118855 | 3119665 |
| 3119580 | 3119670 | 3119735 | 3119802 | 3119803 |

Point and mesh $\hat{n}$ thresholds were set to ɛ1 = 1e−5 and ɛ2 = 1e−8, respectively, for mesh testing. Only one of the 25 models (3119803) was flagged by the input data checks. Visual inspection found inverted $T(i)\cdot\hat{n}$ on two holes (Fig. 15), demonstrating the effectiveness of this straightforward technique for finding mesh inconsistencies.

Fig. 15
Developing a general formula for assigning task parameters allowed comparison over a set of meshes of varying complexity (units, physical size, data size, etc.). Task parameters were based on the original STL average triangle side length, $\bar{T}(i)_{side}$, including the voxelization distance, dvoxel, and the resolution evaluation distance, rres (Eq. (7)). Another approach is to base task parameters on the largest model dimension, but doing so does not account for large differences between model features and overall dimensions.
$d_{intra} = d_{inter} = d_{min} = 10\,\bar{T}(i)_{side},\quad d_{max} = 100\,\bar{T}(i)_{side},\quad d_{voxel} = d_{intra},\quad r_{res} = 3\,d_{intra}$
(7)
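A sketch of deriving the Eq. (7) parameters from a mesh's average triangle side length; the array layout and variable names are assumptions for illustration, not the pipeline's interface.

```python
import numpy as np

def task_parameters(vertices, triangles):
    """Derive Task VF generation parameters from the mean triangle
    side length of an input mesh, following Eq. (7).

    `vertices` is a (V, 3) array; `triangles` is a (T, 3) array of
    vertex indices. Illustrative sketch only.
    """
    tri = vertices[triangles]                          # (T, 3, 3)
    # Each triangle's three side lengths: p0-p2, p1-p0, p2-p1.
    sides = np.linalg.norm(tri - np.roll(tri, 1, axis=1), axis=2)
    t_side = float(sides.mean())                       # average side length
    d_intra = 10.0 * t_side
    return {
        "d_intra": d_intra,
        "d_inter": d_intra,
        "d_min": d_intra,
        "d_max": 100.0 * t_side,
        "d_voxel": d_intra,
        "r_res": 3.0 * d_intra,
    }

# Unit right triangle in the XY plane as a one-triangle "mesh":
# side lengths 1, 1, sqrt(2), so t_side = (2 + sqrt(2)) / 3.
v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
t = np.array([[0, 1, 2]])
params = task_parameters(v, t)
print(params["d_intra"])  # 10 * (2 + sqrt(2)) / 3, about 11.38
```

Tying every distance to $\bar{T}(i)_{side}$ keeps the parameters scale-free across meshes with different units and feature sizes, which is the point of the generalization.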

These parameters were tested on the parametric surfaces, and three variations ($d_{intra}=10\,\bar{T}(i)_{side}, 15\,\bar{T}(i)_{side}, 20\,\bar{T}(i)_{side}$) were applied to the polygonal mesh testing pool, shown as the red, green, and blue point clouds in Fig. 18. Average VF layer size grew at roughly an N = C · dist² rate, where N is the number of points and C is a constant (Fig. 16). This result was expected since spherical surface area, 4πr², grows at an r² rate. VF layer resolution evaluation used the same parameters. The results show a correlation between superellipsoid and polygonal mesh VF layer growth and resolution, suggesting that estimating task parameters based on $\bar{T}(i)_{side}$ is an effective approach. Average VF layer resolution does slowly decrease for all input types (Fig. 17). Resolution decay at high dlayer-to-model-size ratios could lead to cases where dintra is unattainable. In such cases, the authors recommend higher interpolation of the original task surface over multiple interpolation and voxelization iterations during the generation of a VF layer.
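The quadratic trend can be motivated with a back-of-the-envelope sphere-tiling estimate; this is an order-of-magnitude sketch of the intuition, not the pipeline's actual pose count.

```python
import numpy as np

def expected_layer_size(r, d_intra):
    """Poses needed to tile a sphere of radius r at spacing d_intra:
    surface area divided by the area claimed by each pose.
    Order-of-magnitude sketch only."""
    return 4.0 * np.pi * r ** 2 / d_intra ** 2

# If resolution (pose spacing) is held fixed, layer size scales with
# the square of the offset distance: N ~ C * dist**2.
sizes = [expected_layer_size(r, 0.5) for r in (0.5, 1.0, 2.0)]
ratios = [sizes[1] / sizes[0], sizes[2] / sizes[1]]
print(ratios)  # doubling the distance quadruples the pose count
```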

Fig. 16
Fig. 17
Fig. 18

Four models were randomly selected for additional investigation (Fig. 18). Visual inspection of the first two Guidance VF layers showed the PCNs oriented back toward the surface at regular intervals. The first Guidance VF layers are sparse, suggesting that a lower dintra might be necessary for task completion. The generalized, model-based parameters used in this evaluation are easily adjusted to task-specific Task VF generation parameters.

### 5.3 Point Cloud Task Virtual Fixture Generation.

3D LIDAR data were gathered in a mock-up of a nuclear facility's exhaust tunnel constructed at UT Austin (Fig. 19, top), where a mobile manipulator must periodically perform visual and radiation surveys [48]. A section of the data around the large pipe was segmented for evaluation. Testing parameters varied from dmin = 0.05 m to dmax = 1.05 m with dintra = 0.1 m and 0.5 m. Comparing results to the superellipsoid and supertoroid calculations shows slower VF layer growth when dintra = 0.5 m due to the extension of a partial surface instead of an enclosed 3D object (Fig. 20). Decreasing dintra raises the layer size above superellipsoid and supertoroid levels.

Fig. 19
Fig. 20

Virtually inserting a Yaskawa SIA20 into the MTTT RViz interface provides reachability information and the ability to move the manipulator tool among the generated Task VFs (Fig. 19, bottom). This combination demonstrates the applicability of the Task VF generation pipeline to real-time sensor data for open-world scenarios.

## 6 Task Virtual Fixture Operator Evaluation

The proposed Task VFs can be generated for virtually any surface but are inherently more complex than previous VFs. So even though Task VFs are automatically generated and more broadly applicable, the additional complexity may increase operator mental load instead of decreasing it. Thus, a preliminary experimental evaluation was completed to determine operator interpretability and MTTT usefulness. After some initial usability reviews, an experiment was set up following some of the principles outlined by Nielsen and Landauer [30]. Their work presents the idea that most usability problems can be found with a relatively small number of users, approximately five, and that the more users tested, the less is learned from each. However, since the goal of the MTTT interface for Task VFs is to be usable by operators with diverse experience, from none to expert, a goal of ten users was identified.

### 6.1 Experimental Virtual Workspace.

The experiment was performed in a virtual workspace consisting of the ROS and RViz interfaces for a Yaskawa SIA20 manipulator with an IPG Photonics Compact Cutting Head [49] laser cutter (Fig. 21). This interface was used in previous work along this research thread, which investigated pairing volumetric primitive VFs with laser cutting [29]. To keep reachability analysis calculations below 10 s, Guidance VF layers were limited to only ∼50 poses. Operator Task VF evaluation surfaces included a subset of the superellipsoids (Fig. 9) and a Thingiverse [34] model of a stone monolith (908062), as shown in Fig. 21.

Fig. 21

### 6.2 Experimental Procedure.

Volunteer subjects2 rated their experience with robotic manipulators on a scale of 1 (no robotics experience) to 10 (robotics expert). A 1–10 scale assured sufficient fidelity in the presence of floor/ceiling effects [50]. Subjects were shown the MTTT interface (Fig. 21) and read a script describing the interface, testing procedure, and recorded data. Their objective was to maximize the percentage of reachable Task VF points for several models in a virtual robot environment. Users explored the MTTT interface for 5 min to familiarize themselves with its functionality using the Nz = 4, Nxy = 4 superellipsoid model (Fig. 9, bottom right). Task VF pose reachability results were available during this training/exploration, and thus the model was excluded from evaluated trials.

Users were divided into tracks T1 and T2, where each track included four models and four trials per model. The first three superellipsoids were randomized, but the monolith model was always last. For the first and second trials, T1 users were provided with the task model and asked to place it in a location where they thought the robot would be able to reach the largest portion of the task surface. They were provided aggregated reachability data at the task surface location (Fig. 8, top, white writing) as feedback without being able to examine individual vertex reachability (Fig. 21). Users were then asked to choose a location with higher reachability based on the provided feedback. Once the first two trials were completed, users were asked to complete a Likert scale questionnaire. For the third and fourth trials, users were allowed to examine individual Task VF vertex reachability in addition to the aggregated data (Fig. 21). After completing the third and fourth trials, users completed another Likert scale questionnaire.

In contrast, T2 users examined individual Task VF vertex reachability and aggregated data during all four trials with each of the four models (Fig. 8, top). Users filled out the same questionnaire as above after trials 2 and 4. After the fourth trial on the stone monolith, users in T1 and T2 could search for locations with higher reachability. For this test, Task VF pose reachability feedback was left active during object motion, providing rapid feedback to the user and increasing the search rate.

### 6.3 Experimental Results.

The user group consisted of 11 participants, one volunteer above the goal of ten, all of whom were students at the University of Texas at Austin at varying undergraduate and graduate educational levels. Users possessed sufficiently varied levels of robot experience to maintain test diversity when distributed between tracks T1 (six users) and T2 (five users) (Table 2). All but one user opted to try immediate feedback with the monolith, resulting in 185 trials.

Table 2

User robotic manipulator experience levels and track

| Experience | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| T1 users | 1 | 1 | 0 | 1 | 1 | 1 | 0 | 1 | 0 | 0 |
| T2 users | 1 | 0 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 |

Each participant evaluated four models using the procedure detailed above. T1 and T2 results for four trials with each of the four models were averaged. The results show T1 users took longer to choose test locations even when Task VF vertex reachability was provided in trials 3 and 4 (Fig. 22, left scale). T2 users also achieved higher reachability even during the final trial with the monolith model (Fig. 22, right scale).

Fig. 22

Likert scale questionnaire data were also aggregated (Fig. 23). The results show users thought the task was more difficult and frustrating without Task VF vertex reachability information for complex shapes. Usability ratings for the T1 and T2 user interfaces were almost the same, suggesting the additional Task VF information was interpretable, which was one of the main motivations for this user study. Users also thought they were more likely to improve with practice and successfully complete the task in track T2 compared to T1. The last inquiry results demonstrate a greater placement certainty when Task VF vertex reachability information was provided from the beginning.

Fig. 23

## 7 Conclusions and Future Work

Ongoing and future work includes upgrading the Task VF generation pipeline with curvature-aware PCN interpolation advancements, possible integration of surface region clustering, and continued user evaluation. Clustering surface regions could reduce intralayer connections and provide subgraphs, which would increase the applicability of Dijkstra's shortest paths and other algorithms to improve execution efficiency. Future evaluations will include spatially continuous tasks, such as laser cutting or painting, and integration with modern trajectory planners to generate smooth trajectories. Further user testing is planned with the MTTT on additional tasks and also outside of ROS with industrial manipulator interfaces.

## Footnote

2. UT Austin IRB 2018-06-0092.

## Acknowledgment

This material is based upon work supported by Los Alamos National Laboratory and the University of Texas at Austin.

## Conflict of Interest

There are no conflicts of interest.

## Data Availability Statement

The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. The authors attest that all data for this study are included in the paper.

## References

1. United States Nuclear Regulatory Commission, 2018, "Code of Federal Regulations," https://www.nrc.gov/reading-rm/doc-collections/cfr/part020/part020-1003.html, Accessed January 12, 2018.
2. Rosenberg, L., 1993, "Virtual Fixtures: Perceptual Tools for Telerobotic Manipulation," Virtual Reality Annual International Symposium, Seattle, WA, Sept. 18–22, IEEE, pp. 76–82.
3. Abbott, J. J., Marayong, P., and Okamura, A. M., 2007, "Haptic Virtual Fixtures for Robot-Assisted Manipulation," Results of the 12th International Symposium ISRR, Robotics Research 2007, Springer, Berlin/Heidelberg, pp. 49–64.
4. Hebert, P., Bajracharya, M., Ma, J., Hudson, N., Aydemir, A., Reid, J., Bergh, C., Borders, J., Frost, M., Hagman, M., Leichty, J., Backes, P., Kennedy, B., Karplus, P., Satzinger, B., Byl, K., Shankar, K., and Burdick, J., 2015, "Mobile Manipulation and Mobility as Manipulation-Design and Algorithms of RoboSimian," J. Field Rob., 32(2), pp. 255–274.
5. Ciocarlie, M., Hsiao, K., Leeper, A., and Gossow, D., 2012, "Mobile Manipulation Through an Assistive Home Robot," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Vilamoura, Algarve, Portugal, Oct. 7–12, IEEE, pp. 5313–5320.
6. Ren, J., Patel, R., McIsaac, K., Guiraudon, G., and Peters, T., 2008, "Dynamic 3-D Virtual Fixtures for Minimally Invasive Beating Heart Procedures," IEEE Trans. Med. Imaging, 27(8), pp. 1061–1070.
7. Li, M., Ishii, M., and Taylor, R., 2007, "Spatial Motion Constraints Using Virtual Fixtures Generated by Anatomy," IEEE Trans. Robot., 23(1), pp. 4–19.
8. Turner, C. J., Harden, T. A., and Lloyd, J. A., 2009, "Robotics in Nuclear Materials Processing at LANL: Capabilities and Needs," ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, American Society of Mechanical Engineers, pp. 701–710.
9. House of Commons Committee of Public Accounts, 2013, "UK Parliament Select Committee Report (2013): Nuclear Decommissioning Authority: Managing Risks at Sellafield Ltd," https://publications.parliament.uk/pa/cm201213/cmselect/cmpubacc/746/746.pdf, Accessed January 9, 2018.
10. Khan, A., and Hilton, P., 2013, "Fibre Delivered Laser Beams—An Alternative Cost Effective Decommissioning Technology."
11. Hilton, P., and Khan, A., 2014, "New Developments in Laser Cutting for Nuclear Decommissioning," WM2014 Conference, Phoenix, AZ, Mar. 2, p. 4.
12. Sharp, A., and Pryor, M., 2016, "Variable Normal Surface Virtual Fixtures (VNSVF) for Semi-Autonomous Task Completion," ANS Decommissioning and Remote Systems (D&RS) Joint Topical Meeting, Pittsburgh, PA, July 31, American Nuclear Society.
13. Sharp, A., Kruusamae, K., Ebersole, B., and Pryor, M., 2017, "Semiautonomous Dual-Arm Mobile Manipulator System With Intuitive Supervisory User Interfaces," 2017 IEEE International Workshop on Advanced Robotics and its Social Impacts, Austin, TX, Mar. 6, IEEE, pp. 1–6.
14. Sharp, A., and Pryor, M., 2018, "Data Driven Virtual Fixtures for Improved Shared Control," 2018 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Aug. 26–29, ASME, Vol. 51814, p. V05BT07A028.
15. Botsch, M., Kobbelt, L., Pauly, M., Alliez, P., and Levy, B., 2010, Polygon Mesh Processing, AK Peters/CRC Press, Cleveland, OH.
16. Ma, J., Feng, H.-Y., and Wang, L., 2013, "Normal Vector Estimation for Point Clouds via Local Delaunay Triangle Mesh Matching," Computer Aided Design Appl., 10(3), pp. 399–411.
17. Zhihong, M., Guo, C., Yanzhao, M., and Lee, K., 2011, "Curvature Estimation for Meshes Based on Vertex Normal Triangles," Computer Aided Design, 43(12), pp. 1561–1566.
18. Rusu, R. B., 2009, "Semantic 3D Object Maps for Everyday Manipulation in Human Living Environments," Ph.D. thesis, Computer Science Department, Technische Universitaet, Muenchen, Germany, p. 10.
19. , A., and Khalili, K., 2014, "Umbrella Curvature: A New Curvature Estimation Method for Point Clouds," Proc. Technol., 12, pp. 347–352.
20. , S., Kim, D., Hilliges, O., Molyneaux, D., Newcombe, R., Kohli, P., Shotton, J., Hodges, S., Freeman, D., Davison, A., and Fitzgibbon, A., 2011, "KinectFusion: Real-Time 3D Reconstruction and Interaction Using a Moving Depth Camera," Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, Santa Barbara, CA, Oct. 16, pp. 559–568.
21. Dai, A., Nießner, M., Zollhöfer, M., , S., and Theobalt, C., 2017, "BundleFusion: Real-Time Globally Consistent 3D Reconstruction Using On-the-Fly Surface Reintegration," ACM Trans. Graphics (ToG), 36(4), p. 1.
22. Huang, J., Dai, A., Guibas, L. J., and Nießner, M., 2017, "3DLite: Towards Commodity 3D Scanning for Content Creation," ACM Trans. Graph., 36(6), p. 203:1.
23. Bowyer, S. A., Davies, B. L., and Rodriguez y Baena, F., 2014, "Active Constraints/Virtual Fixtures: A Survey," IEEE Trans. Rob., 30(1), pp. 138–157.
24. DeJong, B. P., Faulring, E. L., Colgate, J. E., Peshkin, M. A., Kang, H., Park, Y. S., and Ewing, T. F., 2006, "Lessons Learned From a Novel Teleoperation Testbed," Industrial Robot Int. J.
25. Harden, T., and Pittman, P., 2008, "Development of a Robotic System to Clean Out Spherical Dynamic Experiment Containment Vessels," American Nuclear Society EP&R and RR&S Topical Meeting, Albuquerque, NM, March, pp. 358–364.
26. Kruusamae, K., and Pryor, M., 2016, "High-Precision Telerobot With Human-Centered Variable Perspective and Scalable Gestural Interface," 2016 9th International Conference on Human System Interactions (HSI), Portsmouth, UK, July 6, IEEE, pp. 190–196.
27. Quigley, M., Gerkey, B., and Smart, W. D., 2015, Programming Robots With ROS: A Practical Introduction to the Robot Operating System, O'Reilly Media, Inc., Newton, MA.
28. Bi, Z., and Lang, S. Y., 2007, "A Framework for CAD- and Sensor-Based Robotic Coating Automation," IEEE Trans. Indus. Inform., 3(1), pp. 84–91.
29. Sharp, A., Petlowany, C., and Pryor, M., 2018, "Virtual Fixture Augmentation of Operator Selection of Non-Contact Material Reduction Task Paths," 2018 International Conference on Nuclear Engineering, London, UK, ASME, Vol. 51531, p. V009T16A083.
30. Nielsen, J., and Landauer, T. K., 1993, "A Mathematical Model of the Finding of Usability Problems," Proceedings of the INTERACT'93 and CHI'93 Conference on Human Factors in Computing Systems, Amsterdam, The Netherlands, May 1, pp. 206–213.
31. PTC, 2017, "CREO," http://www.ptc.com/cad/creo, Accessed March 2, 2017.
32. SOLIDWORKS, 2017, "Solidworks," http://www.solidworks.com/, Accessed December 26, 2017.
33. Innvometric, 2018, "PolyWorks," https://www.innovmetric.com/en, Accessed March 9, 2018.
34. Thingiverse, 2018, "Thingiverse," https://www.thingiverse.com/, Accessed September 12, 2018.
35. SketchUp, 2017, "SketchUp 3D Warehouse," https://3dwarehouse.sketchup.com/, Accessed March 2, 2017.
36. MeshLab, 2018, "MeshLab," http://www.meshlab.net/, Accessed October 3, 2018.
37. Aspert, N., Ebrahimi, T., and Vandergheynst, P., 2003, "Non-Linear Subdivision Using Local Spherical Coordinates," Comput. Aided Geometric Design, 20(3), pp. 165–187, EPFL-ARTICLE-86961.
38. Schaefer, S., Vouga, E., and Goldman, R., 2008, "Nonlinear Subdivision Through Nonlinear Averaging," Comput. Aided Geometric Design, 25(3), pp. 162–180.
39. The Visualization Toolkit, 2018, "The Visualization Toolkit," https://www.vtk.org/, Accessed March 6, 2018.
40. SketchUp, 2017, "SketchUp," https://sketchup.com/, Accessed March 2, 2017.
41. Yamamoto, T., Abolhassani, N., Jung, S., Okamura, A. M., and Judkins, T. N., 2012, "Augmented Reality and Haptic Interfaces for Robot-Assisted Surgery," Int. J. Med. Rob. Computer Assisted Surgery, 8(1), pp. 45–56.
42. Kosari, S. N., Rydén, F., Lendvay, T. S., Hannaford, B., and Chizeck, H. J., 2014, "Forbidden Region Virtual Fixtures From Streaming Point Clouds," 28(22), pp. 1507–1518.
43. Yanco, H. A., Norton, A., Ober, W., Shane, D., Skinner, A., and Vice, J., 2015, "Analysis of Human-Robot Interaction at the DARPA Robotics Challenge Trials," J. Field Rob., 32(3), pp. 420–444.
44. 2015, "Descartes Package Summary," http://wiki.ros.org/descartes, Accessed July 17, 2015.
45. Sucan, I., Moll, M., and Kavraki, L., 2012, "The Open Motion Planning Library," IEEE Rob. Auto. Magaz., 19(4), pp. 72–82.
46. Sucan, I. A., and Chitta, S., 2017, "MoveIt!," http://moveit.ros.org/, Accessed February 14, 2017.
47. Men in Black, 2018, "Bowl," https://3dwarehouse.sketchup.com/model/6dc5e034-c223-4b84-9c13-96511f451665/bowl, Accessed October 3, 2018.
48. Pryor, M., and Landsberger, S., 2017, "Mobile Manipulation and Survey System for H-Canyon and Other Applications Across the DOE Complex," WM2017 Conference, Phoenix, AZ, Mar. 5–9, pp. 1–15.
49. Photonics, I., 2018, http://www.ipgphotonics.com/en/162/Widget/FLC+30+Cutting+Head+Brochure.pdf, Accessed January 10, 2018.
50. Passmore, C., Parchman, M., and Tysinger, J., 2002, "Guidelines for Constructing a Survey," Family Med., 34(4), pp. 281–286.