Freehand sketching is an integral part of the early design process. Recent years have seen increased interest in supporting sketching in computer-based design systems. In this paper, we present finite element analysis made easy (FEAsy), a naturalistic environment for static finite element analysis. This tool allows users to transform, simulate, and analyze their finite element models quickly and easily through freehand sketching. A major challenge here is to beautify freehand sketches, and to this end, we present a domain-independent, multistroke, multiprimitive method which automatically detects and uses the spatial relationships implied in the sketches for beautification. Further, we have also developed a domain-specific rule-based algorithm for recognizing commonly used symbols in finite element analysis (FEA) and a method for identifying different contexts in finite element modeling through combined interpretation of text and geometry. The results of the user study suggest that our proposed algorithms are efficient and robust. Pilot users found the interface to be effective and easy to use.

## Introduction

Freehand sketching is an activity that can take place throughout the engineering design process and is a natural, efficient, and convenient way to capture, represent, and communicate design ideas [1,2]. Sketches are particularly useful in early stages of design where their fluidity and ease of construction enable creativity and the rapid exploration of ideas [3,4]. Over the past few years, there has been increased interest in supporting freehand sketching in user interfaces and tools for various applications in diverse domains such as computer-aided design (CAD), simulation, and computer animation. As freehand sketch-based interfaces mimic the pen–paper paradigm of interaction, they may provide a host of advantages over the traditional windows, icons, menus, and pointers (WIMP) style graphical user interfaces (GUIs). Designers can seamlessly and directly interact with the computer with only limited training, whereas menu-based interfaces force users to learn the system's conventions rather than having the system interpret the users' intentions.

In this paper, we describe finite element analysis made easy (FEAsy), a sketch-based interface for static finite element analysis. FEAsy allows users to transform, simulate, and analyze their finite element models quickly and easily through freehand sketching (Fig. 1). A major challenge here is the need for techniques to transform ambiguous freehand strokes of a sketch into usable parametric geometric entities making up a formal diagram. Such a transformation is referred to as “beautification” [5–7]. Most current beautification methods do not consider important information implied in sketches such as the spatial relationships between different primitives in a stroke and between strokes [8]. These spatial relationships are usually represented as geometric constraints (such as parallelism and tangency). Cognitive studies show that users preferentially attend toward certain geometric features while drawing and recognizing shapes [9]. To this end, we present a multistroke, multiprimitive beautification method that identifies these spatial relationships and uses them to drive beautification. We also posit that it is more intuitive to specify loading and boundary conditions through symbols, as shown in Fig. 1, than through traditional menu-based input. Hence, we have developed a domain-specific algorithm for recognizing commonly used symbols in FEA. In addition, we have also developed an algorithm for combined interpretation of geometry and symbols in the sketch to identify different contexts, like loading and boundary conditions, observed in finite element analysis.

We foresee this tool being used in engineering education and early design. It will allow analyses to be conducted even earlier in the design process because it reduces the reliance on preparing formal computer models beforehand. Formal, computational models such as CAD models may require a great deal of preparation and a clear understanding of design details. However, at the preliminary stages of design, hand-generated sketches are often more appropriate to explore a wide range of potential ideas. Our tool would permit formal analysis of ideas that exist as only a hand-drawn sketch. It can be used as a learning tool for undergraduate students, especially in mechanical and civil engineering. The students can use this tool to quickly verify answers to hand-worked problems and, in the preliminary stages of design projects, to evaluate their ideas.

### Contributions.

This paper extends our prior work [10] and makes a number of contributions to research in sketching, interfaces, and analysis in engineering design:

- (1)
a multistroke, multiprimitive beautification method that incorporates automatic constraint detection and solving, to transform the ambiguous freehand input into more structured formal drawings

- (2)
a symbol and text recognition algorithm for the finite element domain

- (3)
an algorithm for combined text and geometry interpretation, based on different contexts

- (4)
a novel interface that integrates freehand sketching, geometry constraint solving, and symbol recognition in a unified framework for structural and thermal finite element analysis

## Related Work

This section provides an overview of the past work in beautification and sketch-based interfaces for varied applications.

### Beautification—Segmentation and Recognition.

Two of the main challenges that have hindered the development of a robust beautification system are: *segmentation*—identification of critical points on the strokes, and *recognition*—classifying the segment between adjacent critical points as low-level geometric primitives (such as lines, circles, and arcs). Much earlier work [5,11–13] assumed that each pen stroke represented a single primitive such as a line segment or a curve in sketches. In spite of its simplicity, this single-primitive-per-stroke strategy usually results in a less natural interaction because of the constraints it imposes on the user's drawing freedom. Other works [14,15] have also utilized predefined templates of higher-order splines to neaten sketch inputs and smoothly combine the segments. By taking advantage of the interactive nature of sketching, several works [16–18] have used the pen-speed and curvature properties of the stroke to determine the critical points. They found that it was natural to slow the pen when making intentional discontinuities in the shape. However, when a user sketches at a constant speed, many segmentation points are missed because this assumption no longer holds. Kim and Kim [19] proposed new curvature-based metrics—local convexity and local monotonicity—for segmentation. Hammond and coworkers [20] introduced an effective method to find corners in polylines. Their method is founded on a simple distance measure between two points on the stroke. They showed higher accuracy than Sezgin et al. [16] and Kim and Kim [19]. Other approaches to segmentation utilized artificial intelligence, such as the template-based approach [21], conic section fitting [3], and domain-specific knowledge [22]. Despite their relative success in sketch segmentation, these approaches depend on various restrictive conditions. For example, a large number of sketch examples are required to train the computer in the methods proposed in Ref. [3]; otherwise, the segmentation performance will be affected. For recognizing the segments, Shpitalni and Lipson [3] and Zhang et al. [23] used a least-squares based method. Sezgin et al. [16] and Wolin et al. [20] compared the Euclidean distance between adjacent critical points with the accumulated arc length of the segment. The ratio of arc length to Euclidean distance is close to 1 for a linear region and significantly higher for a curved region. Xiong and LaViola [24] improved the algorithm described in Ref. [20] to include curves in addition to just lines. However, the algorithm does not recognize corners where a line smoothly (tangentially) transitions into an arc, or where two arcs meet tangentially.

More recently, with the advent of commodity level depth sensors (e.g., MS Kinect^{™}), there has been some research devoted toward segmentation and recognition of freeform 3D strokes drawn in midair using finger-based gestures. For example, Taele and Hammond [25] investigate techniques for developing intelligent interfaces and optimal interaction techniques for surfaceless sketching. Similarly, Babu et al. [26] provide a system for recognizing freehand 3D input strokes and matching them to specific predefined 3D symbols. However, 3D sketching methods are limited as they cannot provide the tactile feedback required for controlled sketch creation and also require advanced display media for colocation of the interaction and modeling spaces. As a result, their use is restricted to simple symbolic inputs and is therefore unsuitable for creating shapes for structural analysis. Wang [27] utilizes stereoscopic display with bimanual interactions through digital motion trackers to enhance sketching in 3D. But their system relies on dedicated hardware that is not commonly available.

### Sketch-Based Interfaces.

The emergence of pen-input devices such as tablet PCs, large electronic whiteboards, and personal digital assistants has led to demand for sketch-based interfaces in diverse applications [6]. Here, we list a few examples of such existing experimental systems. In CAD-based applications such as QuickSketch [28] and SKETCH [29], the user has to draw objects in pieces, i.e., only one primitive at a time, thereby reducing the sense of natural sketching. Arisoy et al. [30] utilize a predictive modeling approach to automatically complete preliminary rough sketches created by users. Our interface also facilitates automatic completion of sketches, but in addition provides a suggestive interface to allow users to explore different possibilities resulting from their input.

Sketch-based interfaces have also been used in early design [31] and in user-interface design [32]. ShadowDraw [33] provides dynamically adaptive suggestions in the background to guide users in creating esthetically pleasing sketches. Similarly, in Juxtapose [34], sketch inputs are used to drive search of 2D images with the intent of making serendipitous discoveries during clipart composition. In contrast, our system provides dynamically updating suggestions to guide parametric sketching for engineering applications. Gesture-based systems have been explored in 2D pen-based applications [35,36] where input strokes are converted or replaced with predefined primitives. Other works have also explored creation of 3D wireframe models based on multiview planar sketch inputs [37] or scaffold-based perspective drawings [38]. In contrast, our work is mainly related to creation of 2D geometry for structural analysis. ParSketch [39] is a sketch-based interface for editing 2D parametric geometry. MathPad [40] is a tool for solving mathematical problems. Kara et al. [41] developed a sketch-based system for vibratory mechanical systems. Kirchhoff's Pen [42] is a pen-based tutoring system that teaches students to apply Kirchhoff's voltage law and current law. Hutchinson et al. [43] developed a unified framework for structural analysis. They used an existing freehand sketch recognition interface which is not robust in handling freehand strokes that represent multiple primitives combined together. In addition, open circular arcs and curves are not handled, constraining the variety of input that can be specified as well as the designer's drawing freedom. Moreover, the system does not address the problems related to the ambiguous nature of freehand input.

For symbol recognition, Fonseca and Jorge [44] developed an online scribble recognizer called CALI. The recognition algorithm uses fuzzy logic and geometric features, combined with an extensible set of heuristics, to classify scribbles. Since their classification relies on aggregated features of the pen strokes, it might be difficult to differentiate between similar shapes. Kara and Stahovich [45] described a hand-drawn symbol recognizer based on a multilayer image recognition scheme. Similarly, Johnson et al. [46] enable users to apply standard drafting symbols to define constraints such as equality and perpendicularity, and edit 2D shapes by latching or erasing sketch segments. However, these methods require training, and in the case of Ref. [45] are also sensitive to nonuniform scaling. Veselova and Davis [9] used results from perceptual studies to build a system capable of learning descriptions of hand-drawn symbols which are invariant to rotation and scaling.

## Overview of the Approach

Freehand sketches are usually composed of a series of strokes. A stroke is a set of temporally ordered sampling points captured in a single sequence of pen-down, pen-move, and pen-up events [47]. Sketches can be created in our system using any of a variety of devices that closely mimic the pen–paper paradigm. We use a Wacom Cintiq 21UX digitizer with stylus, tablet PCs, and a traditional mouse. Both Wacom and tablet PCs are particularly suited to natural interaction, enabling the user to sketch directly on a computer display. Figure 2 shows the pipeline through which the input strokes are processed in our system. In FEAsy, the strokes are input in either the “geometry” mode or in the “symbol” mode. Accordingly, the raw input strokes representing geometry are colored in black and those representing symbols are colored in red. Each stroke input in geometry mode is beautified, i.e., decomposed into low-level geometric primitives with minimal error. The system then identifies the spatial relationships between the primitives. These relationships are represented as geometric constraints which are then solved by a geometry constraint solver. The output from the solver is the beautified version of the input which is updated on the screen automatically. The strokes input in symbol mode are processed differently from those in geometry mode. The red-colored strokes are first clustered into stroke groups. Then, the stroke groups are classified as either text or symbol and recognized. Finally, the symbols, text, and geometry are interpreted together for understanding the various contexts in the sketch. The sketch is then ready for finite element problem setup. Sections 4–6 describe each of these steps in detail.

### An Example.

Figure 3 shows a step-by-step process of analyzing a bracket from its freehand sketch. The user starts the geometry creation process by sketching a freehand stroke in the geometry mode as shown in Fig. 3(a). The lower circle is the starting point of the stroke, and the upper circle depicts the end point. The blue arrows show the direction of the stroke. At the completion of the stroke, the system automatically beautifies the input. The beautified output is shown in Fig. 3(b). Next, the user adds a freehand stroke to the sketch. In this case, it is a hole in the bracket (see Fig. 3(c)). The final result after beautification is shown in Fig. 3(d). The geometry creation process for the bracket is now complete. The user then switches modes to insert symbols. Figure 3(e) shows a beautified sketch with input symbols. In the next step, the symbols are recognized and the sketch is interpreted. The output is updated on the screen as shown in Fig. 3(f). Once the sketch is complete and processed, the user specifies the material properties, element description, and meshing parameters for “finite element integration” (Fig. 3(g)), and all the information is exported as a set of commands suitable for import in ansys (Fig. 3(h)). These commands are then run and solved in ansys. Figure 3(i) shows the deformation results of the bracket.

## Beautification

Beautification aims at simplifying the representation of the input where the various points of the strokes are interpreted and represented in a more meaningful manner. Our approach for transforming the input to formalized representations (i.e., beautification) is based on the architecture shown in Fig. 4. There are five steps in the pipeline, namely, resampling, segmentation, recognition, merging, and geometry constraint solving. Figure 4 shows the various steps along with the actual outputs generated in the system. Note, however, that only the final beautified sketch is visible to the user; the intermediate outputs are shown for illustration purposes only. Figure 4(a) shows a user-drawn freehand stroke. This is an example of a single stroke representing multiple primitives connected together. Figure 4(b) shows the raw data points (blue circles) as sampled by the hardware, and Fig. 4(c) illustrates the uniformly spaced points after resampling (green circles). The segmentation step explained in Sec. 4.2 identifies the critical points (red circles) shown in Fig. 4(d). Then, the segments between the adjacent critical points are recognized and fit with primitives (Fig. 4(e)). The status of the freehand sketch after merging is shown in Fig. 4(f). Finally, the sketch is beautified considering the geometric constraints (Fig. 4(g)). The aforementioned steps are explained in detail in Secs. 4.1–4.6. For simplicity, we limit the discussion to a single stroke in a sketch. All the other strokes are processed similarly.

### Stroke Resampling.

The sampling frequency of the mechanical hardware coupled with the drawing speed of the user results in nonuniform samples of the raw freehand input. Evenly spaced points are important for the segmentation algorithm to work efficiently. To achieve uniform sampling, we resample the points of the input stroke such that they are evenly spaced. We used a fixed interspacing distance, *I _{d}*, of 200 HIMETRIC units (1 HIMETRIC = 0.01 mm = 0.0378 pixels). The resampling algorithm discards any sample within *I _{d}* of earlier samples and interpolates between samples that are separated by more than *I _{d}*. The start and end points of the stroke are by default added to the resampled set of points. Figure 4(c) shows the result of resampling for the stroke.
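As an illustration, the resampling step can be sketched as follows (a minimal Python reconstruction, not the system's actual code; the walk-along-the-polyline linear interpolation is our assumption):

```python
import math

def resample(points, spacing=200.0):
    """Resample a stroke so that consecutive points are `spacing` apart.

    `points` is a list of (x, y) samples in HIMETRIC units. Raw samples
    closer than `spacing` to the last kept point are discarded; larger
    gaps are filled by linear interpolation along the stroke. The start
    and end points are always kept.
    """
    if len(points) < 2:
        return list(points)
    out = [points[0]]
    for p in points[1:]:
        while True:
            last = out[-1]
            d = math.hypot(p[0] - last[0], p[1] - last[1])
            if d < spacing:
                break  # too close: discard and move on to the next raw sample
            t = spacing / d  # step exactly `spacing` units toward p
            out.append((last[0] + t * (p[0] - last[0]),
                        last[1] + t * (p[1] - last[1])))
    if out[-1] != points[-1]:
        out.append(points[-1])  # the end point is added by default
    return out
```

Raw samples closer than the interspacing distance simply advance the loop, while a long gap is subdivided one interpolated point at a time.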

### Segmentation.

In our system, a single freehand stroke can represent any number of primitives connected together. The task of the segmentation routine is to find the critical points that divide the stroke into its constituent primitives. These critical points are “corners” of piecewise linear strokes and also the places where line and curve segments (or two curve segments) connect. Our segmentation algorithm builds upon the approach described in Ref. [20], which works well for strokes composed of only line segments. One of the drawbacks of this method is that it often misses corners at heavily obtuse angles. We address this drawback and also extend the algorithm to accommodate curves in addition to line segments. We chose to build on this algorithm because it is simple to implement and highly efficient, while not being computationally intensive. The authors of Ref. [20] describe a measure called the *straw* (chord length), which is in essence a naive representation of the curvature of the stroke. The *straw* at each point $p_i$ is computed as $\mathrm{straw}_i = \lVert p_{i-w}, p_{i+w} \rVert$, where *w* is a constant window and $\lVert p_{i-w}, p_{i+w} \rVert$ is the Euclidean distance between the points $p_{i-w}$ and $p_{i+w}$. As the stroke turns around a corner, the *straw* length starts to decrease, and a local minimum corresponds to a likely critical point. However, when there is smooth continuity between a line and an arc, or between two arcs, the *straw* length does not vary much, and it fails to identify the transition in such regions. Hence, we use another such measure, the *chord angle*, which is effective in identifying these gradual changes in addition to finding the corners. We compute the *chord angle* for the resampled points $p_w$ to $p_{n-w}$, where *n* is the total number of resampled points and *w* is a constant window. The *chord angle* at each point $p_i$ is the angle between the chords joining $p_i$ to $p_{i-w}$ and $p_{i+w}$, computed as

$$\theta_i = \cos^{-1}\left(\frac{(p_{i-w} - p_i)\cdot(p_{i+w} - p_i)}{\lVert p_{i-w} - p_i \rVert \, \lVert p_{i+w} - p_i \rVert}\right) \qquad (1)$$

The likely critical points of the stroke are those indices where the chord angle is a local minimum that is less than a threshold *t*. Figure 5 shows the computation of the chord angle; the blue circles represent the resampled points, and $\theta$ represents the chord angle computed using Eq. (1). To avoid the problem posed by choosing a fixed threshold, we set the threshold equal to the median of all the chord angle values. For the stroke in Fig. 4(a), the initial set of critical points obtained is shown in Fig. 4(d). By default, the start and end points of a stroke are considered critical points. Computing the curvature (chord angle) over a window of uniformly spaced points smooths out any noise in the input stroke; the larger the window, the stronger the smoothing effect and the greater the risk of missed critical points. As in Ref. [17], we found that a window size of *w* = 3 is effective irrespective of the user or the input device used.
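A minimal Python sketch of the chord-angle test (our reconstruction; the angle-between-chords formulation, local-minimum check, and median threshold follow the description above, but details of the system's implementation may differ):

```python
import math
import statistics

def chord_angle(points, i, w=3):
    """Angle (radians) at points[i] between the chords to points[i-w] and points[i+w].

    The angle stays near pi along a straight region and drops at corners,
    so local minima flag likely critical points.
    """
    (px, py), (ax, ay), (bx, by) = points[i], points[i - w], points[i + w]
    v1, v2 = (ax - px, ay - py), (bx - px, by - py)
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n = math.hypot(*v1) * math.hypot(*v2)
    return math.acos(max(-1.0, min(1.0, dot / n)))

def critical_points(points, w=3):
    """Indices whose chord angle is a local minimum below the median angle.

    The start and end points are included by default, and the threshold is
    the median of all chord angles rather than a fixed value.
    """
    idx = range(w, len(points) - w)
    angles = {i: chord_angle(points, i, w) for i in idx}
    t = statistics.median(angles.values())
    crit = [0]
    for i in idx:
        left = angles.get(i - 1, math.pi)   # treat missing neighbors as flat
        right = angles.get(i + 1, math.pi)
        if angles[i] < t and angles[i] <= left and angles[i] <= right:
            crit.append(i)
    crit.append(len(points) - 1)
    return crit
```

On an L-shaped polyline the single corner index is detected, along with the two default end points.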

### Recognition.

The next task after segmentation is to classify and fit the segments between adjacent critical points as low-level geometric primitives. The current implementation of our system recognizes lines, circular arcs, and circles. Our recognition method is based on least-squares analysis [48], but the computation of the parameters of the best-fit line and circular arc differs from the traditional approach. Usually, the least-squares fit of lines and arcs results in the end points of the primitives being moved to new locations, as shown in Fig. 6. These new positions do not coincide with the original critical points of the stroke and hence cause discontinuities between adjacent primitives of the stroke. To prevent such discontinuities, we fix the end points of the primitives to coincide with the original critical points and then perform the analysis. Figure 6 shows the actual result of our recognition algorithm, which has no discontinuities.

#### Fitting a Straight Line.

A segment $S_N$ of *N* points is fitted by a straight line, $y = mx + c$, where *m* and *c* represent the slope and the intercept, respectively. As the end points of the line segment are fixed, the line must pass through them, and the slope and the intercept follow directly from the end points $P_1(x_1, y_1)$ and $P_N(x_N, y_N)$:

$$m = \frac{y_N - y_1}{x_N - x_1}, \qquad c = y_1 - m x_1$$

The fit error is then the least-squares residual of the points with respect to this line.
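A small Python illustration of the constrained line fit (our reconstruction): with both end points fixed, the line is fully determined, and the mean squared perpendicular distance of the points serves as the classification error:

```python
def line_fit_error(points):
    """Error of a line forced through the fixed end points of a segment.

    With P1 and PN fixed, the slope m and intercept c are determined by the
    end points alone; the error is the mean squared perpendicular distance
    of all points from that line.
    """
    (x1, y1), (xn, yn) = points[0], points[-1]
    if abs(xn - x1) < 1e-9:  # vertical line: measure deviation in x instead
        return sum((x - x1) ** 2 for x, _ in points) / len(points)
    m = (yn - y1) / (xn - x1)  # slope from the fixed end points
    c = y1 - m * x1            # intercept from the fixed end points
    # squared perpendicular distance = squared vertical residual / (1 + m^2)
    return sum((y - (m * x + c)) ** 2 for x, y in points) / ((1 + m * m) * len(points))
```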

#### Fitting a Circular Arc.

$S_N$ can also be fitted as a circular arc, $(x-a)^2 + (y-b)^2 = R^2$, where *C*(*a*, *b*) is the center of the arc and *R* is the radius.

As the start and end points of the arc are fixed, the center of the arc must lie on the perpendicular line that passes through the midpoint of the line connecting the end points of the arc (Fig. 7). Let $P_1(x_1, y_1)$ and $P_N(x_N, y_N)$ be the end points of the arc, $C'(a', b')$ be the midpoint of the line joining $P_1$ and $P_N$, and $\hat{n} = (n_x, n_y)$ be the unit normal to the line joining $P_1$ and $P_N$. Therefore,

$$(a, b) = (a' + t\, n_x,\; b' + t\, n_y)$$

where the scalar *t* is estimated by least squares and the radius follows as $R = \lVert C - P_1 \rVert$.

After finding the errors, the segment is classified as the primitive that matches with the least error. However, a line segment can always be fit with high accuracy as an arc with a very large radius. In such cases, if the angle subtended by the arc is less than 15 deg, we classify the segment as a line. Similarly, an arc is classified as a circle if its subtended angle is close to 2*π*.
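The constrained arc fit can be illustrated as follows (a Python sketch; we replace the least-squares solution for *t* with a simple 1-D scan along the perpendicular bisector, which is our simplification):

```python
import math

def arc_fit(points):
    """Fit a circular arc whose end points are fixed.

    The center C is constrained to the perpendicular bisector of P1-PN,
    C(t) = C' + t * n_hat, so the fit reduces to a single unknown t. A
    coarse 1-D scan stands in for the least-squares solution; the radius
    R = |C - P1| keeps both end points on the arc. Assumes P1 != PN.
    """
    (x1, y1), (xn, yn) = points[0], points[-1]
    cx, cy = (x1 + xn) / 2.0, (y1 + yn) / 2.0  # midpoint C'
    dx, dy = xn - x1, yn - y1
    L = math.hypot(dx, dy)
    nx, ny = -dy / L, dx / L  # unit normal n_hat to the chord P1-PN

    def error(t):
        a, b = cx + t * nx, cy + t * ny
        R = math.hypot(x1 - a, y1 - b)  # end points stay on the arc
        return sum((math.hypot(x - a, y - b) - R) ** 2 for x, y in points) / len(points)

    # scan t over +/- 5 chord lengths in steps of 0.01 L
    best_t = min((k * 0.01 * L for k in range(-500, 501)), key=error)
    a, b = cx + best_t * nx, cy + best_t * ny
    return (a, b), math.hypot(x1 - a, y1 - b), error(best_t)
```

Points sampled from a unit semicircle recover a center at the origin and a unit radius.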

### Merging.

The initial critical point set obtained through the segmentation routine may contain some false positives. The merging procedure repeatedly merges adjacent segments if the fit for the merged segment is lower than a certain threshold. For every *i*th segment, we try merging it with the (*i*−1)th and the (*i*+1)th segments. Let these new segments be seg_{1} and seg_{2}. The fit errors for seg_{1} and seg_{2} are calculated according to Sec. 4.3. For whichever of seg_{1} and seg_{2} has the lower error, merging occurs if and only if that error is less than the sum of the corresponding errors of the original segments. For example, in Fig. 4(e), the two lines and an arc on the right were merged into one single arc (see Fig. 4(f)).
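A simplified greedy variant of this step can be sketched as follows (our illustration; `fit_error` stands in for the line/arc fit errors of Sec. 4.3, and we merge across one critical point at a time rather than choosing between both neighbors as the full procedure does):

```python
def merge_segments(critical, fit_error):
    """Drop false-positive critical points by greedy pairwise merging.

    `critical` is the ordered list of critical point indices; fit_error(i, j)
    returns the best primitive fit error for the span between indices i and j.
    A critical point is removed when the merged segment fits better than the
    sum of the errors of the two segments it separates.
    """
    pts = list(critical)
    changed = True
    while changed:
        changed = False
        for k in range(1, len(pts) - 1):
            merged = fit_error(pts[k - 1], pts[k + 1])
            split = fit_error(pts[k - 1], pts[k]) + fit_error(pts[k], pts[k + 1])
            if merged < split:  # false positive: this corner is not needed
                del pts[k]
                changed = True
                break  # restart the scan over the shortened list
    return pts
```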

### Geometry Constraint Solving.

Geometric constraints are usually classified as either (1) explicit constraints, which refer to the constraints that are explicitly specified by the user, such as dimensions—the distance between a point and a line or the angle between two lines, or (2) implicit constraints, which refer to the constraints that are inherently present in the sketch, such as concentricity and tangency. It is natural for users to express geometric constraints implicitly when they are sketching. Our system infers and satisfies the constraints automatically, without much intervention from the user, using the method described in Ref. [49]. Figure 8 lists the different kinds of constraints inferred in our system between points, lines, circular arcs, and circles. We have integrated the LGS2D [50] geometry constraint solver with our system for constraint solving purposes. The set of primitives along with the constraints is input to the solver, and after satisfying the constraints, the solver returns the modified primitives with their new locations. Figures 4(f) and 4(g) show the primitives of the sketch before and after constraint solving, respectively. The core technology of LGS2D is a combination of symbolic and numerical methods for solving systems of geometrical constraints. The main symbolic method used in LGS2D is a variation of constraint graph analysis, based on an abstract degree-of-freedom approach [51].
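As a small illustration of implicit constraint inference (our sketch, covering only two of the constraint types listed in Fig. 8; the 5 deg tolerance is an assumed value, not the system's):

```python
import math

def infer_line_constraints(lines, tol_deg=5.0):
    """Infer parallel/perpendicular constraints between beautified lines.

    Each line is ((x1, y1), (x2, y2)). Pairs whose orientations differ by
    roughly 0 deg yield a "parallel" record and by roughly 90 deg a
    "perpendicular" record, ready to be handed to a constraint solver.
    """
    def angle(l):
        (x1, y1), (x2, y2) = l
        return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0

    constraints = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            diff = abs(angle(lines[i]) - angle(lines[j]))
            diff = min(diff, 180.0 - diff)  # orientation difference in [0, 90]
            if diff <= tol_deg:
                constraints.append(("parallel", i, j))
            elif abs(diff - 90.0) <= tol_deg:
                constraints.append(("perpendicular", i, j))
    return constraints
```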

### Resolving Ambiguities With Interaction.

No recognition system is entirely free of ambiguities. Our system provides an interface to correct such errors through simple interactions. Errors in segmentation include missed and unnecessary critical points. In our system, when the user taps on or near a critical point with the stylus, the system first removes that critical point and the corresponding two primitives that share this point. This results in an unrecognized segment which is then classified and refit. The user can also add a segmentation point in a similar manner. The nearest point on the stroke to the clicked location is used as the input point where the existing primitive is broken into two primitives. Errors in segment recognition correspond to primitive misclassification. An input stroke drawn by holding down a button on the stylus is recognized as a pulling gesture. The primitive that is closest to the starting point of this gesture is the one to be pulled, and accordingly, its classification is altered, i.e., if the primitive was a line, it is refit as a circular arc and vice versa. Additionally, the user can erase a primitive, a stroke, or a part of a stroke using the eraser end of the stylus, just as with a pencil eraser.

## Symbol Recognition and Sketch Interpretation

The symbols drawn in the finite element domain, both in academia and research, have well-defined and standardized forms. The list of symbols commonly used in the finite element domain (i.e., for loading and boundary conditions) is shown along with other symbols recognized in our system in Fig. 9. Figure 10(a) shows an example of a beautified 2D bracket drawn in geometry mode. The sketch consists of seven line segments (L1–L7), two circular arcs (A1 and A2), and a circle (C1). For visual clarity, once the geometry is beautified, the recognized lines are drawn in black and the arcs (circles) in green. Figure 10(b) shows the various red-colored strokes input in symbol mode that represent dimensions, loading, and boundary conditions. Sections 5.1–5.3 describe the various steps in processing these strokes for symbol recognition and sketch interpretation (Fig. 10).

### Clustering.

The first step in processing this collection of symbol strokes is to cluster them into smaller groups. We use both a temporal and a spatial proximity strategy to group strokes. This stems from the observation that a group of strokes comprising a symbol is generally drawn close together and continuously. In addition, the system should not constrain the user to complete a symbol before moving on to the next one. For example, if the user initially specifies “*P* = 100” as a loading condition and later wishes to change it to “*P* = 1000,” the operation required must be as simple as adding a zero to the input rather than having to erase and rewrite the whole text. Hence, the clustering criteria require strokes either to be within a spatial threshold distance of 100 HIMETRIC units or to be drawn within a time gap of 500 ms of each other. Figure 10(c) shows the results of the clustering, where a dashed bounding box is drawn around each group.
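The clustering heuristic can be sketched as follows (a Python reconstruction; the bounding-box gap measure and the greedy first-match assignment are our assumptions):

```python
def cluster_strokes(strokes, dist_thresh=100.0, time_gap_ms=500.0):
    """Group symbol strokes by spatial or temporal proximity.

    Each stroke is (bbox, t_start, t_end) with bbox = (xmin, ymin, xmax, ymax)
    in HIMETRIC units and times in ms. A stroke joins an existing group if its
    bounding box lies within `dist_thresh` of the group's box, or if it was
    begun within `time_gap_ms` of the group's last pen-up.
    """
    def gap(a, b):  # axis-wise separation of two boxes (0 if they overlap)
        dx = max(a[0] - b[2], b[0] - a[2], 0.0)
        dy = max(a[1] - b[3], b[1] - a[3], 0.0)
        return max(dx, dy)

    groups = []  # each: {"strokes": [...], "bbox": union box, "t_end": last pen-up}
    for bbox, t0, t1 in strokes:
        home = None
        for g in groups:
            if gap(bbox, g["bbox"]) <= dist_thresh or t0 - g["t_end"] <= time_gap_ms:
                home = g
                break
        if home is None:
            home = {"strokes": [], "bbox": bbox, "t_end": t1}
            groups.append(home)
        home["strokes"].append((bbox, t0, t1))
        home["bbox"] = (min(home["bbox"][0], bbox[0]), min(home["bbox"][1], bbox[1]),
                        max(home["bbox"][2], bbox[2]), max(home["bbox"][3], bbox[3]))
        home["t_end"] = max(home["t_end"], t1)
    return groups
```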

### Text and Symbol Recognition.

The next step is recognition of each stroke group, where each stroke group comprises either text or symbols. We use the height (<1.2 cm) and width (<2.2 cm) of the bounding box as the criteria to distinguish between text and symbols. The stroke groups that are classified as text are then recognized using the built-in handwriting recognizer (Microsoft Tablet PC SDK). The text in the sketch is primarily of two types: (1) loading conditions (force, temperature, or pressure), with the letters F, T, or P on the left-hand side of an “equal to” symbol and numbers on the right-hand side, and (2) dimensions, which are made up of only numbers. This observation helps in robust recognition of text and also helps in correcting misclassifications of text and symbols. After the identification of text, the next step is to recognize the remaining stroke groups. A quick observation shows that almost all of the symbols are comprised of either lines or circles, and only the “moment” symbol contains an arc. Also, some symbols like the “roller” have different variations, which differ in the number of circles drawn. Though these symbols vary, each symbol (or group of symbols) has certain distinct properties that differentiate it from the others. For example, the “fully constrained” symbol differs from the roller symbol in the presence or absence of circles; the number of circles does not matter for the differentiation. We have created similar heuristic-based rules to recognize the different symbols. The reason behind using such an approach is that the number of symbols in this set is finite and each symbol has some distinct properties that distinguish it from the other symbols in spite of the possible variations. Also, no training is required. For the recognition of the various symbols, we have built custom recognizers by extending the simple gesture recognition library (SIGER) [52] using vector strings and regular expressions.
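A toy version of such a vector-string recognizer (our illustration in the spirit of SIGER, not its actual implementation; the two rules shown are hypothetical stand-ins for the real symbol rules):

```python
import math
import re

def direction_string(points):
    """Encode a stroke as a string of quantized pen directions (R, U, L, D),
    in the spirit of SIGER's vector strings."""
    out = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        ang = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 360.0
        out.append("RULD"[int(((ang + 45.0) % 360.0) // 90.0)])
    return "".join(out)

# Hypothetical rules: a vertical stroke reads as a run of U's, a horizontal
# stroke as a run of R's or L's; real symbol rules would be richer patterns.
RULES = [
    ("up-stroke", re.compile(r"^U+$")),
    ("horizontal-stroke", re.compile(r"^(R+|L+)$")),
]

def classify(points):
    """Return the first rule whose regular expression matches the stroke."""
    s = direction_string(points)
    for name, pattern in RULES:
        if pattern.fullmatch(s):
            return name
    return "unknown"
```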

### Sketch Interpretation.

The sketch needs to be interpreted after beautification and symbol recognition. Generally, users draw related objects close to each other. We use this observation to associate and group objects to provide context. For example, in Fig. 10(d), the “load” symbols, *P* = 100, and line L6 combine to imply that a pressure load of 100 units is applied on the line in the negative *y*-direction. The various contexts observed in finite element analysis can be classified into three categories, namely, *loading conditions*, *boundary conditions*, and *dimensions*. Accordingly, the various symbols (Fig. 9) fall into these categories. We use this classification information and spatial proximity reasoning over the bounding boxes to understand the different contexts in the sketch. Applied loads in the system are either point loads or uniform loads, which can be forces, pressures, or temperatures (depending on the problem). The magnitude and direction of a load are determined from the text and the direction of the arrow. When only one load symbol is detected, it refers to a point load, and the detected load is applied to the nearest point (node) in the geometry. If a pattern of load symbols is inferred next to handwritten text, then the closest starting and end points of the arrows are found, and the system searches for the nearest primitive in the geometry and applies the load to it. The types of boundary conditions are either fully constrained or constrained in only one direction (specified with a roller symbol). The specific direction, i.e., the *x*- or *y*-direction, is determined from the orientation of the symbol, for example, from the pattern of circles in the roller symbol. Like loads, the boundary conditions can be applied either to a single point or to a primitive. Finally, the interpreted dimensional constraints are satisfied by the solver, and the sketch is updated accordingly.
Figure 10(e) shows the final sketch after interpretation of the different contexts. The freehand input symbols are replaced with recognized text and symbols. Line L1 is fully constrained, indicated by a bounding box and a triangle, and line L2 is constrained along the *x*-direction, indicated by a bounding box and a circle. The dimensions of lines L4 and L6 have been updated, which is reflected in the tree.
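The spatial proximity reasoning described above can be sketched with a simple bounding-box distance: a recognized symbol is associated with the primitive whose box lies closest to it. The primitive names and coordinates below are hypothetical; this is an illustrative sketch, not the system's code.

```python
def bbox_distance(a, b):
    """Distance between two axis-aligned boxes (xmin, ymin, xmax, ymax); 0 if they overlap."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def nearest_primitive(symbol_box, primitives):
    """primitives: dict of name -> bbox; returns the name of the closest primitive."""
    return min(primitives, key=lambda name: bbox_distance(symbol_box, primitives[name]))

# Hypothetical layout: a load arrow drawn just above line L6.
primitives = {"L1": (0, 0, 0, 10), "L6": (0, 10, 20, 10)}
load_arrow = (8, 11, 10, 14)
print(nearest_primitive(load_arrow, primitives))  # L6
```

Using box-to-box distance rather than center-to-center distance avoids penalizing long primitives such as edges, whose centers can be far from a symbol drawn near one end.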

The text, geometry, and symbols in a sketch have an inherent structure, and they combine only in specific ways. For example, a dimensional value can never be associated with a straight single-headed arrow; a loading condition can never be associated with a fully constrained symbol; and an arrow cannot exist on its own without an associated text group. This kind of reasoning helps correct errors automatically, allowing for robust sketch interpretation.
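Such structural reasoning amounts to a small table of disallowed pairings plus a completeness check. The rules below follow the examples given in the text and are illustrative rather than exhaustive; the category names are assumptions.

```python
# Pairings ruled out by the structure of FEA sketches (illustrative, not exhaustive).
INVALID = {
    ("dimension_value", "single_headed_arrow"),   # dimensions use double-headed arrows
    ("loading_condition", "fully_constrained"),   # loads never attach to this symbol
}

def check_association(text_kind, symbol_kind):
    """Return True if the text/symbol pairing is structurally plausible."""
    return (text_kind, symbol_kind) not in INVALID

def check_arrow(arrow_kind, associated_text):
    """Any arrow must have an associated text group."""
    return associated_text is not None

print(check_association("dimension_value", "single_headed_arrow"))  # False
print(check_arrow("single_headed_arrow", None))                     # False
```

When a candidate interpretation violates one of these rules, the system can discard it and fall back to the next-nearest association instead of propagating the error.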

## Finite Element Integration

The final step is to set up the problem for finite element analysis. Our system provides an interface (see Fig. 3(g)) to input the material information, element type and description, and mesh size (if necessary). Our current implementation supports three types of elements commonly used in structural, thermal, and static finite element analysis. Similarly, users can specify which results they wish to view after the analysis. Currently, the system allows users to choose from von Mises stress, reaction forces, deflections, and temperature. Figures 3(g)–3(i) show the finite element integration for the bracket in Fig. 10. Here, the three-dimensional bracket is modeled as a two-dimensional problem with uniform thickness = 0.5 in. The finite element specific parameters (ansys) include material: steel, $\epsilon = 30 \times 10^{6}$, $\nu = 0.3$; element type: PLANE42; element size: 0.5. After specifying the necessary input, the system exports the model geometry, boundary conditions, loads, material, element, and meshing information to a unified file specific to ansys (APDL commands). Figure 3(h) shows the generated ansys-specific code, and Fig. 3(i) shows the “displacement vector sum” results plotted in ansys.
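The export step can be sketched as a small generator that emits standard APDL preprocessing commands (`/PREP7`, `ET`, `MP`, `ESIZE`) from the parameters above. The generator itself is an illustrative assumption, not FEAsy's actual exporter, and the real file would also contain geometry, load, and solution commands.

```python
def emit_apdl(youngs, poisson, element, esize, thickness=None):
    """Emit a minimal ANSYS APDL preprocessing header (illustrative sketch)."""
    lines = [
        "/PREP7",                       # enter the preprocessor
        f"ET,1,{element}",              # element type, e.g., PLANE42
        f"MP,EX,1,{youngs}",            # Young's modulus
        f"MP,PRXY,1,{poisson}",         # Poisson's ratio
        f"ESIZE,{esize}",               # global element size
    ]
    if thickness is not None:
        lines.append(f"R,1,{thickness}")  # real constant set: plate thickness
    return "\n".join(lines)

# Parameters for the bracket example in Fig. 10.
print(emit_apdl(30e6, 0.3, "PLANE42", 0.5, thickness=0.5))
```

Writing the model as a plain command file keeps the integration loosely coupled: the sketch interface never links against the solver, it only produces input the solver already understands.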

## User Study

We conducted a preliminary user study to test the system. Through this study, we aimed to find out whether users could finish the given task and whether they could accomplish it with fewer interactions and strokes than in a system that supports only single-primitive strokes. We also wanted feedback from the participants for future improvements.

*System:* Our prototype was ported onto a PC with a Wacom Cintiq 21UX LCD display, which lets users work naturally and intuitively with a digital pen directly on the surface of the display.

*Subjects:* Six graduate students in mechanical engineering participated in this study. All of them were familiar with the sketching aspects of CAD programs (such as autocad and pro/engineer) and hence were well aware of the use of geometric constraints in making diagrams. They were also familiar with using ansys for finite element analysis. In addition, they had used digitizing media such as tablet PCs and/or personal digital assistants before, but not the Wacom line of products.

*Measurement:* The measures in this study included critical points segmentation accuracy, primitives recognition accuracy, symbol recognition accuracy, and context interpretation accuracy.

*Training Process:* The participants were given 10 min of training on the capabilities of the system, i.e., the two modes of input (geometry and symbol); beautification of freehand strokes in geometry mode; symbol recognition and sketch interpretation; and finally, finite element integration. In addition to illustrating the workflow, we demonstrated the system's limitations: it recognizes only lines, arcs, and circles and does not handle overtracing (making several overlapping strokes such that they are collectively perceived as a single object), and symbol recognition might fail when two different symbols overlap. We also showed the interaction techniques (such as clicking on a critical point) for correcting errors during beautification and symbol recognition. The participants were then given 15 min to get acquainted with the system.

*Task:* On completion of the training process, the participants were asked to sketch and solve the four problems shown in Fig. 11. The total time given was 1 h. The problems were chosen to exercise all the different capabilities of our system, and they illustrate a good range of problems that can be solved with it. Each problem had a verbal description of the boundary conditions, loads, and dimensions (collectively termed nongeometric information) accompanied by a graphic that represented just the geometry, devoid of dimensions and symbols. The reason behind this formulation was to analyze how the users would input the nongeometric information in symbol mode and to avoid biasing how that information should be input. The problem types were: (1) a static plane stress structural problem (Fig. 11(a)), (2) a static two-dimensional truss problem (Fig. 11(b)), (3) a three-dimensional structural problem modeled as a static, two-dimensional problem with constant thickness (Fig. 11(c)), and (4) a steady-state heat conduction problem (Fig. 11(d)). The problem descriptions are as follows:

*Problem 1*. Plot the von Mises stress for the shape in Fig. 11(a) and the following loading conditions. A flat rectangular plate is made of steel (*ε* = 210,000 MPa and *ν* = 0.3) with two holes and a constant thickness of 0.75 cm. The width of rectangular plate = 20 cm and height = 10 cm. The two holes must be completely inside the plate and on the same imaginary horizontal line, but should not touch the edges of the plate or each other. The left end of the rectangular plate is welded (fully constrained), and a uniform pressure of 0.1 MPa acts along the right end of the plate. Use PLANE 42 element in plane stress with thickness and a mesh size of 0.25.

*Problem 2.* Plot the displacement vector sum for the shape in Fig. 11(b) and loading conditions. The material properties are: *ε* = 210,000 MPa and *ν* = 0.3. The radius of inner arc = 10 cm and outer arc = 15 cm. The arcs are concentric and the length of a horizontal line segment = 20 cm. The top left edge is fully constrained, and a point load = 750 N acts downward at the bottom left point. Use PLANE 42 element in plane stress and a mesh size of 0.5.

*Problem 3.* Plot the displacement vector sum for the truss in Fig. 11(c), consisting of six joints and nine links. Here, links L1, L2, L4, and L5 are horizontal (parallel to the *X*-axis). Similarly, (L6, L8, and L3) and (L7 and L9) are parallel. Node N1 is fully constrained, while node N2 is constrained only in the *y*-direction and free in the *x*-direction. A load of 100 N acts along the negative *y*-direction on N5, and a load of 200 N acts along the positive *x*-direction on N4. The material properties are the same as in the previous examples. Use a LINK element with a cross-sectional area of 0.5 square units.

*Problem 4.* Plot the temperature contours for the shape in Fig. 11(d) with the following boundary conditions: a square plate (width = 10 and thickness = 1) with a circular hole (diameter = 5) at the center. The top edge of the plate is constrained at a temperature of 500 °C and the bottom edge at 100 °C. The left and right edges are maintained at 0 °C (fully constrained). The thermal conductivity (*k*) of the material is 10 W/m·°C. Use PLANE 55 element in plane stress and a uniform mesh size of 0.25.

## Results and Discussion

The six participants all solved the four problems within the allocated time, providing a total of 24 sketches. Some sample sketches drawn by the participants are shown in Fig. 12; each row shows workflow snapshots taken by the system for one of the problems. Figure 13 summarizes the results obtained after beautification in geometry mode. A total of 88 geometry strokes and 13 interactions were recorded for the geometry part of the problems. The input strokes comprised both single-primitive and multiprimitive types. A single-primitive stroke represents only one kind of primitive, i.e., a line, an arc, or a circle. A multiprimitive stroke, on the other hand, can represent any number of connected primitives of any kind. In all, 101 operations were required to complete the geometry across all participants and problems. In contrast, if the system allowed only single-primitive input strokes, the minimum number of strokes required would be 168; our system thus required approximately 40% fewer operations for geometry creation. Moreover, this number does not reflect the additional operations that would be needed to specify geometric constraints. Our system correctly segmented 236 critical points out of 238, a 99.2% accuracy, and correctly recognized 173 out of 174 primitives, a 99.4% primitive recognition accuracy. These results indicate that the beautification algorithm is robust and that the participants were able to draw the given shapes successfully, with minimal interactions and in less time.
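The reported figures follow directly from the raw counts; a quick arithmetic check:

```python
# Accuracies and savings derived from the counts reported above.
crit_acc = 236 / 238         # critical-point segmentation accuracy
prim_acc = 173 / 174         # primitive recognition accuracy
savings = (168 - 101) / 168  # operations saved vs. single-primitive-only input

print(f"{crit_acc:.1%} {prim_acc:.1%} {savings:.0%}")  # 99.2% 99.4% 40%
```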

Figure 14 shows the results of the text and symbol recognition algorithms implemented in the system. Our system correctly clustered and recognized 223 out of 228 stroke groups, an accuracy of 97.8%. The various symbols recognized and their individual accuracies across both problems and types are shown in Fig. 14. Problem 4 had five dimensions, two boundary conditions, and two loading (temperature constraint) conditions, which, when specified in a single iteration, can lead to a crowded sketch and a high chance of overlapping symbols. In one such instance, the fully constrained symbol on the left edge overlapped with the temperature constraint on the bottom edge; the participant had to manually delete the overlapping stroke(s) and process them again. To avoid overcrowding, three of the six participants resorted to two iterations of context interpretation, specifying all the dimensional constraints in the first iteration and all the loading and boundary conditions in the second. This process is similar to traditional finite element systems, where users usually finish the problem geometry before specifying other constraints.

Figure 15 shows the results of the various contexts interpreted in the user study. Our system correctly interpreted 129 out of 132 contexts in the sketches, an accuracy of 97.7%. The misinterpreted contexts were due to the overlapped symbols explained in Sec. 7. These results suggest that our recognition and interpretation algorithms work robustly for the domain of static finite element analysis.

A two-factor analysis of variance test was performed to see if there were any variations in results by problem or by user for the four response variables, namely, critical point segmentation accuracy, primitive recognition accuracy, symbol recognition accuracy, and context interpretation accuracy. The test showed no significant main effect for the problem factor, $F(3,15)=3.29,p>0.05$, and no significant main effect for the user factor, $F(5,15)=2.90,p>0.05$, for each of the response variables.
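For reference, the significance threshold at these degrees of freedom can be checked against the 5% critical values of the F distribution. A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import f

# 5% critical values of the F distribution for the two factors' degrees of freedom.
f_crit_problem = f.ppf(0.95, 3, 15)  # problem factor: df = (3, 15)
f_crit_user = f.ppf(0.95, 5, 15)     # user factor: df = (5, 15)

print(f"{f_crit_problem:.2f} {f_crit_user:.2f}")  # ≈ 3.29 and 2.90
```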

At the end of the user study, each participant was asked whether they (a) liked the interface and (b) had any suggestions for improvement. All of the participants reported that the system was easy to use and expressed a positive attitude toward drawing by freehand sketching. They particularly appreciated that the system could infer the implicit constraints automatically and satisfy them simultaneously, without the need to specify them manually. Four of the six participants suggested that the system infer symmetry and expand the set of geometric constraints that can be either detected automatically or specified manually, such as equal-radii and equal-length constraints. For example, even in a geometry as simple as that of Problem 4, five dimensional constraints were required to construct the square and place a circle at its center. A possible remedy is to modify some of the dimensions directly in the left tree. The system is currently best suited for exploratory studies in early design, where the overall shape matters more than the exact dimensions, and for in-classroom demonstrations, where the location of a stress concentration or the deflection of a truss member is the focal point of discussion rather than the actual values.

## Conclusions

In this paper, we described FEAsy, a sketch-based interface that integrates freehand sketching with finite element analysis. We presented a beautification method that transforms ambiguous freehand input into more formal, structured representations by considering the spatial relationships implied in the freehand sketches. We also described algorithms for symbol and text recognition and for interpretation of various contexts in the finite element domain. The results from the pilot study indicate that our algorithms are efficient and robust. However, more elaborate studies with a larger sample size are needed to determine whether such sketch-based interfaces are truly a viable alternative. Our immediate future work is to integrate a finite element solver and provide visualization capabilities within the system, making it a unified tool for finite element analysis.

## Acknowledgment

This work was partly supported by the NSF IIS CHS program (Award No. 1422341), partly by the NSF CMMI CPS: Synergy program (Award No. 1329979), and by the Donald W. Feddersen Chaired Professorship from the School of Mechanical Engineering at Purdue University. Any opinions, findings, and conclusions expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.