A major challenge in structural health monitoring (SHM) is to accurately identify both the location and severity of damage using the dynamic response information acquired. While in theory vibration-based and impedance-based methods may facilitate damage identification with the assistance of a credible baseline finite element model, the response information is generally limited and the measurements may be heterogeneous, making an inverse analysis using the sensitivity matrix difficult. Aiming at fundamental advancement, in this research we cast the damage identification problem as an optimization problem in which possible changes of finite element properties due to damage occurrence are treated as unknowns. We employ the multiple damage location assurance criterion (MDLAC), which characterizes the relation between measurements and predictions (under sampled elemental property changes), as the vector-form objective function. We then develop an enhanced, multi-objective version of the dividing rectangles (DIRECT) approach to solve the optimization problem. The underlying idea of the multi-objective DIRECT approach is to branch and bound the unknown parametric space to converge to a set of optimal solutions. A new sampling scheme is established, which significantly increases the efficiency in minimizing the error between measurements and predictions. The enhanced DIRECT algorithm is particularly suited to solving for unknowns that are sparse, as in practical situations structural damage affects only a small region. A number of test cases using vibration response information are executed to demonstrate the effectiveness of the new approach.

## Introduction

Structural health monitoring (SHM) has received significant attention in the mechanical, civil, and aerospace engineering communities in recent decades, since unpredicted structural failure may lead to catastrophic consequences. The advent of many new transducer materials/devices and the explosive progress in microelectronics and computational capability have further expedited the development of SHM systems. A typical SHM scheme uses structural dynamic responses measured by sensors to interpret the health status of the monitored structure. The dynamic responses are either the results of excitations by actuators that are part of the SHM scheme, or induced by ambient conditions or normal operation. Usually, the dynamic responses are based upon wave/vibratory motions that can propagate quite far from the excitation sources. As anomalies in the dynamic responses are used to infer damage occurrence, these SHM schemes can consequently cover very large structural areas/volumes.

One popular class of methods is wave propagation-based, which uses the change of a transient wave (e.g., a Lamb wave) upon its passage through the damage site to infer damage occurrence [1,2]. While these methods may offer high detection sensitivity, since high-frequency waves can be excited and employed as the information carrier, it is generally difficult to use the transient responses to identify damage accurately, especially to quantify its severity [3]. An equally popular and traditional class of methods is vibration-based, which utilizes the changes of natural frequencies/modes of structures as inputs [4,5]. These changes of modal properties can be extracted from modal testing through scheduled actuation and sensing, or from ambient responses. One advantage of these methods is that they may be readily realized using off-the-shelf hardware. A qualitatively similar class is the piezoelectric impedance-based methods [6–8]. In these methods, a piezoelectric transducer attached to the host structure is driven by a frequency-swept sinusoidal voltage, which excites local stationary oscillations. The local structural oscillations in turn affect the electrical response of the transducer; as such, the impedance of the structure is coupled with that of the piezoelectric transducer. The change of the measured piezoelectric impedance with respect to the undamaged baseline can then be used as a damage indicator.

A major challenge to virtually all SHM schemes is to accurately identify both the location and severity of damage using the dynamic response information acquired. It is worth noting that a potentially significant advantage of vibration-based and impedance-based methods is that, since the damage effect is reflected in changes of stationary vibratory responses, damage identification can be facilitated through inverse finite element analysis. Indeed, when a credible baseline finite element model of the monitored structure is available, we can first develop the forward analysis of modal responses (e.g., an eigenvalue problem) or impedance responses (under frequency-swept harmonic excitations), and then formulate an inverse sensitivity analysis based on Taylor series expansion under the assumption that each finite element is susceptible to damage. Using the measured response changes as input, inversion of the sensitivity matrix can in theory yield the elemental property changes; elements with nonzero property changes indicate both the location and severity of damage in the finite element model. Two issues, however, arise in practical situations. First, while the number of finite elements is generally large, which leads to a large number of unknowns in the aforementioned inverse analysis, the available measurement information is usually limited. For example, only a small number of modal frequencies and their changes can be realistically measured, and only the impedance responses around resonant peaks are truly sensitive to damage. Therefore, the inverse formulation may easily become under-determined [9]. Second, the input information is oftentimes heterogeneous, encompassing different types of physical parameters. For example, an SHM system combining vibration sensors and impedance sensors will produce different types of measurements. As another example, in the vibration-based approach, both natural frequencies and mode shapes can be measured and used to infer damage. In these cases, although more information generally leads to more accurate damage identification, integrating different types of measurements into a unified inverse analysis formulation is not straightforward. The inevitable measurement noise and modeling uncertainty further compound the difficulty of inverse analysis [7].
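To see the under-determinacy concretely, consider a small numerical sketch (the sensitivity matrix and damage pattern below are hypothetical, for illustration only): a minimum-norm least-squares inversion reproduces the measured changes exactly, yet smears a sparse two-element damage over many elements.

```python
import numpy as np

rng = np.random.default_rng(1)
n, q = 15, 5                        # 15 elemental unknowns, 5 measured frequencies
S = rng.random((q, n))              # stand-in sensitivity matrix (illustrative)
alpha_true = np.zeros(n)
alpha_true[[2, 13]] = [0.09, 0.05]  # sparse damage: only two elements
d_lambda = S @ alpha_true           # corresponding "measured" frequency changes

# minimum-norm least-squares inversion fits the data exactly, but spreads the
# damage over (nearly) every element instead of recovering the sparse truth
alpha_hat = np.linalg.lstsq(S, d_lambda, rcond=None)[0]
```

Because the system is under-determined, infinitely many damage vectors fit the data; the pseudo-inverse picks the minimum-norm one, which is generally dense and physically misleading.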

Alternatively, the problem of identifying damage location/severity in a finite element based inverse analysis using stationary response changes as input can be cast into a global optimization formulation. In such a formulation, the possible property changes in all finite elements are the unknowns to be solved. We sample these unknowns, and the differences between the measured response changes and the predicted response changes (under the sampled elemental property changes) obtained from the forward finite element analysis are treated as objectives to be minimized. Several stochastic optimization approaches have been attempted, including genetic algorithms [10,11], particle swarm optimization [12], simulated annealing [13], and differential evolution algorithms [14]. A commonly used objective function is derived from the multiple damage location assurance criterion (MDLAC) introduced by Messina et al. [15], which quantifies the relation between the actual damage effect and the model prediction in the unknown damage parametric space. One advantage of solving the damage identification problem via global optimization is that heterogeneous measurements can be easily handled by establishing multiple objectives or vector-form objective functions. For example, a set of MDLAC values, each corresponding to one type of measurement, can be employed to quantify the respective differences/errors. We can then resort to multi-objective optimization (MOO) algorithms to solve the problem by minimizing such differences.

While the idea of employing global optimization to identify damage location and severity in a finite element model appears promising, the algorithms employed so far lack efficiency and accuracy [14,16]. For example, the number of possible damage locations is quite large, because each element is susceptible to damage, and the severity level at each element is in theory a continuous variable. Thus, a very large number of runs is required for stochastic optimization. Moreover, the results of stochastic techniques vary from run to run, even in the absence of measurement uncertainties, and the performance depends on algorithmic parameters that are usually tuned empirically. To tackle these issues and to make fundamental advancement in damage identification using limited dynamic response information, in this research we develop a multi-objective variant of a deterministic global optimization technique built on the idea of dividing rectangles, known as DIRECT [17]. Because the technique is deterministic, the results obtained are repeatable and conclusive. Only one algorithmic parameter, the number of function evaluations, is needed. Specifically, a new sampling/division scheme in the unknown parametric space is established which significantly increases the efficiency in minimizing the difference between measurements and predictions. The enhanced DIRECT algorithm is particularly suited to solving for unknowns that are sparse, as in practical situations structural damage affects only a small number of finite elements. The rest of the paper is organized as follows: In Sec. 2, we briefly outline the basic equations and optimization objectives involved in vibration-based damage identification, which is used without loss of generality to illustrate the new method. In Sec. 3, the details of the proposed multi-objective DIRECT algorithm are presented. The proposed method is evaluated and validated on five benchmark damage scenarios in Sec. 4, where a comparative study of different strategies for the multi-objective DIRECT algorithm is also conducted. Concluding remarks are given in Sec. 5.

## Problem Formulation

Our objective here is to develop an efficient global optimization approach for damage identification. We focus on cases where the measured response information is limited and heterogeneous measurements are available. Without loss of generality, we use the well-known vibration-based approach to illustrate the methodology. Natural frequencies, especially lower-order ones, albeit widely used for damage identification, are global properties that may not be sensitive to local damage. In certain cases, mode shapes can be acquired, which may better reflect the local effects of damage [18]. Combining these two sets of measurements can certainly yield better damage identification results.

The global stiffness matrix of the healthy (reference) structure can be written as

$$K^{R}=\sum_{i=1}^{n}K_{i}^{R}$$

where $n$ is the number of elements, and $K_{i}^{R}$ is the reference (healthy) stiffness of the $i$th element. We assume that damage causes stiffness change. The stiffness matrix of the structure with damage is denoted as $K^{D}=\sum_{i=1}^{n}K_{i}^{D}$, where $K_{i}^{D}=(1-\alpha_{i})K_{i}^{R}$, and $\alpha_{i}\in[0,1]$ ($i=1,\ldots,n$) is the damage index for the $i$th element. For example, if the $i$th element suffers damage that leads to a 20% stiffness loss, then $\alpha_{i}=0.2$. We further assume that damping is negligible and the modes are ortho-normalized. The $j$th eigenvalue (square of natural frequency) and the $j$th mode (eigenvector) are related as $\lambda_{j}=\varphi_{j}^{T}K\varphi_{j}$. Treating the modes as stationary to first order, the change of the $j$th eigenvalue from the healthy status to the damaged status can be derived as

$$\Delta\lambda_{j}=\sum_{i=1}^{n}\frac{\partial\lambda_{j}}{\partial\alpha_{i}}\,\alpha_{i},\qquad \frac{\partial\lambda_{j}}{\partial\alpha_{i}}=-\varphi_{j}^{T}K_{i}^{R}\varphi_{j} \tag{1}$$

Collecting the changes of the $q$ measured eigenvalues yields the matrix form

$$\Delta\boldsymbol{\lambda}=\mathbf{S}\,\boldsymbol{\alpha}$$

where $\mathbf{S}$ is the sensitivity matrix whose elements are given in Eq. (1), and $\Delta\boldsymbol{\lambda}$ and $\boldsymbol{\alpha}$ are, respectively, the $q$-dimensional natural frequency change vector (based on measurement and baseline healthy results) and the $n$-dimensional damage index vector consisting of the damage index for each of the $n$ segments. Symbolically, the damage index vector can be expressed as

$$\boldsymbol{\alpha}=[\alpha_{1},\,\alpha_{2},\,\ldots,\,\alpha_{n}]^{T}$$

In practice, $n$, the number of unknowns (i.e., the number of finite elements), is much greater than $q$, the number of natural frequencies that can be realistically measured. This is one major reason to avoid matrix inversion of $\mathbf{S}$ and resort to optimization (minimizing the difference between the measurements and the predictions obtained from a model with sampled damage index values). Messina et al. [15] proposed a correlation coefficient, referred to as the MDLAC, to compare two natural frequency change vectors, i.e., the measured frequency change $\Delta\boldsymbol{\lambda}$ and the predicted frequency change $\delta\boldsymbol{\lambda}$ (obtained based on assumed damage index values), as expressed below:

$$\mathrm{MDLAC}_{\lambda}(\boldsymbol{\alpha})=\frac{\left|\Delta\boldsymbol{\lambda}^{T}\,\delta\boldsymbol{\lambda}(\boldsymbol{\alpha})\right|^{2}}{\left(\Delta\boldsymbol{\lambda}^{T}\Delta\boldsymbol{\lambda}\right)\left(\delta\boldsymbol{\lambda}(\boldsymbol{\alpha})^{T}\,\delta\boldsymbol{\lambda}(\boldsymbol{\alpha})\right)}$$

Similarly, if the change of the $j$th mode shape is measured, we can compare the measured change $\Delta\boldsymbol{\varphi}_{j}$ and the predicted change $\delta\boldsymbol{\varphi}_{j}$ using an MDLAC of the same form:

$$\mathrm{MDLAC}_{\varphi}(\boldsymbol{\alpha})=\frac{\left|\Delta\boldsymbol{\varphi}_{j}^{T}\,\delta\boldsymbol{\varphi}_{j}(\boldsymbol{\alpha})\right|^{2}}{\left(\Delta\boldsymbol{\varphi}_{j}^{T}\Delta\boldsymbol{\varphi}_{j}\right)\left(\delta\boldsymbol{\varphi}_{j}(\boldsymbol{\alpha})^{T}\,\delta\boldsymbol{\varphi}_{j}(\boldsymbol{\alpha})\right)}$$

Since both MDLACs are functions of $\boldsymbol{\alpha}$, and the measured data $\Delta\boldsymbol{\lambda}$ and $\Delta\boldsymbol{\varphi}_{j}$ are known, a multi-objective minimization problem for an $n$-element structure can then be formulated as

$$\min_{\boldsymbol{\alpha}}\ \left\{\,1-\mathrm{MDLAC}_{\lambda}(\boldsymbol{\alpha}),\ \ 1-\mathrm{MDLAC}_{\varphi}(\boldsymbol{\alpha})\,\right\}\qquad \text{subject to}\ \ \boldsymbol{\alpha}^{l}\leq\boldsymbol{\alpha}\leq\boldsymbol{\alpha}^{u}$$

where $\boldsymbol{\alpha}^{l}$ and $\boldsymbol{\alpha}^{u}$ are the lower bound and upper bound of the damage index. If more modes can be measured and are to be used in damage identification, we only need to add more objective functions to the optimization problem. The optimization problem so defined is nonconvex and may have many local optima; hence a global optimization approach is required.
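As a numerical illustration of the MDLAC objective, the following sketch (with hypothetical frequency-change vectors, assumed purely for illustration) computes the correlation between a measured change vector and a model-predicted one:

```python
import numpy as np

def mdlac(measured, predicted):
    """MDLAC correlation between a measured change vector and a model-predicted
    change vector; equals 1 when the two vectors are perfectly correlated."""
    num = np.dot(measured, predicted) ** 2
    den = np.dot(measured, measured) * np.dot(predicted, predicted)
    return num / den

# hypothetical frequency-change vectors, for illustration only
d_meas = np.array([0.80, 0.10, 0.30, 0.05, 0.20])
d_pred = np.array([0.82, 0.09, 0.28, 0.06, 0.19])
score = mdlac(d_meas, d_pred)   # close to 1 for a good damage-index candidate
```

By the Cauchy-Schwarz inequality the MDLAC lies in [0, 1], so minimizing 1 − MDLAC drives the predicted change toward the measured one.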

## Multi-Objective Dividing Rectangles Algorithm

### DIRECT Algorithm.

The dividing rectangles (DIRECT) algorithm is a deterministic global optimization algorithm that evolved as a technique to address the shortcomings of Lipschitzian optimization [17]. It generally features higher efficiency than stochastic global techniques due to its deterministic attribute, and the optimization result is reproducible across independent runs. Furthermore, unlike most other global optimization algorithms, parameter tuning is not a necessity in DIRECT. Such features are desirable when tackling damage identification problems, because a conclusive result independent of parameter selection can be expected.

Lipschitzian optimization assumes that $f(x)$ satisfies the Lipschitz condition $|f(x)-f(x')|\leq\gamma|x-x'|$ for all $x,x'\in[a,b]$, where $\gamma$ is the Lipschitz constant; evaluating $f$ at the two endpoints then yields two lower-bounding lines with slopes $-\gamma$ and $\gamma$. As shown in Fig. 1, the point of intersection of the two lines gives the first estimate of the minimum of $f(x)$. Such discretization can be carried on to gradually approximate the minimum of $f(x)$ on $[a,b]$ [20], as illustrated in Fig. 2.

Lipschitzian optimization, however, has two major drawbacks. First, endpoint-based sampling becomes intractable in high dimensions $N>1$. Second, the Lipschitz constant $\gamma$ is usually hard to acquire. The original version of the DIRECT algorithm was subsequently proposed to overcome these drawbacks [17,21]. Instead of using endpoints, DIRECT samples at midpoints of the parametric space, which addresses the high-dimensional challenge. Moreover, DIRECT relaxes the premise of $f(x)$ being Lipschitz continuous, so there is no need for a Lipschitz constant. To trace the shift from Lipschitzian optimization to DIRECT, we first assume that the Lipschitz constant is known and replace $x'$ in Eq. (8) with the midpoint $c=(a+b)/2$, which gives

$$f(x)\geq f(c)+\gamma(x-c),\qquad f(x)\geq f(c)-\gamma(x-c)$$

These two inequalities correspond to the lines with slopes $\gamma$ and $-\gamma$ shown in Fig. 3. Thus, the first estimate of the minimum of $f(x)$ is obtained as $f(c)-\gamma(b-a)/2$, which uses only the function value at the center, $f(c)$. Hence, the first problem of Lipschitzian optimization, tractability in high dimension, is resolved. However, Lipschitz continuity is still assumed to hold, and the Lipschitz constant is considered known. To complete the conversion, we specify the subdivision routine of DIRECT, i.e., where to sample and evaluate next. As shown in Fig. 4, for interval $[a,b]$ the midpoint $c_{1}$ is sampled first; the interval is then divided into three segments, and the function is evaluated at the midpoints $c_{2}$ and $c_{3}$ of the smaller intervals. After partitioning $[a,b]$ into smaller intervals, the next step in DIRECT is to determine which smaller interval should be further sampled.

Suppose the search domain has been partitioned into $m$ intervals $[a_{i},b_{i}]$, $i=1,\ldots,m$. Each interval can be represented by a point in a plane, as illustrated in Fig. 5(a), where the horizontal axis is the distance from the midpoint to the vertices, $(b_{i}-a_{i})/2$, and the vertical axis is the function value $f(c_{i})$. The fundamental idea of DIRECT is to determine which interval to sample further using the inequalities given below:

$$f(c_{j})-K\,\frac{b_{j}-a_{j}}{2}\leq f(c_{i})-K\,\frac{b_{i}-a_{i}}{2},\qquad \forall\, i=1,\ldots,m$$

$$f(c_{j})-K\,\frac{b_{j}-a_{j}}{2}\leq f_{\min}-\epsilon\,|f_{\min}|$$

where $\epsilon>0$ is a positive constant and $f_{\min}$ is the current best objective function value. If there exists some rate-of-change constant $K>0$ satisfying both inequalities, then interval $j$ is said to be potentially optimal and will be further evaluated (Fig. 5(b)). Once an interval has been identified as potentially optimal, it is subdivided following the routine given in Fig. 4.
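One way to check these conditions is to compute, for each interval, the range of rate-of-change constants $K$ admitted by the inequalities; the interval is potentially optimal exactly when this range is nonempty. A sketch with made-up $(d, f)$ pairs:

```python
import math

def potentially_optimal(intervals, eps=1e-4):
    """Return indices of potentially optimal intervals among (d, f) pairs,
    where d is the interval half-width and f the center value. Interval j is
    kept if some K > 0 satisfies
        f_j - K*d_j <= f_i - K*d_i  for all i, and
        f_j - K*d_j <= f_min - eps*abs(f_min).
    """
    f_min = min(f for _, f in intervals)
    chosen = []
    for j, (d_j, f_j) in enumerate(intervals):
        k_low = (f_j - f_min + eps * abs(f_min)) / d_j   # from the eps condition
        k_high = math.inf
        ok = True
        for i, (d_i, f_i) in enumerate(intervals):
            if i == j:
                continue
            if d_j > d_i:
                k_low = max(k_low, (f_j - f_i) / (d_j - d_i))
            elif d_j < d_i:
                k_high = min(k_high, (f_j - f_i) / (d_j - d_i))
            elif f_j > f_i:            # same size but strictly worse value
                ok = False
                break
        if ok and k_low <= k_high:
            chosen.append(j)
    return chosen

# made-up (d, f) pairs: the best small interval and the large one survive
intervals = [(0.5, 3.0), (0.5, 1.0), (1.5, 2.0), (1.5, 4.0), (4.5, 2.5)]
print(potentially_optimal(intervals))  # → [1, 4]
```

Note how the selection balances exploitation (the small interval with the best value) against exploration (the largest interval), which is the essence of DIRECT.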

The examples given so far are all in one dimension. Indeed, univariate DIRECT can be modified to multivariate DIRECT by sampling the center points of hyper-rectangles. The multivariate DIRECT algorithm can be summarized in four steps:

*Step 1—Normalize*: Convert the decision domain to a unit cube/hyper-cube, depending on the dimension of the decision space (Fig. 6(a)).

*Step 2—Sample and divide*: Evaluate the objective at the center $c$ of the cube and at the points $c\pm\delta e_{i}$, where $\delta$ equals one-third of the side length and $e_{i}$ is the $i$th unit vector in Euclidean space (Fig. 6(b)). Dimension $i$ will be divided first if $i$ satisfies

$$w_{i}=\min\big(f(c+\delta e_{i}),\,f(c-\delta e_{i})\big)\leq w_{k},\qquad \forall\, k$$

i.e., the dimension with the smallest sampled function value is trisected first, so that the best sampled points end up in the largest subrectangles. Take Fig. 6(c), for example: if $\min(f(c_{2}),\,f(c_{3}))<\min(f(c_{4}),\,f(c_{5}))$, the dimension along the $c_{2},c_{3}$ direction will be divided first.

*Step 3—Determine the potentially optimal rectangles*: This is the step that ensures the optimality of the solution. After steps (1) and (2), the parametric space is divided into several rectangles/hyper-rectangles, e.g., five rectangles in the two-dimensional (2D) case (Fig. 6(c)). A rectangle $j$ is said to be potentially optimal if there exist some rate-of-change constant $K>0$ and positive constant $\epsilon$ such that

$$f(c_{j})-K d_{j}\leq f(c_{i})-K d_{i},\qquad \forall\, i \tag{16}$$

$$f(c_{j})-K d_{j}\leq f_{\min}-\epsilon\,|f_{\min}| \tag{17}$$

where $d_{j}$ measures the size of the $j$th rectangle. In the original version of DIRECT [17], $d$ is the distance from the rectangle center $c$ to a vertex, and it is recommended to take $\epsilon=1\times10^{-4}$. Equation (16) ensures that the selected rectangles lie on the lower-right convex hull of the dots, while Eq. (17) prevents the algorithm from becoming too local in its orientation. Observe Fig. 7: all the rectangles are projected onto a plane whose horizontal axis denotes the rectangle size $d$ and whose vertical axis represents the objective value $f$. Each rectangle selected is the one with the smallest objective value in its $d$ class, and the selected rectangles are further divided and sampled by repeating step (2). In Fig. 6(d), $c_{2}$ and $c_{4}$ are the centers of potentially optimal rectangles, which will therefore be further divided.

*Step 4—Iterate*: Repeat steps (2) and (3) until the maximum number of evaluations is reached.

In Fig. 6, we provide 2D and three-dimensional examples. Higher-dimensional cases can be divided and sampled in a similar manner. The pseudo-code of DIRECT summarizes the four steps above.
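To make the four steps concrete, below is a compact, self-contained Python sketch of a DIRECT loop. It is an illustrative simplification, not the paper's implementation: the size measure $d$ is taken as the longest side length, and only one longest side is trisected per selected rectangle.

```python
import math

def direct_minimize(f, bounds, max_evals=300, eps=1e-4):
    """Sketch of the DIRECT loop: normalize, divide, select, iterate.
    `bounds` is a list of (lo, hi) pairs; `f` takes a point in original space."""
    n = len(bounds)
    lo = [b[0] for b in bounds]
    span = [b[1] - b[0] for b in bounds]
    denorm = lambda c: [lo[i] + c[i] * span[i] for i in range(n)]

    # each rectangle: (center in unit cube, side lengths, f at center)
    c0 = [0.5] * n
    rects = [(c0, [1.0] * n, f(denorm(c0)))]
    evals = 1

    def potentially_optimal():
        # rectangle j is kept if some K > 0 satisfies both DIRECT conditions
        f_min = min(r[2] for r in rects)
        chosen = []
        for j, (cj, sj, fj) in enumerate(rects):
            dj = max(sj)
            k_low = (fj - f_min + eps * abs(f_min)) / dj
            k_high = math.inf
            ok = True
            for i, (ci, si, fi) in enumerate(rects):
                if i == j:
                    continue
                di = max(si)
                if dj > di:
                    k_low = max(k_low, (fj - fi) / (dj - di))
                elif dj < di:
                    k_high = min(k_high, (fj - fi) / (dj - di))
                elif fj > fi:          # same size, strictly worse value
                    ok = False
                    break
            if ok and k_low <= k_high:
                chosen.append(j)
        return chosen

    while evals < max_evals:
        for j in sorted(potentially_optimal(), reverse=True):
            c, sides, fc = rects.pop(j)
            i = sides.index(max(sides))    # trisect along a longest side
            delta = sides[i] / 3.0
            child_sides = sides[:]
            child_sides[i] = delta
            for shift in (-delta, 0.0, delta):
                cc = c[:]
                cc[i] += shift
                if shift == 0.0:
                    rects.append((cc, child_sides, fc))  # reuse center value
                else:
                    rects.append((cc, child_sides, f(denorm(cc))))
                    evals += 1
            if evals >= max_evals:
                break

    c_best, _, f_best = min(rects, key=lambda r: r[2])
    return denorm(c_best), f_best

# quick check on a smooth 2-D bowl with minimum at (0.2, -0.3)
x_best, f_best = direct_minimize(
    lambda x: (x[0] - 0.2) ** 2 + (x[1] + 0.3) ** 2,
    [(-1.0, 1.0), (-1.0, 1.0)])
```

Even this simplified sketch exhibits the essential DIRECT behavior: the largest rectangle is always a candidate for division (exploration), while the best small rectangles are refined (exploitation).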

### Multi-Objective DIRECT.

The problem defined in Sec. 2 involves simultaneous optimization of two incommensurable and potentially conflicting objectives. Intuitively, MOO could be handled by forming an alternative problem with a composite objective function using a weighted-sum approach. However, weighted-sum methods have difficulty in selecting proper weighting factors. In order to obtain damage identification results independent of how the algorithm is implemented, a posteriori preference articulation is usually preferred, which allows a greater separation between the algorithm and the decision-making process [22]. Accordingly, instead of the single optimum produced by weighted-sum methods, MOO generates a set of alternative solutions explicitly exhibiting the tradeoff between different objectives. Because of these advantages, the recent decade has seen a few attempts to extend DIRECT to multi-objective optimization problems. Wang et al. [23] proposed a hybrid algorithm which uses a multi-objective DIRECT algorithm to generate the population for an evolutionary algorithm; in their study, a rank strategy sorts the rectangles into fronts in terms of their objective values, and the rank then replaces $f(c)$ in Eqs. (16) and (17) to identify the potentially optimal rectangles. More recently, Al-Dujaili and Suresh [24] treated $d$ as an additional objective value and obtained the potentially optimal rectangles by calculating the nondominated Pareto front. Wong et al. [25] replaced $f(c)$ in Fig. 7 with the hypervolume indicator and selected rectangles on the upper-right Pareto front in the *hypervolume*–*d* plane.

In this research, the rectangles are ranked by nondominated sorting of their objective values: the rectangles on the first (nondominated) front are assigned a rank index $R=1$, the second front has a rank index $R=2$, and so forth. The potentially optimal rectangles can then be obtained by projecting the rectangles onto the $R$–$d$ plane and extracting the lower-right Pareto front, as shown in Fig. 9(a). In the case of $m$-objective minimization, a rectangle $j$ is said to be potentially optimal if there exists some rate-of-change constant $K>0$ such that

$$R\big(f_{1}(c_{j}),\ldots,f_{m}(c_{j})\big)-K d_{j}\leq R\big(f_{1}(c_{i}),\ldots,f_{m}(c_{i})\big)-K d_{i},\qquad \forall\, i \tag{18}$$

where $R(\cdot)$ is a nondominated sorting operator in the minimization sense which returns the rank index. Recall that in the original implementation of DIRECT, $d$ is the distance between the rectangle center and a vertex. In this research, $d$ is instead set to the length of the longest side of the rectangle, which allows the algorithm to group more rectangles at the same size [21]. Compared to the original DIRECT algorithm, we omit the second condition (Eq. (17)) because, by considering the lower-right Pareto front, the rectangles on the lower-right convex hull that have the same $R$ but smaller $d$ are automatically filtered out (Fig. 9(a)), which resembles the effect of Eq. (17).
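The rank-based selection can be sketched as follows, using toy objective values and sizes (assumed purely for illustration): rectangles are first ranked by nondominated sorting, and those on the lower-right Pareto front of the $R$–$d$ plane are then retained.

```python
def dominates(a, b):
    """True if objective vector a dominates b in the minimization sense."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_rank(objs):
    """Rank index R for each objective vector (rank 1 = nondominated front)."""
    rank = [0] * len(objs)
    remaining = set(range(len(objs)))
    r = 1
    while remaining:
        front = [j for j in remaining
                 if not any(dominates(objs[i], objs[j])
                            for i in remaining if i != j)]
        for j in front:
            rank[j] = r
        remaining -= set(front)
        r += 1
    return rank

def select_rectangles(sizes, ranks):
    """Lower-right Pareto front in the R-d plane: keep rectangle j unless some
    other rectangle is at least as large and at least as well ranked (and
    strictly better in one of the two)."""
    n = len(sizes)
    return [j for j in range(n)
            if not any(sizes[i] >= sizes[j] and ranks[i] <= ranks[j] and
                       (sizes[i] > sizes[j] or ranks[i] < ranks[j])
                       for i in range(n) if i != j)]

# toy data: two objective values per rectangle center, plus sizes d
objs  = [(0.2, 0.9), (0.4, 0.3), (0.8, 0.95), (0.9, 0.8), (0.3, 0.4)]
sizes = [1/3, 1/3, 1.0, 1/9, 1/9]
ranks = nondominated_rank(objs)
print(ranks, select_rectangles(sizes, ranks))  # → [1, 1, 2, 2, 1] [0, 1, 2]
```

In this toy example the large rectangle survives despite its worse rank (exploration), while the last small rectangle, which shares rank 1 with larger rectangles, is filtered out, mirroring the Eq. (17)-like effect described above.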

A major difference between the proposed strategy and the strategy of Wang et al. [23] is that our strategy explores the concave rectangles as well (Fig. 9). Generally, a low-ranked solution (large rank index) is dominated by some of the high-ranked (small rank index) solutions, but is meanwhile nondominated with respect to other high-ranked solutions. As a result, a portion of low-ranked rectangles may lead to optimal solutions just as certain high-ranked rectangles do. Examining the lower-right Pareto front helps to retain some low-ranked rectangles with potential, and thus gains an edge over NS-DIRECT [23] in terms of exploration. Moreover, the proposed method overcomes a drawback of MO-DIRECT [24], as we concentrate on only a small portion of the rectangles; MO-DIRECT, on the contrary, incorporates $d$ as an additional objective, which causes most of the rectangles to fall into the potentially optimal group. Finally, compared with MO-DIRECT-hv [25], the proposed method is computationally more efficient: in MO-DIRECT-hv, the hypervolume needs to be calculated iteratively, and hypervolume calculation is a computationally expensive NP-hard problem. In Sec. 4, we evaluate the performance of the proposed algorithm on several damage cases and carry out a comparison between the four above-mentioned multi-objective DIRECT techniques.

### A Posteriori Articulation.

In optimization, when a decision maker expresses a subjective judgment before or during the optimization, a single biased solution is obtained. In engineering practice, however, it is quite common that several solutions are of interest. As mentioned in Sec. 3.2, it is preferable that the problem-solving and decision-making processes be independent of each other, so that the problem-solving approach can remain unchanged when user judgments differ. Thus, multi-objective DIRECT optimization is adopted in this research, which provides a set of solutions as damage identification candidates. Nevertheless, to obtain a single satisfactory candidate that compromises between objectives, an a posteriori articulation should be incorporated into the formulation as well.

To that end, the sparsest solution in the optimal set is selected:

$$p=\arg\min_{i}\ \left(\big\Vert \boldsymbol{\alpha}^{(i)}\big\Vert_{0}+\big\Vert \boldsymbol{\alpha}^{(i)}\big\Vert_{1}\right),\qquad \boldsymbol{\alpha}^{(i)}\in S_{e} \tag{19}$$

where $i$ is the index of a solution in the optimal set $S_{e}$, and $p$ is the index of the solution selected. In Eq. (19), the $l_{0}$-norm emphasizes the number of damaged elements, while the $l_{1}$-norm emphasizes finding the solution with the least amplitude. Consequently, a concise and interpretable sparse solution is selected in this study as the final damage identification result. Since the a posteriori articulation is independent of the algorithm formulation, it can be modified at any time according to users' needs.
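A minimal sketch of this selection step follows; the equal weighting of the $l_0$ and $l_1$ terms and the nonzero threshold are assumptions for illustration, with the exact criterion defined by Eq. (19):

```python
import numpy as np

def a_posteriori_select(solutions, tol=1e-3):
    """Pick from the optimal set the candidate with the fewest damaged
    elements (l0 pseudo-norm, counted above `tol`) plus the least total
    severity (l1 norm). Equal weighting of the two terms is an assumption."""
    scores = [np.count_nonzero(np.abs(a) > tol) + np.sum(np.abs(a))
              for a in solutions]
    return int(np.argmin(scores))

# hypothetical optimal set S_e for a 5-element structure
S_e = [np.array([0.09, 0.00, 0.05, 0.00, 0.00]),
       np.array([0.08, 0.01, 0.05, 0.01, 0.00]),
       np.array([0.10, 0.00, 0.06, 0.02, 0.00])]
p = a_posteriori_select(S_e)
print(p)  # → 0: the sparsest, lowest-amplitude candidate is selected
```

Because this step is decoupled from the optimizer, a user could swap in a different scoring rule without rerunning the search.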

## Case Studies

In this section, we apply the proposed multi-objective DIRECT algorithm to five damage identification cases, as summarized in Table 1. To facilitate easy reproduction of the case analyses, a benchmark cantilever beam model with a varying number of elements and varying damage scenarios is adopted for demonstration purposes. Cases 1 and 2 serve as baseline scenarios. Cases 3, 4, and 5 have increasing difficulty levels in terms of damage identification and are employed to further investigate the factors that may affect the performance and outcome of the new algorithm. Algorithm comparisons are carried out subsequently.

The Young's modulus of the beam is 69 GPa, the length per element is set as 10 m, and the cross-sectional area is 1 m^{2}. The measured mode shape and natural frequencies used are simulated directly from the finite element models and are subject to ±0.15% standard Gaussian uncertainties [14,15,27]. Unless noted otherwise, the measurements available to the algorithm are limited to the first five natural frequencies and the second mode shape for all cases. It is worth mentioning that the algorithm knows neither the number of damaged elements nor their locations and severities. As discussed in Sec. 1, an important advantage of multi-objective DIRECT over other optimization techniques is that it has significantly fewer parameters to tune. The only parameter that needs to be specified is the maximum number of evaluations, which is set to 30,000 as this is adequate for all cases to converge. The average computational time for 30,000 function evaluations is 79 s in MATLAB on a 2.40 GHz Xeon E5620 computer.
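The "measured" data can be simulated along these lines with a small numpy sketch of a cantilever Euler-Bernoulli beam model. The Young's modulus, element length, and ±0.15% noise level follow the text above; the cross-section inertia and mass density are assumptions added for illustration only.

```python
import numpy as np

def element_matrices(L, EI, rhoA):
    """Euler-Bernoulli beam element stiffness and consistent mass matrices
    (DOFs: deflection and rotation at each end node)."""
    ke = (EI / L**3) * np.array([[  12,    6*L,   -12,    6*L],
                                 [ 6*L,  4*L*L,  -6*L,  2*L*L],
                                 [ -12,   -6*L,    12,   -6*L],
                                 [ 6*L,  2*L*L,  -6*L,  4*L*L]])
    me = (rhoA * L / 420) * np.array([[  156,   22*L,    54,  -13*L],
                                      [ 22*L,  4*L*L,  13*L, -3*L*L],
                                      [   54,   13*L,   156,  -22*L],
                                      [-13*L, -3*L*L, -22*L,  4*L*L]])
    return ke, me

def natural_frequencies(n_el, L_el, EI, rhoA, alpha=None, n_modes=5):
    """First few natural frequencies (Hz) of a clamped-free beam whose i-th
    element stiffness is scaled by (1 - alpha_i)."""
    alpha = np.zeros(n_el) if alpha is None else np.asarray(alpha)
    ndof = 2 * (n_el + 1)
    K = np.zeros((ndof, ndof))
    M = np.zeros((ndof, ndof))
    ke, me = element_matrices(L_el, EI, rhoA)
    for e in range(n_el):
        s = slice(2 * e, 2 * e + 4)
        K[s, s] += (1.0 - alpha[e]) * ke     # damage reduces stiffness
        M[s, s] += me
    K, M = K[2:, 2:], M[2:, 2:]              # clamp node 0 (w = theta = 0)
    lam = np.sort(np.real(np.linalg.eigvals(np.linalg.solve(M, K))))
    return np.sqrt(lam[:n_modes]) / (2 * np.pi)

# E = 69 GPa and 10 m elements follow the text; the cross-section inertia
# (unit square, I = 1/12 m^4) and density rho*A are assumptions
n_el, EI, rhoA = 15, 69e9 / 12.0, 2700.0
alpha = np.zeros(n_el)
alpha[2], alpha[13] = 0.09, 0.05             # case-1 damage pattern
f_healthy = natural_frequencies(n_el, 10.0, EI, rhoA)
f_damaged = natural_frequencies(n_el, 10.0, EI, rhoA, alpha)
rng = np.random.default_rng(0)
f_meas = f_damaged * (1 + 0.0015 * rng.standard_normal(f_damaged.size))
```

Because the damaged stiffness matrix differs from the healthy one by a positive semidefinite loss, every natural frequency can only decrease, which is what the identification algorithm exploits.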

### Case 1 (15 Elements, 2 Damaged Elements).

We first carry out the case study on a 15-element cantilever beam. The damage occurs at the third and 14th elements with severities $\alpha_{3}=0.09$ and $\alpha_{14}=0.05$, respectively. A set of optimal candidates is obtained (Fig. 10(a)) as the result of the tradeoff between objectives. Each solution obtained corresponds to one possible damage situation. Figure 10(b) shows the mean and variance of the damage indices for each element. The mean value is then compared to the true damage indices in Fig. 10(c). As illustrated, the proposed approach successfully identifies both the location and severity of the damage. The mean value of the identification results bears small errors in the fifth and seventh elements. By using the a posteriori articulation introduced in Sec. 3.3, the damage is better approximated (Fig. 10(d)). To better demonstrate the selection of potentially optimal rectangles in the proposed method, several iterations of the multi-objective DIRECT search in the objective space are presented in Fig. 11, where each dot corresponds to one rectangle or a group of rectangles; the vertical axis is the rank index $R$ of the rectangles, and the horizontal axis is the rectangle size $d$.

### Case 2 (15 Elements, 2 Damaged Elements).

We then carry out another case study on the same 15-element cantilever beam. Different from case 1, the damage occurs at the eighth and 11th elements with severities $\alpha_{8}=0.02$ and $\alpha_{11}=0.08$, respectively. Together with case 1, this allows us to investigate how the location and severity of the damage affect the results while other factors remain unchanged. Similarly, a set of optimal candidates is obtained (Fig. 12(a)), which may belong to the same solution niche owing to the small variance (Fig. 12(b)). As illustrated in Fig. 12(c), the location and severity of the induced damage are accurately identified with only small errors, which are further reduced through the a posteriori articulation, as depicted in Fig. 12(d). For both cases 1 and 2, the proposed algorithm exhibits satisfactory performance regardless of the locations and severities. Figure 13 gives the convergence history of DIRECT as applied to case 2, where the horizontal axis is the number of function evaluations and the vertical axis is the mean MDLAC value of the solutions in the current optimal set. Unlike gradient-based optimization, both MDLAC curves fluctuate rather than decrease monotonically. This enables the algorithm to escape from local optima: specifically, when an MDLAC curve ascends, new areas of the decision space are being explored and sampled.

### Case 3 (15 Elements, 3 Damaged Elements).

Different from case 1, in case 3 an additional damage is introduced at the ninth element with severity $\alpha_{9}=0.03$. The purpose of this test is to examine the effect of the number of damaged elements on identification accuracy. As can be seen from Figs. 14(c) and 14(d), the damage locations are detected accurately. The results on damage severities are also very good, with only small discrepancies between the actual and identified severities. The optimal solutions apparently fall into a “local minimum” of the original objective space, because the objective space is deformed by the measurement errors introduced. On the other hand, because of the center-sampling nature of the DIRECT algorithm, the more damage occurrences there are, the less sparse the damage index vector is, which adds extra computational burden to the algorithm. Indeed, Fig. 15 shows that the algorithm takes longer to converge than in case 2.

### Case 4 (20 Elements, 2 Damaged Elements).

Case 4 is derived from case 2, where a 20-element beam is used rather than a 15-element beam. From an optimization perspective, this is equivalent to adding five variables. The results on locations and severities are still generally good even with more variables. As shown in Fig. 16(b), the solutions have larger variance compared to those of case 2 (Fig. 12(b)). Due to the enlarged search space, the solutions also tend to be less clustered (Fig. 16(a)). Nevertheless, the proposed algorithm locates the induced damage with very good accuracy (Fig. 16(c)), and the errors are further reduced by the proposed a posteriori articulation. For an ideal model without uncertainty, adding variables alone would not change the essence of the problem; in other words, if the optimization process lasts long enough, the quality of the final solutions would not deteriorate. However, errors and uncertainties are inevitable in engineering practice. Under realistic circumstances, more variables require not only more computational time but also more measurements; otherwise, the accuracy of the results is expected to decrease. That is why the optimization results of case 4 reported in Fig. 16 are not as good as those of case 2.

### Case 5 (30 Elements, 3 Damaged Elements).

In case 5, ten more elements and one additional damage are added to the damage scenario given in case 4. As discussed in cases 3 and 4, both more damaged elements and more variables affect the quality of the solution negatively. Therefore, in case 5 the optimal solutions suffer the largest errors among all numerical test cases conducted in this study (Fig. 17). The results, however, can still be considered acceptable. Moreover, when the first eight natural frequencies are used instead of five, the identification results are improved considerably (Figs. 17(e) and 17(f)), even though the problem is still ill-posed.

In summary, the proposed approach has accurately identified both locations and severities for all numerical test cases given limited and contaminated data.

### Algorithm Comparison.

In this section, we compare the proposed method with the other multi-objective DIRECT techniques mentioned in Sec. 3.2, namely NS-DIRECT, MO-DIRECT, and MO-DIRECT-hv, on case 3 (described in Sec. 4.3). The main difference between these algorithms lies in the strategies used to determine the potentially optimal rectangles: the strategies used in our proposed approach, NS-DIRECT, MO-DIRECT, and MO-DIRECT-hv can be regarded as a Pareto front strategy, a nondominated strategy, a convex hull strategy, and a hypervolume strategy, respectively. Different from Sec. 4.3, for comparison purposes the first nine natural frequencies are used, and the uncertainties are omitted. Table 2 records the predicted results compared to the true damage indices for the three damaged elements. All predictions made by the proposed algorithm rank the best among the four algorithms. The performance of the algorithms is also depicted in Fig. 18 as a spider plot, where each vertex of the polygon represents the error between the predicted and true damage index for a specific element. As can be seen in Fig. 18, among all the strategies, the black polyline, which represents the Pareto front strategy used in our proposed approach, has the best overall performance.

## Conclusions

This research aims at solving the fundamental challenge in damage identification using limited dynamic response information and heterogeneous measurements. With a vibration-based approach as an illustrative example, we formulate a multi-objective optimization problem that can be solved to yield damage location and severity. An enhanced multi-objective DIRECT, which is a global deterministic optimization technique, is formulated. The algorithm has fewer parameters compared to stochastic optimization techniques. Due to its deterministic nature, the identification results are conclusive and repeatable. A new sampling scheme is established in this research which significantly increases the efficiency in minimizing the error between measurements and predictions. The enhanced DIRECT algorithm is particularly suited to solving for unknowns that are sparse in nature, since in practical situations structural damage affects only a small region. Comprehensive case analyses are performed that demonstrate the effectiveness of the new method. This method can be applied to a variety of damage identification problems.

## Funding Data

Air Force Office of Scientific Research (Grant No. FA9550-14-1-0384).