This paper deals with the distributed fault detection and isolation problem of uncertain, nonlinear large-scale systems. The proposed method targets applications where the computation requirements of a full-order failure-sensitive filter would be prohibitively demanding. The original process is subdivided into low-order interconnected subsystems with, possibly, overlapping states. A network of diagnostic units is deployed to monitor, in a distributed manner, the low-order subsystems. Each diagnostic unit has access to a local and noisy measurement of its assigned subsystem's state, and to processed statistical information from its neighboring nodes. The diagnostic algorithm outputs a filtered estimate of the system's state and a measure of statistical confidence for every fault mode. The layout of the distributed failure-sensitive filter achieves significant overall complexity reduction and design flexibility in both the computational and communication requirements of the monitoring network. Simulation results demonstrate the efficiency of the proposed approach.

Introduction

The majority of contemporary industrial and commercial control systems are composed of a large number of spatially distributed feedback modules with heterogeneous sensors, actuators, and controllers that exchange information over a band-limited communication network embedded within the system. These large-scale systems are characterized by high-dimensional state-spaces and nonlinear dynamics. Typical applications are water distribution networks, power grids, automated highway systems, swarms of unmanned aerial vehicles, and environmental control systems, to name a few. Large-scale systems are much more vulnerable to faults, since the effects of a single malfunction in an individual component may rapidly diffuse to the entire system due to the interconnection of the various subcomponents.

Availability, dependability, and resiliency are becoming major design goals for large-scale technological systems due to stringent economic, ecological, and safety demands. These attributes are of major importance, primarily for safety-critical systems, e.g., airplanes, automobiles, and nuclear reactors, since they ensure public safety. Therefore, there is a growing need for reliable real-time monitoring and supervision, especially in the case of safety-critical systems. Fault diagnosis (FD) describes the dual objective of detecting the occurrence of a fault (detection) and identifying it (identification or isolation). A timely diagnosis of a fault mode may improve the system's availability and maintainability by avoiding down-times, breakdowns, and catastrophic failures.

Research in the field of FD has attracted significant attention since the beginning of the 1970s. The significant majority of existing FD methods [1–6] have a centralized architecture, in the sense that the sensor measurements are collected and the diagnostic algorithm is executed by a single processing unit. Centralized FD is considered a mature field that has established reliable solutions to many engineering applications. However, the applicability of this traditional approach is limited to concentrated low-order systems.

Modern processes involve high-dimensional state-spaces as well as highly nonlinear dynamics. In the case of large-scale and spatially distributed systems, centralized FD becomes ill-suited. Every monitoring system has certain limitations in terms of computational power and communication bandwidth. When the dimensionality and complexity of the system increase, it is likely that these limitations will not be satisfied by a centralized configuration. The online monitoring of a high-dimensional system would require extensive computations from the central processing unit. Processes with geographically remote subcomponents necessitate long-distance, energy-demanding broadcasts or complex multihop routing protocols to transmit information to the central fusion center. In both cases, a centralized architecture exhibits poor scalability.

In the literature, the majority of distributed fault detection and identification methods are developed for discrete-event systems and for multiprocessor computing applications [7–9]. A growing interest in the development of distributed fault detection algorithms has also been reported by the wireless sensor networks community [10]. In this work, our attention is focused on model-based FD methods for dynamic systems that utilize a mathematical model of the process. A rudimentary classification of existing distributed model-based FD methodologies can take place based on the data type that is exchanged between the nodes of the diagnostic system. The diagnostic nodes (DNs) can exchange: raw measurements of the interconnected states [11–13], state estimates [14,15], or fault signatures [16]. The most prominent work on distributed, observer-based FD for nonlinear dynamic systems has been reported by Ferrari et al. [11] and Boem et al. [17]. The authors apply overlapping decomposition techniques to subdivide the monolithic process into a set of reduced-order subsystems. Each subsystem is monitored by a local nonlinear observer. Seminal work in distributed estimation-based FD using Kalman filtering is reported in Ref. [18]. The algorithm is based on the distributed version of the Kalman filter (KF) established by Olfati-Saber [19,20]. The KF is restricted to linear systems, while linearization (extended KF) leads to high false alarm rates [21].

Foundational work on estimation-based FD for nonlinear systems that employ a large number of correlated sensors is introduced in Refs. [22,23]. The authors combine a distributed particle filtering algorithm for state estimation with fused hypothesis testing through likelihood tests to determine the occurrence of fault modes. The proposed method is mainly geared toward relatively low-order systems monitored by a high number of interconnected sensors. The layout of the algorithm does not accommodate subdivisions of the original process. The classical likelihood tests that are applied require a bank of estimators equal in number to the fault modes. Such replication of the state is unsuitable for large-scale systems. A full-order distributed failure-sensitive filter has also been introduced by Noursadeghi and Raptis [24,25]. In this scheme, a detection network is assigned to monitor the entire state of the monolithic system using only local measurements. Similar to the work of Cheng et al., the authors did not consider any form of subdivision in the dynamics of the original process. Instead, the algorithm introduced in Refs. [24,25] provides an estimate of the entire state of the monolithic system.

The particle filter (PF) is an ideal estimator for fault diagnosis since it avoids linearity and Gaussian noise assumptions. A comprehensive analysis of distributed PF algorithms is given by Hlinka [26,27] and Mohammadi [28,29]. A distributed PF scheme for fault diagnosis that accounts for system decomposition is reported in Ref. [30]. The authors propose a hybrid modeling approach where every potential fault is treated as a system mode. This approach assumes that the transition probabilities between the fault modes are known a priori. This probabilistic information may not be available in most real-life applications.

In this work, we present a distributed, model-based, and sequential fault diagnosis methodology for large-scale, stochastic nonlinear systems that are subject to multiple fault modes. This approach targets systems whose state dimension is significantly large (on the order of 10^2 states and higher). A distributed version of the particle filtering method will serve as the foundation of the derived diagnostic algorithms. We introduce a reduced-order fault diagnostic algorithm that allows the subdivision of the original process dynamics into low-order interconnected subsystems with state overlap. A DN is assigned to monitor every partition of the monolithic system and triggers alarm indicators based on its local observations and information exchange with neighboring units. Each local failure sensitive filter outputs an estimate of the subsystem's state vector and the probabilities of failure of the local fault modes.

Our reduced-order FD technique achieves a dramatic decrease in the computational complexity of the original problem and provides significant design flexibility in the layout of the algorithm. The PF is an ideal estimator since it eliminates the complex Lyapunov arguments that observer-based methods require to guarantee convergence. A binary update rule is used to repopulate the particles and estimate the system modes without the need for transition probabilities. The failure sensitive filter can simultaneously detect and identify faults without the need for a bank of estimators. The proposed algorithm takes advantage of the decentralized architecture and computational strength of modern embedded systems such as wireless sensor networks and multicore processors.

This paper is organized as follows: A brief description of the PF algorithm is presented in Sec. 2. The synthesis of a centralized PF fault diagnosis algorithm is outlined in Sec. 3. The centralized algorithm serves as a benchmark framework for its distributed counterpart. The reduced-order distributed version of the failure sensitive filter is presented in Sec. 4. The performance of the proposed methodology is evaluated in Sec. 5 via numerical simulations. Finally, concluding remarks are given in Sec. 6.

Centralized Particle Filtering

The filtering problem is formulated based on the discrete-time state-space approach. The purpose is to estimate the state of the system by using a sequence of noisy measurements. Consider a time-dependent state vector x(k) ∈ ℝ^{nx}, where k ∈ ℤ+ is the time index. The state-transition model of the state x(k) is defined according to
(1) x(k) = f(x(k−1), v(k−1))
where f: ℝ^{nx} × ℝ^{nv} → ℝ^{nx} is a known, nonlinear function and v(k) ∈ ℝ^{nv} stands for the system's process noise. At time step k, the measurement equation of the state x(k) is expressed by
(2) z(k) = h(x(k)) + ω(k)
where z(k) ∈ ℝ^{nz} represents the measurement vector, h: ℝ^{nx} → ℝ^{nz} is a known nonlinear function, and ω(k) ∈ ℝ^{nω} stands for the measurement noise. It is assumed that both the process noise v(k) and the measurement noise ω(k) are white and independent with known probability density functions (pdfs). From a Bayesian perspective, the objective is to recursively quantify some degree of belief in the state x(k), given the measurement data z(1:k) up to time k. The belief is expressed by the calculation of the posterior pdf p(x(k)|z(1:k)). The calculation of the posterior density p(x(k)|z(1:k)) allows the computation of various measures of the state x(k), such as the minimum mean square error estimate
(3) x̂(k) = E[x(k)|z(1:k)] = ∫ x(k) p(x(k)|z(1:k)) dx(k)
The PF is a sequential Monte Carlo method that uses a finite set of “particles” to represent probability density functions [31]. The basic idea is to represent the non-Gaussian posterior pdf p(x(k)|z(1:k)) by a set of randomly drawn particles x^i(k) and corresponding weights w^i(k). Using the set of particles and weights {x^i(k), w^i(k)}_{i=1}^{Ns}, the posterior can have the following discrete approximation:
(4) p(x(k)|z(1:k)) ≈ Σ_{i=1}^{Ns} w^i(k) δ(x(k) − x^i(k))
where δ(·) denotes the multivariate Dirac delta function. As the number of particles becomes very large, the sequential Monte Carlo representation approaches the analytical description of the posterior. Using the representation in Eq. (4), one can obtain various estimates of x(k). For example, the minimum mean square error estimate is approximated as
(5) x̂(k) ≈ Σ_{i=1}^{Ns} w^i(k) x^i(k)

Different variations of the PF algorithm exist depending on the choice of the importance density function and the resampling step. The most standard form of the PF algorithm is the sequential importance resampling (SIR) filter. The SIR filter forms the foundation for some well-known PFs, including the bootstrap filter [32], the auxiliary PF [33], and the regularized PF [33]. These PFs are derived using a suboptimal choice of the proposal pdf q(x(k)|x(k−1), z(k)).

The most frequently used and easiest-to-implement SIR algorithm is the bootstrap filter, which is employed in this work. In this filter, the particles are propagated using the state-transition density p(x(k)|x(k−1)) as the importance pdf. In this case, the weight update equation simplifies to
(6) w^i(k) ∝ w^i(k−1) · p(z(k)|x^i(k))
When the measurement noise ω is Gaussian with zero mean and covariance matrix Σω, the likelihood function for each particle is calculated by
(7) p(z(k)|x^i(k)) = N(e^i(k); 0, Σω)

where N(e^i(k); 0, Σω) is the normal density with zero mean and covariance matrix Σω evaluated at e^i(k) = z(k) − h(x^i(k)), the prediction error of the ith particle. For a detailed description of various PF algorithms and resampling techniques, the reader is referred to Ref. [33]. The pseudocode of the bootstrap filter and of the resampling step are provided in Tables 1 and 2, respectively. The block diagram of the bootstrap algorithm is shown in Fig. 1.

Fig. 1
Block diagram of the bootstrap PF
Table 1

Pseudocode of the BOOTSTRAP PF algorithm [31]

function BOOTSTRAP PF
Inputs: x^i(k−1), w^i(k−1), z(k)
Outputs: x^i(k), w^i(k)
Required: RESAMPLE
1: for i = 1:Ns do
2:   x^i(k) = f(x^i(k−1), v^i(k−1)) ▷ Particles update
3:   w^i(k) = w^i(k−1) · p(z(k)|x^i(k)) ▷ Weights update
4: end for
5: w^i(k) = w^i(k) / Σ_{j=1}^{Ns} w^j(k) ▷ Weight normalization
6: [{x^i(k), w^i(k)}_{i=1}^{Ns}] = RESAMPLE[{x^i(k), w^i(k)}_{i=1}^{Ns}] ▷ Resampling
7: x̂(k) = Σ_{i=1}^{Ns} w^i(k) x^i(k) ▷ Minimum mean-square error estimate
Table 2

Pseudocode of the RESAMPLING algorithm [33]

function RESAMPLE
Inputs: x^i(k), w^i(k)
Outputs: x^{j*}(k), w^j(k)
1: c(1) = 0 ▷ Initialize the cumulative distribution function (CDF)
2: for i = 2:Ns do
3:   c(i) = c(i−1) + w^i(k) ▷ Construct CDF
4: end for
5: i = 1 ▷ Start at the bottom of the CDF
6: u(1) = U[0, Ns^{−1}] ▷ Draw a starting point
7: for j = 1:Ns do
8:   u(j) = u(1) + Ns^{−1}(j−1) ▷ Move along the CDF
9:   while u(j) > c(i) do
10:    i = i + 1
11:  end while
12:  x^{j*}(k) = x^i(k) ▷ Assign samples
13:  w^j(k) = Ns^{−1} ▷ Assign weights
14: end for
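As a complement to Tables 1 and 2, the following is a minimal Python/NumPy sketch of one bootstrap PF iteration with systematic resampling. The state-transition function f, measurement function h, process-noise sampler, and covariance Sigma_w are user-supplied placeholders (assumptions of this sketch, not part of the original formulation).

```python
import numpy as np

def systematic_resample(particles, weights):
    """Systematic resampling (Table 2): move along the CDF with stride 1/Ns."""
    Ns = len(weights)
    cdf = np.cumsum(weights)
    cdf[-1] = 1.0                                               # guard against round-off
    u = np.random.uniform(0.0, 1.0 / Ns) + np.arange(Ns) / Ns   # stratified positions
    idx = np.searchsorted(cdf, u)                               # CDF inversion
    return particles[idx], np.full(Ns, 1.0 / Ns)

def bootstrap_pf_step(particles, weights, z, f, h, sample_noise, Sigma_w):
    """One bootstrap PF iteration (Table 1) for a new measurement z."""
    # Particles update: propagate every particle through the transition model.
    particles = np.array([f(p, sample_noise()) for p in particles])
    # Weights update: Gaussian likelihood of the prediction error, Eq. (7).
    e = z - np.array([h(p) for p in particles])
    inv_S = np.linalg.inv(Sigma_w)
    weights = weights * np.exp(-0.5 * np.einsum('ij,jk,ik->i', e, inv_S, e))
    weights = weights / np.sum(weights)                         # normalization
    particles, weights = systematic_resample(particles, weights)
    x_hat = np.sum(weights[:, None] * particles, axis=0)        # MMSE estimate
    return particles, weights, x_hat
```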

Centralized Particle Filtering Fault Diagnosis

This work extends the methodology introduced in Ref. [34] from one-dimensional fault-growth models to dynamic state-space models of nonlinear processes, introducing the centralized particle filtering fault diagnosis (CPFFD) algorithm. The CPFFD algorithm generates two outputs. The first is the system's state estimate obtained from a sequence of noise-infested measurements. The second output is a statistical characterization of the occurrence of each fault mode that can trigger fault alarms.

Table 3

Pseudocode of the CPFFD algorithm

function CPFFD
Inputs: X^i(k−1), w^i(k−1), u^i(k−1), Z(k)
Outputs: X^i(k), w^i(k), {b̂_2^j(k)}_{j=1}^{M}
Required: RESAMPLE
1: for i = 1:Ns do
2:   X^i(k) = F(X^i(k−1), u^i(k−1), V(k−1)) ▷ Particles update
3:   w^i(k) = w^i(k−1) · p(Z(k)|X^i(k)) ▷ Weights update
4: end for
5: w^i(k) = w^i(k) / Σ_{j=1}^{Ns} w^j(k) ▷ Weight normalization
6: [{X^i(k), w^i(k)}_{i=1}^{Ns}] = RESAMPLE[{X^i(k), w^i(k)}_{i=1}^{Ns}] ▷ Resampling
7: X̂(k) = Σ_{i=1}^{Ns} w^i(k) X^i(k) ▷ Minimum mean-square error estimate
8: for j = 1:M do
9:   b̂_2^j(k) = E[b_2^j(k)|Z(k)] ▷ Probabilities of failure
10: end for
Consider the uncertain, nonlinear, and discrete-time dynamic system S described by the following state-space model:
(8)

where the terms x(k) ∈ ℝ^{nx}, u(k) ∈ ℝ^{nu}, and z(k) ∈ ℝ^{nz} refer to the state, input, and measurement vector, respectively; f: ℝ^{nx} → ℝ^{nx} and h: ℝ^{nx} → ℝ^{nz} denote the known nonlinear functions of the system's healthy dynamics and measurement model, while v(k) ∈ ℝ^{nx} and ω(k) ∈ ℝ^{nz} stand for the process and measurement noise sequences, respectively.

The monolithic system's healthy dynamics are subjected to M potential fault modes described by the nonlinear functions {g_j(x(k), u(k))}_{j=1}^{M}, with g_j: ℝ^{nx} × ℝ^{nu} → ℝ^{nx}. The term β(k − k_0^j) is a scalar function representing the time profile of fault mode j, occurring at some unknown time k_0^j. We can consider both abrupt (step-like) or incipient (exponential-like) fault modes, defined as
(9)

It is assumed that the system is initiated from the healthy mode (β(·)=0 at k = 0). Due to the random occurrence of the possible faults, the monolithic system may be viewed as a hidden Markov model, where the transition probabilities between the different system modes are unknown.
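For illustration, a commonly used parameterization that is consistent with the step-like and exponential-like descriptions above is sketched below; the decay rate α_j of the incipient profile is an assumed tuning parameter, not a quantity specified here.

```latex
% Abrupt (step-like) and incipient (exponential-like) fault time profiles (illustrative sketch)
\beta(k-k_0^j) =
\begin{cases} 0, & k < k_0^j \\ 1, & k \ge k_0^j \end{cases}
\qquad\text{or}\qquad
\beta(k-k_0^j) =
\begin{cases} 0, & k < k_0^j \\ 1 - e^{-\alpha_j (k-k_0^j)}, & k \ge k_0^j \end{cases},
\quad \alpha_j > 0.
```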

The proposed failure sensitive filter embeds the dynamics of the monolithic system S given in Eq. (8), as well as a binary variable (one for every potential fault) that identifies the changes in the process dynamics expressed by the terms β(k − k_0^j). Hence, the binary state vector b^j(k) = [b_1^j(k) b_2^j(k)]^T, with b_1^j, b_2^j ∈ {0, 1} and j = 1, …, M, is introduced to estimate the occurrence of each fault mode. More specifically, b_1^j(k) = 1 indicates the absence of failure mode j, while b_2^j(k) = 1 denotes that fault mode j has been detected in the system. The continuous-valued states are coupled with the discrete-valued binary fault occurrence estimates, resulting in a hybrid model.

The operating condition of the system (normal or faulty), as well as the detection and isolation of the faults, is determined by employing a particle filtering scheme for the statistical characterization of both the binary and the continuous-valued states as new measurements are received. Hence, the state vector that is used by the PF algorithm is the combination of the continuous state vector and the binary state vectors, X(k) = [(x^c(k))^T (b^1(k))^T ⋯ (b^M(k))^T]^T ∈ ℝ^{nx+2M}, where x^c(k) ≡ x(k). Therefore, the state-transition dynamics of X are described by
(10)

where ṽ(k) ∈ ℝ^{nx} and ω̃(k) ∈ ℝ^{nz} are approximations of the failure sensitive filter's process and measurement noise, respectively. These noise sequences should be as close as possible to the actual ones (v(k) and ω(k)). The nonlinear function Φ: ℝ² → {[0 1]^T, [1 0]^T} represents the evolution of the binary states, driven by the independent and identically distributed (i.i.d.) uniform white noise n^j(k). The function Φ(·) is defined such that the previous state b^j(k−1) is randomly excited at each time step by n^j(k−1). The resulting vector in ℝ² is assigned to one of the binary states (normal/faulty operating condition) based on the distance of the perturbed vector b^j(k−1) + n^j(k−1) to the coordinates [0 1]^T and [1 0]^T.

By using this technique, when one of the fault modes occurs, the weights will gradually drive the corresponding binary variable b_2^j(k) to one (b̂_2^j(k) → 1). This is because the likelihood of the measurements will diminish the weights of the particles associated with the healthy condition. This way, the occurrence of each fault is estimated exclusively through the measurement data and the process model, without knowledge of the fault modes' transition probabilities. The function Φ(·) serves as a “likelihood feedback” that drives the sample population of the binary states. A choice of Φ(·) that has been successfully used in Refs. [34–36] is
(11) Φ(ξ) = e_1 if ‖ξ − e_1‖ ≤ ‖ξ − e_2‖, and Φ(ξ) = e_2 otherwise
where e_1 = [1; 0]^T and e_2 = [0; 1]^T. The state model of the CPFFD algorithm can be written in the more compact form

X(k) = F(X(k−1), u(k−1), V(k−1)),  Z(k) = H(X(k)) + ω̃(k)

where Z(k) = z(k), V(k) = [ṽ(k)^T n^1(k)^T ⋯ n^M(k)^T]^T, and F(·), H(·) are nonlinear functions of appropriate dimensions and structure. This definition will be used to ease the notation in subsequent parts of the analysis. The outputs of the CPFFD module are the estimate of the system's state vector and the probabilities of failure of each fault mode. These probabilities are the expectations of the binary states, b̂_2^j(k) = E[b_2^j(k)|Z(k)]. This measure is used to trigger alarm indicators if its value exceeds a threshold α ∈ (0, 1) that marks the probability of detection (i.e., b̂_2^j(k) < α indicates that the system is in healthy operating condition). With this layout, two or more co-existing fault modes can be detected simultaneously. The pseudocode of the CPFFD algorithm is given in Table 3.
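A minimal Python sketch of the binary-state update of Eq. (11) and of the probability-of-failure output is given below; the half-width a of the uniform driving noise is a tuning choice (discussed in Sec. 5), and the array shapes are assumptions of this sketch.

```python
import numpy as np

E1 = np.array([1.0, 0.0])   # healthy vertex, b1 = 1
E2 = np.array([0.0, 1.0])   # faulty vertex,  b2 = 1

def phi(xi):
    """Binary-state update Phi: snap the perturbed vector to the nearest vertex (Eq. (11))."""
    return E1 if np.linalg.norm(xi - E1) <= np.linalg.norm(xi - E2) else E2

def update_binary_states(b_particles, a=0.6):
    """Propagate the binary particles of one fault mode: b^j(k) = Phi(b^j(k-1) + n^j(k-1)).
    b_particles has shape (Ns, 2); a is the assumed half-width of the uniform noise."""
    noise = np.random.uniform(-a, a, size=b_particles.shape)
    return np.array([phi(xi) for xi in b_particles + noise])

def probability_of_failure(b_particles, weights):
    """Estimate b2_hat(k) = E[b2(k)|Z(k)] as the weighted fraction of faulty particles."""
    return float(np.sum(weights * b_particles[:, 1]))
```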

The probability of failure is a much more computationally attractive measure compared to classical change detection methods such as hypothesis testing. In the context of fault isolation, detection algorithms that use hypothesis testing through log-likelihood ratio tests require the execution of a bank of estimators equal in number to the fault modes. For large-scale systems, this computational load is prohibitive. The proposed CPFFD algorithm is significantly more efficient, since it augments the dynamics of the detector with only M binary state vectors.

Distributed Particle Filtering Fault Diagnosis

The CPFFD algorithm described in Sec. 3 is not scalable or robust when applied to complex large-scale dynamical systems whose measurement sensors are scattered over large geographical regions. For high-dimensional large-scale systems, this methodology becomes impractical due to limitations in the observation range of the sensors, the communication bandwidth, and the computational power of the centralized computing node.

In this section, we present a reduced-order distributed particle filtering fault diagnosis (DPFFD) algorithm for large-scale nonlinear systems. The original diagnostic problem is subdivided into a number of lower-order interconnected fault sensitive filters. With this technique, each low-order filter can balance its computational power requirements and its volume of data transfers. Similar to Ref. [37], we take into account subdivisions with state overlap. The states that are common between two or more subsystems are referred to as shared states. Shared states appear when state variables are mutually monitored by sensors that correspond to different subsystems.

Here, we briefly illustrate the three most characteristic types of decomposition, based on a similar description given in Ref. [37]. The most communication-intensive decomposition involves nonoverlapping subsystems of order one (Fig. 2(a)). This fragmentation is the most computationally effective; however, the communication limitations will most likely be reached. On the contrary, the decomposition depicted in Fig. 2(b) provides a balanced compromise between computational labor and communication broadcasts. It is important to note that there exists a trade-off between computational power and communication capacity for the nodes of the network. The third case (Fig. 2(c)) is similar to the previous scenario, with the difference that there is overlap between the subsystems. In principle, overlapping dynamics increase both the complexity and the communication requirements of the overall design. This additional complexity overhead is due to the fusion of the common measurements and shared state estimates between the nodes. The overlap can increase the fidelity of states and measurements that are exposed to higher uncertainty, since they are monitored by more than one sensor. This fact can justify the additional effort in terms of complexity and communications that stems from overlapping states. Decomposition techniques are out of the scope of this paper, and the interested reader is referred to Refs. [38,39].

Fig. 2
Characteristic types of system decomposition: (a) low complexity, many interconnections, (b) balanced complexity/communications with no shared states, and (c) balanced complexity/communications, with shared states. This figure is based on a similar one in Ref. [37].

Graph theoretical tools are deployed to represent the dynamical interdependence of the system's states [37]. A directed graph, or digraph, provides a pictorial representation of the system's structure [40]. The digraph of system S is defined as the pair Gs = (Vs, Es), where Vs = χ ∪ V ∪ B is the set of vertices consisting of the system states χ = {X_j}_{j=1,…,nx}, the noise inputs V = {v_j}_{j=1,…,nx}, and the scalar terms B = {β_j}_{j=1,…,M}. The set Es represents the oriented edges defined by the ordered pairs {ν_l, ν_m}, where ν_l ∈ Vs and ν_m ∈ χ. An oriented edge exists between the state X_l (or v_l and β_l) and the state X_m if the former appears in the dynamic equation of the latter. If an edge exists between vertices ν_l and ν_m, we call them adjacent and denote this relationship by ν_l ∼ ν_m. We define the neighborhood V_m ⊂ Vs of the vertex X_m ∈ χ as the set V_m = {X_l ∈ χ | {X_l, X_m} ∈ Es} of all states adjacent to X_m. The digraph Gs is also referred to as the structural graph of the system S.

From a graph theoretical perspective, each subsystem S_I of the monolithic process is represented by a cut-point set of vertices Vs(I), where Vs(I) ⊆ χ. Each cut-point set includes the states that are observed locally by the sensors of its corresponding subsystem. The components of χ that belong to the cut-point set Vs(I) comprise the local states of subsystem S_I. States of other subsystems whose departing directed edges enter the vertices of a cut-point set determine the interconnection variables, or forcing terms.
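To make the forcing-term construction concrete, the short sketch below derives the interconnection variables of two overlapping cut-point sets from an adjacency map; the three-state chain and its dictionary representation are assumptions used purely for illustration.

```python
# Adjacency map "state -> set of states whose edges enter it" for an assumed
# three-state chain X1 <-> X2 <-> X3 (illustrative structural graph).
neighbors = {"X1": {"X2"}, "X2": {"X1", "X3"}, "X3": {"X2"}}

def forcing_terms(cut_point_set, neighbors):
    """Interconnection variables of a subsystem: states outside the cut-point
    set whose directed edges enter the subsystem's vertices."""
    incoming = set().union(*(neighbors[x] for x in cut_point_set))
    return incoming - set(cut_point_set)

V1 = {"X1", "X2"}                      # cut-point set of subsystem S1
V2 = {"X2", "X3"}                      # cut-point set of subsystem S2
print(forcing_terms(V1, neighbors))    # -> {'X3'}
print(forcing_terms(V2, neighbors))    # -> {'X1'}
```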

Following the previous definitions [41], every subsystem S_I has a local state vector x_I ∈ ℝ^{n_xI}, a local interconnection variables vector d_I ∈ ℝ^{n_dI}, and a local process noise input vector v_I ∈ ℝ^{n_vI}. Each of the M fault modes has its own presence in every subsystem. Let g_I^J(k) denote the appearance of fault mode J (J = 0, 1, …, M) at subsystem I (I = 1, …, N). Based on the earlier definitions, the state-space model of each subsystem S_I is described by
(12)

where u_I(k) ∈ ℝ^{n_uI} and z_I(k) ∈ ℝ^{n_zI} refer to the control input and measurement vector of subsystem I, respectively. The nonlinear functions f_I: ℝ^{n_xI} × ℝ^{n_dI} × ℝ^{n_uI} → ℝ^{n_xI} and h_I: ℝ^{n_xI} → ℝ^{n_zI} denote the local subsystem and measurement dynamics, while v_I(k) ∈ ℝ^{n_xI} and ω_I(k) ∈ ℝ^{n_zI} stand for the subsystem and measurement noise, respectively.

As in the centralized approach, the formulation of the reduced-order local PF for fault diagnosis includes a vector of binary states to represent the absence or presence of each fault mode. The binary vector of failure mode j at subsystem I is represented by b_I^j(k) = [b_{1,I}^j(k) b_{2,I}^j(k)]^T, with the local fault function g_I^j(·).

The aforementioned definitions are illustrated with a simple example. Consider a three-dimensional system with the global state vector x(k) = [X_1(k), X_2(k), X_3(k)]^T, the noise vector v(k) = [v_1(k), v_2(k), v_3(k)]^T, and the set of change step functions B = {β_1(k), β_2(k)}. The digraph of this example is shown in Fig. 3. Each sensor set monitors one subsystem. The monolithic system of this example is decomposed into two subsystems represented by the cut-point sets Vs(1) = {X_1, X_2} and Vs(2) = {X_2, X_3}. The local dynamic models of these two overlapping subsystems are

Fig. 3
System digraph of the running example with three states. The circles represent the states, and the squares represent the noise input and scalar variables vectors (fault occurrence terms). The thunderbolt marks illustrate the location of the potential failure modes.
(13)
(14)
The compact formulation of the local reduced-order failure sensitive filter that incorporates the evolution of the binary states is given by
(15)

where X_I = [(x_I)^T (b_I^1)^T (b_I^2)^T ⋯ (b_I^M)^T]^T ∈ ℝ^{n_xI + 2M}, U_I = [(u_I)^T (d_I)^T]^T, Z_I = z_I, V_I = [ṽ_I^T n_I^T]^T, and F_I(·), H_I(·) are nonlinear functions with appropriate dimensions and structure. The diagnostic algorithm includes the design of one DN for each subsystem S_I. Each DN consists of a processing unit that executes the local PF algorithm. The nodes can measure their own local states and communicate with their neighbors to obtain a processed estimate of their forcing term vector. The layout of the proposed DPFFD algorithm is depicted in Fig. 4. The algorithm can be separated into three main parts.

Fig. 4
Schematic of the reduced-order DPFFD algorithm. The thunderbolt markers represent the location of the potential fault modes in the system.

Particles update: In the first part, each DN executes a local bootstrap PF. For every subsystem, Ns particles are drawn according to the state transition propagation given in Eq. (15). This action requires estimates of the forcing terms d̂I obtained by the neighbors of DN I. At time k − 1 all subsystems have already generated an estimate of their own states.

Weights update: In this step, each node uses its local observation and updates its particles' weights based on p(Z_I(k)|X_I^i(k)). By taking into account the estimates of the neighboring states d̂_I, and by choosing the proposal distribution equal to the local state-transition density, the weight update is given by
(16) w_I^i(k) ∝ w_I^i(k−1) · p(Z_I(k)|X_I^i(k))

The local PF is concluded with the weight normalization and resampling steps of the bootstrap filter (Sec. 2). The outputs of each DN are an estimate of the subsystem's state vector x̂_I(k) and the probabilities of failure b̂_{2,I}^j(k) = E[b_{2,I}^j(k)|Z_I(k)], j = 1, …, M.

Shared states fusion: The last part of the reduced-order DPFFD algorithm involves the fusion of state variables that belong to overlapping cut-point sets. At each time step, the estimates of all DNs are collected by a central fusion center that assembles the final global output of the diagnostic network. The data transmitted to the central unit contain only postprocessed information. This is the only centralized processing action of the scheme, and it does not add significant computational overhead to the algorithm. A running average filter is applied to the shared states to calculate a common estimate of their value.

The state estimates x̂_I ∈ ℝ^{n_xI} of the DNs do not have the same dimensions and cannot be added directly. To this end, the local state matrices H_I ∈ ℝ^{nx × n_xI} are introduced, which convert the local state estimates x̂_I to vectors x̂_I^g of dimension nx. The vector x̂_I^g has nonzero entries, equal to the components of x̂_I, at the states that correspond to the elements of the cut-point set Vs(I). The rows of H_I correspond to the components of the global continuous-valued state vector x, while its columns correspond to the components of x_I. The binary matrix H_I has a nonzero entry (equal to one) in its i–j element if the global state variable X_i appears as the jth component of the local state x_I. For the local state vectors one has
(17) x̂_I^g = H_I x̂_I
where the local state matrix H_I is defined as follows:
(18) [H_I]_{ij} = 1 if X_i is the jth component of x_I, and [H_I]_{ij} = 0 otherwise
In the context of the running example given in Eqs. (13) and (14), the local state matrices for each DN are
(19) H_1 = [1 0; 0 1; 0 0],  H_2 = [0 0; 1 0; 0 1]
with corresponding full-dimensional local vectors
(20) x̂_1^g = [X̂_1^{S1}  X̂_2^{S1}  0]^T,  x̂_2^g = [0  X̂_2^{S2}  X̂_3^{S2}]^T
The superscripts are added to clarify the subsystem that each component belongs to. We further introduce the binary selection system-to-state (StS) matrix T ∈ ℝ^{N × nx}. Each entry indicates the presence of a global state in each subsystem. The StS matrix T has a nonzero entry (equal to one) in its i–j element if the state variable X_j appears in subsystem S_i. The StS matrix is defined as
(21) [T]_{ij} = 1 if X_j ∈ Vs(i), and [T]_{ij} = 0 otherwise
The nx-dimensional local state vectors x̂_I^g are added together. Every element of the resulting summation vector is divided by the number of appearances of the corresponding state variable in the cut-point sets. Denote by σ = Σ_{I=1}^{N} x̂_I^g = Σ_{I=1}^{N} H_I x̂_I the sum of the nx-dimensional state vectors. The ith element of the global fused state vector x̃ is given by
(22) [x̃]_i = [σ]_i / Σ_{j=1}^{N} [T]_{ji}
For the running illustration, the StS matrix is
(23) T = [1 1 0; 0 1 1]
Hence, the global fused state vector x̃ according to Eq. (22) is given by
(24) x̃ = [X̂_1^{S1}  (X̂_2^{S1} + X̂_2^{S2})/2  X̂_3^{S2}]^T
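A minimal Python sketch of this shared-state fusion step for the running example is given below, assuming the two local estimates have already been produced by the DNs (the numerical values are hypothetical).

```python
import numpy as np

# Local state matrices H_I of the running example (Eq. (19)):
H1 = np.array([[1, 0], [0, 1], [0, 0]], dtype=float)   # S1 monitors {X1, X2}
H2 = np.array([[0, 0], [1, 0], [0, 1]], dtype=float)   # S2 monitors {X2, X3}
# System-to-state (StS) matrix (Eq. (23)): row I flags the states of subsystem I.
T = np.array([[1, 1, 0], [0, 1, 1]], dtype=float)

def fuse(local_estimates, H_list, T):
    """Running-average fusion of Eq. (22): lift each local estimate to full
    order, sum, and divide each entry by the number of DNs that monitor it."""
    sigma = sum(H @ x_hat for H, x_hat in zip(H_list, local_estimates))
    counts = T.sum(axis=0)                  # appearances of each global state
    return sigma / counts

x1_hat = np.array([0.9, 2.1])               # hypothetical estimate from DN 1
x2_hat = np.array([1.9, 3.2])               # hypothetical estimate from DN 2
print(fuse([x1_hat, x2_hat], [H1, H2], T))  # -> [0.9, 2.0, 3.2]
```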

The block diagram, with the breakdown of the reduced-order DPFFD algorithm's steps, is depicted in Fig. 5. The corresponding pseudocode of the proposed algorithm is outlined in Table 4.

Fig. 5
Block diagram of the ith DN
Table 4

Pseudocode of the REDUCED-ORDER DPFFD algorithm

function REDUCED-ORDER DPFFD
Inputs: x_I(k−1), H_I, {b_I^j(k−1)}_{j=1}^{M}, w_I^i(k−1), u_I(k−1), Z_I(k), Gs
Outputs: X_I^i(k), w_I^i(k), {b̂_{2,I}^j(k)}_{j=1}^{M}, x̃
Required: CPFFD
1: for I = 1:N do
2:   for j = 1:n_xI do
3:     Find V_j based on Gs ▷ Find the neighborhood set of node j
4:     d_I = (∪_j V_j) \ {X_j}_{j=1}^{n_xI} ▷ Calculation of the forcing term vector
5:   end for
6:   U_I = [(u_I)^T (d_I)^T]^T ▷ Control input at DN I
7:   X_I = [(x_I)^T (b_I^1)^T (b_I^2)^T ⋯ (b_I^M)^T]^T ▷ State vector of DN I
8:   [X_I^i(k), w_I^i(k), {b̂_{2,I}^j(k)}_{j=1}^{M}] = CPFFD[X_I^i(k−1), w_I^i(k−1), U_I(k−1), Z_I(k)] ▷ Calculation of the particles, particle weights, and probabilities of failure
9:   x̂_I^g = H_I x̂_I ▷ Conversion of the reduced-order vector x̂_I to the full-order vector x̂_I^g
10: end for
11: σ = Σ_{I=1}^{N} x̂_I^g ▷ Summation of the full-order local state vectors
12: for i = 1:nx do
13:   [x̃]_i = [σ]_i / Σ_{j=1}^{N} [T]_{ji} ▷ Calculation of the global fused state vector's elements
14: end for

The reduced-order DPFFD algorithm results in a significant reduction of the computational complexity and communication bandwidth requirements of each DN. Suppose that the large-scale system has nx + 2M states. The computational complexity of the centralized architecture, according to Ref. [42] and by considering Ns particles, is approximated by O((nx + 2M)² · Ns) floating point operations (flops). By decomposing the system into N subsystems, the number of state variables is decreased to (nx + 2M)/N at each subsystem. Assuming that Ns particles are generated in every reduced-order estimator node, and with the assumption of no shared states, the total computational complexity of the reduced-order DPFFD algorithm reduces to O((nx + 2M)² · Ns/N).
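As a back-of-the-envelope illustration of this reduction, with assumed, purely indicative parameter values:

```python
nx, M, Ns, N = 100, 9, 200, 100            # assumed illustrative values
centralized = (nx + 2 * M) ** 2 * Ns       # O((nx + 2M)^2 * Ns) flops per time step
distributed = centralized / N              # O((nx + 2M)^2 * Ns / N) flops per time step, network-wide
print(centralized, distributed)            # 2784800 27848.0
```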

Numerical Results

This section provides an evaluation of the proposed DPFFD algorithm via extensive numerical simulations. Two systems of different dimensionality are analyzed to validate the efficiency of the algorithm. In both cases, the process model under investigation is a water tank system. This process was selected since its dynamics are nonlinear and its physical subcomponents (water tanks) are clearly identified.

The first case study involves the water tank system illustrated in Fig. 6. This process consists of nine identical cylindrical tanks of cross-sectional area Sc. The tanks are connected with pipes of cross section area Sp. The flow rate Qi,j between tank i and tank j is defined by means of Torricelli's rule as

Fig. 6
Schematic of the nine-tank system that has been decomposed into two subsystems. The subsystems are specified by the dashed lines while the fault mode locations are noted by double arrows.
(25)
where μi is the flow correction term of tank i, g is the gravity constant, and Xi is the water level of the ith tank. The water level dynamics of tank i are determined by means of a mass balance equation as
(26)

where Ni refers to the set of neighboring tanks of tank i. The nominal values of the process parameters are given in Table 5 and are based on the benchmark process described in Ref. [43]. The fault modes under consideration are abrupt leaks in the water tanks. The leakage dynamics of tank j are given by

Table 5

Model parameters of the water tank system

Parameter | Meaning | Value
Sc | Tank cross-sectional area | 0.0154 m²
Sp | Cross section of interacting pipes | 2 × 10⁻⁵ m²
μi | Flow correction term | 1
g | Gravity constant | 9.81 m/s²
(27)
By discretizing the continuous dynamics, the difference equation of the ith tank takes the form
(28)

where Ts=0.1s is the sampling period, and the process noise vi is drawn from the normal distribution N(0,0.05). The goal of this simulation scenario is to investigate the case of decomposition with overlapping states. To this end, the monolithic process is decomposed into two reduced-order subsystems, namely S1 and S2, as shown in Fig. 6. Figure 7(a) depicts the structural graph of the monolithic process and its partitions. The local observation vectors of the two DNs are expressed by

Fig. 7
(a) Structural graph and subsystem decomposition and (b) separation of shared and unshared states for observation fusion. The presence of the β term and the noise V are omitted for illustrative purposes.
(29) z_1(k) = x_1(k) + ω_1(k),  z_2(k) = x_2(k) + ω_2(k)

where x_1 = [X_1, …, X_6]^T, x_2 = [X_4, …, X_9]^T, d_1 = [X_7, X_8, X_9]^T, d_2 = [X_1, X_2, X_3]^T, and the measurement noise sequences ω_1 and ω_2 are generated by the multivariate normal distribution N(0, 0.1). The subgraphs of the observation fusion are depicted in Fig. 7(b). As shown in this figure, the states {X_4, X_5, X_6} are shared between the two DNs. Three failure modes are seeded at tanks 1, 4, and 5 at time steps 290, 250, and 200, respectively. The time horizon of the simulation is set to 360 time steps. The number of particles at each DN is set to Ns = 200.
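A minimal sketch of the discretized tank dynamics used in this scenario is given below; the Torricelli coupling, the leak term, and the noise level follow the description in the text, but their exact forms (Eqs. (25)–(28)) and the leak orifice area are assumptions of this sketch.

```python
import numpy as np

Sc, Sp, mu, g, Ts = 0.0154, 2e-5, 1.0, 9.81, 0.1   # Table 5 parameters, Ts from the text

def flow(Xi, Xj):
    """Torricelli-type inter-tank flow from tank i into tank j (assumed sign convention)."""
    return mu * Sp * np.sign(Xi - Xj) * np.sqrt(2.0 * g * abs(Xi - Xj))

def tank_step(X, neighbors, leaking, leak_area=1e-5):
    """One Euler step of the water-level dynamics for all tanks.
    X: numpy array of water levels; neighbors: dict tank index -> list of connected tanks;
    leaking: set of tanks with an abrupt leak (assumed orifice area leak_area)."""
    Xn = X.copy()
    for i, level in enumerate(X):
        inflow = sum(flow(X[j], level) for j in neighbors[i])               # net inflow from pipes
        leak = mu * leak_area * np.sqrt(2.0 * g * max(level, 0.0)) if i in leaking else 0.0
        Xn[i] = level + (Ts / Sc) * (inflow - leak) + np.random.normal(0.0, 0.05)  # assumed noise std
    return Xn
```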

During the execution of the reduced-order DPFFD algorithm, the i.i.d. noise that drives the binary states is generated by the distribution U(−0.6, 0.6). Figure 8 shows the population of the particles on the b_1–b_2 plane during the healthy and faulty operating condition of the system at a given time instant. The selection of this noise range plays a crucial role in the performance of the algorithm. The effect of the i.i.d. uniform white noise is illustrated in Fig. 9. When the noise is U(−a, a) with a = 0.5 (too small), there is no overlap between the two regions; thus, the particles remain trapped in the healthy state even in the presence of a fault. On the contrary, when the overlap increases (a ≥ 0.8), the particles keep transitioning between the states and the output of the failure filter is indecisive.

Fig. 8
Spatial distribution of the binary particles during the execution of the failure sensitive filter. The cross markers denote the locations of e1 and e2, respectively.
Fig. 9
Spatial distribution of the binary particles with different range of i.i.d driving noise when the process is in healthy operating condition. The cross markers denote the locations of e1 and e2.

The binary states' update function Φ(·) of Eq. (10) is essentially a “data-driven feedback” for the failure sensitive filter. This way, when the process is healthy, the filter diminishes the particles that correspond to fault modes indirectly, through the likelihood function [34]. A compromise value that ensures the proper operation of the diagnostic filter was found to be a = 0.6.

The probabilities of failure for each fault mode are illustrated in Fig. 10. Due to the overlap of the two DNs, the estimates of the common states {X_4, X_5, X_6} are fused using the central averaging step described in Sec. 4. As can be seen, both DNs detect and isolate their respective fault modes in a timely manner.

Fig. 10
First case study: probabilities of failure (dashed dotted line) for each fault mode as generated by the two DNs. The solid horizontal line marks the detection threshold. The solid vertical line indicates the occurrence instant of the fault. The states X4, X5, and X6 are shared between the two DNs.
The second simulation scenario involves a grid of 100 water tanks organized in a lattice of ten rows and ten columns. In this scenario, the dimensionality of the system is significantly increased compared to the first case. To the authors' knowledge, this is the highest-dimensional simulation example encountered in the FD literature. The goal is to validate the DPFFD algorithm when applied to a large-scale system. A DN is assigned to each water tank. The DNs monitor the water level of their respective tank and exchange information with their adjacent nodes. The observation equation of each DN is expressed by
(30) z_i(k) = X_i(k) + ω_i(k)
where x_i = X_i and i = 1, …, 100. The two-dimensional location of each tank in the grid array is converted to a single index. The one-dimensional index i is calculated by
(31)

where n_row and n_col are the row and column numbers of the tank in the grid, and N_rows denotes the total number of rows in the array.
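A small sketch of this row/column-to-index conversion is shown below; since Eq. (31) is stated in terms of N_rows, a column-major ordering is assumed here purely for illustration.

```python
def tank_index(n_row, n_col, n_rows=10):
    """Convert the (row, column) position of a tank in the lattice to the
    single index i used by the DNs (column-major ordering assumed)."""
    return (n_col - 1) * n_rows + n_row

print(tank_index(1, 1), tank_index(10, 10))   # -> 1 100
```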

Abrupt leaks are seeded randomly in nine tanks at the time instances listed in Table 6. The nominal values of the system parameters are identical to the first scenario (Table 5). The measurement and process noise are drawn from the normal distribution N(0, 1). The time horizon of the simulation is set to 190 time steps. The same tuning guidelines for the failure sensitive filters hold as in the first example. The initial values of the estimated tank water levels are set to 14 m.

Table 6

Occurrence time of each fault mode

Fault mode number | Time of occurrence (time step)
Mode 82 | 10
Mode 91 | 30
Mode 13 | 50
Mode 92 | 70
Mode 64 | 90
Mode 10 | 110
Mode 28 | 130
Mode 55 | 150
Mode 96 | 170

Due to the high dimensionality of the system's states, the simulation results are presented with respect to both time and space. The probabilities of failure and the actual and estimated values of the water tank levels are shown in the first, second, and third rows of Fig. 11, respectively. The output values of the DNs are depicted as color-coded pixels based on their location in the lattice, for different time instances. The probabilities of failure with respect to time, only for the leaked tanks, are shown in Fig. 12. When a leak is seeded in one of the tanks, its water level gradually decreases. For a transient interval, the neighboring tanks compensate for this loss due to the pressure difference, until their levels start to decrease as well. The diagnostic performance is deemed satisfactory, since each DN can promptly detect and isolate its own fault mode in spite of having access only to local information. This case study involves only nonoverlapping subsystems; therefore, state fusion was not necessary. The computational reduction compared to the CPFFD algorithm is dramatic: instead of processing 100 states, each node is responsible for monitoring a one-dimensional system.

Fig. 11
Second case study: probabilities of failure (first row), actual (second row), and estimated (third row) values of the tanks water levels at different time instances. Each pixel represents the water level of a tank in the lattice.
Fig. 12
Second case study: probabilities of failure (dashed line) with respect to time of the DNs with a leak occurring to their respective tank. The solid horizontal line marks the detection threshold. The solid vertical line signals the fault occurrence instant.

Conclusions

We have presented a reduced-order distributed implementation of a fault detection and isolation algorithm for nonlinear large-scale systems. A network of interconnected DNs is employed to monitor the entire process. Each node monitors a lower-order subdivision of the monolithic system. The DNs have access to partial local measurements and can communicate with adjacent nodes of the monitoring network. The layout of the scheme is driven by the two main constraints of networked systems: the available communication bandwidth and the processing capabilities of the nodes. An on-line hypothesis testing module is embedded at each failure sensitive filter that triggers alarm indicators in the presence of a fault. This inference component eliminates the need for the entire system's state at each DN and the necessity of a bank of estimators to isolate the occurring faults. A simple state fusion step takes place between nodes that monitor common states. This approach relieves the filter design analysis by substituting the complex stability proofs that are required by observer-based methods with Monte Carlo simulations that are conveniently applicable to real-life sensor networks.

Funding Data

  • National Science Foundation (NSF) (Award No. CMMI-1662742).

Nomenclature

b = binary vector of Sf
B = set of time profile functions
bI = binary vector of subsystem SIf
dI = interconnection variables (forcing terms) of subsystem SI
ei = prediction error of particle i
f(·) = state transition function
F(·) = compact state transition function
fI(·) = state transition function of subsystem SI
FI(·) = compact state transition function of subsystem SIf
Gs = structural graph (digraph) of system S
gj(·) = function of fault mode j
gIj(k) = fault function of failure mode j at subsystem SI
h(·) = observation function
hI(·) = observation function of subsystem SI
HI(·) = compact local observation function
HI = local state matrix
k = time index
k0j = time occurrence of failure mode j
M = number of failure modes
n = uniform white noise
N = number of subsystems
ndI = dimension of subsystem's SI forcing vector dI
nu = dimension of input vector
nuI = dimension of subsystem's SI input vector uI
nv = dimension of process noise vector
nvI = dimension of subsystem's SI system noise vector vI
nx = dimension of state vector
nxI = dimension of subsystem's SI state vector xI
nz = dimension of observation vector
nzI = dimension of subsystem's SI measurement vector zI
nω = dimension of measurement noise vector
nωI = dimension of subsystem's SI measurement noise vector ωI
Ns = number of particles
N(·,·,·) = normal distribution
p(x(k)|z(1:k)) = posterior density function
p(x(k)|x(k−1)) = state transition density function
p(z(k)|x(k)) = likelihood density function
q(x(k)|z(1:k)) = proposal distribution function
ℝ = set of real numbers
S = monolithic system
SI = subsystem I of monolithic system S
Sf = failure sensitive filter
SIf = subsystem I of failure sensitive filter Sf
T = StS matrix
u = input vector
uI = input vector of subsystem SI
UI = compact input vector of subsystem SIf (combination of uI and dI)
U(·,·) = uniform distribution
v = process noise vector
ṽ = approximate process noise vector
V = set of noise inputs
V = compact noise vector of system Sf (combination of ṽ and n)
vI = process noise vector of subsystem SI
VI = compact noise vector of subsystem SIf (combination of vI and nI)
Vm = neighborhood set of vertex m
Vs = set of vertices of the graph Gs
wi = particles' weight
wIi = particles' weight of DN I
x = state vector of monolithic system S
x̃ = global fused state vector
X = compact state vector of Sf (combination of xc and bj, j = 1, …, M)
xI = state vector of subsystem SI
Xi = state variable i
XI = compact state vector of subsystem SIf (combination of xIc and bIj, j = 1, …, M)
xi = particles of x
xc = continuous valued state vector of Sf
XIi = particles generated by DN I
x̂Ig = nx order estimate of state xI
X̂jSIf = estimate of Xj by DN I
z = measurement vector
Z = compact measurement vector of system Sf
zI = observation vector of subsystem SI
ZI = compact observation vector of subsystem SIf
ℤ+ = set of positive integers
β = time profile function of a fault's occurrence
δ(·) = Dirac delta function
Es = edges of the graph Gs
Σω = covariance matrix of measurement noise
Φ(·) = update function of the binary states
χ = set of system states
ω = measurement noise vector
ω̃ = approximate measurement noise vector
ωI = measurement noise vector of subsystem SI

References

1. Willsky, A. S., 1976, "A Survey of Design Methods for Failure Detection in Dynamic Systems," Automatica, 12(6), pp. 601–611.
2. Gertler, J. J., 1988, "Survey of Model-Based Failure Detection and Isolation in Complex Plants," IEEE Control Syst. Mag., 8(6), pp. 3–11.
3. Frank, P. M., 1990, "Fault Diagnosis in Dynamic Systems Using Analytical and Knowledge-Based Redundancy: A Survey and Some New Results," Automatica, 26(3), pp. 459–474.
4. Chen, J., and Patton, R. J., 1999, Robust Model-Based Fault Diagnosis for Dynamic Systems, Kluwer Academic Publishers, Dordrecht, The Netherlands.
5. Venkatasubramanian, V., Rengaswamy, R., Yin, K., and Kavuri, S. N., 2003, "A Review of Process Fault Detection and Diagnosis—Part I: Quantitative Model-Based Methods," Comput. Chem. Eng., 27(3), pp. 293–311.
6. Ding, S. X., 2008, Model-Based Fault Diagnosis Techniques: Design Schemes, Algorithms, and Tools, Springer Science & Business Media, Berlin.
7. Baroni, P., Lamperti, G., Pogliano, P., and Zanella, M., 1999, "Diagnosis of Large Active Systems," Artif. Intell., 110(1), pp. 135–183.
8. Rish, I., Brodie, M., Ma, S., Odintsova, N., Beygelzimer, A., Grabarnik, G., and Hernandez, K., 2005, "Adaptive Diagnosis in Distributed Systems," IEEE Trans. Neural Networks, 16(5), pp. 1088–1109.
9. Le, T., and Hadjicostis, C. N., 2006, "Graphical Inference Methods for Fault Diagnosis Based on Information From Unreliable Sensors," Ninth International Conference on Control, Automation, Robotics and Vision (ICARCV), Singapore, Dec. 5–8, pp. 1–6.
10. Dong, H., Wang, Z., Ding, S. X., and Gao, H., 2014, "A Survey on Distributed Filtering and Fault Detection for Sensor Networks," Math. Probl. Eng., 2014, p. 858624.
11. Ferrari, R. M., Parisini, T., and Polycarpou, M. M., 2012, "Distributed Fault Detection and Isolation of Large-Scale Discrete-Time Nonlinear Systems: An Adaptive Approximation Approach," IEEE Trans. Autom. Control, 57(2), pp. 275–290.
12. Boem, F., Ferrari, R. M., Parisini, T., and Polycarpou, M. M., 2013, "Distributed Fault Diagnosis for Continuous-Time Nonlinear Systems: The Input–Output Case," Annu. Rev. Control, 37(1), pp. 163–169.
13. Shames, I., Teixeira, A. M., Sandberg, H., and Johansson, K. H., 2011, "Distributed Fault Detection for Interconnected Second-Order Systems," Automatica, 47(12), pp. 2757–2764.
14. Yan, X.-G., and Edwards, C., 2008, "Robust Decentralized Actuator Fault Detection and Estimation for Large-Scale Systems Using a Sliding Mode Observer," Int. J. Control, 81(4), pp. 591–606.
15. Zhang, X., and Zhang, Q., 2012, "Distributed Fault Diagnosis in a Class of Interconnected Nonlinear Uncertain Systems," Int. J. Control, 85(11), pp. 1644–1662.
16. Daigle, M. J., Koutsoukos, X. D., and Biswas, G., 2007, "Distributed Diagnosis in Formations of Mobile Robots," IEEE Trans. Rob., 23(2), pp. 353–369.
17. Boem, F., Ferrari, R. M., Parisini, T., and Polycarpou, M. M., 2011, "A Distributed Fault Detection Methodology for a Class of Large-Scale Uncertain Input-Output Discrete-Time Nonlinear Systems," 50th IEEE Conference on Decision and Control and European Control Conference (CDC-ECC), Orlando, FL, Dec. 12–15, pp. 897–902.
18. Franco, E., Olfati-Saber, R., Parisini, T., and Polycarpou, M. M., 2006, "Distributed Fault Diagnosis Using Sensor Networks and Consensus-Based Filters," 45th IEEE Conference on Decision and Control (CDC), San Diego, CA, Dec. 13–15, pp. 386–391.
19. Olfati-Saber, R., 2009, "Kalman-Consensus Filter: Optimality, Stability, and Performance," 48th IEEE Conference on Decision and Control (CDC), Shanghai, China, Dec. 15–18, pp. 7036–7042.
20. Olfati-Saber, R., 2007, "Distributed Kalman Filtering for Sensor Networks," 46th IEEE Conference on Decision and Control (CDC), New Orleans, LA, Dec. 12–14, pp. 5492–5498.
21. Kadirkamanathan, V., Li, P., Jaward, M. H., and Fabri, S. G., 2002, "Particle Filtering-Based Fault Detection in Non-Linear Stochastic Systems," Int. J. Syst. Sci., 33(4), pp. 259–265.
22. Cheng, Q., Varshney, P. K., Michels, J. H., and Belcastro, C. M., 2009, "Distributed Fault Detection With Correlated Decision Fusion," IEEE Trans. Aerosp. Electron. Syst., 45(4), pp. 1448–1465.
23. Cheng, Q., Varshney, P. K., Michels, J., and Belcastro, C. M., 2005, "Distributed Fault Detection Via Particle Filtering and Decision Fusion," Eighth International Conference on Information Fusion (ICIF), Philadelphia, PA, July 25–28, p. 8.
24. Noursadeghi, E., and Raptis, I., 2015, "Full-Order Distributed Fault Diagnosis for Large-Scale Nonlinear Stochastic Systems," ASME Paper No. DSCC2015-9927.
25. Noursadeghi, E., and Raptis, I., 2015, "Distributed Fault Detection of Nonlinear Large-Scale Dynamic Systems," ACM/IEEE Sixth International Conference on Cyber-Physical Systems, Seattle, WA, Apr. 14–16, pp. 51–59. http://dl.acm.org/citation.cfm?id=2735981
26. Hlinka, O., Hlawatsch, F., and Djuric, P. M., 2013, "Distributed Particle Filtering in Agent Networks: A Survey, Classification, and Comparison," IEEE Signal Process. Mag., 30(1), pp. 61–81.
27. Hlinka, O., 2012, "Distributed Particle Filtering in Networks of Agents," Ph.D. dissertation, Vienna University of Technology, Vienna, Austria. https://publik.tuwien.ac.at/files/PubDat_209278.pdf
28. Mohammadi, A., 2013, "Distributed Implementations of the Particle Filter With Performance Bounds," Ph.D. dissertation, York University, Toronto, ON, Canada. http://www.cse.yorku.ca/~marash/Qualifier/ArashThesis.pdf
29. Mohammadi, A., and Asif, A., 2011, "A Consensus/Fusion Based Distributed Implementation of the Particle Filter," Fourth IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), San Juan, Puerto Rico, Dec. 13–16, pp. 285–288.
30. Sadeghzadeh Nokhodberiz, N., and Poshtan, J., 2014, "Belief Consensus–Based Distributed Particle Filters for Fault Diagnosis of Non-Linear Distributed Systems," Proc. Inst. Mech. Eng., Part I, 228(3), pp. 123–137. http://journals.sagepub.com/doi/pdf/10.1177/0959651813512478
31. Ristic, B., Arulampalam, S., and Gordon, N., 2004, Beyond the Kalman Filter: Particle Filters for Tracking Applications, Vol. 830, Artech House, London.
32. Gordon, N. J., Salmond, D. J., and Smith, A. F., 1993, "Novel Approach to Nonlinear/Non-Gaussian Bayesian State Estimation," IEE Proc. F Radar Signal Process., 140(2), pp. 107–113.
33. Arulampalam, M. S., Maskell, S., Gordon, N., and Clapp, T., 2002, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking," IEEE Trans. Signal Process., 50(2), pp. 174–188.
34. Orchard, M., 2006, "A Particle Filtering-Based Framework for Online Fault Diagnosis and Failure Prognosis," Ph.D. dissertation, Georgia Institute of Technology, Atlanta, GA. https://smartech.gatech.edu/handle/1853/19752
35. Raptis, I. A., and Vachtsevanos, G., 2011, "An Adaptive Particle Filtering-Based Framework for Real-Time Fault Diagnosis and Failure Prognosis of Environmental Control Systems," Annual Conference of the Prognostics and Health Management Society, Montreal, QC, Canada, Sept. 25–29. https://www.phmsociety.org/node/646
36. Raptis, I. A., Sconyers, C., Martin, R., Mah, R., Oza, N., Mavris, D., and Vachtsevanos, G. J., 2013, "A Particle Filtering-Based Framework for Real-Time Fault Diagnosis of Autonomous Vehicles," Annual Conference of the Prognostics and Health Management Society, New Orleans, LA, Oct. 14–17. https://www.phmsociety.org/node/969
37. Ferrari, R. M., 2009, "Distributed Fault Detection and Isolation of Large-Scale Nonlinear Systems: An Adaptive Approximation Approach," Ph.D. dissertation, Universita degli studi di Trieste, Trieste, Italy. https://www.openstarts.units.it/bitstream/10077/3118/1/00_master.pdf
38. Šiljak, D. D., 1978, Large-Scale Dynamic Systems: Stability and Structure, Vol. 2, North Holland, Amsterdam, The Netherlands.
39. Vidyasagar, M., 1980, "Decomposition Techniques for Large-Scale Systems With Nonadditive Interactions: Stability and Stabilizability," IEEE Trans. Autom. Control, 25(4), pp. 773–779.
40. Mesbahi, M., and Egerstedt, M., 2010, Graph Theoretic Methods in Multiagent Networks, Princeton University Press, Princeton, NJ.
41. Khan, U. A., and Moura, J. M., 2008, "Distributing the Kalman Filter for Large-Scale Systems," IEEE Trans. Signal Process., 56(10), pp. 4919–4935.
42. Karlsson, R., Schon, T., and Gustafsson, F., 2005, "Complexity Analysis of the Marginalized Particle Filter," IEEE Trans. Signal Process., 53(11), pp. 4408–4411.
43. Zhang, X., Polycarpou, M. M., and Parisini, T., 2002, "A Robust Detection and Isolation Scheme for Abrupt and Incipient Faults in Nonlinear Systems," IEEE Trans. Autom. Control, 47(4), pp. 576–593.