8.2.1.2 Coprime Factorization Techniques
Coprime factorization of a transfer function (matrix) gives a further system representation form which will be used intensively in our subsequent study. Roughly speaking, a coprime factorization over $$\mathcal{RH}_{\infty}$$ is to factorize a transfer matrix into two stable and coprime transfer matrices.
Definition 8.1 Two stable transfer matrices $$\hat{M}(z)$$, $$\hat{N}(z)$$ are called left coprime if there exist two stable transfer matrices $$\hat{X}(z)$$ and $$\hat{Y}(z)$$ such that
$$\begin{bmatrix} \hat{M}(z) & \hat{N}(z) \end{bmatrix}\begin{bmatrix} \hat{X}(z) \\ \hat{Y}(z) \end{bmatrix} = I.$$
Similarly, two stable transfer matrices $$M(z)$$, $$N(z)$$ are right coprime if there exist two stable transfer matrices $$X(z)$$, $$Y(z)$$ such that
$$\begin{bmatrix} X(z) & Y(z) \end{bmatrix}\begin{bmatrix} M(z) \\ N(z) \end{bmatrix} = I.$$
Let G(z) be a proper real-rational transfer matrix. The left coprime
factorization (LCF) of G(z) is a factorization of G(z) into two stable and
coprime matrices which will play a key role in designing the so-called residual
generator. To complete the notation, we also introduce the right coprime
factorization (RCF), which is however only occasionally applied in our study.
Definition 8.2 $$G(z) = \hat{M}^{-1}(z)\hat{N}(z)$$ with the left coprime pair $$\left(\hat{M}(z), \hat{N}(z)\right)$$ is called LCF of G(z). Similarly, RCF of G(z) is defined by $$G(z) = N(z)M^{-1}(z)$$ with the right coprime pair $$\left(M(z), N(z)\right)$$.
Given an observer gain $L$ and a state feedback gain $F$ such that $A - LC$ and $A + BF$ are stable (Schur), the corresponding state space realizations can be chosen as
$$\hat{M}(z) = \left(A - LC,\, -L,\, C,\, I\right),\quad \hat{N}(z) = \left(A - LC,\, B - LD,\, C,\, D\right)$$
$$M(z) = \left(A + BF,\, B,\, F,\, I\right),\quad N(z) = \left(A + BF,\, B,\, C + DF,\, D\right)$$
$$\hat{X}(z) = \left(A - LC,\, -(B - LD),\, F,\, I\right),\quad Y(z) = \left(A - LC,\, -L,\, F,\, 0\right)$$
$$X(z) = \left(A - LC,\, -(B - LD),\, F,\, I\right),\quad Y(z) = \left(A - LC,\, -L,\, F,\, 0\right)$$
$$G(z) = \hat{M}^{-1}(z)\hat{N}(z) = N(z)M^{-1}(z)$$
$$\begin{bmatrix} X(z) & Y(z) \\ -\hat{N}(z) & \hat{M}(z) \end{bmatrix}\begin{bmatrix} M(z) & -\hat{Y}(z) \\ N(z) & \hat{X}(z) \end{bmatrix} = \begin{bmatrix} I & 0 \\ 0 & I \end{bmatrix}$$
Based on the LCF of the process, a residual signal can be generated as
$$r(z) = \begin{bmatrix} -\hat{N}(z) & \hat{M}(z) \end{bmatrix}\begin{bmatrix} u(z) \\ y(z) \end{bmatrix}.$$
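As a numerical illustration (not part of the original text), the following Python sketch builds $\hat{M}(z)$, $\hat{N}(z)$ from the realizations above for a small made-up system and checks $G(z) = \hat{M}^{-1}(z)\hat{N}(z)$ at one point on the unit circle; the system matrices and observer poles are arbitrary illustration values.

```python
# Sketch: LCF factors \hat{M}(z) = (A-LC, -L, C, I), \hat{N}(z) = (A-LC, B-LD, C, D)
# for an illustrative system, with a check of G(z) = \hat{M}^{-1}(z)\hat{N}(z).
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.9, 0.1], [0.0, 0.7]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Observer gain L such that A - LC is Schur (assumed design choice).
L = place_poles(A.T, C.T, [0.2, 0.3]).gain_matrix.T

def tf(Ax, Bx, Cx, Dx, z):
    """Frequency response C (zI - A)^{-1} B + D of a state space realization."""
    n = Ax.shape[0]
    return Cx @ np.linalg.solve(z * np.eye(n) - Ax, Bx) + Dx

z = np.exp(1j * 0.4)                          # test point on the unit circle
G     = tf(A, B, C, D, z)                     # original transfer function
M_hat = tf(A - L @ C, -L, C, np.eye(1), z)    # \hat{M}(z)
N_hat = tf(A - L @ C, B - L @ D, C, D, z)     # \hat{N}(z)

assert np.allclose(np.linalg.solve(M_hat, N_hat), G)   # G = \hat{M}^{-1}\hat{N}
```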
For the FD study, the process under consideration is represented by the following two model forms:
- state space model
$$x(k+1) = Ax(k) + Bu(k) + E_{d}d(k) + \eta(k)$$
$$y(k) = Cx(k) + Du(k) + F_{d}d(k) + \nu(k)$$
- input–output model
$$y(z) = G_{yu}(z)u(z) + G_{yd}(z)d(z) + G_{y\nu}(z)\nu(z)$$
where $G_{yd}(z)$, $G_{y\nu}(z)$ are known.
8.2.1.4 Modeling of Faults
There exist a number of ways of modeling faults. Extending model (8.16) to
$$y(z) = G_{yu}(z)u(z) + G_{yd}(z)d(z) + G_{yf}(z)f(z)$$
is a widely adopted one, where $$f \in \mathcal{R}^{k_{f}}$$ is an unknown vector that represents all possible faults and will be zero in the fault-free case, and $$G_{yf}(z) \in \mathcal{LH}_{\infty}$$ is a known transfer matrix. Throughout this book, f is assumed to be a deterministic time function. No further assumption on it is made, provided that the type of the fault is not specified.
Suppose that a minimal state space realization of (8.17) is given by
$$x(k+1) = Ax(k) + Bu(k) + E_{d}d(k) + E_{f}f(k)$$
$$y(k) = Cx(k) + Du(k) + F_{d}d(k) + F_{f}f(k)$$
with known matrices $E_{f}$, $F_{f}$. Then we have
$$G_{yf}(z) = F_{f} + C(zI - A)^{-1}E_{f}.$$
It becomes evident that $E_{f}$, $F_{f}$ indicate the place where a fault occurs and its influence on the system dynamics. It is the state of the art that faults are divided into three categories:
- sensor faults $f_{S}$: these are faults that directly act on the process measurement,
- actuator faults $f_{A}$: these faults cause changes in the actuator,
- process faults $f_{P}$: they are used to indicate malfunctions within the process.
A sensor fault is often modeled by setting $F_{f} = I$, that is,
$$y(k) = Cx(k) + Du(k) + F_{d}d(k) + f_{S}(k),$$
while an actuator fault by setting $E_{f} = B$, $F_{f} = D$, which leads to
$$x(k+1) = Ax(k) + B\left(u(k) + f_{A}(k)\right) + E_{d}d(k)$$
$$y(k) = Cx(k) + D\left(u(k) + f_{A}(k)\right) + F_{d}d(k).$$
Depending on their type and location, process faults can be modeled by $E_{f} = E_{P}$ and $F_{f} = F_{P}$ for some $E_{P}$, $F_{P}$. For a system with sensor, actuator and process faults, we define
$$f = \begin{bmatrix} f_{A} \\ f_{P} \\ f_{S} \end{bmatrix},\quad E_{f} = \begin{bmatrix} B & E_{P} & 0 \end{bmatrix},\quad F_{f} = \begin{bmatrix} D & F_{P} & I \end{bmatrix}$$
and apply (8.18), (8.19) to represent the system dynamics.
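To make the block structure above concrete, the following sketch (Python, with made-up numerical values) assembles f, E_f, F_f and propagates (8.18), (8.19) one step for a pure sensor fault.

```python
# Sketch of the combined additive fault model (8.18), (8.19) with
# E_f = [B  E_P  0], F_f = [D  F_P  I], f = [f_A; f_P; f_S]. Illustrative values only.
import numpy as np

A   = np.array([[0.8, 0.1], [0.0, 0.9]]);  B   = np.array([[1.0], [0.5]])
C   = np.array([[1.0, 0.0]]);              D   = np.array([[0.0]])
E_d = np.array([[0.1], [0.0]]);            F_d = np.array([[0.2]])
E_P = np.array([[0.0], [1.0]]);            F_P = np.array([[0.0]])   # process fault entry

n, l, m = 2, 1, 1
E_f = np.hstack([B, E_P, np.zeros((n, m))])   # actuator | process | sensor (state equation)
F_f = np.hstack([D, F_P, np.eye(m)])          # actuator | process | sensor (output equation)

f = np.array([[0.0], [0.0], [0.3]])           # here: a sensor fault of size 0.3
x = np.zeros((n, 1)); u = np.ones((l, 1)); d = np.zeros((1, 1))

x_next = A @ x + B @ u + E_d @ d + E_f @ f    # (8.18)
y      = C @ x + D @ u + F_d @ d + F_f @ f    # (8.19): the sensor fault enters the measurement directly
print(y)
```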
Owing to the way in which they affect the system dynamics, the faults described by
(8.18), (8.19) are called additive faults. It is very important to note that
the occurrence of an additive fault will not affect the system stability,
independent of the system configuration. Typical additive faults met in
practice are, for instance, an offset in sensors and actuators or a drift in
sensors. The former can be described by a constant, while the latter by a
ramp.
In practice, malfunctions in the process or in the sensors and actuators
often cause changes in the model parameters. They are called multiplicative
faults and generally modeled in terms of parameter changes.
8.2.2 Model-Based Residual Generation Schemes
Next, we introduce some standard model- and observer-based residual
generation schemes.
8.2.2.1 Fault Detection Filter
Fault detection filter (FDF) is the first type of observer-based residual
generators proposed by Beard and Jones in the early 1970s. Their work marked the
beginning of a stormy development of model-based FDI techniques.
The core of an FDF is a full-order state observer
$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + L\left(y(k) - C\hat{x}(k) - Du(k)\right),$$
which is constructed on the basis of the nominal system model $G_{yu}(z) = C(zI - A)^{-1}B + D$. Built upon (8.25), the residual is simply defined by
$$r(k) = y(k) - \hat{y}(k) = y(k) - C\hat{x}(k) - Du(k).$$
Introducing the variable
$$e(k) = x(k) - \hat{x}(k)$$
yields, on the assumption of process model (8.18), (8.19),
$$e(k+1) = (A - LC)e(k) + (E_{d} - LF_{d})d(k) + (E_{f} - LF_{f})f(k)$$
$$r(k) = Ce(k) + F_{d}d(k) + F_{f}f(k).$$
It is evident that r(k) has the characteristic features of a residual when the observer gain matrix L is chosen so that A − LC is stable.
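The following sketch (Python; the plant, the gain L and the fault scenario are illustrative assumptions, not from the text) runs the FDF (8.25), (8.26) on simulated data with a sensor offset entering halfway through, so the residual stays near zero until the fault occurs.

```python
# Minimal FDF sketch: observer (8.25) and residual (8.26) on simulated data.
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.9, 0.05], [0.0, 0.8]]); B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]]);              D = np.array([[0.0]])
L = np.array([[0.5], [0.2]])             # chosen such that A - L C is Schur (assumed)

x = np.zeros((2, 1)); x_hat = np.zeros((2, 1))
residual = []
for k in range(200):
    u = np.array([[np.sin(0.05 * k)]])
    f = np.array([[0.5]]) if k >= 100 else np.array([[0.0]])    # sensor offset from k = 100
    y = C @ x + D @ u + f + 0.01 * rng.standard_normal((1, 1))  # F_f = I (sensor fault)
    r = y - C @ x_hat - D @ u                                   # residual (8.26)
    x_hat = A @ x_hat + B @ u + L @ r                           # observer update (8.25)
    x = A @ x + B @ u + 0.01 * rng.standard_normal((2, 1))      # plant with small noise
    residual.append(r.item())

print(abs(np.mean(residual[:100])), abs(np.mean(residual[100:])))  # small vs. large
```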
The advantages of an FDF lie in its simple construction form and, for the reader familiar with modern control theory, in its intimate relationship with state observer design and, when robust residual generators are to be designed, with the well-established robust control theory.
We see that the design of an FDF is in fact the determination of the observer gain matrix L. To increase the degree of design freedom, we can in addition apply a weighting matrix V to the output estimation error $$y(z) - \hat{y}(z)$$, that is,
$$r(z) = V\left(y(z) - \hat{y}(z)\right).$$
A disadvantage of the FDF scheme is the online implementation of the full-order
state observer, since in many practical cases a reduced order observer can
provide us with the same or similar performance but with less online
computation. This is one of the motivations for the development of Luenberger
type residual generators, also called diagnostic observers.
8.2.2.2 Diagnostic Observer Scheme
The diagnostic observer (DO) is, thanks to its flexible structure and its similarity to the Luenberger type observer, one of the most intensively investigated model-based residual generator forms.
The core of a DO is a Luenberger type (output) observer described by
$$z(k+1) = Gz(k) + Hu(k) + Ly(k)$$
$$r(k) = Vy(k) - Wz(k) - Qu(k),$$
where $$z \in \mathcal{R}^{s}$$ and s denotes the observer order, which can be equal to, lower than or higher than the system order n. Although most contributions to the Luenberger type observer are focused on the first case, aiming at a reduced order observer, higher order observers will play an important role in the optimization of FDI systems.
Assume $G_{yu}(z) = C(zI - A)^{-1}B + D$. Then the matrices G, H, L, Q, V and W, together with a matrix $$T \in \mathcal{R}^{s \times n}$$, have to satisfy the so-called Luenberger conditions
I. $G$ is stable,
II. $TA - GT = LC$, $H = TB - LD$,
III. $VC - WT = 0$, $Q = VD$,
under which system (8.30), (8.31) delivers a residual vector, that is,
$$\forall u,\, x(0): \quad \lim_{k \to \infty} r(k) = 0.$$
Let e(k) = Tx(k) − z(k). It is then straightforward to show that the dynamics of the DO is, on the assumption of process model (8.18), (8.19), governed by
$$e(k+1) = Ge(k) + (TE_{d} - LF_{d})d(k) + (TE_{f} - LF_{f})f(k)$$
$$r(k) = We(k) + VF_{d}d(k) + VF_{f}f(k).$$
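As a quick numerical sanity check (not from the text), the sketch below takes the special choice T = I, G = A − LC, H = B − LD, V = I, W = C, Q = D, for which the DO reduces to the FDF, and verifies the Luenberger conditions I–III; the matrices are illustrative values.

```python
# Sketch: verify the Luenberger conditions I-III for the special (FDF-like) choice
# T = I, G = A - L C, H = B - L D, V = I, W = C, Q = D. Illustrative matrices.
import numpy as np

A = np.array([[0.9, 0.05], [0.0, 0.8]]); B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]]);              D = np.array([[0.0]])
L = np.array([[0.5], [0.2]])

T = np.eye(2)
G = A - L @ C
H = B - L @ D
V = np.eye(1); W = C; Q = D

assert np.all(np.abs(np.linalg.eigvals(G)) < 1)                  # I.   G is stable (Schur)
assert np.allclose(T @ A - G @ T, L @ C)                         # II.  TA - GT = LC
assert np.allclose(H, T @ B - L @ D)                             #      H  = TB - LD
assert np.allclose(V @ C - W @ T, 0) and np.allclose(Q, V @ D)   # III. VC - WT = 0, Q = VD
```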
Remember that in the last section it has been claimed that all residual generator design schemes can be formulated as the search for an observer gain matrix and a post-filter. It is therefore of practical and theoretical interest to reveal the relationships between the matrices G, L, T, V and W solving the Luenberger equations (8.32)–(8.34) and the observer gain matrix as well as the post-filter.
A comparison with the FDF scheme makes it clear that
- the diagnostic observer scheme may lead to a reduced order residual generator, which is desirable and useful for online implementation,
- we have more degrees of design freedom but, on the other hand,
- a more involved design.
8.2.2.3 Kalman Filter Based Residual Generation
Consider (8.14), (8.15). Assume that η(k), ν(k) are white Gaussian processes, independent of the initial state vector x(0) and of u(k), with
$$\mathcal{E}\begin{bmatrix} \eta(i)\eta^{T}(j) & \eta(i)\nu^{T}(j) \\ \nu(i)\eta^{T}(j) & \nu(i)\nu^{T}(j) \end{bmatrix} = \begin{bmatrix} \Sigma_{\eta} & S_{\eta\nu} \\ S_{\nu\eta} & \Sigma_{\nu} \end{bmatrix}\delta_{ij},\quad \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$$
$$\Sigma_{\nu} > 0,\quad \Sigma_{\eta} \ge 0,\quad \mathcal{E}\left[\eta(k)\right] = 0,\quad \mathcal{E}\left[\nu(k)\right] = 0$$
$$\mathcal{E}\left[x(0)\right] = \bar{x},\quad \mathcal{E}\left[\left(x(0) - \bar{x}\right)\left(x(0) - \bar{x}\right)^{T}\right] = P_{0}.$$
A Kalman filter is, although structured similarly to a full-order observer, a time-varying system given by the following recursions:
recursive computation for optimal state estimation:
$$\hat{x}(0) = \bar{x}$$
$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + K(k)\left(y(k) - C\hat{x}(k) - Du(k)\right)$$
recursive computation for the Kalman filter gain:
$$P(0) = P_{0}$$
$$P(k+1) = AP(k)A^{T} - K(k)R_{e}(k)K^{T}(k) + \Sigma_{\eta}$$
$$K(k) = \left(AP(k)C^{T} + S_{\eta\nu}\right)R_{e}^{-1}(k)$$
$$R_{e}(k) = \Sigma_{\nu} + CP(k)C^{T}$$
where $\hat{x}(k)$ denotes the estimate of x(k) and
$$P(k) = \mathcal{E}\left[\left(x(k) - \hat{x}(k)\right)\left(x(k) - \hat{x}(k)\right)^{T}\right]$$
is the associated estimation error covariance.
The significant characteristics of the Kalman filter are that
- the state estimation is optimal in the sense of
$$P(k) = \mathcal{E}\left[\left(x(k) - \hat{x}(k)\right)\left(x(k) - \hat{x}(k)\right)^{T}\right] \to \min,$$
- the so-called innovation process $e(k) = y(k) - C\hat{x}(k) - Du(k)$ is a white Gaussian process with covariance
$$\mathcal{E}\left[e(k)e^{T}(k)\right] = R_{e}(k) = \Sigma_{\nu} + CP(k)C^{T}.$$
Below is an algorithm for the online implementation of the Kalman filter given by (8.40)–(8.45).
Algorithm 8.1 Online implementation of the Kalman filter
S0: Set $\hat{x}(0)$, $P(0)$ as given in (8.40) and (8.42)
S1: Calculate $R_{e}(k)$, $K(k)$, $\hat{x}(k+1)$ according to (8.45), (8.44) and (8.41)
S2: Increase k and calculate $P(k+1)$ according to (8.43)
S3: Go to S1.
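A direct transcription of S0–S3 might look as follows; this is a Python sketch (not from the book), with the model matrices and noise covariances assumed to be given as NumPy arrays.

```python
# Sketch of Algorithm 8.1: one function advancing the Kalman filter recursions
# (8.41), (8.43)-(8.45) by one time step.
import numpy as np

def kalman_step(x_hat, P, u, y, A, B, C, D, Sigma_eta, Sigma_v, S_eta_v):
    R_e = Sigma_v + C @ P @ C.T                                   # (8.45)
    K   = (A @ P @ C.T + S_eta_v) @ np.linalg.inv(R_e)            # (8.44)
    x_hat_next = A @ x_hat + B @ u + K @ (y - C @ x_hat - D @ u)  # (8.41)
    P_next = A @ P @ A.T - K @ R_e @ K.T + Sigma_eta              # (8.43)
    return x_hat_next, P_next, K, R_e

# usage (S0): x_hat = x_bar, P = P_0; then repeat S1-S3 with each new sample (u, y)
```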
Suppose that the process under consideration is stationary; then
$$\lim_{k \to \infty} K(k) = K = \text{const},$$
which is subject to
$$K = \left(APC^{T} + S_{\eta\nu}\right)R_{e}^{-1}$$
with
$$P = \lim_{k \to \infty} P(k),\quad R_{e} = \Sigma_{\nu} + CPC^{T}.$$
It holds
$$P = APA^{T} - KR_{e}K^{T} + \Sigma_{\eta}.$$
Equation (8.49) is an algebraic Riccati equation whose solution P is positive definite under certain conditions. It thus becomes evident that, given the system model, the gain matrix K can be calculated offline by solving the Riccati equation (8.49). The corresponding residual generator is then given by
$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + K\left(y(k) - C\hat{x}(k) - Du(k)\right)$$
$$r(k) = y(k) - C\hat{x}(k) - Du(k).$$
Note that we now have in fact a full-order observer.
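For the steady-state case, the following sketch (Python/SciPy, illustrative matrices, with $S_{\eta\nu} = 0$ assumed for simplicity) obtains P from the algebraic Riccati equation (8.49) using SciPy's discrete-time Riccati solver and forms the constant gain K.

```python
# Steady-state Kalman filter gain via the algebraic Riccati equation (8.49),
# solved here with SciPy (sketch; S_eta_v = 0 assumed, matrices are illustrative).
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.9, 0.05], [0.0, 0.8]]); C = np.array([[1.0, 0.0]])
Sigma_eta = 0.01 * np.eye(2)             # process noise covariance
Sigma_v   = 0.1  * np.eye(1)             # measurement noise covariance

P   = solve_discrete_are(A.T, C.T, Sigma_eta, Sigma_v)   # solves (8.49) for S_eta_v = 0
R_e = Sigma_v + C @ P @ C.T
K   = (A @ P @ C.T) @ np.linalg.inv(R_e)                  # steady-state gain

# residual generator (8.50), (8.51): x_hat <- A x_hat + B u + K (y - C x_hat - D u)
assert np.allclose(P, A @ P @ A.T - K @ R_e @ K.T + Sigma_eta)
```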
Remark 8.1 The offline setup (S0) in the above algorithm is needed only once, but S1–S3 have to be repeated at each time instant. Thus, the online implementation is, compared with the steady-state Kalman filter, computationally demanding. For the FDI purpose, we can generally assume that the system under consideration is operating in its steady state before a fault occurs. Therefore, the use of the steady-state type residual generator (8.50), (8.51) is advantageous. In this case, the most involved computation is finding a solution of the Riccati equation (8.49), which, nevertheless, is carried out offline and for which there exist a number of numerically reliable methods and CAD programs.
8.2.2.4 Parity Space Approach
The parity space approach was initiated by Chow and Willsky in their pioneering
work in the early 1980s. Although a state space model is used for the purpose of
residual generation, the so-called parity relation, instead of an observer,
builds the core of this approach. The parity space approach is generally
recognized as one of the important model-based residual generation approaches,
parallel to the observer-based and the parameter estimation schemes.
We consider in the following the state space model (8.18), (8.19) and, without loss of generality, assume that rank(C) = m. For the purpose of constructing a residual generator, we first suppose f(k) = 0, d(k) = 0. Following (8.18), (8.19), y(k − s), s > 0, can be expressed in terms of x(k − s), u(k − s), and y(k − s + 1) in terms of x(k − s), u(k − s + 1), u(k − s), as follows:
$$y(k-s) = Cx(k-s) + Du(k-s)$$
$$y(k-s+1) = Cx(k-s+1) + Du(k-s+1) = CAx(k-s) + CBu(k-s) + Du(k-s+1).$$
Repeating this procedure yields
$$y(k-s+2) = CA^{2}x(k-s) + CABu(k-s) + CBu(k-s+1) + Du(k-s+2),\ \ldots,$$
$$y(k) = CA^{s}x(k-s) + CA^{s-1}Bu(k-s) + \cdots + CBu(k-1) + Du(k).$$
Introducing the notations
$$y_{s}(k) = \begin{bmatrix} y(k-s) \\ y(k-s+1) \\ \vdots \\ y(k) \end{bmatrix},\quad u_{s}(k) = \begin{bmatrix} u(k-s) \\ u(k-s+1) \\ \vdots \\ u(k) \end{bmatrix}$$
$$H_{o,s} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{s} \end{bmatrix},\quad H_{u,s} = \begin{bmatrix} D & 0 & \cdots & 0 \\ CB & D & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ CA^{s-1}B & \cdots & CB & D \end{bmatrix}$$
leads to the following compact model form:
$$y_{s}(k) = H_{o,s}x(k-s) + H_{u,s}u_{s}(k).$$
Note that (8.54) describes the input and output relationship in dependence on
the past state vector x(k − s). It is expressed in an explicit form, in
which
- $y_{s}(k)$ and $u_{s}(k)$ consist of the current and past outputs and inputs, respectively, and are known,
- the matrices $H_{o,s}$ and $H_{u,s}$ are composed of the system matrices A, B, C, D and are also known,
- the only unknown variable is x(k − s).
The underlying idea of the parity relation based residual generation lies in the utilization of the fact, known from linear control theory, that for s ≥ n the following rank condition holds:
$$\operatorname{rank}\left(H_{o,s}\right) \le n < \text{the row number of } H_{o,s} = (s+1)m.$$
This ensures that for s ≥ n there exists at least one (row) vector $$v_{s}(\neq 0) \in \mathcal{R}^{(s+1)m}$$ such that
$$v_{s}H_{o,s} = 0.$$
Hence, a parity relation based residual generator is constructed by
$$r(k) = v_{s}\left(y_{s}(k) - H_{u,s}u_{s}(k)\right),$$
whose dynamics is governed by, in the case of f(k) = 0, d(k) = 0,
$$r(k) = v_{s}\left(y_{s}(k) - H_{u,s}u_{s}(k)\right) = v_{s}H_{o,s}x(k-s) = 0.$$
Vectors satisfying (8.55) are called parity vectors, and the set of all parity vectors,
$$P_{s} = \left\{ v_{s} \mid v_{s}H_{o,s} = 0 \right\},$$
is called the parity space of the sth order.
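Numerically, a parity vector can be taken from the left null space of $H_{o,s}$; the sketch below (illustrative second-order system, SVD-based null space computation) builds $H_{o,s}$, $H_{u,s}$ and checks (8.55).

```python
# Sketch: build H_{o,s}, H_{u,s} for an illustrative 2nd order system, take a parity
# vector v_s from the left null space of H_{o,s} (via SVD) and check (8.55).
import numpy as np

A = np.array([[0.9, 0.05], [0.0, 0.8]]); B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]]);              D = np.array([[0.0]])
n, m, l, s = 2, 1, 1, 2                  # s chosen such that (s+1)m > n

H_o = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(s + 1)])
H_u = np.zeros(((s + 1) * m, (s + 1) * l))
for i in range(s + 1):
    for j in range(i + 1):
        blk = D if i == j else C @ np.linalg.matrix_power(A, i - j - 1) @ B
        H_u[i * m:(i + 1) * m, j * l:(j + 1) * l] = blk

U, S, Vt = np.linalg.svd(H_o)            # columns of U beyond rank(H_o) span the left null space
v_s = U[:, -1].reshape(1, -1)            # one parity (row) vector
assert np.allclose(v_s @ H_o, 0, atol=1e-10)

# residual (8.56): r(k) = v_s (y_s(k) - H_{u,s} u_s(k)), zero for fault- and disturbance-free data
```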
In order to study the influence of f, d on residual generator (8.56), let f(k) ≠ 0, d(k) ≠ 0. It is straightforward that
$$y_{s}(k) = H_{o,s}x(k-s) + H_{u,s}u_{s}(k) + H_{f,s}f_{s}(k) + H_{d,s}d_{s}(k)$$
where
$$f_{s}(k) = \begin{bmatrix} f(k-s) \\ f(k-s+1) \\ \vdots \\ f(k) \end{bmatrix},\quad H_{f,s} = \begin{bmatrix} F_{f} & 0 & \cdots & 0 \\ CE_{f} & F_{f} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ CA^{s-1}E_{f} & \cdots & CE_{f} & F_{f} \end{bmatrix}$$
$$d_{s}(k) = \begin{bmatrix} d(k-s) \\ d(k-s+1) \\ \vdots \\ d(k) \end{bmatrix},\quad H_{d,s} = \begin{bmatrix} F_{d} & 0 & \cdots & 0 \\ CE_{d} & F_{d} & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ CA^{s-1}E_{d} & \cdots & CE_{d} & F_{d} \end{bmatrix}.$$
Constructing a residual generator according to (8.56) finally results in
$$r(k) = v_{s}\left(H_{f,s}f_{s}(k) + H_{d,s}d_{s}(k)\right),\quad v_{s} \in P_{s}.$$
We see that the design parameter of the parity relation based residual generator is the parity vector, whose selection has a decisive impact on the performance of the residual generator.
Remark 8.2 One of the significant properties of parity relation based residual generators, also widely viewed as the main advantage over the observer-based approaches, is that the design can be carried out in a straightforward manner. In fact, it only deals with solutions of linear equations or linear optimization problems. In contrast, the implementation form (8.56) is surely not ideal for an online realization, since it is presented in an explicit form, and thus not only the current but also the past measurement and input data are needed and have to be recorded.
8.2.2.5 Kernel Representation and Parameterization of Residual Generators
In the model-based FD study, the FDF, DO and Kalman filter based residual generators are called closed-loop configured, since a feedback of the residual signal is embedded in the system configuration and the computation is realized in a recursive form. In contrast, the parity space residual generator is open-loop structured. In fact, it is an FIR (finite impulse response) filter.
Below, we introduce a general form for all types of LTI residual generators,
which is also called parameterization of residual generators.
A fundamental property of the LCF is that, in the fault- and noise-free case,
$$\forall u,\quad \begin{bmatrix} -\hat{N}(z) & \hat{M}(z) \end{bmatrix}\begin{bmatrix} u(z) \\ y(z) \end{bmatrix} = 0.$$
Equation (8.61) is called the kernel representation (KR) of the system under consideration and is useful in parameterizing all residual generators. For our purpose, we introduce below a more general definition of kernel representation.
Definition 8.3 Given system (8.2), (8.3), a stable linear system $\mathcal{K}$ driven by u(z), y(z) and satisfying
$$\forall u(z),\quad r(z) = \mathcal{K}\begin{bmatrix} u(z) \\ y(z) \end{bmatrix} = 0$$
is called a stable kernel representation (SKR) of (8.2), (8.3).
Consider now model (8.18), (8.19) with unknown input vectors. The parameterization forms of all LTI residual generators and their dynamics are described by
$$r(z) = R(z)\begin{bmatrix} -\hat{N}(z) & \hat{M}(z) \end{bmatrix}\begin{bmatrix} u(z) \\ y(z) \end{bmatrix} = R(z)\left(\hat{N}_{d}(z)d(z) + \hat{N}_{f}(z)f(z)\right)$$
where
$$\hat{N}_{d}(z) = \left(A - LC,\, E_{d} - LF_{d},\, C,\, F_{d}\right),\quad \hat{N}_{f}(z) = \left(A - LC,\, E_{f} - LF_{f},\, C,\, F_{f}\right)$$
and R(z)(≠ 0) is a stable parameterization matrix called the post-filter. Moreover, in order to avoid loss of information about the faults to be detected, the condition rank(R(z)) = m is to be satisfied.
It has been demonstrated that all LTI residual generators can be expressed by (8.63), while their dynamics with respect to the unknown inputs and faults are parameterized by (8.64). Moreover, it holds
$$\begin{bmatrix} -\hat{N}(z) & \hat{M}(z) \end{bmatrix}\begin{bmatrix} u(z) \\ y(z) \end{bmatrix} = y(z) - \hat{y}(z)$$
with ŷ delivered by a full-order observer as an estimate of y. Hence, we can apply an FDF,
$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + L\left(y(k) - \hat{y}(k)\right),\quad \hat{y}(k) = C\hat{x}(k) + Du(k),$$
for the online realization of (8.63).
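As a simple illustration of the parameterization (8.63), the sketch below (hypothetical scalar residual sequence r_fdf, assumed stable first-order post-filter) applies a post-filter R(z) to an FDF residual.

```python
# Sketch: residual parameterization (8.63) realized as post-filtering of the FDF
# residual y - y_hat with a stable first-order filter R(z) = (1 - a) / (1 - a z^{-1}).
# 'r_fdf' is assumed to be the residual sequence produced by an FDF as above.
import numpy as np
from scipy.signal import lfilter

a = 0.7                                                  # |a| < 1 keeps R(z) stable (assumed choice)
r_fdf = np.random.default_rng(1).standard_normal(200)    # placeholder residual data
r = lfilter([1 - a], [1, -a], r_fdf)                     # r(z) = R(z) (y(z) - y_hat(z))
```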
As a summary, we present a theorem which immediately follows from Definition 8.3 and the residual generator parameterization.
Theorem 8.1 Given process model (8.18), (8.19), an LTI dynamic system is a residual generator if and only if it is an SKR of (8.2), (8.3).
8.3 I/O Data Models
In order to connect analytical models and process data, we now introduce
different I/O data models. They are essential in our subsequent study and
build the link between the model-based FD schemes introduced in the previous
sections and the data-driven design methods to be introduced below. For our
purpose, the following LTI process model is assumed to be the underlying
model form adopted in our study:
$$x(k+1) = Ax(k) + Bu(k) + w(k)$$
$$y(k) = Cx(k) + Du(k) + v(k)$$
where $u \in \mathcal{R}^{l}$, $y \in \mathcal{R}^{m}$, $x \in \mathcal{R}^{n}$, and $w \in \mathcal{R}^{n}$, $v \in \mathcal{R}^{m}$ denote noise sequences that are normally distributed and statistically independent of u and x(0).
Let $\omega(k) \in \mathcal{R}^{\xi}$ be a data vector. We introduce the following notations:
$$\omega_{s}(k) = \begin{bmatrix} \omega(k-s) \\ \vdots \\ \omega(k) \end{bmatrix} \in \mathcal{R}^{(s+1)\xi},\quad \Omega_{k} = \begin{bmatrix} \omega(k) & \cdots & \omega(k+N-1) \end{bmatrix} \in \mathcal{R}^{\xi \times N}$$
$$\Omega_{k,s} = \begin{bmatrix} \omega_{s}(k) & \cdots & \omega_{s}(k+N-1) \end{bmatrix} = \begin{bmatrix} \Omega_{k-s} \\ \vdots \\ \Omega_{k} \end{bmatrix} \in \mathcal{R}^{(s+1)\xi \times N}$$
where s, N are some (large) integers. In our study, ω(k) can be y(k), u(k) or x(k), and ξ represents m, l or n as given in (8.65), (8.66), respectively. The first I/O data model, described by
$$Y_{k,s} = \Gamma_{s}X_{k-s} + H_{u,s}U_{k,s} + H_{w,s}W_{k,s} + V_{k,s} \in \mathcal{R}^{(s+1)m \times N},$$
follows directly from the I/O model (8.54) introduced in the study of the parity space scheme, where $H_{u,s} \in \mathcal{R}^{(s+1)m \times (s+1)l}$ and the term $H_{w,s}W_{k,s} + V_{k,s}$ represents the influence of the noise vectors on the process output, with $H_{w,s}$ having the same structure as $H_{u,s}$ and with $W_{k,s}$, $V_{k,s}$ as defined in (8.67), (8.68). Here
$$\Gamma_{s} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{s} \end{bmatrix} \in \mathcal{R}^{(s+1)m \times n},\quad H_{u,s} = \begin{bmatrix} D & 0 & & \\ CB & \ddots & \ddots & \\ \vdots & \ddots & \ddots & 0 \\ CA^{s-1}B & \cdots & CB & D \end{bmatrix}.$$
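To connect the notation (8.67), (8.68) with recorded data, the following sketch (hypothetical data arrays u_data, y_data with one column per sample) builds the stacked data matrices $U_{k,s}$, $Y_{k,s}$ used in (8.69).

```python
# Sketch: build the stacked data matrices of (8.67), (8.68) from recorded I/O data.
# u_data, y_data are assumed arrays with one column per sample (shapes (l, T), (m, T)).
import numpy as np

def data_matrix(w_data, k, s, N):
    """Omega_{k,s} = [w_s(k) ... w_s(k+N-1)] with w_s(k) = [w(k-s); ...; w(k)]."""
    return np.vstack([w_data[:, k - s + i : k - s + i + N] for i in range(s + 1)])

rng = np.random.default_rng(2)
l, m, T = 2, 1, 500
u_data = rng.standard_normal((l, T))           # placeholder input record
y_data = rng.standard_normal((m, T))           # placeholder output record

s, N, k = 4, 300, 10                           # window length s, number of columns N
U_ks = data_matrix(u_data, k, s, N)            # shape ((s+1)l, N)
Y_ks = data_matrix(y_data, k, s, N)            # shape ((s+1)m, N)
assert U_ks.shape == ((s + 1) * l, N) and Y_ks.shape == ((s + 1) * m, N)
```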
In the SIM framework, the so-called innovation form, instead of (8.69), is often applied to build an I/O model. The core of the innovation form is a (steady-state) Kalman filter, which is written as
$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + K\left(y(k) - \hat{y}(k)\right),\quad \hat{y}(k) = C\hat{x}(k) + Du(k),$$
with the innovation sequence $e(k) := y(k) - \hat{y}(k)$ being a white noise sequence and K the Kalman filter gain matrix. Based on (8.70), the I/O relation of the process can alternatively be written as
$$\hat{x}(k+1) = A\hat{x}(k) + Bu(k) + Ke(k) = A_{K}\hat{x}(k) + B_{K}u(k) + Ky(k)$$
$$y(k) = C\hat{x}(k) + Du(k) + e(k),\quad A_{K} = A - KC,\quad B_{K} = B - KD.$$
The following two I/O data models follow from (8.71), (8.72):
$$Y_{k,s} = \Gamma_{s}\hat{X}_{k-s} + H_{u,s}U_{k,s} + H_{e,s}E_{k,s},\quad H_{e,s} = \begin{bmatrix} I & 0 & & \\ CK & \ddots & \ddots & \\ \vdots & \ddots & \ddots & 0 \\ CA^{s-1}K & \cdots & CK & I \end{bmatrix}$$
$$\left(I - H_{y,s}^{K}\right)Y_{k,s} = \Gamma_{s}^{K}\hat{X}_{k-s} + H_{u,s}^{K}U_{k,s},\quad \Gamma_{s}^{K} = \begin{bmatrix} C \\ CA_{K} \\ \vdots \\ CA_{K}^{s} \end{bmatrix}$$
$$H_{y,s}^{K} = \begin{bmatrix} 0 & 0 & & \\ CK & \ddots & \ddots & \\ \vdots & \ddots & \ddots & 0 \\ CA_{K}^{s-1}K & \cdots & CK & 0 \end{bmatrix},\quad H_{u,s}^{K} = \begin{bmatrix} 0 & 0 & & \\ CB_{K} & \ddots & \ddots & \\ \vdots & \ddots & \ddots & 0 \\ CA_{K}^{s-1}B_{K} & \cdots & CB_{K} & 0 \end{bmatrix}$$
with $\hat{X}_{k-s}$, $U_{k,s}$, $E_{k,s}$ and $Y_{k,s}$ built with the same structure as given in (8.67), (8.68).
8.4 Notes and References
In order to apply the MVA technique to solving FD problems in dynamic
processes, numerous methods have been developed in the last two decades.
Among them, dynamic PCA/PLS [1–3], recursive implementations of PCA/PLS [4, 5], fast moving window PCA [6] and multiple-mode PCA [7] have been widely used in research and applications in recent years.
SIM is a well-established technique and widely applied in process identification [8–11]. The application of the SIM technique to the FDI study is relatively new and was first proposed in [12–15].
In Sect. 8.2, the basics of the model-based FDI framework have been
reviewed. They can be found in the monographs [1, 16–25] and in the survey
papers [26–29]. The first work on the FDF was reported by Beard and Jones in [30, 31], and Chow and Willsky proposed the first optimal FDI solution using the parity space scheme [32]. The reader is referred to
Chaps. 3 and 5 in [25] for a systematic handling of the issues on process
and fault modeling and model-based residual generation schemes.
The concept of the SKR will play an important role in our subsequent studies. In
fact, residual generator design is to find an SKR, as given in Theorem 8.1.
The SKR definition given in Definition 8.3 is similar to the one introduced
in [33] for nonlinear systems.