Figure: extraction of two sinusoids from noisy data through the GPOF method.
Generalized pencil-of-function method (GPOF), also known as matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to the Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency.[1] The method was originally developed by Yingbo Hua and Tapan Sarkar for extracting the poles of an electromagnetic system from its transient response.
Method
Mathematical basis
A transient electromagnetic signal can be represented as:[3]
y(t) = x(t) + n(t) ≈ ∑_{i=1}^{M} R_i e^{s_i t} + n(t);   0 ≤ t ≤ T,
where y(t) is the observed time-domain signal, n(t) is the signal noise, x(t) is the actual signal, R_i are the residues, s_i are the poles of the system, defined as s_i = −α_i + jω_i, z_i = e^((−α_i + jω_i) T_s) by the identities of the Z-transform, α_i are the damping factors, and ω_i are the angular frequencies. The same sequence, sampled with a period T_s, can be written as the following:
y[k T_s] = x[k T_s] + n[k T_s] ≈ ∑_{i=1}^{M} R_i z_i^k + n[k T_s];   k = 0, …, N−1;   i = 1, 2, …, M.
Generalized pencil-of-function estimates the optimal M and z_i's.[4]
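The sampled model above can be generated numerically. A minimal sketch follows; the sampling period, poles, and residues are illustrative assumptions, not values prescribed by the method.

```python
import numpy as np

# Hypothetical example: two damped complex exponentials sampled at Ts = 1 ms
Ts = 1e-3                                   # sampling period (assumed)
N = 64                                      # number of samples
k = np.arange(N)                            # sample indices 0 .. N-1
s = np.array([-20 + 2j * np.pi * 100,       # poles s_i = -alpha_i + j*omega_i (assumed)
              -50 + 2j * np.pi * 250])
z = np.exp(s * Ts)                          # z_i = e^{s_i * Ts}
R = np.array([1.0 + 0.0j, 0.5 - 0.25j])     # residues (arbitrary)
x = (R[None, :] * z[None, :] ** k[:, None]).sum(axis=1)   # x[k*Ts] = sum_i R_i z_i^k
```

At k = 0 every z_i^k is 1, so x[0] is simply the sum of the residues.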
Noise-free analysis
For the noiseless case, two (N−L) × L matrices, Y_1 and Y_2, are produced:[3]
[Y_1] =
⎡ x(0)        x(1)      ⋯   x(L−1) ⎤
⎢ x(1)        x(2)      ⋯   x(L)   ⎥
⎢   ⋮           ⋮       ⋱     ⋮    ⎥
⎣ x(N−L−1)    x(N−L)    ⋯   x(N−2) ⎦   of size (N−L)×L;
[Y_2] =
⎡ x(1)       x(2)        ⋯   x(L)   ⎤
⎢ x(2)       x(3)        ⋯   x(L+1) ⎥
⎢   ⋮          ⋮         ⋱     ⋮    ⎥
⎣ x(N−L)     x(N−L+1)    ⋯   x(N−1) ⎦   of size (N−L)×L
where L is defined as the pencil parameter. Y_1 and Y_2 can be decomposed into the following matrices:[3]
[Y_1] = [Z_1][B][Z_2]
[Y_2] = [Z_1][B][Z_0][Z_2]
where

[Z_1] =
⎡ 1              1              ⋯   1            ⎤
⎢ z_1            z_2            ⋯   z_M          ⎥
⎢   ⋮              ⋮            ⋱     ⋮          ⎥
⎣ z_1^(N−L−1)    z_2^(N−L−1)    ⋯   z_M^(N−L−1)  ⎦   of size (N−L)×M;
[Z_2] =
⎡ 1    z_1    ⋯   z_1^(L−1) ⎤
⎢ 1    z_2    ⋯   z_2^(L−1) ⎥
⎢ ⋮     ⋮     ⋱      ⋮      ⎥
⎣ 1    z_M    ⋯   z_M^(L−1) ⎦   of size M×L
[Z_0] and [B] are M × M diagonal matrices with sequentially placed z_i and R_i values, respectively.[3]
If M ≤ L ≤ N − M, the generalized eigenvalues of the matrix pencil

[Y_2] − λ[Y_1] = [Z_1][B]([Z_0] − λ[I])[Z_2]

yield the poles of the system, which are λ = z_i. Then, the generalized eigenvectors p_i can be obtained by the following identities:[3]
[Y_1]^+ [Y_1] p_i = p_i;   i = 1, …, M
[Y_1]^+ [Y_2] p_i = z_i p_i;   i = 1, …, M

where the superscript + denotes the Moore–Penrose inverse, also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse.
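A minimal NumPy sketch of this noiseless procedure, assuming a synthetic two-pole signal and an arbitrary admissible pencil parameter L. The nonzero eigenvalues of [Y_1]^+ [Y_2] recover the z_i:

```python
import numpy as np

# Synthetic noise-free signal with M = 2 assumed poles
N, L, M = 40, 15, 2                         # M <= L <= N - M holds
z_true = np.array([0.90 * np.exp(1j * 0.4),
                   0.95 * np.exp(-1j * 1.1)])
R_true = np.array([1.0 + 0.0j, 0.7 + 0.2j])
k = np.arange(N)
x = (R_true[None, :] * z_true[None, :] ** k[:, None]).sum(axis=1)

# Hankel matrices: Y1 uses samples x(0)..x(N-2); Y2 is Y1 shifted by one sample
Y1 = np.array([x[i:i + L] for i in range(N - L)])
Y2 = np.array([x[i + 1:i + L + 1] for i in range(N - L)])

# Nonzero eigenvalues of pinv(Y1) @ Y2 solve the pencil Y2 - lambda*Y1 and
# equal the z_i; the remaining L - M eigenvalues are (numerically) zero
eig = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
z_est = eig[np.argsort(-np.abs(eig))][:M]   # keep the M dominant eigenvalues
```

Sorting by magnitude separates the M physical eigenvalues from the near-zero spurious ones, which only works cleanly in the noise-free case.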
Noise filtering
If noise is present in the system, [Y_1] and [Y_2] are combined in a general data matrix, [Y]:[3]

[Y] =
⎡ y(0)        y(1)      ⋯   y(L)   ⎤
⎢ y(1)        y(2)      ⋯   y(L+1) ⎥
⎢   ⋮           ⋮       ⋱     ⋮    ⎥
⎣ y(N−L−1)    y(N−L)    ⋯   y(N−1) ⎦   of size (N−L)×(L+1)
where y is the noisy data. For efficient filtering, L is chosen between N/3 and N/2. A singular value decomposition of [Y] yields:

[Y] = [U][Σ][V]^H
In this decomposition, [U] and [V] are unitary matrices composed of the eigenvectors of [Y][Y]^H and [Y]^H[Y], respectively, and [Σ] is a diagonal matrix containing the singular values of [Y]. The superscript H denotes the conjugate transpose.[3][4]
Then the parameter M is chosen for filtering: singular values after M, which are below the filtering threshold, are set to zero. For an arbitrary singular value σ_c, the threshold is given by the following formula:[1]

σ_c / σ_max = 10^(−p),

where σ_max and p are the maximum singular value and the number of significant decimal digits of the data, respectively. For data with significant digits accurate up to p, singular values whose ratio to σ_max falls below 10^(−p) are considered noise.[4]
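The threshold rule amounts to a one-line count; the singular values and p below are hypothetical:

```python
import numpy as np

# Hypothetical singular values of [Y]; data assumed accurate to p = 3 digits
sv = np.array([5.0, 1.2, 4e-4, 2e-4, 8e-5])
p = 3
# Keep sigma_c with sigma_c / sigma_max > 10^-p; their count is the model order M
M = int(np.sum(sv / sv.max() > 10.0 ** (-p)))
```

Here the ratios are [1, 0.24, 8e-5, 4e-5, 1.6e-5], so only the first two singular values survive the 10^(−3) threshold.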
[V_1'] and [V_2'] are obtained through removing the last and first row and column of the filtered matrix [V'], respectively; M columns of [Σ] represent [Σ']. The filtered [Y_1] and [Y_2] matrices are obtained as:[4]

[Y_1] = [U][Σ'][V_1']^H
[Y_2] = [U][Σ'][V_2']^H
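Putting the filtering steps together, a sketch under assumed conditions (synthetic two-pole signal, mild additive noise, L between N/3 and N/2) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, M = 60, 25, 2                          # L chosen between N/3 and N/2
z_true = np.array([0.92 * np.exp(1j * 0.5),
                   0.97 * np.exp(-1j * 1.2)])
k = np.arange(N)
x = (z_true[None, :] ** k[:, None]).sum(axis=1)    # unit residues, illustrative
y = x + 0.01 * rng.standard_normal(N)              # noisy observation

# General data matrix [Y], size (N-L) x (L+1)
Y = np.array([y[i:i + L + 1] for i in range(N - L)])
U, sv, Vh = np.linalg.svd(Y, full_matrices=False)
V = Vh.conj().T

Vp = V[:, :M]                 # filtered [V']: M dominant right singular vectors
V1 = Vp[:-1, :]               # [V1']: last row removed
V2 = Vp[1:, :]                # [V2']: first row removed
Sp = np.diag(sv[:M])          # [Sigma']: the M retained singular values

Y1f = U[:, :M] @ Sp @ V1.conj().T        # filtered [Y1]
Y2f = U[:, :M] @ Sp @ V2.conj().T        # filtered [Y2]
eig = np.linalg.eigvals(np.linalg.pinv(Y1f) @ Y2f)
z_est = eig[np.argsort(-np.abs(eig))][:M]    # M dominant eigenvalues ~ z_i
```

Because the rank of the filtered pencil is M, all but M eigenvalues collapse to (numerical) zero, and the dominant ones approximate the true poles despite the noise.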
Prefiltering can be used to combat noise and enhance the signal-to-noise ratio (SNR).[1] The band-pass matrix pencil (BPMP) method is a modification of the GPOF method that employs FIR or IIR band-pass filters.[1][5]
GPOF can handle SNR values down to about 25 dB. For GPOF, as well as for BPMP, the variance of the estimates approximately reaches the Cramér–Rao bound.[3][5][4]
Calculation of residues
Residues of the complex poles are obtained through the least squares problem:[1]
⎡ y(0)   ⎤   ⎡ 1            1            ⋯   1          ⎤ ⎡ R_1 ⎤
⎢ y(1)   ⎥ = ⎢ z_1          z_2          ⋯   z_M        ⎥ ⎢ R_2 ⎥
⎢   ⋮    ⎥   ⎢   ⋮            ⋮          ⋱     ⋮        ⎥ ⎢  ⋮  ⎥
⎣ y(N−1) ⎦   ⎣ z_1^(N−1)    z_2^(N−1)    ⋯   z_M^(N−1)  ⎦ ⎣ R_M ⎦
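Once the z_i are known, the residues follow from a single linear least-squares solve of the Vandermonde system above; the poles and residues here are illustrative:

```python
import numpy as np

# Assumed poles and residues for a synthetic round-trip check
N = 30
z = np.array([0.9 * np.exp(1j * 0.6), 0.8 * np.exp(-1j * 0.3)])
R_true = np.array([1.2 + 0.1j, -0.4 + 0.7j])
k = np.arange(N)
y = (R_true[None, :] * z[None, :] ** k[:, None]).sum(axis=1)

# Vandermonde matrix: column i holds z_i^k for k = 0 .. N-1
Z = z[None, :] ** k[:, None]                # N x M
R_est, *_ = np.linalg.lstsq(Z, y, rcond=None)   # solve Z R = y in least squares
```

With noise-free data and distinct poles the recovered residues match the true ones to machine precision.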
Applications
The method is generally used for the closed-form evaluation of Sommerfeld integrals in the discrete complex image method for method-of-moments applications, where the spectral Green's function is approximated as a sum of complex exponentials.[1][6] Additionally, the method is used in antenna analysis, S-parameter estimation in microwave integrated circuits, wave propagation analysis, moving target indication, radar signal processing,[1][7][8] and series acceleration in electromagnetic problems.[9]