==Vectors==
===Scalar product===
A scalar product of two events (x_1, y_1, z_1, t_1) and (x_2, y_2, z_2, t_2) may be defined either in space-like form as
:<math>x_1 x_2 + y_1 y_2 + z_1 z_2 - c^2 t_1 t_2</math>
or in time-like form as
:<math>c^2 t_1 t_2 - x_1 x_2 - y_1 y_2 - z_1 z_2</math>
Two vectors with zero scalar product were called normal by Minkowski. The condition resembles orthogonality but this term is inappropriate as right angles are not conserved under Lorentz transformation.
The time-like form of the scalar product satisfies the reversed Cauchy inequality, valid for two time-like events u and v:
:<math>(u \cdot v)^2 \ge (u \cdot u)(v \cdot v)</math>
From this it follows that the scalar product of two time-like events is nonzero, so two time-like events cannot be normal. It follows that if two events are normal, at least one of them must be space-like (and it is possible for both to be space-like).
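The two forms and the "normal" condition can be illustrated with a small Python sketch. The units (c = 1) and the sample vectors below are my own choices:

```python
# Sketch of the two scalar-product forms for events (t, x, y, z),
# in units with c = 1 (an assumption for simplicity).

def spacelike_form(u, v):
    """x1*x2 + y1*y2 + z1*z2 - t1*t2 (space-like form, c = 1)."""
    (t1, x1, y1, z1), (t2, x2, y2, z2) = u, v
    return x1*x2 + y1*y2 + z1*z2 - t1*t2

def timelike_form(u, v):
    """t1*t2 - x1*x2 - y1*y2 - z1*z2: the negative of the space-like form."""
    return -spacelike_form(u, v)

def is_normal(u, v):
    """Minkowski's 'normal' condition: zero scalar product."""
    return timelike_form(u, v) == 0

u = (2, 1, 0, 0)   # time-like: t^2 > x^2 + y^2 + z^2
w = (1, 2, 0, 0)   # space-like: t^2 < x^2 + y^2 + z^2
print(is_normal(u, w))   # True: 2*1 - 1*2 = 0

# Reversed Cauchy inequality for two time-like vectors:
v = (3, 0, 2, 0)
print(timelike_form(u, v)**2 >= timelike_form(u, u) * timelike_form(v, v))  # True
```

Since the two forms differ only by an overall sign, the "normal" condition is the same in either convention.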
===Scalar product===
Two event vectors both having either t > 0 or t < 0 are called similarly directed. A scalar product of two similarly directed time-like event vectors u = (t, x, y, z) and v = (t', x', y', z') may be defined as
:<math>u \cdot v = c^2 t t' - x x' - y y' - z z'</math>
This is always positive: tt' > 0 since u and v are similarly directed, both vectors are time-like, and so by Cauchy's inequality
:<math>c^2 t t' = \sqrt{c^2 t^2}\,\sqrt{c^2 t'^2} > \sqrt{x^2 + y^2 + z^2}\,\sqrt{x'^2 + y'^2 + z'^2} \ge x x' + y y' + z z'</math>
A similar result does not apply to similarly directed space-like event vectors because if (x, y, z), (x', y', z') are orthogonal then
:<math>u \cdot v = c^2 t t'</math>
while, when the space parts are not orthogonal, the term x x' + y y' + z z' can outweigh c² t t' and make the scalar product negative.
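The contrast can be checked numerically. The sample vectors below are my own choices, again with c = 1:

```python
# Contrast between time-like and space-like pairs, assuming c = 1.

def timelike_form(u, v):
    """Time-like form of the scalar product, c = 1."""
    (t1, x1, y1, z1), (t2, x2, y2, z2) = u, v
    return t1*t2 - x1*x2 - y1*y2 - z1*z2

# Similarly directed (both t > 0) time-like vectors: product is positive.
u, v = (3, 1, 1, 0), (2, 0, 1, 1)
print(timelike_form(u, v))   # 5, positive as the Cauchy argument guarantees

# Similarly directed space-like vectors: the product can be negative.
p, q = (1, 5, 0, 0), (1, 4, 3, 0)
print(timelike_form(p, q))   # 1 - 20 - 0 - 0 = -19
```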
===Norm and reversed triangle inequality===
For time-like event vectors v = (t, x, y, z) a norm may be defined as
:<math>\|v\| = \sqrt{c^2 t^2 - x^2 - y^2 - z^2}</math>
This does not satisfy the usual triangle inequality; instead, it satisfies the reversed triangle inequality.
If v and w are both future-directed (t > 0) or past-directed (t < 0) time-like event vectors, then
:<math>\|v + w\| \ge \|v\| + \|w\|.</math>
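A quick numerical check of the reversed triangle inequality (c = 1; the two future-directed vectors are illustrative):

```python
# Reversed triangle inequality sketch (c = 1; vectors are my own choices).
import math

def norm(v):
    t, x, y, z = v
    return math.sqrt(t*t - x*x - y*y - z*z)   # defined only for time-like v

u = (5, 1, 2, 0)                              # future-directed, time-like
w = (4, 0, 1, 3)                              # future-directed, time-like
s = tuple(a + b for a, b in zip(u, w))        # their sum, also time-like

print(norm(s) >= norm(u) + norm(w))           # True: reversed inequality
```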
where the constant α is the delay of the Wiener filter (since it is causal). In other words, the error is
:<math>e(t) = s(t + \alpha) - \hat{x}(t),</math>
the difference between the estimated signal <math>\hat{x}(t)</math> and the true signal shifted by the filter delay α.
The squared error is
:<math>e^2(t) = s^2(t + \alpha) - 2 s(t + \alpha)\hat{x}(t) + \hat{x}^2(t)</math>
where s(t + α) is the desired output of the filter and e(t) is the error. Depending on the value of α, the problem can be described as follows:
* if α > 0, then the problem is that of prediction (the error is reduced when <math>\hat{x}(t)</math> is similar to a later value of s),
* if α = 0, then the problem is that of filtering (the error is reduced when <math>\hat{x}(t)</math> is similar to s(t)), and
* if α < 0, then the problem is that of smoothing (the error is reduced when <math>\hat{x}(t)</math> is similar to an earlier value of s).
Taking the expected value of the squared error results in
:<math>E(e^2) = R_s(0) - 2\int_0^\infty g(\tau) R_{xs}(\tau + \alpha)\,d\tau + \int_0^\infty\int_0^\infty g(\tau) g(\theta) R_x(\tau - \theta)\,d\tau\,d\theta</math>
where x(t) = s(t) + n(t) is the observed signal, R_s is the autocorrelation function of s(t), R_x is the autocorrelation function of x(t), and R_{xs} is the cross-correlation function of x(t) and s(t). If the signal s(t) and the noise n(t) are uncorrelated (i.e., the cross-correlation R_{sn} is zero), then R_{xs} = R_s and R_x = R_s + R_n. For many applications, the assumption of uncorrelated signal and noise is reasonable.
The goal is to minimize E(e²), the expected value of the squared error, by finding the optimal g(τ), the Wiener filter impulse response function. The minimum may be found by calculating the first-order incremental change in the least-square error resulting from an incremental change in g for positive time. This is
:<math>\Delta E(e^2) = -2\int_0^\infty \Delta g(\tau)\left[ R_{xs}(\tau + \alpha) - \int_0^\infty g(\theta) R_x(\tau - \theta)\,d\theta \right] d\tau</math>
For a minimum, this must vanish identically for all Δg(τ) with τ ≥ 0, which leads to the Wiener–Hopf equation:
:<math>R_{xs}(\tau + \alpha) = \int_0^\infty g(\theta) R_x(\tau - \theta)\,d\theta, \qquad \tau \ge 0.</math>
This is the fundamental equation of the Wiener theory. The right-hand side resembles a convolution but is only over the semi-infinite range. The equation can be solved to find the optimal filter by a special technique due to Wiener and Hopf.
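The structure of the equation can be seen in a discretized sketch. Everything below — the choice α = 0, the exponential autocorrelation model, and the noise variance — is an assumption made only for illustration:

```python
# Discrete illustration of the Wiener-Hopf equation with alpha = 0:
#   R_xs(tau) = sum_theta g(theta) R_x(tau - theta),  tau = 0..N-1.
# The autocorrelation model (exponentially correlated signal plus white
# noise) is assumed purely so the system is concrete and solvable.
import numpy as np

N = 64
lags = np.arange(N)
R_s = 0.9 ** lags                 # assumed signal autocorrelation
R_n = np.zeros(N)
R_n[0] = 0.5                      # white noise with variance 0.5
R_x = R_s + R_n                   # uncorrelated signal and noise: R_x = R_s + R_n
R_xs = R_s.copy()                 # likewise R_xs = R_s

# The discretized equation is a Toeplitz linear system T g = R_xs,
# with T[i, j] = R_x(|i - j|) since R_x is even in the lag.
T = R_x[np.abs(lags[:, None] - lags[None, :])]
g = np.linalg.solve(T, R_xs)
print(g[:3])                      # leading taps of the optimal filter
```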
==Wiener filter solutions==
The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where a finite amount of past data is used. The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect, and in an appendix of Wiener's book Levinson gave the FIR solution.
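The FIR case can be sketched end to end. All the particulars here are assumptions of mine — a sinusoidal test signal, white noise, 16 taps, and sample correlations computed with the true signal available (which a real application would not have):

```python
# FIR Wiener filter sketch: estimate a smooth signal from noisy samples.
# Signal model, noise level, and tap count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 4000
t = np.arange(n)
s = np.sin(2 * np.pi * t / 200)            # "true" signal (assumed known for the demo)
x = s + rng.normal(scale=0.5, size=n)      # observed signal = signal + white noise

taps = 16
# Sample auto- and cross-correlations for lags 0..taps-1.
R_x = np.array([np.mean(x[:n - k] * x[k:]) for k in range(taps)])
R_xs = np.array([np.mean(x[:n - k] * s[k:]) for k in range(taps)])

# FIR Wiener-Hopf normal equations: Toeplitz system T g = R_xs.
idx = np.arange(taps)
T = R_x[np.abs(idx[:, None] - idx[None, :])]
g = np.linalg.solve(T, R_xs)

s_hat = np.convolve(x, g)[:n]              # causal FIR estimate of s(t)
print(np.mean((x - s) ** 2), np.mean((s_hat - s) ** 2))
```

Because the filter averages the noise over 16 taps while the signal is slowly varying, the mean squared error of the estimate is well below that of the raw observation.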
===Noncausal solution===
:<math>G(s) = \frac{S_{x,s}(s)}{S_x(s)} e^{\alpha s}</math>
where <math>S_{x,s}</math> and <math>S_x</math> are spectra. Provided that g(t) is optimal, the minimum mean-square error equation reduces to
:<math>E(e^2) = R_s(0) - \int_0^\infty g(\tau) R_{xs}(\tau + \alpha)\,d\tau</math>
and the solution g(t) is the inverse two-sided Laplace transform of G(s).

===Causal solution===
:<math>G(s) = \frac{H(s)}{S_x^+(s)}</math>
where
* H(s) consists of the causal part of <math>\frac{S_{x,s}(s)}{S_x^-(s)} e^{\alpha s}</math> (that is, that part of this fraction having a positive time solution under the inverse Laplace transform),
* <math>S_x^+(s)</math> is the causal component of <math>S_x(s)</math> (i.e., the inverse Laplace transform of <math>S_x^+(s)</math> is non-zero only for t ≥ 0), and
* <math>S_x^-(s)</math> is the anti-causal component of <math>S_x(s)</math> (i.e., the inverse Laplace transform of <math>S_x^-(s)</math> is non-zero only for t < 0).
This general formula is complicated and deserves a more detailed explanation. To write down the solution in a specific case, one should follow these steps:[4]
# Start with the spectrum <math>S_x(s)</math> in rational form and factor it into causal and anti-causal components, <math>S_x(s) = S_x^+(s)\, S_x^-(s)</math>, where <math>S_x^+(s)</math> contains all the zeros and poles in the left half plane (LHP) and <math>S_x^-(s)</math> contains the zeros and poles in the right half plane (RHP). This is called the Wiener–Hopf factorization.
# Divide <math>S_{x,s}(s) e^{\alpha s}</math> by <math>S_x^-(s)</math> and write out the result as a partial fraction expansion.
# Select only those terms in this expansion having poles in the LHP. Call these terms H(s).
# Divide H(s) by <math>S_x^+(s)</math>. The result is the desired filter transfer function G(s).
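The factorization in the first step can be sketched numerically by splitting polynomial roots by half plane. The rational spectrum below and its coefficients are illustrative assumptions, not part of the original derivation:

```python
# Sketch of the Wiener-Hopf factorization step for a rational spectrum
# S_x(s) = N(s)/D(s): S_x^+(s) takes the left-half-plane zeros and poles,
# S_x^-(s) the right-half-plane ones. The example spectrum is assumed:
#   S_x(s) = (1 - s^2) / (4 - s^2)
import numpy as np

N = np.array([-1.0, 0.0, 1.0])   # numerator 1 - s^2, highest power first
D = np.array([-1.0, 0.0, 4.0])   # denominator 4 - s^2

def split_roots(coeffs):
    """Split polynomial roots into LHP (Re < 0) and RHP (Re >= 0) sets."""
    r = np.roots(coeffs)
    return r[r.real < 0], r[r.real >= 0]

zeros_lhp, zeros_rhp = split_roots(N)
poles_lhp, poles_rhp = split_roots(D)
print(zeros_lhp, poles_lhp)      # S_x^+ is built from these
print(zeros_rhp, poles_rhp)      # S_x^- is built from these
```

For this spectrum the zeros are at s = ±1 and the poles at s = ±2, so S_x^+ collects s = −1 and s = −2 while S_x^− collects s = +1 and s = +2.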