User:JFB80/sandbox


== Properties of time-like vectors ==

===Scalar product===

A scalar product of two events (x1, y1, z1, t1) and (x2, y2, z2, t2) may be defined either in space-like form as

:<math>x_1 x_2 + y_1 y_2 + z_1 z_2 - c^2 t_1 t_2</math>

or in time-like form as

:<math>c^2 t_1 t_2 - x_1 x_2 - y_1 y_2 - z_1 z_2 .</math>

Two vectors with zero scalar product were called normal by Minkowski. The condition resembles orthogonality, but that term is inappropriate because right angles are not preserved under Lorentz transformations.

The time-like form of the scalar product satisfies the reversed Cauchy inequality, valid for two time-like events u and v:

:<math>(u \cdot v)^2 \ge (u \cdot u)(v \cdot v).</math>

From this it follows that the scalar product of two time-like events is non-zero, so two time-like events cannot be normal. It follows that if two events are normal, at least one of them must be space-like (and it is possible for both to be space-like).

Scalar product

Two event vectors which both have t > 0 or both have t < 0 are called similarly directed.

A scalar product of two similarly directed time-like event vectors u = (t, x, y, z) and v = (t', x', y', z') may be defined as

:<math>u \cdot v = c^2 t t' - x x' - y y' - z z' .</math>

This is always positive because tt' > 0 (since u and v are similarly directed) and so, by Cauchy's inequality,

:<math>c^2 t t' = c^2 |t|\,|t'| > \sqrt{x^2 + y^2 + z^2}\,\sqrt{x'^2 + y'^2 + z'^2} \ge x x' + y y' + z z' .</math>
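
For illustration, the following short numerical check (not part of the sources; it uses c = 1 and randomly chosen sample vectors) verifies the reversed Cauchy inequality and the positivity of the scalar product for similarly directed time-like vectors:

<syntaxhighlight lang="python">
import random

def dot(u, v):
    # time-like form of the scalar product, with c = 1 and vectors (t, x, y, z)
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

def random_timelike():
    # a future-directed (t > 0) time-like vector: t exceeds the spatial length
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    r = (x*x + y*y + z*z) ** 0.5
    return (r + random.uniform(0.1, 2.0), x, y, z)

for _ in range(1000):
    u, v = random_timelike(), random_timelike()     # similarly directed (both t > 0)
    assert dot(u, v) ** 2 >= dot(u, u) * dot(v, v)  # reversed Cauchy inequality
    assert dot(u, v) > 0                            # scalar product is positive
print("reversed Cauchy inequality and positivity hold for all samples")
</syntaxhighlight>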

A similar result does not apply to similarly directed space-like event vectors because if (x, y, z), (x', y', z') are orthogonal


Norm and reversed triangle inequality

For time-like event vectors v = (t, x, y, z) a norm may be defined as

:<math>\lVert v \rVert = \sqrt{c^2 t^2 - x^2 - y^2 - z^2} .</math>

This does not satisfy the usual triangle inequality. Instead, it satisfies the reversed triangle inequality.

If v and w are both future-directed (t > 0) or both past-directed (t < 0) time-like event vectors, then:[1]

:<math>\lVert v + w \rVert \ge \lVert v \rVert + \lVert w \rVert .</math>

The result may be proved from the algebraic definition by squaring both sides and taking all terms to the left-hand side; the resulting expression is non-negative by the reversed Cauchy inequality.[2]

No similar result holds for space-like events.
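
A quick numerical check of the reversed triangle inequality (again with c = 1 and randomly chosen future-directed sample vectors, for illustration only):

<syntaxhighlight lang="python">
import math, random

def dot(u, v):
    # time-like form of the scalar product, with c = 1 and vectors (t, x, y, z)
    return u[0]*v[0] - u[1]*v[1] - u[2]*v[2] - u[3]*v[3]

def norm(v):
    # ||v|| = sqrt(t^2 - x^2 - y^2 - z^2), defined for time-like vectors
    return math.sqrt(dot(v, v))

def random_future_timelike():
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    r = math.sqrt(x*x + y*y + z*z)
    return (r + random.uniform(0.1, 2.0), x, y, z)

for _ in range(1000):
    v, w = random_future_timelike(), random_future_timelike()
    u = tuple(a + b for a, b in zip(v, w))       # v + w is again future-directed time-like
    assert norm(u) >= norm(v) + norm(w) - 1e-9   # reversed triangle inequality
print("reversed triangle inequality holds for all samples")
</syntaxhighlight>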

Chronological and causality relations

Let x, y ∈ M. We say that

  1. x chronologically precedes y if y − x is future-directed timelike. This relation has the transitive property and so can be written x < y.
  2. x causally precedes y if y − x is future-directed null or future-directed timelike. It gives a partial ordering of space-time and so can be written x ≤ y.

Suppose x ∈ M is timelike. Then the simultaneous hyperplane for x is

:<math>\{ y \in M : x \cdot y = 0 \} .</math>

Since this hyperplane varies as x varies, there is a relativity of simultaneity in Minkowski space.
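
These relations can be illustrated with a small sketch (coordinates (t, x, y, z) with c = 1; the example events are arbitrary, not taken from the text):

<syntaxhighlight lang="python">
def interval(d):
    # scalar product of a difference vector with itself (time-like form, c = 1)
    t, x, y, z = d
    return t*t - x*x - y*y - z*z

def chronologically_precedes(a, b):
    d = tuple(q - p for p, q in zip(a, b))     # b - a
    return interval(d) > 0 and d[0] > 0        # future-directed time-like

def causally_precedes(a, b):
    d = tuple(q - p for p, q in zip(a, b))     # b - a
    return interval(d) >= 0 and d[0] > 0       # future-directed time-like or null

origin      = (0.0, 0.0, 0.0, 0.0)
inside_cone = (2.0, 1.0, 0.0, 0.0)   # time-like separation from the origin
on_cone     = (1.0, 1.0, 0.0, 0.0)   # null separation
outside     = (1.0, 2.0, 0.0, 0.0)   # space-like separation

print(chronologically_precedes(origin, inside_cone))   # True:  origin < inside_cone
print(causally_precedes(origin, on_cone))              # True:  origin ≤ on_cone
print(causally_precedes(origin, outside))              # False: no causal relation
</syntaxhighlight>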


Wiener filter

Wiener filter problem setup

The input, x(t), to the Wiener filter consists of an unknown signal, s(t), corrupted by additive noise, n(t):

:<math>x(t) = s(t) + n(t) .</math>

The output, ŝ(t), is calculated by means of a filter, g(t), using the following convolution:[3]

:<math>\hat{s}(t) = g(t) * x(t) = \int g(\tau)\, x(t - \tau)\, d\tau ,</math>

where

  • s(t) is the original signal (not exactly known; to be estimated),
  • n(t) is the noise, which is uncorrelated with s(t),
  • x(t) is the observed or measured signal,
  • ŝ(t) is the estimated signal (the intention is to equal s(t + α)), and
  • g(t) is the Wiener filter's impulse response.

The error is defined as

:<math>e(t) = s(t + \alpha) - \hat{s}(t) ,</math>

where the constant α is the delay of the Wiener filter (since it is causal). In other words, the error is the difference between the estimated signal and the true signal shifted by the filter delay α.

The squared error is

:<math>e^2(t) = s^2(t + \alpha) - 2 s(t + \alpha)\,\hat{s}(t) + \hat{s}^2(t) ,</math>

where s(t + α) is the desired output of the filter and e(t) is the error. Depending on the value of α, the problem can be described as follows:

  • if α > 0 then the problem is that of prediction (error is reduced when ŝ(t) is similar to a later value of s),
  • if α = 0 then the problem is that of filtering (error is reduced when ŝ(t) is similar to s(t)), and
  • if α < 0 then the problem is that of smoothing (error is reduced when ŝ(t) is similar to an earlier value of s).

Taking the expected value of the squared error results in

:<math>E(e^2) = R_s(0) - 2\int g(\tau)\, R_{xs}(\tau + \alpha)\, d\tau + \iint g(\tau)\, g(\theta)\, R_x(\tau - \theta)\, d\tau\, d\theta ,</math>

where x(t) = s(t) + n(t) is the observed signal, R_s(τ) is the autocorrelation function of s(t), R_x(τ) is the autocorrelation function of x(t), and R_xs(τ) is the cross-correlation function of x(t) and s(t). If the signal s(t) and the noise n(t) are uncorrelated (i.e., the cross-correlation R_sn(τ) is zero), then this means that R_xs = R_s and R_x = R_s + R_n. For many applications, the assumption of uncorrelated signal and noise is reasonable.
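
As an illustration of these correlation functions (with made-up discrete-time signals, not taken from the sources), the following sketch checks empirically that R_xs ≈ R_s and R_x ≈ R_s + R_n when the signal and noise are uncorrelated:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# a toy correlated "signal": white noise smoothed by a short moving average
s = np.convolve(rng.standard_normal(n), np.ones(10) / 10, mode="same")
v = 0.5 * rng.standard_normal(n)     # additive noise, independent of s
x = s + v                            # observed signal

def corr(a, b, lag):
    # crude biased estimate of E[a(t) b(t + lag)] for lag >= 0
    return float(np.mean(a[: n - lag] * b[lag:]))

for lag in (0, 1, 5):
    print(lag, round(corr(x, s, lag), 4), round(corr(s, s, lag), 4))   # R_xs ~ R_s
    print(lag, round(corr(x, x, lag), 4),
          round(corr(s, s, lag) + corr(v, v, lag), 4))                 # R_x ~ R_s + R_n
</syntaxhighlight>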

The goal is to minimize E(e²), the expected value of the squared error, by finding the optimal g(τ), the Wiener filter impulse response function. The minimum may be found by calculating the first-order incremental change in the least-square error resulting from an incremental change in g(τ) for positive time. This is

:<math>\delta E(e^2) = 2\int \delta g(\tau) \left[ \int g(\theta)\, R_x(\tau - \theta)\, d\theta - R_{xs}(\tau + \alpha) \right] d\tau .</math>

For a minimum, this must vanish identically for all δg(τ) for τ ≥ 0, which leads to the Wiener–Hopf equation:

:<math>R_{xs}(\tau + \alpha) = \int_0^{\infty} g(\theta)\, R_x(\tau - \theta)\, d\theta, \qquad \tau \ge 0 .</math>

This is the fundamental equation of the Wiener theory. The right-hand side resembles a convolution but is only over the semi-infinite range. The equation can be solved to find the optimal filter by a special technique due to Wiener and Hopf.
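
A discrete-time sketch of how this equation is used in practice: the discretized Wiener–Hopf (normal) equations for a finite-length causal filter can be solved directly as a linear system. The signals, filter length, and delay α = 0 below are arbitrary choices for illustration.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n, N = 100_000, 32                     # number of samples, filter length

s = np.convolve(rng.standard_normal(n), np.ones(8) / 8, mode="same")   # toy signal
x = s + 0.5 * rng.standard_normal(n)                                   # signal + noise

def xcorr(a, b, lag):
    # biased estimate of E[a(t) b(t + lag)] for lag >= 0
    return float(np.mean(a[: n - lag] * b[lag:]))

R_x  = np.array([xcorr(x, x, k) for k in range(N)])    # autocorrelation of x
R_xs = np.array([xcorr(x, s, k) for k in range(N)])    # cross-correlation of x and s

# discrete Wiener-Hopf equations:  sum_k g[k] R_x[j - k] = R_xs[j],  j = 0..N-1
A = np.array([[R_x[abs(j - k)] for k in range(N)] for j in range(N)])
g = np.linalg.solve(A, R_xs)

s_hat = np.convolve(x, g)[:n]          # filtered (causal FIR) estimate of s
print("mean squared error before filtering:", float(np.mean((x - s) ** 2)))
print("mean squared error after filtering: ", float(np.mean((s_hat - s) ** 2)))
</syntaxhighlight>

The FIR case discussed below amounts to solving a finite system of exactly this Toeplitz form, which Levinson's recursion handles efficiently.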

Wiener filter solutions

The Wiener filter problem has solutions for three possible cases: one where a noncausal filter is acceptable (requiring an infinite amount of both past and future data), the case where a causal filter is desired (using an infinite amount of past data), and the finite impulse response (FIR) case where a finite amount of past data is used. The first case is simple to solve but is not suited for real-time applications. Wiener's main accomplishment was solving the case where the causality requirement is in effect, and in an appendix of Wiener's book Levinson gave the FIR solution.

Noncausal solution

:<math>G(s) = \frac{S_{xs}(s)}{S_x(s)}\, e^{\alpha s} ,</math>

where S_xs(s) and S_x(s) are the spectra (the two-sided Laplace transforms of R_xs and R_x). Provided that g(t) is optimal, then the minimum mean-square error equation reduces to

:<math>E(e^2) = R_s(0) - \int g(\tau)\, R_{xs}(\tau + \alpha)\, d\tau ,</math>

and the solution g(t) is the inverse two-sided Laplace transform of G(s).
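
A minimal frequency-domain sketch of the noncausal solution (discrete, with α = 0, uncorrelated signal and noise, and the spectra assumed known; all signals are made-up examples): in that case the filter reduces to the gain S_s / (S_s + S_n).

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n = 4096

h = np.ones(8) / 8                                   # toy signal-shaping filter
s = np.convolve(rng.standard_normal(n), h, mode="same")
x = s + 0.5 * rng.standard_normal(n)                 # observed signal

S_s = np.abs(np.fft.fft(h, n)) ** 2                  # signal spectrum (unit white input)
S_n = np.full(n, 0.25)                               # flat noise spectrum, variance 0.25
G = S_s / (S_s + S_n)                                # noncausal Wiener gain

s_hat = np.real(np.fft.ifft(G * np.fft.fft(x)))      # apply the filter (circularly)
print("mean squared error before filtering:", float(np.mean((x - s) ** 2)))
print("mean squared error after filtering: ", float(np.mean((s_hat - s) ** 2)))
</syntaxhighlight>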

Causal solution

:<math>G(s) = \frac{H(s)}{S_x^{+}(s)} ,</math>

where

  • H(s) consists of the causal part of S_xs(s) e^{αs} / S_x^-(s) (that is, that part of this fraction having a positive time solution under the inverse Laplace transform),
  • S_x^+(s) is the causal component of S_x(s) (i.e., the inverse Laplace transform of S_x^+(s) is non-zero only for t ≥ 0), and
  • S_x^-(s) is the anti-causal component of S_x(s) (i.e., the inverse Laplace transform of S_x^-(s) is non-zero only for t < 0).

This general formula is complicated and deserves a more detailed explanation. To write down the solution in a specific case, one should follow these steps:[4]

  1. Start with the spectrum S_x(s) in rational form and factor it into causal and anti-causal components: S_x(s) = S_x^+(s) S_x^-(s), where S_x^+(s) contains all the zeros and poles in the left half plane (LHP) and S_x^-(s) contains the zeroes and poles in the right half plane (RHP). This is called the Wiener–Hopf factorization.
  2. Divide S_xs(s) e^{αs} by S_x^-(s) and write out the result as a partial fraction expansion.
  3. Select only those terms in this expansion having poles in the LHP. Call these terms H(s).
  4. Divide H(s) by S_x^+(s). The result is the desired filter transfer function G(s).
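
As a worked illustration of these steps (an example chosen here for simplicity, not taken from the cited sources), suppose the signal has autocorrelation R_s(τ) = e^{−|τ|}, the noise is white with unit spectral density and uncorrelated with the signal, and α = 0. Then

:<math>S_{xs}(s) = S_s(s) = \frac{2}{1 - s^2}, \qquad S_x(s) = 1 + \frac{2}{1 - s^2} = \frac{3 - s^2}{1 - s^2},</math>

and the Wiener–Hopf factorization (step 1) gives

:<math>S_x^{+}(s) = \frac{\sqrt{3} + s}{1 + s}, \qquad S_x^{-}(s) = \frac{\sqrt{3} - s}{1 - s}.</math>

Steps 2 and 3 (partial fractions, keeping only the LHP pole):

:<math>\frac{S_{xs}(s)}{S_x^{-}(s)} = \frac{2}{(1 + s)(\sqrt{3} - s)} = \frac{\sqrt{3} - 1}{1 + s} + \frac{\sqrt{3} - 1}{\sqrt{3} - s}, \qquad H(s) = \frac{\sqrt{3} - 1}{1 + s}.</math>

Step 4 then gives the filter and its impulse response:

:<math>G(s) = \frac{H(s)}{S_x^{+}(s)} = \frac{\sqrt{3} - 1}{\sqrt{3} + s}, \qquad g(t) = (\sqrt{3} - 1)\, e^{-\sqrt{3}\, t} \quad (t \ge 0).</math>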






Miscellaneous

∫ ∏ ∑ ─ ≠ ≡ ± ≈ ≤ ≥ ⌡ |⌠ √ ∞ ∫ º²³ⁿ ∂ ∆ ∏ ∑  → ← │ ┐└ ┘ ' n┴ ¼ ⅓ ½ ¾ ΑΒΓΔΕΖΗΘΙΚΛΜΝΞΟΠΡΣΤΥΦΧΨΩ αβγδεζηθικλμνξοπρςστυφχψω

éêè ôö üû äáàâ æ î ç ø ćč àáâãäåæçèéêëìíîïðñòóôõö÷øûüý

  1. ^ Naber p.49
  2. ^ Naber p.48
  3. ^ Cite error: the named reference Brown1996 was invoked but never defined.
  4. ^ Welch, Lloyd R. "Wiener–Hopf Theory" (PDF).