User:Dgroseth/Contravariant
Covariance and contravariance of vectors (a copy of the Wikipedia page as of 12 May 2009)
- For other uses of "covariant" or "contravariant", see covariance and contravariance.
Definition
In mathematics, a linear functional on a vector space V over a field k is a linear map from V to k. In other words, a linear functional is a function $\varphi : V \to k$ such that
- $\varphi(v + w) = \varphi(v) + \varphi(w)$ for all $v, w \in V$
- $\varphi(a v) = a\,\varphi(v)$ for all $a \in k$ and all $v \in V$.
The set of all linear functionals from V to k, Homk(V,k), is itself a k-vector space. This space is called the dual space of V, or sometimes the algebraic dual space, to distinguish it from the continuous dual space. It is often written V*, or V′ when the field k is understood.
This means that, in matrix form, a covector can be written as a row and a vector as a column; multiplying the row by the column then yields a scalar, the value of the covector applied to the vector.
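To make this concrete, here is a minimal sketch in Python with NumPy (the numerical values are arbitrary examples) of a covector applied to a vector as a row-times-column product:

```python
import numpy as np

# A vector in R^3, written as a column (contravariant components).
v = np.array([[2.0], [1.0], [3.0]])

# A covector (linear functional), written as a row (covariant components).
phi = np.array([[1.0, -1.0, 0.5]])

# Applying the covector to the vector is a row-times-column product,
# which yields a 1x1 matrix, i.e. a scalar.
scalar = phi @ v
print(scalar.item())  # 2.0 - 1.0 + 1.5 = 2.5
```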
The distinction is particularly important for computations with tensors, which often have mixed variance. This means that they have both covariant and contravariant components, or both vector and covector parts. A tensor's valence is the number of contravariant and covariant indices.
Using Einstein notation, covariant components have lower indices, while contravariant components have upper indices.
When one chooses coordinates on a vector space V, for concreteness say Euclidean n-space $\mathbf{R}^n$, both vectors and covectors can be written as n-tuples of numbers $(a_1, \dots, a_n)$, but if one changes the basis, they transform differently. Vectors are called contravariant vectors, while covectors are called covariant vectors.
Given a basis for a vector space, a transformation is represented by a matrix $A$, while the dual transformation is represented by the transpose $A^{\mathrm{T}}$, and the inverse dual transformation by the inverse of the transpose $(A^{\mathrm{T}})^{-1}$ (equivalently, the transpose of the inverse); duality reverses direction (it is a contravariant functor). These matrices agree if and only if $A$ is an orthogonal matrix, in which case covariant and contravariant vectors transform identically.
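As a numerical illustration, the following Python sketch (NumPy; the matrices and components are assumed example values, with the convention that the columns of A hold the new basis vectors in terms of the old) checks that the pairing of a covector with a vector is unchanged under a change of basis, and that the two transformation rules coincide for an orthogonal matrix:

```python
import numpy as np

# Change of basis: the columns of A are the new basis vectors expressed
# in the old basis (A is an arbitrary invertible example matrix).
A = np.array([[2.0, 1.0],
              [0.0, 1.0]])

v = np.array([3.0, 4.0])    # contravariant components of a vector (old basis)
c = np.array([1.0, -2.0])   # covariant components of a covector (old basis)

# Contravariant components transform with the inverse of A ...
v_new = np.linalg.solve(A, v)
# ... while covariant components transform with the transpose of A.
c_new = A.T @ c

# The pairing <covector, vector> is basis-independent (invariant).
print(np.isclose(c @ v, c_new @ v_new))  # True

# For an orthogonal basis change, inverse == transpose, so both kinds
# of components transform by the same rule.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(np.linalg.inv(Q), Q.T))  # True
```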
Context
Both special relativity (Lorentz covariance) and general relativity (general covariance) use covariant basis vectors.
Systems of covariant and contravariant components arise whenever vector and tensor quantities are described relative to a chosen basis or coordinate system.
A major potential cause of confusion is that this covariance/contravariance duality intervenes every time a vector or tensor quantity is represented by its components. As a result, the mathematics and physics literature can appear to use opposite conventions. It is not the convention that differs, but whether an intrinsic or a component-wise description is taken as the primary way of thinking about quantities. As the names suggest, covariant quantities are thought of as moving or transforming forwards with a change of basis, while contravariant quantities transform backwards. Whether or not one works against a fixed background switches the point of view.
Informal usage: invariance
One can contrast covariance and contravariance (transforming in a particular way under a change of basis) with invariance (not changing at all).
In common physics usage, the adjective covariant is sometimes used informally as a synonym for invariant, to say that an equation keeps its written form under a given class of coordinate transformations.
Similar informal usage is sometimes seen with respect to quantities like mass and proper time, which are invariant.
Rules of covariant and contravariant transformation
Vectors are covariant, and covectors are contravariant, but the components of vectors are contravariant and the components of covectors are covariant. (This is in conflict with another section on this page, http://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors#Definition, which says "Vectors are called contravariant vectors, while covectors are called covariant vectors," and with another page: as stated in http://en.wikipedia.org/wiki/Curvilinear_coordinates#Covariant_basis, "contravariant vectors are vectors with contravariant components.") See Einstein notation for details.
- This is a frequently confused point.
In tensor representation, a vector $\mathbf{A}$ can be expressed in two ways as the sum of the products of each of its components with the basis vector belonging to that component (repeated indices are assumed to sum according to the Einstein summation convention):

$$\mathbf{A} = A^i \mathbf{e}_i = A_i \mathbf{e}^i$$

where $A^i$ are called the contravariant components of $\mathbf{A}$, $A_i$ are called the covariant components of $\mathbf{A}$, $\mathbf{e}_i$ are covariant basis vectors, and $\mathbf{e}^i$ are contravariant basis vectors, if and only if these transform from coordinates $x^i$ to coordinates $x'^i$ (where the $x'^i$ are differentiable functions of the $x^i$, and vice versa) according to the rules:

$$A'^i = \frac{\partial x'^i}{\partial x^j} A^j, \qquad A'_i = \frac{\partial x^j}{\partial x'^i} A_j, \qquad \mathbf{e}'_i = \frac{\partial x^j}{\partial x'^i} \mathbf{e}_j, \qquad \mathbf{e}'^i = \frac{\partial x'^i}{\partial x^j} \mathbf{e}^j$$

where the primed components and basis vectors represent $\mathbf{A}$ in the coordinates $x'^i$:

$$\mathbf{A} = A'^i \mathbf{e}'_i = A'_i \mathbf{e}'^i.$$
We could also compute the inverse relations:

$$A^i = \frac{\partial x^i}{\partial x'^j} A'^j, \qquad A_i = \frac{\partial x'^j}{\partial x^i} A'_j$$

which is only possible if the determinants of the matrices formed by the components of $\partial x'^i / \partial x^j$ and $\partial x^i / \partial x'^j$ are non-zero. The determinant of the matrix formed by $\partial x'^i / \partial x^j$ is called the Jacobian of the transformation, which must be non-zero to provide a complete set of transformation laws.
Note that the matrices formed by all of the above partial-derivative transformations can be generated as the inverse, transpose, and transpose of the inverse of the matrix formed by the components of $\partial x'^i / \partial x^j$. The key property of the tensor representation is the preservation of invariance, in the sense that vector components which transform in a covariant manner (or contravariant manner) are paired with basis vectors that transform in a contravariant manner (or covariant manner), and these operations are inverse to one another according to the transformation rules. Substituting the transformation rules into the definition of $\mathbf{A}$ gives:

$$\mathbf{A} = A'^i \mathbf{e}'_i = \left(\frac{\partial x'^i}{\partial x^j} A^j\right) \left(\frac{\partial x^k}{\partial x'^i} \mathbf{e}_k\right) = \delta^k_j A^j \mathbf{e}_k = A^j \mathbf{e}_j$$

where the partial-derivative terms cancel one another since they must be inverse to one another. This illustrates what is meant by invariance. A similar relation holds for all vectors (or higher-order tensors), allowing them to be written in the manner described above. Using the transformation rules, one can also show that:

$$\frac{\partial x'^i}{\partial x^j} \frac{\partial x^j}{\partial x'^k} = \delta^i_k$$

where $\delta^i_k$ is 1 if $i = k$ and 0 otherwise.
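This cancellation can be verified numerically. The following Python sketch (NumPy; the Cartesian-to-polar transformation and the sample point are assumed for illustration) builds the two partial-derivative matrices, confirms they are inverse to one another, and checks the invariance of $\mathbf{A} = A^i \mathbf{e}_i$:

```python
import numpy as np

# Example transformation (assumed for illustration): Cartesian (x, y)
# to polar (r, t), evaluated at a sample point.
x, y = 1.0, 2.0
r = np.hypot(x, y)
t = np.arctan2(y, x)

# Jacobian of (r, t) with respect to (x, y): d x'^i / d x^j
J = np.array([[ x / r,     y / r    ],
              [-y / r**2,  x / r**2 ]])

# Jacobian of (x, y) with respect to (r, t): d x^j / d x'^i
J_inv = np.array([[np.cos(t), -r * np.sin(t)],
                  [np.sin(t),  r * np.cos(t)]])

# The two partial-derivative matrices are inverse to one another,
# i.e. their product is the Kronecker delta.
print(np.allclose(J @ J_inv, np.eye(2)))  # True

# Contravariant components transform with J; covariant basis vectors
# transform with J_inv, so A = A^i e_i is unchanged (invariance).
A_cart = np.array([0.5, 1.5])   # components in Cartesian coordinates
A_polar = J @ A_cart            # contravariant transformation rule
e_cart = np.eye(2)              # Cartesian basis vectors (columns)
e_polar = e_cart @ J_inv        # covariant transformation rule
print(np.allclose(e_polar @ A_polar, e_cart @ A_cart))  # True
```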
Note that in this kind of system the basis vectors are not generally of unit length, nor are covariant basis vectors necessarily parallel to their contravariant basis vectors (if the coordinates are non-orthogonal).
[Figure: the contravariant and covariant representations of a generic vector, plotted in terms of components on a 2D curvilinear, non-orthogonal grid.] Note that the sum of either pair of component vectors yields the same vector. Also note that the covariant basis vectors are parallel to their respective coordinate lines, while the contravariant basis vectors are orthogonal to the directions of the other coordinate lines.
There are many other useful properties of the tensor representation. If we take the dot product of $\mathbf{A}$ and $\mathbf{e}_j$, then we obtain:

$$\mathbf{A} \cdot \mathbf{e}_j = A^i\, \mathbf{e}_i \cdot \mathbf{e}_j = g_{ij} A^i = A_j$$

where $g_{ij} = \mathbf{e}_i \cdot \mathbf{e}_j$ is the covariant metric tensor. The dot product of $\mathbf{A}$ and $\mathbf{e}^j$ likewise gives:

$$\mathbf{A} \cdot \mathbf{e}^j = A_i\, \mathbf{e}^i \cdot \mathbf{e}^j = g^{ij} A_i = A^j$$

where $g^{ij} = \mathbf{e}^i \cdot \mathbf{e}^j$ is the contravariant metric tensor. This gives two useful results: 1) the covariant (or contravariant) components of a vector can be recovered by taking the dot product of that vector with the covariant (or contravariant) basis vectors, and 2) the covariant and contravariant components are related by the metric tensors, which raise and lower indices.
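Both results are easy to check numerically; here is a minimal Python sketch with NumPy, assuming an arbitrary non-orthogonal example basis:

```python
import numpy as np

# An assumed non-orthogonal covariant basis for R^2 (columns are e_1, e_2).
E = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Covariant metric tensor: g_ij = e_i . e_j
g = E.T @ E
# Contravariant metric tensor is its matrix inverse: g^ij
g_inv = np.linalg.inv(g)

# Contravariant basis vectors (columns are e^1, e^2) satisfy e^i . e_j = delta.
E_dual = E @ g_inv          # e^i = g^ij e_j
print(np.allclose(E_dual.T @ E, np.eye(2)))  # True

A = np.array([2.0, 3.0])    # a sample vector in Cartesian components
# Result 1: components recovered by dot products with the basis vectors.
A_cov = E.T @ A             # covariant components  A_i = A . e_i
A_con = E_dual.T @ A        # contravariant components  A^i = A . e^i
# Result 2: the metric raises and lowers indices between the two.
print(np.allclose(g @ A_con, A_cov))      # A_i = g_ij A^j : True
print(np.allclose(g_inv @ A_cov, A_con))  # A^i = g^ij A_j : True
```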
We note that the tensor representation is not restricted to vectors, but can be used on higher-order tensors, where each covariant or contravariant index transforms individually according to the rules described above. For example, we could transform a so-called mixed tensor $T^k_l$ of the form:

$$T'^i_j = \frac{\partial x'^i}{\partial x^k} \frac{\partial x^l}{\partial x'^j} T^k_l$$
by successively applying the transformation rules to each index according to whether it is covariant (lowered) or contravariant (raised).
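In code, this index-by-index application is expressed naturally with NumPy's einsum. A minimal Python sketch (the matrices J and J_inv stand for $\partial x'/\partial x$ and $\partial x/\partial x'$ and are assumed example values):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(2, 2))     # d x'^i / d x^k (assumed example values)
J_inv = np.linalg.inv(J)        # d x^l / d x'^j
T = rng.normal(size=(2, 2))     # a mixed tensor T^k_l

# Contract the contravariant index with J and the covariant index with J_inv:
# T'^i_j = (d x'^i / d x^k) (d x^l / d x'^j) T^k_l
T_new = np.einsum('ik,lj,kl->ij', J, J_inv, T)

# For a rank-(1,1) tensor this is just a similarity transform.
print(np.allclose(T_new, J @ T @ J_inv))  # True
```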
Dual basis
Given a basis $\mathbf{e}_1, \dots, \mathbf{e}_n$ of a vector space V, there is a unique dual basis $\mathbf{e}^1, \dots, \mathbf{e}^n$ of the dual space, which is determined by requiring
- $\mathbf{e}^i(\mathbf{e}_j) = \delta^i_j$, where $\delta^i_j$ is the Kronecker delta.
Euclidean R³
If $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$ are contravariant basis vectors of R³ (not necessarily orthogonal nor of unit norm), then the covariant basis vectors of their reciprocal system are:

$$\mathbf{e}^1 = \frac{\mathbf{e}_2 \times \mathbf{e}_3}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}, \qquad \mathbf{e}^2 = \frac{\mathbf{e}_3 \times \mathbf{e}_1}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}, \qquad \mathbf{e}^3 = \frac{\mathbf{e}_1 \times \mathbf{e}_2}{\mathbf{e}_1 \cdot (\mathbf{e}_2 \times \mathbf{e}_3)}.$$
Note that even if the $\mathbf{e}_i$ and $\mathbf{e}^i$ are not orthonormal, they are still by this definition mutually orthonormal:

$$\mathbf{e}^i \cdot \mathbf{e}_j = \delta^i_j.$$
Then the contravariant coordinates of any vector $\mathbf{v}$ can be obtained by the dot product of $\mathbf{v}$ with the contravariant basis vectors:

$$q^1 = \mathbf{v} \cdot \mathbf{e}^1, \qquad q^2 = \mathbf{v} \cdot \mathbf{e}^2, \qquad q^3 = \mathbf{v} \cdot \mathbf{e}^3.$$

Likewise, the covariant components of $\mathbf{v}$ can be obtained from the dot product of $\mathbf{v}$ with the covariant basis vectors, viz.

$$q_1 = \mathbf{v} \cdot \mathbf{e}_1, \qquad q_2 = \mathbf{v} \cdot \mathbf{e}_2, \qquad q_3 = \mathbf{v} \cdot \mathbf{e}_3.$$
Then $\mathbf{v}$ can be expressed in two (reciprocal) ways, viz.

$$\mathbf{v} = q^i \mathbf{e}_i = q^1 \mathbf{e}_1 + q^2 \mathbf{e}_2 + q^3 \mathbf{e}_3$$

or

$$\mathbf{v} = q_i \mathbf{e}^i = q_1 \mathbf{e}^1 + q_2 \mathbf{e}^2 + q_3 \mathbf{e}^3.$$
Combining the above relations, we have

$$\mathbf{v} = (\mathbf{v} \cdot \mathbf{e}^i)\, \mathbf{e}_i = (\mathbf{v} \cdot \mathbf{e}_i)\, \mathbf{e}^i$$

and we can convert from the covariant to the contravariant basis with

$$g_{ij}\, \mathbf{e}^j = \mathbf{e}_i$$

and

$$g^{ij}\, \mathbf{e}_j = \mathbf{e}^i.$$
The indices of covariant coordinates, vectors, and tensors are subscripts; the indices of contravariant ones are superscripts. If the contravariant basis vectors are orthonormal, then they coincide with the covariant basis vectors, and there is no need to distinguish between the covariant and contravariant coordinates.
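The reciprocal-basis construction above can be verified numerically; here is a minimal Python sketch with NumPy, using an assumed skewed example basis:

```python
import numpy as np

# An assumed non-orthogonal, non-unit basis of R^3.
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([1.0, 1.0, 0.0])
e3 = np.array([0.0, 1.0, 2.0])

# Reciprocal (dual) basis via cross products.
vol = e1 @ np.cross(e2, e3)              # scalar triple product
E1 = np.cross(e2, e3) / vol
E2 = np.cross(e3, e1) / vol
E3 = np.cross(e1, e2) / vol

# Mutual orthonormality: e^i . e_j = Kronecker delta.
D = np.array([[Ei @ ej for ej in (e1, e2, e3)] for Ei in (E1, E2, E3)])
print(np.allclose(D, np.eye(3)))  # True

# Components of a sample vector v obtained both ways, and reconstruction.
v = np.array([2.0, -1.0, 0.5])
q_con = np.array([v @ E1, v @ E2, v @ E3])   # contravariant coordinates
q_cov = np.array([v @ e1, v @ e2, v @ e3])   # covariant coordinates
v1 = q_con[0]*e1 + q_con[1]*e2 + q_con[2]*e3
v2 = q_cov[0]*E1 + q_cov[1]*E2 + q_cov[2]*E3
print(np.allclose(v1, v), np.allclose(v2, v))  # True True
```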
What 'contravariant' means
Contravariant is a term describing how the components of a geometric or physical quantity change under a change of coordinates: contravariant components transform oppositely to (by the inverse of) the transformation of the basis vectors.
A different method is used to derive covariant tensor components. When performing tensor transformations, it is critical to track which method was used to map quantities to the coordinate systems in use, so that operations may be applied correctly and give accurate, meaningful results.
In two dimensions, for an oblique rectilinear coordinate system, the contravariant coordinates of a directed line segment (in two dimensions, this is termed a vector) can be established by placing the origin of the coordinate axes at the tail of the vector. Lines parallel to the axes are placed through the head of the vector. The intersection of the line parallel to the $x^1$ axis with the $x^2$ axis provides the $x^2$ coordinate. Similarly, the intersection of the line parallel to the $x^2$ axis with the $x^1$ axis provides the $x^1$ coordinate.
By definition, the oblique, rectilinear, contravariant coordinates of the point P at the head of the vector are summarized as: $x^i = (x^1, x^2)$.
Notice the superscript; this is a standard nomenclature convention for contravariant tensor components and should not be confused with the subscript, which is used to designate covariant tensor components.
Is there a fundamental difference in the way contravariant and covariant components can be used, or could one simply interchange them everywhere? The answer is that in curved spaces, or in curved coordinate systems in flat space (e.g. polar or oblique coordinates), the distinction is real: covariant and contravariant components differ and transform by different rules, coinciding only in orthonormal Cartesian coordinates.
Using the definition above, the contravariant components of a position vector $v^i$, where $i \in \{1, 2\}$, can be defined as the differences between the coordinates (or position vectors) of the head and tail along the same coordinate axis. Stated another way, the vector components are the projections onto an axis taken from the direction parallel to the other axis.
So, since we have placed our origin at the tail of the vector,
- $v^i = ((x^1 - 0), (x^2 - 0))$
- $v^i = (x^1, x^2)$
This result generalizes to n dimensions. Contravariance is a fundamental concept or property within tensor theory and applies to tensors of all ranks over all manifolds. Since whether tensor components are contravariant or covariant, how they are mixed, and the order of operations all affect the results, it is imperative to track these choices for correct application of the methods.
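In code, the parallel-line construction above amounts to solving a small linear system for the coefficients of the vector in the oblique basis. A minimal Python sketch (the oblique axes are assumed example directions):

```python
import numpy as np

# Assumed oblique (non-orthogonal) unit axes in the plane.
a1 = np.array([1.0, 0.0])                          # x1 axis direction
a2 = np.array([np.cos(np.pi/3), np.sin(np.pi/3)])  # x2 axis at 60 degrees

v = np.array([2.0, 1.0])   # a vector with its tail at the origin

# Contravariant components: coefficients such that v = v^1 a1 + v^2 a2.
# Geometrically this is the parallel-projection construction above.
B = np.column_stack([a1, a2])
v_con = np.linalg.solve(B, v)

# Covariant components: perpendicular projections, i.e. dot products.
v_cov = np.array([v @ a1, v @ a2])

# The two kinds of components coincide only when the axes are orthonormal.
print(v_con, v_cov)
```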
In more modern terms, the transformation properties of the covariant indices of a tensor are given by a pullback; by contrast, the transformation of the contravariant indices is given by a pushforward (differential).
Use in tensor analysis
In tensor analysis, a covariant vector varies more or less reciprocally to a corresponding contravariant vector.
On a manifold, a tensor field will typically have multiple upper and lower indices, and Einstein notation is widely used. When the manifold is equipped with a metric, contravariant indices can be converted into covariant indices by contracting with the metric tensor, and conversely by contracting with its inverse; in spaces not endowed with a metric tensor, no such relation exists.
The explanation in geometric terms is that a general tensor will have contravariant indices as well as covariant indices, because it has parts that live in the tangent bundle as well as the cotangent bundle.
A contravariant vector is one which transforms like $\dfrac{dx^\mu}{d\tau}$, where $x^\mu$ are the coordinates of a particle at its proper time $\tau$. A covariant vector is one which transforms like $\dfrac{\partial \varphi}{\partial x^\mu}$, where $\varphi$ is a scalar field.
Algebra and geometry
In category theory, there are covariant functors and contravariant functors; the assignment of the dual space to a vector space is a standard example of a contravariant functor.
In differential geometry, the components of a vector relative to a basis of the tangent bundle are covariant if they change with the same linear transformation as the change of basis, and contravariant if they change by the inverse transformation.
Covariant and contravariant components transform in different ways under coordinate transformations. By considering a coordinate transformation on a manifold as a map from the manifold to itself, the transformation of the covariant indices of a tensor is given by a pullback, and the transformation of the contravariant indices is given by a pushforward.