Tensor


In mathematics, a tensor is an algebraic object that describes a multilinear relationship between sets of algebraic objects associated with a vector space. Tensors may map between different objects such as vectors, scalars, and even other tensors. There are many types of tensors, including scalars and vectors, dual vectors, multilinear maps between vector spaces, and even some operations such as the dot product. Tensors are defined independently of any basis, although they are often referred to by their components in a basis related to a particular coordinate system; those components form an array, which can be thought of as a high-dimensional matrix.
Tensors have become important in physics because they provide a concise mathematical framework for formulating and solving physics problems in areas such as mechanics, electrodynamics, and general relativity. In applications, it is common to study situations in which a different tensor can occur at each point of an object. For example, the stress within an object may vary from one location to another. A family of tensors that vary across space in this way is a tensor field. In some areas, tensor fields are so ubiquitous that they are often simply called "tensors".
Tullio Levi-Civita and Gregorio Ricci-Curbastro popularised tensors in 1900 – continuing the earlier work of Bernhard Riemann, Elwin Bruno Christoffel, and others – as part of the absolute differential calculus. The concept enabled an alternative formulation of the intrinsic differential geometry of a manifold in the form of the Riemann curvature tensor.

Definition

As multidimensional arrays

Although seemingly different, the various approaches to defining tensors describe the same geometric concept using different language and at different levels of abstraction.
A tensor may be represented as a (potentially multidimensional) array. Just as a vector in an $n$-dimensional space is represented by a one-dimensional array with $n$ components with respect to a given basis, any tensor with respect to a basis is represented by a multidimensional array. For example, a linear operator is represented in a basis as a two-dimensional square $n \times n$ array. The numbers in the multidimensional array are known as the components of the tensor. They are denoted by indices giving their position in the array, as subscripts and superscripts, following the symbolic name of the tensor. For example, the components of an order-2 tensor $T$ could be denoted $T_{ij}$, where $i$ and $j$ are indices running from $1$ to $n$, or also by $T^i_j$. Whether an index is displayed as a superscript or subscript depends on the transformation properties of the tensor, described below. Thus while $T_{ij}$ and $T^i_j$ can both be expressed as $n$-by-$n$ matrices, and are numerically related via index juggling, the difference in their transformation laws indicates it would be improper to add them together.
The total number of indices ($m$) required to identify each component uniquely is equal to the dimension, or the number of ways, of the array, which is why a tensor is sometimes referred to as an $m$-dimensional array or an $m$-way array. The total number of indices is also called the order, degree or rank of a tensor, although the term "rank" generally has another meaning in the context of matrices and tensors.
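For readers who think computationally, the array picture corresponds directly to multidimensional arrays in software. The following minimal sketch (in Python with NumPy; the variable names are illustrative, not from the article) shows components of tensors of increasing order over a 3-dimensional space, where the order equals the number of array axes:

```python
import numpy as np

# Components of tensors of increasing order over a 3-dimensional space,
# stored as NumPy arrays; the number of array axes equals the tensor's order.
n = 3
vector = np.zeros(n)            # order 1: one index, shape (3,)
operator = np.eye(n)            # order 2: two indices, shape (3, 3)
order3 = np.zeros((n, n, n))    # order 3: three indices, shape (3, 3, 3)

# The order ("number of ways") is the number of indices, i.e. ndim:
print(vector.ndim, operator.ndim, order3.ndim)   # 1 2 3
```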
Just as the components of a vector change when we change the basis of the vector space, the components of a tensor also change under such a transformation. Each type of tensor comes equipped with a transformation law that details how the components of the tensor respond to a change of basis. The components of a vector can respond in two distinct ways to a change of basis, where the new basis vectors $\hat{\mathbf{e}}_i$ are expressed in terms of the old basis vectors $\mathbf{e}_j$ as
$$\hat{\mathbf{e}}_i = \sum_{j=1}^n \mathbf{e}_j R^j_i = \mathbf{e}_j R^j_i.$$
Here $R^j_i$ are the entries of the change of basis matrix $R$, and in the rightmost expression the summation sign was suppressed: this is the Einstein summation convention, which will be used throughout this article. The components $v^i$ of a column vector $v$ transform with the inverse of the matrix $R$,
$$\hat{v}^i = \left(R^{-1}\right)^i_j v^j,$$
where the hat denotes the components in the new basis. This is called a contravariant transformation law, because the vector components transform by the inverse of the change of basis. In contrast, the components $w_i$ of a covector $w$ transform with the matrix $R$ itself,
$$\hat{w}_i = w_j R^j_i.$$
This is called a covariant transformation law, because the covector components transform by the same matrix as the change of basis matrix. The components of a more general tensor are transformed by some combination of covariant and contravariant transformations, with one transformation law for each index. If the transformation matrix of an index is the inverse matrix of the basis transformation, then the index is called contravariant and is conventionally denoted with an upper index. If the transformation matrix of an index is the basis transformation itself, then the index is called covariant and is denoted with a lower index.
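As a numerical illustration (a minimal sketch in Python with NumPy, not part of the formal development), the two transformation laws can be checked directly: vector components transform with $R^{-1}$, covector components with $R$, and the pairing between a covector and a vector is unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
R = rng.normal(size=(n, n))      # change-of-basis matrix (assumed invertible)
R_inv = np.linalg.inv(R)

v = rng.normal(size=n)           # components of a vector (contravariant)
w = rng.normal(size=n)           # components of a covector (covariant)

v_hat = R_inv @ v                # contravariant law: v'^i = (R^-1)^i_j v^j
w_hat = w @ R                    # covariant law:     w'_i = w_j R^j_i

# The pairing w(v) is basis independent, so it must be unchanged:
print(np.allclose(w @ v, w_hat @ v_hat))   # True
```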
As a simple example, the matrix of a linear operator with respect to a basis is a rectangular array $T$ that transforms under a change of basis matrix $R$ by $\hat{T} = R^{-1} T R$. For the individual matrix entries, this transformation law has the form $\hat{T}^{i'}_{j'} = \left(R^{-1}\right)^{i'}_i T^i_j R^j_{j'}$, so the tensor corresponding to the matrix of a linear operator has one covariant and one contravariant index: it is of type $(1, 1)$.
Combinations of covariant and contravariant components with the same index allow us to express geometric invariants. For example, the fact that a vector is the same object in different coordinate systems can be captured by the following equations, using the formulas defined above:
$$v = \hat{v}^i \hat{\mathbf{e}}_i = \left(\left(R^{-1}\right)^i_j v^j\right)\left(\mathbf{e}_k R^k_i\right) = \left(\left(R^{-1}\right)^i_j R^k_i\right) v^j \mathbf{e}_k = \delta^k_j v^j \mathbf{e}_k = v^k \mathbf{e}_k = v^i \mathbf{e}_i,$$
where $\delta^k_j$ is the Kronecker delta, which functions similarly to the identity matrix, and has the effect of renaming indices. This shows several features of the component notation: the ability to re-arrange terms at will, the need to use different indices when working with multiple objects in the same expression, the ability to rename indices, and the manner in which contravariant and covariant tensors combine so that all instances of the transformation matrix and its inverse cancel, so that expressions like $v^i \mathbf{e}_i$ can immediately be seen to be geometrically identical in all coordinate systems.
Similarly, a linear operator, viewed as a geometric object, does not actually depend on a basis: it is just a linear map that accepts a vector as an argument and produces another vector. The transformation law for how the matrix of components of a linear operator changes with the basis is consistent with the transformation law for a contravariant vector, so that the action of a linear operator on a contravariant vector is represented in coordinates as the matrix product of their respective coordinate representations. That is, the components $(Tv)^i$ are given by $(Tv)^i = T^i_j v^j$. These components transform contravariantly, since
$$\widehat{(Tv)}^{i'} = \hat{T}^{i'}_{j'} \hat{v}^{j'} = \left[\left(R^{-1}\right)^{i'}_i T^i_j R^j_{j'}\right]\left[\left(R^{-1}\right)^{j'}_k v^k\right] = \left(R^{-1}\right)^{i'}_i (Tv)^i.$$
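The following sketch (again Python with NumPy, with illustrative names) verifies numerically that the matrix of a linear operator transforms as $\hat{T} = R^{-1} T R$ and that the components of $Tv$ then transform contravariantly:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
R = rng.normal(size=(n, n))      # change-of-basis matrix (assumed invertible)
R_inv = np.linalg.inv(R)

T = rng.normal(size=(n, n))      # matrix of a linear operator, type (1, 1)
v = rng.normal(size=n)

T_hat = R_inv @ T @ R            # operator components in the new basis
v_hat = R_inv @ v                # vector components in the new basis

# (Tv) is itself a vector, so its components must transform contravariantly:
print(np.allclose(T_hat @ v_hat, R_inv @ (T @ v)))   # True
```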
The transformation law for an order $p + q$ tensor with $p$ contravariant indices and $q$ covariant indices is thus given as,
$$\hat{T}^{i'_1, \ldots, i'_p}_{j'_1, \ldots, j'_q} = \left(R^{-1}\right)^{i'_1}_{i_1} \cdots \left(R^{-1}\right)^{i'_p}_{i_p} \, T^{i_1, \ldots, i_p}_{j_1, \ldots, j_q} \, R^{j_1}_{j'_1} \cdots R^{j_q}_{j'_q}.$$
Here the primed indices denote components in the new coordinates, and the unprimed indices denote the components in the old coordinates. Such a tensor is said to be of order or type $(p, q)$. The terms "order", "type", "rank", "valence", and "degree" are all sometimes used for the same concept. Here, the term "order" or "total order" will be used for the total dimension of the array ($p + q$ in the preceding example), and the term "type" for the pair $(p, q)$ giving the number of contravariant and covariant indices. A tensor of type $(p, q)$ is also called a $(p, q)$-tensor for short.
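As a concrete sketch of the general law (Python with NumPy; `np.einsum` contracts one transformation matrix per index, and the tensor chosen is a hypothetical example), here is the transformation of a type $(1, 2)$ tensor, together with a check that full contraction with suitably transformed vectors and a covector yields a basis-independent scalar:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
R = rng.normal(size=(n, n))      # change-of-basis matrix (assumed invertible)
R_inv = np.linalg.inv(R)

# A type (1, 2) tensor: one contravariant index (a), two covariant (b, c).
T = rng.normal(size=(n, n, n))

# T'^i_{jk} = (R^-1)^i_a  T^a_{bc}  R^b_j  R^c_k  -- one factor per index.
T_hat = np.einsum('ia,abc,bj,ck->ijk', R_inv, T, R, R)

# Sanity check: fully contracting with one covector and two vectors gives a
# scalar, which must agree in both bases.
w = rng.normal(size=n)           # covector components
u = rng.normal(size=n)           # vector components
x = rng.normal(size=n)           # vector components
s_old = np.einsum('abc,a,b,c->', T, w, u, x)
s_new = np.einsum('ijk,i,j,k->', T_hat, w @ R, R_inv @ u, R_inv @ x)
print(np.allclose(s_old, s_new))   # True
```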
This discussion motivates the following formal definition: a tensor of type $(p, q)$ is an assignment of a multidimensional array of components $T^{i_1, \ldots, i_p}_{j_1, \ldots, j_q}$ to each basis of an $n$-dimensional vector space, such that under a change of basis the components obey the transformation law displayed above.
The definition of a tensor as a multidimensional array satisfying a transformation law traces back to the work of Ricci.
An equivalent definition of a tensor uses the representations of the general linear group. There is an action of the general linear group on the set of all ordered bases of an $n$-dimensional vector space. If $\mathbf{f} = (\mathbf{f}_1, \ldots, \mathbf{f}_n)$ is an ordered basis, and $R = \left(R^i_j\right)$ is an invertible $n \times n$ matrix, then the action is given by
$$\mathbf{f}R = \left(\mathbf{f}_i R^i_1, \ldots, \mathbf{f}_i R^i_n\right).$$
Let $F$ be the set of all ordered bases. Then $F$ is a principal homogeneous space for $\mathrm{GL}(n)$. Let $W$ be a vector space and let $\rho$ be a representation of $\mathrm{GL}(n)$ on $W$. Then a tensor of type $\rho$ is an equivariant map $T : F \to W$. Equivariance here means that
$$T(\mathbf{f}R) = \rho\left(R^{-1}\right) T(\mathbf{f}).$$
When $\rho$ is a tensor representation of the general linear group, this gives the usual definition of tensors as multidimensional arrays. This definition is often used to describe tensors on manifolds, and readily generalizes to other groups.
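As an illustration (a sketch of the simplest case only): regard a basis $\mathbf{f}$ as an isomorphism $\mathbf{f} : \mathbf{R}^n \to V$, take $W = \mathbf{R}^n$ with $\rho(R)$ acting by matrix multiplication, and fix a vector $v \in V$. The map sending each basis to the column of components of $v$ in that basis, $T(\mathbf{f}) = \mathbf{f}^{-1}(v)$, is then equivariant, since
$$T(\mathbf{f}R) = (\mathbf{f}R)^{-1}(v) = R^{-1}\mathbf{f}^{-1}(v) = \rho\left(R^{-1}\right)T(\mathbf{f}),$$
which recovers the contravariant transformation law for vector components.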

As multilinear maps

A downside to the definition of a tensor using the multidimensional array approach is that it is not apparent from the definition that the defined object is indeed basis independent, as is expected from an intrinsically geometric object. Although it is possible to show that transformation laws indeed ensure independence from the basis, sometimes a more intrinsic definition is preferred. One approach that is common in differential geometry is to define tensors relative to a fixed vector space $V$, which is usually taken to be a particular vector space of some geometrical significance like the tangent space to a manifold. In this approach, a type $(p, q)$ tensor $T$ is defined as a multilinear map,
$$T : \underbrace{V^* \times \cdots \times V^*}_{p \text{ copies}} \times \underbrace{V \times \cdots \times V}_{q \text{ copies}} \to \mathbf{R},$$
where $V^*$ is the corresponding dual space of covectors, which is linear in each of its arguments. The above assumes $V$ is a vector space over the real numbers, $\mathbf{R}$. More generally, $V$ can be taken over any field $F$, with $F$ replacing $\mathbf{R}$ as the codomain of the multilinear maps.
By applying a multilinear map $T$ of type $(p, q)$ to a basis $\{\mathbf{e}_j\}$ for $V$ and a canonical cobasis $\{\boldsymbol{\varepsilon}^i\}$ for $V^*$,
$$T^{i_1 \ldots i_p}_{j_1 \ldots j_q} \equiv T\left(\boldsymbol{\varepsilon}^{i_1}, \ldots, \boldsymbol{\varepsilon}^{i_p}, \mathbf{e}_{j_1}, \ldots, \mathbf{e}_{j_q}\right),$$
a $(p + q)$-dimensional array of components can be obtained. A different choice of basis will yield different components. But, because $T$ is linear in all of its arguments, the components satisfy the tensor transformation law used in the multilinear array definition. The multidimensional array of components of $T$ thus forms a tensor according to that definition. Moreover, such an array can be realized as the components of some multilinear map $T$. This motivates viewing multilinear maps as the intrinsic objects underlying tensors.
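As a sketch of this component-extraction procedure (Python with NumPy, with illustrative names): the standard dot product on $\mathbf{R}^3$ is a type $(0, 2)$ tensor, its components in a basis are obtained by evaluating it on pairs of basis vectors, and those components obey the covariant transformation law $G' = R^{\mathsf{T}} G R$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3

# A (0, 2) tensor as a multilinear map: the standard dot product on R^3.
def g(u, v):
    return float(np.dot(u, v))

# Extract components by feeding in basis vectors: G_ij = g(e_i, e_j).
e = np.eye(n)                    # rows are the standard basis vectors
G = np.array([[g(e[i], e[j]) for j in range(n)] for i in range(n)])
print(G)                         # identity matrix: G_ij = delta_ij

# In a new basis f_i = e_j R^j_i the components are G'_ij = g(f_i, f_j),
# which reproduces the covariant transformation law G' = R^T G R.
R = rng.normal(size=(n, n))      # change-of-basis matrix (assumed invertible)
f = R.T @ e                      # row i of f holds the new basis vector f_i
G_new = np.array([[g(f[i], f[j]) for j in range(n)] for i in range(n)])
print(np.allclose(G_new, R.T @ G @ R))   # True
```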
In viewing a tensor as a multilinear map, it is conventional to identify the double dual $V^{**}$ of the vector space $V$, i.e., the space of linear functionals on the dual vector space $V^*$, with the vector space $V$. There is always a natural linear map from $V$ to its double dual, given by evaluating a linear form in $V^*$ against a vector in $V$. This linear mapping is an isomorphism in finite dimensions, and it is often then expedient to identify $V$ with its double dual.
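Concretely, this natural map (a standard construction, stated here for completeness) sends a vector to "evaluation at that vector":
$$\iota : V \to V^{**}, \qquad \iota(v)(\varphi) = \varphi(v) \quad \text{for all } \varphi \in V^*,$$
and when $V$ is finite-dimensional $\iota$ is an isomorphism, which justifies the identification.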