Linear map
In mathematics, and more specifically in linear algebra, a linear map is a particular kind of function between vector spaces, which respects the basic operations of vector addition and scalar multiplication. A standard example of a linear map is an $m \times n$ matrix, which takes vectors in $n$ dimensions into vectors in $m$ dimensions in a way that is compatible with addition of vectors and multiplication of vectors by scalars.
A linear map is a homomorphism of vector spaces. Thus, a linear map $f$ satisfies $f(a\mathbf{u} + b\mathbf{v}) = a f(\mathbf{u}) + b f(\mathbf{v})$, where $a$ and $b$ are scalars, and $\mathbf{u}$ and $\mathbf{v}$ are vectors. A linear mapping $f: V \to W$ always maps the origin of $V$ to the origin of $W$, and linear subspaces of $V$ onto linear subspaces in $W$; for example, it maps a plane through the origin in $V$ to either a plane through the origin in $W$, a line through the origin in $W$, or just the origin in $W$. Linear maps can often be represented as matrices, and simple examples include rotation and reflection linear transformations.
Definition and first consequences
Let $V$ and $W$ be vector spaces over the same field $K$, such as the real or complex numbers. A function $f: V \to W$ is said to be a linear map if for any two vectors $\mathbf{u}, \mathbf{v} \in V$ and any scalar $c \in K$ the following two conditions are satisfied:
- Additivity / operation of addition: $f(\mathbf{u} + \mathbf{v}) = f(\mathbf{u}) + f(\mathbf{v})$
- Homogeneity of degree 1 / operation of scalar multiplication: $f(c\mathbf{u}) = c f(\mathbf{u})$
By the associativity of the addition operation denoted as +, for any vectors $\mathbf{u}_1, \ldots, \mathbf{u}_n \in V$ and scalars $c_1, \ldots, c_n \in K$ the following equality holds: $f(c_1 \mathbf{u}_1 + \cdots + c_n \mathbf{u}_n) = c_1 f(\mathbf{u}_1) + \cdots + c_n f(\mathbf{u}_n).$
Thus a linear map is one which preserves linear combinations.
Denoting the zero elements of the vector spaces $V$ and $W$ by $\mathbf{0}_V$ and $\mathbf{0}_W$ respectively, it follows that $f(\mathbf{0}_V) = \mathbf{0}_W$. Let $c = 0$ and $\mathbf{v} = \mathbf{0}_V$ in the equation for homogeneity of degree 1: $f(\mathbf{0}_V) = f(0 \cdot \mathbf{0}_V) = 0 \cdot f(\mathbf{0}_V) = \mathbf{0}_W.$
A linear map $V \to K$ with $K$ viewed as a one-dimensional vector space over itself is called a linear functional.
These statements generalize to any left-module over a ring without modification, and to any right-module upon reversing the scalar multiplication.
Examples
- A prototypical example that gives linear maps their name is the function $f: \mathbb{R} \to \mathbb{R}: x \mapsto cx$, of which the graph is a line through the origin.
- More generally, any homothety $\mathbf{v} \mapsto c\mathbf{v}$ centered at the origin of a vector space is a linear map.
- The zero map between two vector spaces is linear.
- The identity map on any module is a linear operator.
- For real numbers, the map $x \mapsto x^2$ is not linear.
- For real numbers, the map $x \mapsto x + 1$ is not linear (but is an affine transformation).
- If $A$ is a real $m \times n$ matrix, then $A$ defines a linear map from $\mathbb{R}^n$ to $\mathbb{R}^m$ by sending a column vector $\mathbf{x} \in \mathbb{R}^n$ to the column vector $A\mathbf{x} \in \mathbb{R}^m$. Conversely, any linear map between finite-dimensional vector spaces can be represented in this manner; see the section on matrices, below.
- If $f: V \to W$ is an isometry between real normed spaces such that $f(0) = 0$, then $f$ is a linear map. This result is not necessarily true for complex normed spaces.
- Differentiation defines a linear map from the space of all differentiable functions to the space of all functions. It also defines a linear operator on the space of all smooth functions. Indeed, $\frac{d}{dx}\left(a f(x) + b g(x)\right) = a \frac{d f(x)}{dx} + b \frac{d g(x)}{dx}.$
- A definite integral over some interval $I$ is a linear map from the space of all real-valued integrable functions on $I$ to $\mathbb{R}$. Indeed, $\int_u^v \left(a f(x) + b g(x)\right) dx = a \int_u^v f(x)\, dx + b \int_u^v g(x)\, dx$ (a small numerical check of this property appears after this list).
- An indefinite integral with a fixed integration starting point defines a linear map from the space of all real-valued integrable functions on $\mathbb{R}$ to the space of all real-valued, differentiable functions on $\mathbb{R}$. Without a fixed starting point, the antiderivative maps to the quotient space of the differentiable functions by the linear space of constant functions.
- If $V$ and $W$ are finite-dimensional vector spaces over a field $F$, of respective dimensions $m$ and $n$, then the function that maps linear maps $f: V \to W$ to $n \times m$ matrices in the way described in the section on matrices, below, is a linear map, and even a linear isomorphism.
- The expected value of a random variable is a linear function of the random variable: for random variables $X$ and $Y$ we have $E[X + Y] = E[X] + E[Y]$ and $E[aX] = a E[X]$. The conditional expectation is linear as well. But the variance of a random variable is not linear, because, for instance, $\operatorname{Var}(2X) = 4\operatorname{Var}(X) \neq 2\operatorname{Var}(X)$.
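As a rough numerical illustration of the linearity of the definite integral noted above, the sketch below uses a crude Riemann sum; the quadrature helper, functions, and coefficients are arbitrary choices for the example:

```python
import numpy as np

# Crude Riemann-sum "definite integral" over [u, v]; enough to
# illustrate linearity, not a serious quadrature routine.
def integral(h, u=0.0, v=1.0, n=100_000):
    x = np.linspace(u, v, n)
    return np.sum(h(x)) * (v - u) / n

f = np.sin
g = np.exp
a, b = 2.0, -3.0

lhs = integral(lambda x: a * f(x) + b * g(x))
rhs = a * integral(f) + b * integral(g)
print(np.isclose(lhs, rhs))  # True: the integral preserves linear combinations
```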
Linear endomorphisms and isomorphisms
If a linear map is a bijection then it is called a linear isomorphism. In the case where $V = W$, a linear map is called a linear endomorphism. Sometimes the term linear operator refers to this case, but the term "linear operator" can have different meanings for different conventions.
Linear extensions
Often, a linear map is constructed by defining it on a subset of a vector space and then extending it by linearity to the linear span of the domain. Suppose $X$ and $Y$ are vector spaces and $f: S \to Y$ is a function defined on some subset $S \subseteq X$.
Then a linear extension of $f$ to $X$, if it exists, is a linear map $F: X \to Y$ defined on $X$ that extends $f$ (meaning that $F(s) = f(s)$ for all $s \in S$) and takes its values from the codomain of $f$.
When the subset $S$ is a vector subspace of $X$ then a linear extension of $f$ to all of $X$ is guaranteed to exist if $f: S \to Y$ is a linear map. In particular, if $f$ has a linear extension to $\operatorname{span} S$, then it has a linear extension to all of $X$.
The map $f: S \to Y$ can be extended to a linear map $F: \operatorname{span} S \to Y$ if and only if whenever $n > 0$ is an integer, $c_1, \ldots, c_n$ are scalars, and $s_1, \ldots, s_n \in S$ are vectors such that $0 = c_1 s_1 + \cdots + c_n s_n$, then necessarily $0 = c_1 f(s_1) + \cdots + c_n f(s_n)$.
If a linear extension of $f: S \to Y$ exists then the linear extension $F: \operatorname{span} S \to Y$ is unique and
$F(c_1 s_1 + \cdots + c_n s_n) = c_1 f(s_1) + \cdots + c_n f(s_n)$
holds for all $n$, $c_1, \ldots, c_n$, and $s_1, \ldots, s_n$ as above.
If $S$ is linearly independent then every function $f: S \to Y$ into any vector space has a linear extension to a (linear) map $\operatorname{span} S \to Y$.
For example, if $X = \mathbb{R}^2$ and $Y = \mathbb{R}$ then the assignment $(1, 0) \mapsto -1$ and $(0, 1) \mapsto 2$ can be linearly extended from the linearly independent set of vectors $S = \{(1, 0), (0, 1)\}$ to a linear map on $\operatorname{span} S = \mathbb{R}^2$. The unique linear extension $F: \mathbb{R}^2 \to \mathbb{R}$ is the map that sends $(x, y) = x(1, 0) + y(0, 1)$ to $F(x, y) = -x + 2y$.
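The extension in the example above can be sketched numerically; the helper below is an illustrative construction (it simply solves for coordinates in the given independent set), not a standard library routine:

```python
import numpy as np

# Values assigned on the linearly independent set {(1,0), (0,1)}.
S = np.array([[1.0, 0.0],
              [0.0, 1.0]])      # rows are the vectors s_i
values = np.array([-1.0, 2.0])  # f(s_i), matching the example above

# The unique linear extension F(x, y) = x*f(1,0) + y*f(0,1) = -x + 2y.
def F(v):
    # coordinates of v with respect to the vectors in S (here the standard basis)
    coords = np.linalg.solve(S.T, v)
    return coords @ values

print(F(np.array([3.0, 4.0])))  # -3 + 8 = 5.0
```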
Every linear functional $f$ defined on a vector subspace of a real or complex vector space $X$ has a linear extension to all of $X$.
Indeed, the Hahn–Banach dominated extension theorem even guarantees that when this linear functional is dominated by some given seminorm $p: X \to \mathbb{R}$ then there exists a linear extension to $X$ that is also dominated by $p$.
Matrices
If $V$ and $W$ are finite-dimensional vector spaces and a basis is defined for each vector space, then every linear map from $V$ to $W$ can be represented by a matrix. This is useful because it allows concrete calculations. Matrices yield examples of linear maps: if $A$ is a real $m \times n$ matrix, then $f(\mathbf{x}) = A\mathbf{x}$ describes a linear map $\mathbb{R}^n \to \mathbb{R}^m$. Let $\{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ be a basis for $V$. Then every vector $\mathbf{v} \in V$ is uniquely determined by the coefficients $c_1, \ldots, c_n$ in the field: $\mathbf{v} = c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n.$
If $f: V \to W$ is a linear map,
$f(\mathbf{v}) = f(c_1 \mathbf{v}_1 + \cdots + c_n \mathbf{v}_n) = c_1 f(\mathbf{v}_1) + \cdots + c_n f(\mathbf{v}_n),$
which implies that the function $f$ is entirely determined by the vectors $f(\mathbf{v}_1), \ldots, f(\mathbf{v}_n)$. Now let $\{\mathbf{w}_1, \ldots, \mathbf{w}_m\}$ be a basis for $W$. Then we can represent each vector $f(\mathbf{v}_j)$ as
$f(\mathbf{v}_j) = a_{1j} \mathbf{w}_1 + \cdots + a_{mj} \mathbf{w}_m.$
Thus, the function $f$ is entirely determined by the values $a_{ij}$. If we put these values into an $m \times n$ matrix $M$, then we can conveniently use it to compute the vector output of $f$ for any vector in $V$. To get $M$, every column $j$ of $M$ is the vector
$\begin{pmatrix} a_{1j} \\ \vdots \\ a_{mj} \end{pmatrix}$
corresponding to $f(\mathbf{v}_j)$ as defined above, so that $M$ is the matrix of $f$ with respect to the chosen bases. In other words, every column $j = 1, \ldots, n$ has a corresponding vector $f(\mathbf{v}_j)$ whose coordinates $a_{1j}, \ldots, a_{mj}$ are the elements of column $j$. A single linear map may be represented by many matrices, because the values of the elements of a matrix depend on the bases chosen.
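A minimal sketch of this construction, assuming an arbitrary linear map given as a black-box function, builds $M$ column by column from the images of the standard basis vectors:

```python
import numpy as np

# An arbitrary linear map f: R^3 -> R^2, given only as a callable.
f = lambda v: np.array([v[0] + 2 * v[1], 3 * v[2] - v[0]])

basis_V = np.eye(3)  # standard basis v_1, v_2, v_3 of R^3

# Column j of M is f(v_j), expressed in the standard basis of R^2.
M = np.column_stack([f(v) for v in basis_V])

x = np.array([1.0, -1.0, 2.0])
print(np.allclose(M @ x, f(x)))  # True: M reproduces f on any vector
```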
The relationship between the matrices of a linear transformation $T$ in two bases $B$ and $B'$ can be summarized as follows:
- Matrix for $T$ relative to $B$: $A$
- Matrix for $T$ relative to $B'$: $A'$
- Transition matrix from $B'$ to $B$: $P$
- Transition matrix from $B$ to $B'$: $P^{-1}$
These matrices satisfy $A' = P^{-1} A P$.
Examples in two dimensions
In two-dimensional space $\mathbb{R}^2$ linear maps are described by 2 × 2 matrices. These are some examples (a short numerical sketch applying a few of them follows this list):
- rotation
- * by 90 degrees counterclockwise: $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$
- * by an angle $\theta$ counterclockwise: $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$
- reflection
- * through the $x$ axis: $\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$
- * through the $y$ axis: $\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$
- * through a line making an angle $\theta$ with the origin: $\begin{pmatrix} \cos 2\theta & \sin 2\theta \\ \sin 2\theta & -\cos 2\theta \end{pmatrix}$
- scaling by 2 in all directions: $\begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}$
- horizontal shear mapping: $\begin{pmatrix} 1 & m \\ 0 & 1 \end{pmatrix}$
- skew of the $y$ axis by an angle $\theta$: $\begin{pmatrix} 1 & \tan\theta \\ 0 & 1 \end{pmatrix}$
- squeeze mapping: $\begin{pmatrix} k & 0 \\ 0 & 1/k \end{pmatrix}$
- projection onto the $y$ axis: $\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}$
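A short sketch applying a few of the matrices above to a vector; the angle, shear factor, and vector are illustrative choices:

```python
import numpy as np

theta = np.pi / 3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection_x = np.array([[1.0,  0.0],
                         [0.0, -1.0]])
shear = np.array([[1.0, 0.5],
                  [0.0, 1.0]])

v = np.array([1.0, 2.0])
print(rotation @ v)      # v rotated by 60 degrees counterclockwise
print(reflection_x @ v)  # v reflected through the x axis
print(shear @ v)         # v sheared horizontally
```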
Vector space of linear maps
The composition of linear maps is linear: if $f: V \to W$ and $g: W \to Z$ are linear, then so is their composition $g \circ f: V \to Z$. It follows from this that the class of all vector spaces over a given field $K$, together with $K$-linear maps as morphisms, forms a category. The inverse of a linear map, when defined, is again a linear map.
If $f: V \to W$ and $g: V \to W$ are linear, then so is their pointwise sum $f + g$, which is defined by $(f + g)(\mathbf{x}) = f(\mathbf{x}) + g(\mathbf{x})$.
If $f: V \to W$ is linear and $\alpha$ is an element of the ground field $K$, then the map $\alpha f$, defined by $(\alpha f)(\mathbf{x}) = \alpha f(\mathbf{x})$, is also linear.
Thus the set $\mathcal{L}(V, W)$ of linear maps from $V$ to $W$ itself forms a vector space over $K$, sometimes denoted $\operatorname{Hom}(V, W)$. Furthermore, in the case that $V = W$, this vector space, denoted $\operatorname{End}(V)$, is an associative algebra under composition of maps, since the composition of two linear maps is again a linear map, and the composition of maps is always associative. This case is discussed in more detail below.
In the finite-dimensional case, if bases have been chosen, then the composition of linear maps corresponds to matrix multiplication, the addition of linear maps corresponds to matrix addition, and the multiplication of linear maps by scalars corresponds to the multiplication of matrices by scalars.
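A minimal numerical check that composition corresponds to matrix multiplication, using arbitrary randomly generated matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # matrix of f: R^3 -> R^4
B = rng.standard_normal((2, 4))   # matrix of g: R^4 -> R^2

f = lambda x: A @ x
g = lambda y: B @ y

x = rng.standard_normal(3)
# The matrix of the composition g ∘ f is the product B A.
print(np.allclose(g(f(x)), (B @ A) @ x))  # True
```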
Endomorphisms and automorphisms
A linear transformation $f: V \to V$ is an endomorphism of $V$; the set $\operatorname{End}(V)$ of all such endomorphisms together with addition, composition and scalar multiplication as defined above forms an associative algebra with identity element over the field $K$. The multiplicative identity element of this algebra is the identity map $\operatorname{id}: V \to V$. An endomorphism of $V$ that is also an isomorphism is called an automorphism of $V$. The composition of two automorphisms is again an automorphism, and the set of all automorphisms of $V$ forms a group, the automorphism group of $V$, which is denoted by $\operatorname{Aut}(V)$ or $\operatorname{GL}(V)$. Since the automorphisms are precisely those endomorphisms which possess inverses under composition, $\operatorname{Aut}(V)$ is the group of units in the ring $\operatorname{End}(V)$.
If $V$ has finite dimension $n$, then $\operatorname{End}(V)$ is isomorphic to the associative algebra of all $n \times n$ matrices with entries in $K$. The automorphism group of $V$ is isomorphic to the general linear group $\operatorname{GL}(n, K)$ of all $n \times n$ invertible matrices with entries in $K$.
Kernel, image and the rank–nullity theorem
If $f: V \to W$ is linear, we define the kernel and the image or range of $f$ by
$\ker(f) = \{\, \mathbf{x} \in V : f(\mathbf{x}) = \mathbf{0} \,\}, \qquad \operatorname{im}(f) = \{\, \mathbf{w} \in W : \mathbf{w} = f(\mathbf{x}),\ \mathbf{x} \in V \,\}.$
$\ker(f)$ is a subspace of $V$ and $\operatorname{im}(f)$ is a subspace of $W$. The following dimension formula is known as the rank–nullity theorem:
$\dim(\ker(f)) + \dim(\operatorname{im}(f)) = \dim(V).$
The number $\dim(\operatorname{im}(f))$ is also called the rank of $f$ and written as $\operatorname{rank}(f)$, or sometimes $\rho(f)$; the number $\dim(\ker(f))$ is called the nullity of $f$ and written as $\operatorname{null}(f)$ or $\nu(f)$. If $V$ and $W$ are finite-dimensional, bases have been chosen and $f$ is represented by the matrix $A$, then the rank and nullity of $f$ are equal to the rank and nullity of the matrix $A$, respectively.
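A small sketch computing rank and nullity for an illustrative rank-deficient matrix and checking the rank–nullity theorem:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # a rank-deficient 2x3 matrix

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank          # rank–nullity: rank + nullity = dim of domain

print(rank, nullity)                 # 1 2
print(rank + nullity == A.shape[1])  # True
```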
Cokernel
A subtler invariant of a linear transformation $f: V \to W$ is the cokernel, which is defined as
$\operatorname{coker}(f) := W / f(V) = W / \operatorname{im}(f).$
This is the dual notion to the kernel: just as the kernel is a subspace of the domain, the cokernel is a quotient space of the target. Formally, one has the exact sequence
$0 \to \ker(f) \to V \to W \to \operatorname{coker}(f) \to 0.$
These can be interpreted thus: given a linear equation $f(\mathbf{v}) = \mathbf{w}$ to solve,
- the kernel is the space of solutions to the homogeneous equation $f(\mathbf{v}) = \mathbf{0}$, and its dimension is the number of degrees of freedom in the space of solutions, if it is not empty;
- the co-kernel is the space of constraints that the solutions must satisfy, and its dimension is the maximal number of independent constraints.
As a simple example, consider the map $f: \mathbb{R}^2 \to \mathbb{R}^2$ given by $f(x, y) = (0, y)$. Then for an equation $f(x, y) = (a, b)$ to have a solution, we must have $a = 0$ (one constraint), and in that case the solution space is $(x, b)$, or equivalently stated, $(0, b) + (x, 0)$ (one degree of freedom). The kernel may be expressed as the subspace $\{(x, 0) : x \in \mathbb{R}\}$ of the domain: the value of $x$ is the freedom in a solution, while the cokernel may be expressed via the map $W \to \mathbb{R}$, $(a, b) \mapsto a$: given a vector $(a, b)$, the value of $a$ is the obstruction to there being a solution.
An example illustrating the infinite-dimensional case is afforded by the map $f: \mathbb{R}^\infty \to \mathbb{R}^\infty$, $\{a_n\} \mapsto \{b_n\}$, with $b_1 = 0$ and $b_{n+1} = a_n$ for $n > 0$. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0, its cokernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the cokernel, but in the infinite-dimensional case it cannot be inferred that the kernel and the cokernel of an endomorphism have the same dimension. The reverse situation obtains for the map $h: \mathbb{R}^\infty \to \mathbb{R}^\infty$, $\{a_n\} \mapsto \{c_n\}$, with $c_n = a_{n+1}$. Its image is the entire target space, and hence its cokernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1.
Index
For a linear operator with finite-dimensional kernel and cokernel, one may define the index as
$\operatorname{ind}(f) := \dim(\ker(f)) - \dim(\operatorname{coker}(f)),$
namely the degrees of freedom minus the number of constraints.
For a transformation between finite-dimensional vector spaces, this is just the difference $\dim(V) - \dim(W)$, by rank–nullity. This gives an indication of how many solutions or how many constraints one has: if mapping from a larger space to a smaller one, the map may be onto, and thus will have degrees of freedom even without constraints. Conversely, if mapping from a smaller space to a larger one, the map cannot be onto, and thus one will have constraints even without degrees of freedom.
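As a sketch, the index of a map given by an illustrative matrix can be computed from the rank and compared with $\dim V - \dim W$:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, -1.0]])     # an arbitrary map R^3 -> R^2

rank = np.linalg.matrix_rank(A)
dim_ker   = A.shape[1] - rank        # nullity
dim_coker = A.shape[0] - rank        # codimension of the image in the target

index = dim_ker - dim_coker
print(index, A.shape[1] - A.shape[0])  # 1 1: the index equals dim V - dim W
```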
The index of an operator is precisely the Euler characteristic of the 2-term complex 0 → V → W → 0. In operator theory, the index of Fredholm operators is an object of study, with a major result being the Atiyah–Singer index theorem.
Algebraic classifications of linear transformations
No classification of linear maps could be exhaustive. The following incomplete list enumerates some important classifications that do not require any additional structure on the vector space. Let $V$ and $W$ denote vector spaces over a field $F$ and let $T: V \to W$ be a linear map.
Monomorphism
$T$ is said to be injective or a monomorphism if any of the following equivalent conditions are true:
- $T$ is one-to-one as a map of sets.
- $T$ is monic or left-cancellable, which is to say, for any vector space $U$ and any pair of linear maps $R: U \to V$ and $S: U \to V$, the equation $TR = TS$ implies $R = S$.
- $T$ is left-invertible, which is to say there exists a linear map $S: W \to V$ such that $ST$ is the identity map on $V$.
Epimorphism
$T$ is said to be surjective or an epimorphism if any of the following equivalent conditions are true:
- $T$ is onto as a map of sets.
- $T$ is epic or right-cancellable, which is to say, for any vector space $U$ and any pair of linear maps $R: W \to U$ and $S: W \to U$, the equation $RT = ST$ implies $R = S$.
- $T$ is right-invertible, which is to say there exists a linear map $S: W \to V$ such that $TS$ is the identity map on $W$.
Isomorphism
$T$ is said to be an isomorphism if it is both left- and right-invertible. This is equivalent to $T$ being both one-to-one and onto (a bijection of sets) or also to $T$ being both epic and monic, and so being a bimorphism. If $T: V \to V$ is an endomorphism, then:
- If, for some positive integer $n$, the $n$-th iterate of $T$, $T^n$, is identically zero, then $T$ is said to be nilpotent.
- If $T^2 = T$, then $T$ is said to be idempotent.
- If $T = kI$, where $k$ is some scalar, then $T$ is said to be a scaling transformation or scalar multiplication map; see scalar matrix (a small numerical check of these three properties follows this list).
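The sketch below checks nilpotency, idempotency, and scaling numerically for illustrative 2 × 2 matrices:

```python
import numpy as np

N = np.array([[0.0, 1.0],
              [0.0, 0.0]])    # nilpotent: N^2 = 0
P = np.array([[1.0, 0.0],
              [0.0, 0.0]])    # idempotent: P^2 = P (projection onto the x axis)
S = 3.0 * np.eye(2)           # scaling transformation: S = 3 * id

v = np.array([1.0, -2.0])
print(np.allclose(N @ N, 0))    # True, so N is nilpotent
print(np.allclose(P @ P, P))    # True, so P is idempotent
print(np.allclose(S @ v, 3 * v))  # True: S scales every vector by the same factor
```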
Change of basis
Given a linear map which is an endomorphism whose matrix is $A$, in the basis $B$ of the space it transforms vector coordinates $[\mathbf{u}]$ as $[\mathbf{v}] = A[\mathbf{u}]$. As vectors change with the inverse of $B$ (the coordinates of vectors are contravariant), its inverse transformation is $[\mathbf{v}] = B[\mathbf{v}']$. Substituting this in the first expression,
$B[\mathbf{v}'] = A B [\mathbf{u}'],$
hence
$[\mathbf{v}'] = B^{-1} A B [\mathbf{u}'] = A' [\mathbf{u}'].$
Therefore, the matrix in the new basis is $A' = B^{-1} A B$, where $B$ is the matrix of the given basis.
Therefore, linear maps are said to be 1-co-, 1-contra-variant objects, or type $(1, 1)$ tensors.
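A small numerical verification of the change-of-basis formula $A' = B^{-1} A B$, using illustrative matrices:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # matrix of the endomorphism in the old basis
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # columns are the new basis vectors (in old coordinates)

A_prime = np.linalg.inv(B) @ A @ B  # matrix of the same map in the new basis

# Check: applying A' to new coordinates agrees with applying A to old coordinates.
u_new = np.array([1.0, 2.0])        # coordinates of a vector in the new basis
u_old = B @ u_new
print(np.allclose(B @ (A_prime @ u_new), A @ u_old))  # True
```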
Continuity
A linear transformation between topological vector spaces, for example normed spaces, may be continuous. If its domain and codomain are the same, it will then be a continuous linear operator. A linear operator on a normed linear space is continuous if and only if it is bounded, which is the case, for example, when the domain is finite-dimensional. An infinite-dimensional domain may have discontinuous linear operators. An example of an unbounded, hence discontinuous, linear transformation is differentiation on the space of smooth functions equipped with the supremum norm. For a specific example, $\sin(nx)/n$ converges to 0 in the supremum norm as $n \to \infty$, but its derivative $\cos(nx)$ does not, so differentiation is not continuous at 0.
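A numerical illustration of this example; the sample grid and the values of $n$ are arbitrary choices:

```python
import numpy as np

# f_n(x) = sin(n x)/n has sup-norm 1/n -> 0, but f_n'(x) = cos(n x) has sup-norm 1.
x = np.linspace(0, 2 * np.pi, 10_001)
for n in (1, 10, 100, 1000):
    f_n  = np.sin(n * x) / n
    df_n = np.cos(n * x)           # derivative computed analytically
    print(n, np.max(np.abs(f_n)), np.max(np.abs(df_n)))
# The function values shrink to 0 while the derivatives do not,
# so differentiation is unbounded in the supremum norm.
```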
Applications
A specific application of linear maps is for geometric transformations, such as those performed in computer graphics, where the translation, rotation and scaling of 2D or 3D objects is performed by the use of a transformation matrix. Linear mappings are also used as a mechanism for describing change: for example, in calculus linear maps correspond to derivatives; in relativity, they are used as a device to keep track of the local transformations of reference frames. Another application of these transformations is in compiler optimizations of nested-loop code, and in parallelizing compiler techniques.