Vector space


In mathematics and physics, a vector space is a set whose elements, often called vectors, can be added together and multiplied by numbers called scalars. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. Real vector spaces and complex vector spaces are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers. Scalars can also be, more generally, elements of any field.
Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations.
Vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. This means that for two vector spaces over a given field and with the same dimension, the properties that depend only on the vector-space structure are exactly the same. A vector space is finite-dimensional if its dimension is a natural number. Otherwise, it is infinite-dimensional, and its dimension is an infinite cardinal. Finite-dimensional vector spaces occur naturally in geometry and related areas. Infinite-dimensional vector spaces occur in many areas of mathematics. For example, polynomial rings are countably infinite-dimensional vector spaces, and many function spaces have the cardinality of the continuum as a dimension.
Many vector spaces that are considered in mathematics are also endowed with other structures. This is the case of algebras, which include field extensions, polynomial rings, associative algebras and Lie algebras. This is also the case of topological vector spaces, which include function spaces, inner product spaces, normed spaces, Hilbert spaces and Banach spaces.

Definition and basic properties

In this article, vectors are represented in boldface to distinguish them from scalars.
A vector space over a field F is a non-empty set V together with a binary operation and a binary function that satisfy the eight axioms listed below. In this context, the elements of V are commonly called vectors, and the elements of F are called scalars.
  • The binary operation, called vector addition or simply addition, assigns to any two vectors v and w in V a third vector in V, which is commonly written as v + w and called the sum of these two vectors.
  • The binary function, called scalar multiplication, assigns to any scalar a in F and any vector v in V another vector in V, which is denoted av.
To have a vector space, the eight following axioms must be satisfied for every u, v, and w in V, and every a and b in F.
  • Associativity of vector addition: u + (v + w) = (u + v) + w
  • Commutativity of vector addition: u + v = v + u
  • Identity element of vector addition: There exists an element 0 in V, called the zero vector, such that v + 0 = v for all v in V.
  • Inverse elements of vector addition: For every v in V, there exists an element −v in V, called the additive inverse of v, such that v + (−v) = 0.
  • Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v
  • Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.
  • Distributivity of scalar multiplication with respect to vector addition: a(u + v) = au + av
  • Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv
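The eight axioms can be checked concretely in the real vector space R^2. The following is a minimal illustrative sketch (not part of the article), with vectors modeled as pairs of floats; the sample vectors and scalars are arbitrary choices.

```python
# Sketch: verifying the eight vector-space axioms on sample elements of R^2.

def add(v, w):
    """Vector addition in R^2, componentwise."""
    return (v[0] + w[0], v[1] + w[1])

def smul(a, v):
    """Scalar multiplication in R^2, componentwise."""
    return (a * v[0], a * v[1])

u, v, w = (1.0, 2.0), (3.0, -1.0), (-2.0, 0.5)
a, b = 2.0, -3.0
zero = (0.0, 0.0)
neg_v = smul(-1.0, v)

assert add(u, add(v, w)) == add(add(u, v), w)             # associativity
assert add(u, v) == add(v, u)                             # commutativity
assert add(v, zero) == v                                  # additive identity
assert add(v, neg_v) == zero                              # additive inverse
assert smul(a, smul(b, v)) == smul(a * b, v)              # compatibility
assert smul(1.0, v) == v                                  # scalar identity
assert smul(a, add(u, v)) == add(smul(a, u), smul(a, v))  # distributivity over vector addition
assert smul(a + b, v) == add(smul(a, v), smul(b, v))      # distributivity over field addition
```

Passing assertions on sample vectors do not prove the axioms, of course; here they hold for all pairs because addition and scalar multiplication act componentwise on the field R.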

When the scalar field is the real numbers, the vector space is called a real vector space, and when the scalar field is the complex numbers, the vector space is called a complex vector space. These two cases are the most common ones, but vector spaces with scalars in an arbitrary field F are also commonly considered. Such a vector space is called an F-vector space or a vector space over F.
An equivalent definition of a vector space can be given, which is much more concise but less elementary: the first four axioms say that a vector space is an abelian group under addition, and the four remaining axioms say that scalar multiplication defines a ring homomorphism from the field F into the endomorphism ring of this group. Specifically, the distributivity of scalar multiplication with respect to vector addition means that multiplication by a scalar a is an endomorphism of the group. The remaining three axioms establish that the map sending a scalar a to multiplication by a is a ring homomorphism from the field F to the endomorphism ring of the group.
Subtraction of two vectors can be defined as v − w = v + (−w).
Direct consequences of the axioms include that, for every a in F and every v in V, one has
  • 0v = 0
  • a0 = 0
  • (−1)v = −v
  • av = 0 implies a = 0 or v = 0
Even more concisely, a vector space is a module over a field.
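The direct consequences listed above can also be checked numerically in R^2. A brief illustrative sketch (not from the article), again with vectors as pairs of floats:

```python
# Sketch: consequences of the vector-space axioms, checked in R^2.

def smul(a, v):
    """Scalar multiplication in R^2, componentwise."""
    return (a * v[0], a * v[1])

v = (3.0, -1.0)
zero = (0.0, 0.0)

assert smul(0.0, v) == zero            # 0v = 0
assert smul(2.0, zero) == zero         # a0 = 0
assert smul(-1.0, v) == (-3.0, 1.0)    # (-1)v = -v
```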

Bases, vector coordinates, and subspaces

  • Linear combination
  • Linear independence
  • Linear subspace
  • Linear span
  • Basis and dimension
Bases are a fundamental tool for the study of vector spaces, especially when the dimension is finite. In the infinite-dimensional case, the existence of infinite bases, often called Hamel bases, depends on the axiom of choice. It follows that, in general, no basis can be explicitly described. For example, the real numbers form an infinite-dimensional vector space over the rational numbers, for which no specific basis is known.
Consider a basis (b1, …, bn) of a vector space V of dimension n over a field F. The definition of a basis implies that every v in V may be written
v = a1 b1 + ⋯ + an bn,
with a1, …, an in F, and that this decomposition is unique. The scalars a1, …, an are called the coordinates of v on the basis. They are also said to be the coefficients of the decomposition of v on the basis. One also says that the n-tuple of the coordinates is the coordinate vector of v on the basis, since the set F^n of the n-tuples of elements of F is a vector space for componentwise addition and scalar multiplication, whose dimension is n.
The one-to-one correspondence between vectors and their coordinate vectors maps vector addition to vector addition and scalar multiplication to scalar multiplication. It is thus a vector space isomorphism, which allows translating reasoning and computations on vectors into reasoning and computations on their coordinates.
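Computing coordinates on a basis amounts to solving a linear system. A hedged sketch for R^2, using Cramer's rule on an illustrative basis b1 = (1, 1), b2 = (1, −1) (these particular vectors are not from the article):

```python
# Sketch: coordinates of a vector of R^2 on a basis (b1, b2),
# obtained by solving a1*b1 + a2*b2 = v with Cramer's rule.

def coordinates(v, b1, b2):
    """Return (a1, a2) such that v = a1*b1 + a2*b2."""
    det = b1[0] * b2[1] - b2[0] * b1[1]
    if det == 0:
        raise ValueError("b1 and b2 are linearly dependent: not a basis")
    a1 = (v[0] * b2[1] - b2[0] * v[1]) / det
    a2 = (b1[0] * v[1] - v[0] * b1[1]) / det
    return (a1, a2)

b1, b2 = (1.0, 1.0), (1.0, -1.0)
v = (3.0, 1.0)
a1, a2 = coordinates(v, b1, b2)

# Uniqueness of the decomposition: v is recovered from its coordinates.
assert (a1 * b1[0] + a2 * b2[0], a1 * b1[1] + a2 * b2[1]) == v
```

The nonzero determinant test is exactly the linear-independence condition for two vectors of R^2; for a dependent pair the system has no unique solution, reflecting that a basis must be linearly independent.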

History

Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, French mathematicians René Descartes and Pierre de Fermat founded analytic geometry by identifying solutions to an equation of two variables with points on a plane curve. To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines, and planes, which are predecessors of vectors. In 1827, Möbius introduced the notion of barycentric coordinates. Bellavitis introduced an equivalence relation on directed line segments that share the same length and direction, which he called equipollence. A Euclidean vector is then an equivalence class of that relation.
Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions by the latter. They are elements of R^2 and R^4; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.
In 1857, Cayley introduced matrix notation, which allows a harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations. In his work, the concepts of linear independence and dimension, as well as scalar products, are present. Grassmann's 1844 work exceeds the framework of vector spaces as well, since his consideration of multiplication led him to what are today called algebras. The Italian mathematician Peano was the first to give the modern definition of vector spaces and linear maps in 1888, although he called them "linear systems". Peano's axiomatization allowed for vector spaces with infinite dimension, but Peano did not develop that theory further. In 1897, Salvatore Pincherle adopted Peano's axioms and made initial inroads into the theory of infinite-dimensional vector spaces.
An important development of vector spaces is due to the construction of function spaces by Henri Lebesgue. This was later formalized by Banach and Hilbert, around 1920. At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces.

Examples

Arrows in the plane


The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows, and is denoted v + w. In the special case of two arrows on the same line, their sum is the arrow on this line whose length is the sum or the difference of the lengths, depending on whether the arrows have the same direction. Another operation that can be done with arrows is scaling: given any positive real number a, the arrow that has the same direction as v, but is dilated or shrunk by multiplying its length by a, is called multiplication of v by a. It is denoted av. When a is negative, av is defined as the arrow pointing in the opposite direction instead.
The following shows a few examples: if a = 2, the resulting vector aw has the same direction as w, but is stretched to the double length of w. Equivalently, 2w is the sum w + w. Moreover, (−1)v = −v has the opposite direction and the same length as v.
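Arrows starting at a fixed origin can be modeled as coordinate pairs, which makes the examples above checkable. A small illustrative sketch (the particular arrows are arbitrary choices, not from the article):

```python
# Sketch: arrows from the origin in the plane as coordinate pairs;
# the parallelogram sum and the scaling described above.
import math

def add(v, w):
    """Parallelogram sum of two arrows from the origin."""
    return (v[0] + w[0], v[1] + w[1])

def scale(a, v):
    """Dilate or shrink an arrow by a; negative a reverses direction."""
    return (a * v[0], a * v[1])

w = (1.0, 2.0)

assert scale(2.0, w) == add(w, w)                     # 2w is the sum w + w
assert scale(-1.0, w) == (-1.0, -2.0)                 # opposite direction
assert math.hypot(*scale(-1.0, w)) == math.hypot(*w)  # same length as w
```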