Singular matrix


A singular matrix is a square matrix that is not invertible; a matrix that is invertible is called non-singular. Equivalently, an n-by-n matrix A is singular if and only if its determinant is zero, det(A) = 0. In classical linear algebra, a matrix is called non-singular when it has an inverse; by definition, a matrix that fails this criterion is singular. In more algebraic terms, an n-by-n matrix A is singular exactly when its columns are linearly dependent, so that the linear map x ↦ Ax is not one-to-one.
In this case the kernel of A is non-trivial, and the homogeneous system Ax = 0 admits non-zero solutions. These characterizations follow from standard rank-nullity and invertibility theorems: for a square n-by-n matrix A, det(A) = 0 if and only if rank(A) < n, and A is invertible if and only if rank(A) = n.
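These determinant and rank conditions can be checked numerically. The following sketch (an illustrative example, not taken from the article) uses NumPy on a 2-by-2 matrix whose columns are linearly dependent:

```python
# Illustrative check of singularity via determinant and rank,
# assuming a small example matrix with dependent columns.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # second column = 2 * first column

det = np.linalg.det(A)           # zero (up to rounding) for a singular matrix
rank = np.linalg.matrix_rank(A)  # rank < n signals a non-trivial kernel

print(det, rank)

# The homogeneous system A x = 0 has non-zero solutions, e.g. x = (2, -1):
x = np.array([2.0, -1.0])
print(A @ x)
```

Here rank(A) = 1 < 2 and det(A) = 0, so A is singular and its kernel contains every multiple of (2, −1).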

Computational implications

Invertibility of a matrix guarantees that certain transformations, computations, and systems can be reversed and solved uniquely: if A is invertible, the system Ax = b has the single solution x = A⁻¹b. This tells a solver whether a solution exists and whether it is unique.
In Gaussian elimination, invertibility of the coefficient matrix ensures the algorithm produces a unique solution. When the matrix is invertible, every pivot is non-zero (after row swaps if necessary), so the system can be solved; for a singular matrix, some pivot position contains only zeros in its column, which no row swap can fix. The elimination then either breaks down or exposes an inconsistent system. A singular matrix also defeats back substitution, which requires every diagonal entry of the resulting triangular matrix to be non-zero, i.e. u_ii ≠ 0. When the system is consistent despite the singularity, the result is infinitely many solutions rather than one.
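The pivot failure described above can be sketched with a toy elimination routine (a hypothetical helper written for illustration, not a library function); it applies partial pivoting and flags the zero pivot that no row swap can repair:

```python
# Sketch of forward elimination with partial pivoting; a singular matrix
# produces a pivot column of zeros that row swaps cannot fix.
import numpy as np

def forward_eliminate(A, tol=1e-12):
    """Reduce A to row echelon form; return (U, singular_flag)."""
    U = A.astype(float).copy()
    n = U.shape[0]
    singular = False
    for k in range(n):
        # Partial pivoting: pick the largest entry in column k at or below row k.
        p = k + np.argmax(np.abs(U[k:, k]))
        if abs(U[p, k]) < tol:
            singular = True   # zero pivot: no row swap can repair this column
            continue
        U[[k, p]] = U[[p, k]]
        for i in range(k + 1, n):
            U[i, k:] -= (U[i, k] / U[k, k]) * U[k, k:]
    return U, singular

U, singular = forward_eliminate(np.array([[1.0, 2.0],
                                          [2.0, 4.0]]))
print(U)
print(singular)  # a zero diagonal entry makes back substitution impossible
```

On this input the second row eliminates to zeros, so the diagonal entry u_22 vanishes and the routine reports singularity.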

Applications

In mechanical and robotic systems, singular Jacobian matrices indicate kinematic singularities. For example, the Jacobian of a robotic manipulator loses rank when the robot reaches a configuration with constrained motion. At a singular configuration, the robot cannot move or apply forces in certain directions.
In graph theory and network physics, the Laplacian matrix of a graph is inherently singular because each row sums to zero. This reflects the fact that the uniform vector is in its nullspace.
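This Laplacian property can be verified directly; a minimal sketch, assuming a triangle graph as the example:

```python
# The graph Laplacian L = D - A is singular: every row sums to zero,
# so the all-ones vector lies in its nullspace. Triangle graph assumed.
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])        # adjacency matrix of a triangle
D = np.diag(A.sum(axis=1))       # degree matrix
L = D - A

ones = np.ones(3)
print(L @ ones)                  # the zero vector: uniform vector in nullspace
print(np.linalg.matrix_rank(L))  # rank is n - 1 for a connected graph
```

For a connected graph on n vertices the Laplacian has rank n − 1, so the nullspace is exactly the span of the uniform vector.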
In machine learning and statistics, singular matrices frequently appear due to multicollinearity. For instance, a data matrix X leads to a singular covariance or Gram matrix if features are linearly dependent. This occurs in linear regression when predictors are collinear, causing the normal equations matrix XᵀX to be singular. The remedy is often to drop or combine features, or to use the pseudoinverse. Dimension-reduction techniques like Principal Component Analysis exploit the singular value decomposition (SVD): it yields low-rank approximations of data, effectively treating the data covariance as singular by discarding small singular values.
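A short sketch of the collinearity problem and the pseudoinverse remedy (the data here is invented for illustration):

```python
# With collinear predictors, X^T X is singular and the normal equations
# cannot be solved by inversion; the Moore-Penrose pseudoinverse (computed
# via SVD) still yields the minimum-norm least-squares solution.
import numpy as np

X = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])       # second feature = 2 * first: collinear
y = np.array([1.0, 2.0, 3.0])

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))  # rank 1 < 2: normal equations are singular

beta = np.linalg.pinv(X) @ y       # pseudoinverse picks the minimum-norm fit
print(np.allclose(X @ beta, y))    # the fit still reproduces y here
```

Because y lies in the column space of X, the pseudoinverse solution fits exactly even though infinitely many coefficient vectors achieve the same fit; pinv selects the one of smallest norm.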
Certain geometric transformations, such as projections, are modeled by singular matrices, since they collapse a dimension; handling these requires care. In cryptography and coding theory, invertible matrices are used for mixing operations; singular ones would be avoided or detected as errors.

History

The study of singular matrices is rooted in the early history of linear algebra. Determinants were first developed in Japan by Seki in 1683 and in Europe by Leibniz and Cramer in the 1690s as tools for solving systems of equations. Leibniz explicitly recognized that a system has a solution precisely when a certain determinant expression equals zero. In that sense, singularity was understood as the critical condition for solvability. Over the 18th and 19th centuries, mathematicians established many properties of determinants and invertible matrices, formalizing the notion that det(A) = 0 characterizes non-invertibility.
The term "singular matrix" itself emerged later, but the conceptual importance remained. In the 20th century, generalizations like the Moore–Penrose pseudoinverse were introduced to systematically handle singular or non-square cases. As recent scholarship notes, the idea of a pseudoinverse was proposed by E. H. Moore in 1920 and rediscovered by R. Penrose in 1955, reflecting its longstanding utility. The pseudoinverse and singular value decomposition became fundamental in both theory and applications for dealing with singularity. Today, singular matrices are a canonical subject in linear algebra: they delineate the boundary between invertible cases and degenerate cases. In abstract terms, singular matrices correspond to non-isomorphisms in linear mappings and are thus central to the theory of vector spaces and linear transformations.

Example

Consider, for example, the 2-by-2 matrix A with rows (1, 2) and (2, 4). As the second column is a multiple of the first, the determinant is zero (1·4 − 2·2 = 0), and so A is singular. Alternatively, applying Gaussian elimination to A, the second row reduces to zeros, leaving only the single constraint x₁ + 2x₂ = 0 on two unknowns; the homogeneous system Ax = 0 therefore has infinitely many solutions, unlike invertible matrices, which yield unique solutions.