State-space representation
In control engineering and system identification, a state-space representation is a mathematical model of a physical system that uses state variables to track how inputs shape system behavior over time through first-order differential equations or difference equations. These state variables change based on their current values and inputs, while outputs depend on the states and sometimes the inputs too. The state space is a geometric space where the axes are these state variables, and the system’s state is represented by a state vector.
For linear, time-invariant, and finite-dimensional systems, the equations can be written in matrix form, offering a compact alternative to the frequency domain's Laplace transforms for multiple-input and multiple-output systems. Unlike the frequency-domain approach, the state-space representation is not limited to systems with linear components and zero initial conditions. This approach turns systems theory into an algebraic framework, making it possible to use Kronecker vector-matrix structures for efficient analysis.
State-space models are applied in fields such as economics, statistics, computer science, electrical engineering, and neuroscience. In econometrics, for example, state-space models can be used to decompose a time series into trend and cycle, compose individual indicators into a composite index, identify turning points of the business cycle, and estimate GDP using latent and unobserved time series. Many applications rely on the Kalman filter or a state observer to produce estimates of the unknown current state variables from previous observations.
State variables
The internal state variables are the smallest possible subset of system variables that can represent the entire state of the system at any given time. The minimum number of state variables required to represent a given system, $n$, is usually equal to the order of the system's defining differential equation, but not necessarily. If the system is represented in transfer function form, the minimum number of state variables is equal to the order of the transfer function's denominator after it has been reduced to a proper fraction. It is important to understand that converting a state-space realization to a transfer function form may lose some internal information about the system, and may provide a description of a system which is stable when the state-space realization is unstable at certain points. In electric circuits, the number of state variables is often, though not always, the same as the number of energy storage elements in the circuit, such as capacitors and inductors. The state variables defined must be linearly independent, i.e., no state variable can be written as a linear combination of the other state variables.
Linear systems
The most general state-space representation of a linear system with $p$ inputs, $q$ outputs and $n$ state variables is written in the following form:

$$ \dot{\mathbf{x}}(t) = A(t)\mathbf{x}(t) + B(t)\mathbf{u}(t) $$
$$ \mathbf{y}(t) = C(t)\mathbf{x}(t) + D(t)\mathbf{u}(t) $$

where:
- $\mathbf{x}(\cdot)$ is the state vector, $\mathbf{x}(t) \in \mathbb{R}^{n}$;
- $\mathbf{y}(\cdot)$ is the output vector, $\mathbf{y}(t) \in \mathbb{R}^{q}$;
- $\mathbf{u}(\cdot)$ is the input (or control) vector, $\mathbf{u}(t) \in \mathbb{R}^{p}$;
- $A(\cdot)$ is the state (or system) matrix, of dimension $n \times n$;
- $B(\cdot)$ is the input matrix, of dimension $n \times p$;
- $C(\cdot)$ is the output matrix, of dimension $q \times n$;
- $D(\cdot)$ is the feedthrough (or feedforward) matrix, of dimension $q \times p$.
In this general formulation, all matrices are allowed to be time-variant; however, in the common LTI case, all matrices are time-invariant. The time variable $t$ can be continuous or discrete. In the latter case, the time variable $k$ is usually used instead of $t$. Hybrid systems allow for time domains that have both continuous and discrete parts. Depending on the assumptions made, the state-space model representation can assume the following forms:
| System type | State-space model |
| --- | --- |
| Continuous time-invariant | $\dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t)$; $\mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t)$ |
| Continuous time-variant | $\dot{\mathbf{x}}(t) = A(t)\mathbf{x}(t) + B(t)\mathbf{u}(t)$; $\mathbf{y}(t) = C(t)\mathbf{x}(t) + D(t)\mathbf{u}(t)$ |
| Explicit discrete time-invariant | $\mathbf{x}(k+1) = A\mathbf{x}(k) + B\mathbf{u}(k)$; $\mathbf{y}(k) = C\mathbf{x}(k) + D\mathbf{u}(k)$ |
| Explicit discrete time-variant | $\mathbf{x}(k+1) = A(k)\mathbf{x}(k) + B(k)\mathbf{u}(k)$; $\mathbf{y}(k) = C(k)\mathbf{x}(k) + D(k)\mathbf{u}(k)$ |
| Laplace domain of continuous time-invariant | $s\mathbf{X}(s) - \mathbf{x}(0) = A\mathbf{X}(s) + B\mathbf{U}(s)$; $\mathbf{Y}(s) = C\mathbf{X}(s) + D\mathbf{U}(s)$ |
| Z-domain of discrete time-invariant | $z\mathbf{X}(z) - z\mathbf{x}(0) = A\mathbf{X}(z) + B\mathbf{U}(z)$; $\mathbf{Y}(z) = C\mathbf{X}(z) + D\mathbf{U}(z)$ |
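As a concrete illustration of the continuous time-invariant form, the following minimal sketch builds a state-space model and simulates its step response with SciPy; the particular matrices $A$, $B$, $C$, $D$ are assumptions chosen only for demonstration.

```python
import numpy as np
from scipy import signal

# Example continuous-time LTI system  x' = A x + B u,  y = C x + D u
# (matrices chosen arbitrarily for illustration)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

sys = signal.StateSpace(A, B, C, D)

# Step response: output y(t) for a unit-step input u(t) = 1, zero initial state
t, y = signal.step(sys, T=np.linspace(0, 10, 500))
print(y[-1])   # steady-state output, here -C A^{-1} B = 0.5
```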
Example: continuous-time LTI case
Stability and natural response characteristics of a continuous-time LTI system can be studied from the eigenvalues of the matrix $A$. The stability of a time-invariant state-space model can be determined by looking at the system's transfer function in factored form. It will then look something like this:

$$ \mathbf{G}(s) = k \frac{(s - z_{1})(s - z_{2})(s - z_{3})}{(s - p_{1})(s - p_{2})(s - p_{3})(s - p_{4})} $$

The denominator of the transfer function is equal to the characteristic polynomial found by taking the determinant of $sI - A$:

$$ \lambda(s) = |sI - A|. $$
The roots of this polynomial are the system transfer function's poles. These poles can be used to analyze whether the system is asymptotically stable or marginally stable. An alternative approach to determining stability, which does not involve calculating eigenvalues, is to analyze the system's Lyapunov stability.
The zeros found in the numerator of $\mathbf{G}(s)$ can similarly be used to determine whether the system is minimum phase.
The system may still be input–output stable even though it is not internally stable. This may be the case if unstable poles are canceled out by zeros.
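As a minimal numerical check of the eigenvalue criterion (a sketch with an arbitrarily chosen example matrix, not a substitute for a full Lyapunov analysis):

```python
import numpy as np

# Asymptotic stability of a continuous-time LTI system requires all
# eigenvalues of A to have strictly negative real parts.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])       # example matrix (assumed for illustration)

eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)                   # system poles: -1 and -2
print(np.all(eigenvalues.real < 0))  # True -> asymptotically stable
```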
Controllability
The state controllability condition implies that it is possible – by admissible inputs – to steer the states from any initial value to any final value within some finite time window. A continuous time-invariant linear state-space model is controllable if and only if

$$ \operatorname{rank}\begin{bmatrix} B & AB & A^{2}B & \cdots & A^{n-1}B \end{bmatrix} = n, $$

where rank is the number of linearly independent rows in a matrix, and where $n$ is the number of state variables.
Observability
Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. The observability and controllability of a system are mathematical duals. A continuous time-invariant linear state-space model is observable if and only if

$$ \operatorname{rank}\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{bmatrix} = n. $$
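Both rank conditions can be checked numerically. The sketch below (with small example matrices assumed for illustration) forms the controllability matrix $[B \; AB \; \cdots \; A^{n-1}B]$ and the observability matrix $[C; CA; \ldots; CA^{n-1}]$ and compares their ranks with $n$.

```python
import numpy as np

def controllability_matrix(A, B):
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def observability_matrix(A, C):
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

# Example system (matrices assumed for illustration)
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

print(np.linalg.matrix_rank(controllability_matrix(A, B)) == n)  # controllable?
print(np.linalg.matrix_rank(observability_matrix(A, C)) == n)    # observable?
```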
Transfer function
The "transfer function" of a continuous time-invariant linear state-space model can be derived in the following way:First, taking the Laplace transform of
yields
Next, we simplify for, giving
and thus
Substituting for in the output equation
giving
Assuming zero initial conditions and a single-input single-output system, the transfer function is defined as the ratio of output and input. For a multiple-input multiple-output system, however, this ratio is not defined. Therefore, assuming zero initial conditions, the transfer function matrix is derived from
using the method of equating the coefficients which yields
Consequently, $\mathbf{G}(s)$ is a matrix with dimension $q \times p$ which contains transfer functions for each input–output combination. Due to the simplicity of this matrix notation, the state-space representation is commonly used for multiple-input, multiple-output systems. The Rosenbrock system matrix provides a bridge between the state-space representation and its transfer function.
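In practice $\mathbf{G}(s) = C(sI - A)^{-1}B + D$ can either be evaluated directly or converted to polynomial form with `scipy.signal.ss2tf`. The sketch below shows both routes for a single-input single-output case; the matrices are the same assumed example used above.

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

# Numerator/denominator coefficients of G(s) (powers of s, highest first)
num, den = signal.ss2tf(A, B, C, D)
print(num, den)

# Direct evaluation of G(s) at a single frequency point s = 1j
s = 1j
G = C @ np.linalg.inv(s * np.eye(2) - A) @ B + D
print(G)
```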
Canonical realizations
Any given transfer function which is strictly proper can easily be transferred into state-space by the following approach (shown here for a fourth-order, single-input single-output example):

Given a transfer function, expand it to reveal all coefficients in both the numerator and denominator. This should result in the following form:

$$ \mathbf{G}(s) = \frac{n_{1}s^{3} + n_{2}s^{2} + n_{3}s + n_{4}}{s^{4} + d_{1}s^{3} + d_{2}s^{2} + d_{3}s + d_{4}}. $$
The coefficients can now be inserted directly into the state-space model by the following approach:

$$ \dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ -d_{4} & -d_{3} & -d_{2} & -d_{1} \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix} \mathbf{u}(t) $$

$$ \mathbf{y}(t) = \begin{bmatrix} n_{4} & n_{3} & n_{2} & n_{1} \end{bmatrix} \mathbf{x}(t). $$
This state-space realization is called controllable canonical form because the resulting model is guaranteed to be controllable.
The transfer function coefficients can also be used to construct another type of canonical form:

$$ \dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 0 & 0 & -d_{4} \\ 1 & 0 & 0 & -d_{3} \\ 0 & 1 & 0 & -d_{2} \\ 0 & 0 & 1 & -d_{1} \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} n_{4} \\ n_{3} \\ n_{2} \\ n_{1} \end{bmatrix} \mathbf{u}(t) $$

$$ \mathbf{y}(t) = \begin{bmatrix} 0 & 0 & 0 & 1 \end{bmatrix} \mathbf{x}(t). $$
This state-space realization is called observable canonical form because the resulting model is guaranteed to be observable.
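A minimal sketch of building these realizations from transfer-function coefficients follows the same recipe: fill the companion matrix from the denominator and read the output row from the numerator. The helper function and example coefficients below are assumptions for illustration; the observable form is then obtained as the dual $(A^{\mathsf{T}}, C^{\mathsf{T}}, B^{\mathsf{T}})$.

```python
import numpy as np

def controllable_canonical(num, den):
    """Controllable canonical realization of a strictly proper num(s)/den(s),
    using the convention above: -d coefficients in the last row of A and
    B = [0, ..., 0, 1]^T."""
    num = np.asarray(num, dtype=float)
    den = np.asarray(den, dtype=float)
    num = num / den[0]
    den = den / den[0]                       # make denominator monic
    n = len(den) - 1
    num = np.concatenate([np.zeros(n - len(num)), num])   # pad numerator
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)               # super-diagonal of ones
    A[-1, :] = -den[:0:-1]                   # last row: -d_n, ..., -d_1
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    C = num[::-1].reshape(1, n)              # n_n, ..., n_1
    return A, B, C

# Example: G(s) = (s + 2) / (s^2 + 2 s + 1)   (assumed for illustration)
A, B, C = controllable_canonical([1.0, 2.0], [1.0, 2.0, 1.0])
print(A, B, C, sep="\n")

# Observable canonical form is the dual realization
Ao, Bo, Co = A.T, C.T, B.T
```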
Proper transfer functions
Transfer functions which are only proper (and not strictly proper) can also be realised quite easily. The trick here is to separate the transfer function into two parts: a strictly proper part and a constant,

$$ \mathbf{G}(s) = \mathbf{G}_{\mathrm{SP}}(s) + \mathbf{G}(\infty). $$

The strictly proper transfer function $\mathbf{G}_{\mathrm{SP}}(s)$ can then be transformed into a canonical state-space realization using techniques shown above. The state-space realization of the constant is trivially $\mathbf{y}(t) = \mathbf{G}(\infty)\mathbf{u}(t)$. Together we then get a state-space realization with matrices $A$, $B$ and $C$ determined by the strictly proper part, and matrix $D$ determined by the constant.
Here is an example to clear things up a bit:

$$ \mathbf{G}(s) = \frac{s^{2} + 3s + 3}{s^{2} + 2s + 1} = \frac{s + 2}{s^{2} + 2s + 1} + 1 $$

which yields the following controllable realization

$$ \dot{\mathbf{x}}(t) = \begin{bmatrix} 0 & 1 \\ -1 & -2 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 0 \\ 1 \end{bmatrix} \mathbf{u}(t) $$

$$ \mathbf{y}(t) = \begin{bmatrix} 2 & 1 \end{bmatrix} \mathbf{x}(t) + \begin{bmatrix} 1 \end{bmatrix} \mathbf{u}(t). $$
Notice how the output also depends directly on the input. This is due to the constant in the transfer function.
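A quick numerical check of this decomposition (a sketch using the worked example above): realizing the full proper transfer function with `scipy.signal.tf2ss` returns a nonzero feedthrough matrix equal to the constant part $\mathbf{G}(\infty)$, even though SciPy's internal state ordering may differ from the realization written above.

```python
from scipy import signal

# G(s) = (s^2 + 3 s + 3) / (s^2 + 2 s + 1) -- proper, but not strictly proper
A, B, C, D = signal.tf2ss([1, 3, 3], [1, 2, 1])
print(D)   # [[1.]] -- the constant (feedthrough) part G(inf)
```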
Feedback
A common method for feedback is to multiply the output by a matrix $K$ and setting this as the input to the system: $\mathbf{u}(t) = K\mathbf{y}(t)$. Since the values of $K$ are unrestricted, the values can easily be negated for negative feedback.
The presence of a negative sign (the common notation) is merely a notational one and its absence has no impact on the end results.

$$ \dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t) $$
$$ \mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t) $$

becomes

$$ \dot{\mathbf{x}}(t) = A\mathbf{x}(t) + BK\mathbf{y}(t) $$
$$ \mathbf{y}(t) = C\mathbf{x}(t) + DK\mathbf{y}(t); $$

solving the output equation for $\mathbf{y}(t)$ and substituting into the state equation results in

$$ \dot{\mathbf{x}}(t) = \left( A + BK(I - DK)^{-1}C \right)\mathbf{x}(t) $$
$$ \mathbf{y}(t) = (I - DK)^{-1}C\mathbf{x}(t). $$

The advantage of this is that the eigenvalues of $A$ can be controlled by setting $K$ appropriately through eigendecomposition of $A + BK(I - DK)^{-1}C$.
This assumes that the open-loop system is controllable or, at least, that the unstable eigenvalues of A can be made stable through an appropriate choice of K (i.e., that the system is stabilizable).
Example
For a strictly proper system $D$ equals zero. Another fairly common situation is when all states are outputs, i.e. $\mathbf{y} = \mathbf{x}$, which yields $C = I$, the identity matrix. This would then result in the simpler equations

$$ \dot{\mathbf{x}}(t) = (A + BK)\mathbf{x}(t) $$
$$ \mathbf{y}(t) = \mathbf{x}(t). $$

This reduces the necessary eigendecomposition to just $A + BK$.
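A sketch of choosing $K$ in this simplified case: `scipy.signal.place_poles` computes a gain $F$ such that the eigenvalues of $A - BF$ land at requested locations, so under this section's sign convention ($\mathbf{u} = K\mathbf{x}$) one takes $K = -F$. The matrices and target pole locations below are assumptions for illustration.

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example open-loop system
B = np.array([[0.0], [1.0]])

# place_poles returns F with eig(A - B F) at the desired locations;
# with the convention u = K x above, use K = -F so the closed loop
# is x' = (A + B K) x.
desired = [-4.0, -5.0]
F = signal.place_poles(A, B, desired).gain_matrix
K = -F

print(np.linalg.eigvals(A + B @ K))   # approximately [-4, -5]
```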
Feedback with setpoint (reference) input
In addition to feedback, an input, $\mathbf{r}(t)$, can be added such that $\mathbf{u}(t) = K\mathbf{y}(t) + \mathbf{r}(t)$.

$$ \dot{\mathbf{x}}(t) = A\mathbf{x}(t) + B\mathbf{u}(t) $$
$$ \mathbf{y}(t) = C\mathbf{x}(t) + D\mathbf{u}(t) $$

becomes

$$ \dot{\mathbf{x}}(t) = A\mathbf{x}(t) + BK\mathbf{y}(t) + B\mathbf{r}(t) $$
$$ \mathbf{y}(t) = C\mathbf{x}(t) + DK\mathbf{y}(t) + D\mathbf{r}(t); $$

solving the output equation for $\mathbf{y}(t)$ and substituting into the state equation results in

$$ \dot{\mathbf{x}}(t) = \left( A + BK(I - DK)^{-1}C \right)\mathbf{x}(t) + B\left( I + K(I - DK)^{-1}D \right)\mathbf{r}(t) $$
$$ \mathbf{y}(t) = (I - DK)^{-1}C\mathbf{x}(t) + (I - DK)^{-1}D\mathbf{r}(t). $$

One fairly common simplification to this system is removing $D$, which reduces the equations to

$$ \dot{\mathbf{x}}(t) = (A + BKC)\mathbf{x}(t) + B\mathbf{r}(t) $$
$$ \mathbf{y}(t) = C\mathbf{x}(t). $$
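Under this simplified form the closed loop with a reference input is just another LTI state-space model and can be simulated directly. The sketch below assumes the same example matrices as before and an arbitrarily chosen output-feedback gain.

```python
import numpy as np
from scipy import signal

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-10.0]])              # example output-feedback gain (assumed)

# Closed loop with reference input r (D removed):
#   x' = (A + B K C) x + B r,   y = C x
Acl = A + B @ K @ C
closed_loop = signal.StateSpace(Acl, B, C, np.zeros((1, 1)))

t = np.linspace(0, 10, 500)
r = np.ones_like(t)                  # unit-step reference
tout, y, x = signal.lsim(closed_loop, U=r, T=t)
print(y[-1])                         # steady-state output (about 1/12 here)
```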
Moving object example
A classical linear system is that of one-dimensional movement of an object (e.g., a cart). Newton's laws of motion for an object moving horizontally on a plane and attached to a wall with a spring:

$$ m\ddot{y}(t) = u(t) - b\dot{y}(t) - ky(t) $$
where
- $y(t)$ is position; $\dot{y}(t)$ is velocity; $\ddot{y}(t)$ is acceleration
- $u(t)$ is an applied force
- $b$ is the viscous friction coefficient
- $k$ is the spring constant
- $m$ is the mass of the object
The state equation would then become

$$ \begin{bmatrix} \dot{x}_{1}(t) \\ \dot{x}_{2}(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -\dfrac{k}{m} & -\dfrac{b}{m} \end{bmatrix} \begin{bmatrix} x_{1}(t) \\ x_{2}(t) \end{bmatrix} + \begin{bmatrix} 0 \\ \dfrac{1}{m} \end{bmatrix} u(t) $$

$$ y(t) = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} x_{1}(t) \\ x_{2}(t) \end{bmatrix} $$

where
- $x_{1}(t)$ represents the position of the object
- $x_{2}(t) = \dot{x}_{1}(t)$ is the velocity of the object
- $\dot{x}_{2}(t) = \ddot{x}_{1}(t)$ is the acceleration of the object
- the output $y(t)$ is the position of the object
The controllability test is then

$$ \begin{bmatrix} B & AB \end{bmatrix} = \begin{bmatrix} 0 & \dfrac{1}{m} \\ \dfrac{1}{m} & -\dfrac{b}{m^{2}} \end{bmatrix}, $$

which has full rank for all $b$ and $m \neq 0$. This means that if the initial state of the system is known, and if $b$ and $m$ are constants, then there is a force $u$ that could move the cart into any other position in the system.
The observability test is then

$$ \begin{bmatrix} C \\ CA \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, $$

which also has full rank. Therefore, this system is both controllable and observable.
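The mass–spring–damper example can also be checked numerically. The sketch below assumes arbitrary positive values for $m$, $k$ and $b$, forms the state-space matrices, and verifies the two rank tests.

```python
import numpy as np

m, k, b = 1.0, 2.0, 0.5          # mass, spring constant, friction (assumed values)

# x1 = position, x2 = velocity; input u = applied force, output y = position
A = np.array([[0.0, 1.0],
              [-k / m, -b / m]])
B = np.array([[0.0],
              [1.0 / m]])
C = np.array([[1.0, 0.0]])

ctrb = np.hstack([B, A @ B])      # [B, AB]
obsv = np.vstack([C, C @ A])      # [C; CA]
print(np.linalg.matrix_rank(ctrb))   # 2 -> controllable
print(np.linalg.matrix_rank(obsv))   # 2 -> observable
```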