Matched filter
In signal processing, the output of the matched filter is given by correlating a known delayed signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio in the presence of additive stochastic noise.
Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected signal is examined for common elements of the out-going signal. Pulse compression is an example of matched filtering. It is so called because the impulse response is matched to input pulse signals. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray observations. Additional applications of note are in seismology and gravitational-wave astronomy.
Matched filtering is a demodulation technique that uses linear time-invariant (LTI) filters to maximize the signal-to-noise ratio.
It was originally also known as a North filter.
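The correlation/convolution description above can be sketched numerically. In the following minimal NumPy example, the template shape, noise level, and offset are all illustrative choices (not from the text); it detects a known template in a noisy signal by convolving with the conjugated, time-reversed template:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative template: a short windowed sinusoid.
template = np.sin(2 * np.pi * 0.1 * np.arange(20)) * np.hanning(20)

# Unknown signal: the template buried in white noise at an assumed offset.
x = rng.normal(0.0, 0.3, 300)
offset = 100
x[offset:offset + len(template)] += template

# Matched filtering: convolve with the conjugated, time-reversed template,
# which is equivalent to cross-correlating x with the template itself.
h = np.conj(template[::-1])
y = np.convolve(x, h, mode="valid")

# The filter output peaks where the template best matches the signal.
print(int(np.argmax(np.abs(y))))
```

With `mode="valid"`, output index j corresponds to aligning the template at position j of the signal, so the peak should fall at (or very near) the chosen offset.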
Derivation
Derivation via matrix algebra
The following section derives the matched filter for a discrete-time system. The derivation for a continuous-time system is similar, with summations replaced with integrals.

The matched filter is the linear filter, $h$, that maximizes the output signal-to-noise ratio,

$y[n] = \sum_{k=-\infty}^{\infty} h[n-k]\, x[k],$

where $x[k]$ is the input as a function of the independent variable $k$, and $y[n]$ is the filtered output. Though we most often express filters as the impulse response of convolution systems, as above, it is easiest to think of the matched filter in the context of the inner product, which we will see shortly.
We can derive the linear filter that maximizes output signal-to-noise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal with a filter that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive stochastic noise, we have the additional challenge of minimizing the output due to noise by choosing a filter that is orthogonal to the noise.
Let us formally define the problem. We seek a filter, $h$, such that we maximize the output signal-to-noise ratio, where the output is the inner product of the filter and the observed signal $x$.
Our observed signal consists of the desirable signal $s$ and additive noise $v$:

$x = s + v.$
Let us define the auto-correlation matrix of the noise, reminding ourselves that this matrix has Hermitian symmetry, a property that will become useful in the derivation:

$R_v = E\{v v^H\},$

where $v^H$ denotes the conjugate transpose of $v$, and $E$ denotes expectation.
Let us call our output, $y$, the inner product of our filter and the observed signal such that

$y = \sum_{k=-\infty}^{\infty} h^*[k]\, x[k] = h^H x = h^H s + h^H v = y_s + y_v.$
We now define the signal-to-noise ratio, which is our objective function, to be the ratio of the power of the output due to the desired signal to the power of the output due to the noise:

$\mathrm{SNR} = \frac{|y_s|^2}{E\{|y_v|^2\}}.$
We rewrite the above:

$\mathrm{SNR} = \frac{|h^H s|^2}{E\{|h^H v|^2\}}.$
We wish to maximize this quantity by choosing $h$. Expanding the denominator of our objective function, we have

$E\{|h^H v|^2\} = E\{(h^H v)(h^H v)^H\} = h^H E\{v v^H\} h = h^H R_v h.$
Now, our $\mathrm{SNR}$ becomes

$\mathrm{SNR} = \frac{|h^H s|^2}{h^H R_v h}.$
We will rewrite this expression with some matrix manipulation. The reason for this seemingly counterproductive measure will become evident shortly. Exploiting the Hermitian symmetry of the auto-correlation matrix, we can write

$\mathrm{SNR} = \frac{\left|(R_v^{1/2} h)^H (R_v^{-1/2} s)\right|^2}{(R_v^{1/2} h)^H (R_v^{1/2} h)}.$
We would like to find an upper bound on this expression. To do so, we first recognize a form of the Cauchy–Schwarz inequality:

$|a^H b|^2 \le (a^H a)(b^H b),$
which is to say that the square of the inner product of two vectors can only be as large as the product of the individual inner products of the vectors. This concept returns to the intuition behind the matched filter: this upper bound is achieved when the two vectors $a$ and $b$ are parallel. We resume our derivation by expressing the upper bound on our $\mathrm{SNR}$ in light of the geometric inequality above:

$\mathrm{SNR} = \frac{\left|(R_v^{1/2} h)^H (R_v^{-1/2} s)\right|^2}{(R_v^{1/2} h)^H (R_v^{1/2} h)} \le \frac{\left[(R_v^{1/2} h)^H (R_v^{1/2} h)\right]\left[(R_v^{-1/2} s)^H (R_v^{-1/2} s)\right]}{(R_v^{1/2} h)^H (R_v^{1/2} h)}.$
Our valiant matrix manipulation has now paid off. We see that the expression for our upper bound can be greatly simplified:

$\mathrm{SNR} \le s^H R_v^{-1} s.$
We can achieve this upper bound if we choose

$R_v^{1/2} h = \alpha R_v^{-1/2} s,$
where $\alpha$ is an arbitrary real number. To verify this, we plug into our expression for the output $\mathrm{SNR}$:

$\mathrm{SNR} = \frac{\left|(R_v^{1/2} h)^H (R_v^{-1/2} s)\right|^2}{(R_v^{1/2} h)^H (R_v^{1/2} h)} = \frac{\alpha^2 \left|(R_v^{-1/2} s)^H (R_v^{-1/2} s)\right|^2}{\alpha^2\, (R_v^{-1/2} s)^H (R_v^{-1/2} s)} = (R_v^{-1/2} s)^H (R_v^{-1/2} s) = s^H R_v^{-1} s.$
Thus, our optimal matched filter is

$h = \alpha R_v^{-1} s.$
We often choose to normalize the expected value of the power of the filter output due to the noise to unity. That is, we constrain

$E\{|y_v|^2\} = h^H R_v h = 1.$
This constraint implies a value of $\alpha$, for which we can solve:

$\alpha^2 s^H R_v^{-1} s = 1,$
yielding

$\alpha = \frac{1}{\sqrt{s^H R_v^{-1} s}},$
giving us our normalized filter,

$h = \frac{1}{\sqrt{s^H R_v^{-1} s}}\, R_v^{-1} s.$
If we care to write the impulse response of the filter for the convolution system, it is simply the complex conjugate time reversal of the input.
Though we have derived the matched filter in discrete time, we can extend the concept to continuous-time systems if we replace $R_v$ with the continuous-time autocorrelation function of the noise, assuming a continuous signal $s(t)$, continuous noise $v(t)$, and a continuous filter $h(t)$.
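As a numerical sanity check of this derivation, the sketch below (the signal vector and the AR(1)-style noise model are illustrative assumptions) builds the normalized filter $h = R_v^{-1}s / \sqrt{s^H R_v^{-1} s}$ and verifies both the unit-noise-power constraint and that the achieved SNR meets the bound $s^H R_v^{-1} s$:

```python
import numpy as np

n = 8
# Known (real-valued) signal vector s -- an illustrative choice.
s = np.cos(2 * np.pi * 0.2 * np.arange(n))

# Noise auto-correlation matrix for AR(1)-style colored noise,
# R_v[i, j] = rho ** |i - j| (a common illustrative model).
rho = 0.6
R = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Normalized matched filter h = R_v^{-1} s / sqrt(s^H R_v^{-1} s).
Rinv_s = np.linalg.solve(R, s)
bound = s @ Rinv_s                  # the SNR upper bound s^H R_v^{-1} s
h = Rinv_s / np.sqrt(bound)

# The unit-noise-power constraint h^H R_v h = 1 holds ...
print(round(h @ R @ h, 6))          # -> 1.0
# ... and the achieved SNR |h^H s|^2 / (h^H R_v h) meets the bound.
print(np.isclose((h @ s) ** 2 / (h @ R @ h), bound))  # -> True
```

Solving the linear system with `np.linalg.solve` avoids forming $R_v^{-1}$ explicitly, which is the usual numerically preferable route.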
Derivation via Lagrangian
Alternatively, we may solve for the matched filter by solving our maximization problem with a Lagrangian. Again, the matched filter endeavors to maximize the output signal-to-noise ratio of a filtered deterministic signal in stochastic additive noise. The observed sequence, again, is

$x = s + v,$

with the noise auto-correlation matrix

$R_v = E\{v v^H\}.$
The signal-to-noise ratio is

$\mathrm{SNR} = \frac{|y_s|^2}{E\{|y_v|^2\}},$

where $y_s = h^H s$ and $y_v = h^H v$.
Evaluating the expression in the numerator, we have

$|y_s|^2 = y_s^* y_s = h^H s s^H h,$
and in the denominator,

$E\{|y_v|^2\} = E\{y_v^* y_v\} = E\{h^H v v^H h\} = h^H R_v h.$
The signal-to-noise ratio becomes

$\mathrm{SNR} = \frac{h^H s s^H h}{h^H R_v h}.$
If we now constrain the denominator to be 1, the problem of maximizing $\mathrm{SNR}$ is reduced to maximizing the numerator. We can then formulate the problem using a Lagrange multiplier:

$\mathcal{L} = h^H s s^H h + \lambda\left(1 - h^H R_v h\right),$
and setting the gradient of $\mathcal{L}$ with respect to $h^H$ to zero yields

$s s^H h = \lambda R_v h,$

which we recognize as a generalized eigenvalue problem.
Since $s s^H$ is of unit rank, it has only one nonzero eigenvalue. It can be shown that this eigenvalue equals

$\lambda_{\max} = s^H R_v^{-1} s,$
yielding the following optimal matched filter:

$h = \frac{1}{\sqrt{s^H R_v^{-1} s}}\, R_v^{-1} s.$
This is the same result found in the previous subsection.
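The rank-one eigenvalue claim can be checked numerically. The sketch below (an arbitrary positive-definite $R_v$ and a random $s$, both illustrative) converts the generalized eigenvalue problem into an ordinary one for $R_v^{-1} s s^H$ and confirms that its single nonzero eigenvalue equals $s^H R_v^{-1} s$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 6

s = rng.normal(size=n)

# An arbitrary symmetric positive-definite noise auto-correlation matrix.
A = rng.normal(size=(n, n))
R = A @ A.T + n * np.eye(n)

# s s^H h = lambda R_v h  is equivalent to  (R_v^{-1} s s^H) h = lambda h.
M = np.linalg.solve(R, np.outer(s, s))
eigvals = np.linalg.eigvals(M)

# The rank-one matrix s s^H yields one nonzero eigenvalue, which should
# match s^H R_v^{-1} s from the derivation.
lam = eigvals.real.max()
bound = s @ np.linalg.solve(R, s)
print(np.isclose(lam, bound))  # -> True
```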
Interpretation as a least-squares estimator
Derivation
Matched filtering can also be interpreted as a least-squares estimator for the optimal location and scaling of a given model or template. Once again, let the observed sequence be defined as

$x_k = s_k + v_k,$

where $v_k$ is uncorrelated zero-mean noise. The signal is assumed to be a scaled and shifted version of a known model sequence $f_k$:

$s_k = \mu_0\, f_{k - j_0}.$
We want to find optimal estimates $j^*$ and $\mu^*$ for the unknown shift and scaling by minimizing the least-squares residual between the observed sequence $x_k$ and a "probing sequence" $h_{j-k}$:

$j^*, \mu^* = \arg\min_{j,\,\mu} \sum_k \left(x_k - \mu\, h_{j-k}\right)^2.$
The appropriate $h_{j-k}$ will later turn out to be the matched filter, but is as yet unspecified. Expanding $x_k$ and the square within the sum yields

$j^*, \mu^* = \arg\min_{j,\,\mu} \left[\sum_k x_k^2\right] + \mu^2 \sum_k h_{j-k}^2 - 2\mu \sum_k s_k h_{j-k} - 2\mu \sum_k v_k h_{j-k}.$
The first term in brackets is a constant and has no influence on the optimal solution. The last term has constant expected value because the noise is uncorrelated and has zero mean. We can therefore drop both terms from the optimization. After reversing the sign, we obtain the equivalent optimization problem

$j^*, \mu^* = \arg\max_{j,\,\mu} \left[2\mu \sum_k s_k h_{j-k} - \mu^2 \sum_k h_{j-k}^2\right].$
Setting the derivative w.r.t. $\mu$ to zero gives an analytic solution for $\mu^*$:

$\mu^* = \frac{\sum_k s_k h_{j-k}}{\sum_k h_{j-k}^2}.$
Inserting this into our objective function yields a reduced maximization problem for just $j^*$:

$j^* = \arg\max_j \frac{\left(\sum_k s_k h_{j-k}\right)^2}{\sum_k h_{j-k}^2}.$
The numerator can be upper-bounded by means of the Cauchy–Schwarz inequality:

$\frac{\left(\sum_k s_k h_{j-k}\right)^2}{\sum_k h_{j-k}^2} \le \frac{\sum_k s_k^2 \cdot \sum_k h_{j-k}^2}{\sum_k h_{j-k}^2} = \sum_k s_k^2 = \mathrm{constant}.$
The optimization problem assumes its maximum when equality holds in this expression. According to the properties of the Cauchy–Schwarz inequality, this is only possible when

$h_{j-k} = \nu\, s_k = \kappa\, f_{k - j_0},$
for arbitrary non-zero constants $\nu$ or $\kappa$, and the optimal solution is obtained at $j^* = j_0$ as desired. Thus, our "probing sequence" $h_{j-k}$ must be proportional to the signal model $f_{k - j_0}$, and the convenient choice $\kappa = 1$ yields the matched filter

$h_k = f_{-k}.$
Note that the filter is the mirrored signal model. This ensures that the operation to be applied in order to find the optimum is indeed the convolution between the observed sequence and the matched filter. The filtered sequence assumes its maximum at the position where the observed sequence best matches the signal model.
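The least-squares reading suggests a direct recipe for estimating the shift and scale. The following sketch (pulse shape, noise level, and the true $j_0$, $\mu_0$ are all illustrative assumptions) locates the correlation peak and then applies the analytic $\mu^*$ formula, with the observed $x_k$ standing in for the noise-free $s_k$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Known model sequence f_k: an illustrative triangular pulse.
f = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])

# Observation x_k = mu0 * f_{k - j0} + noise, with assumed true values.
j0, mu0 = 40, 2.5
x = rng.normal(0.0, 0.2, 120)
x[j0:j0 + len(f)] += mu0 * f

# Correlating x with f equals convolving x with the mirrored model f_{-k}.
corr = np.correlate(x, f, mode="valid")   # corr[j] = sum_k x[j + k] * f[k]

# Shift estimate: position of the correlation peak.
j_hat = int(np.argmax(corr))

# Scale estimate mu* = (sum_k x_k h_{j-k}) / (sum_k h_{j-k}^2) at the peak.
mu_hat = corr[j_hat] / np.sum(f ** 2)

print(j_hat, round(mu_hat, 2))
```

The peak position recovers the shift, and the normalized peak height recovers the scale, which is exactly the two-parameter least-squares fit described above.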
Implications
The matched filter may be derived in a variety of ways, but as a special case of a least-squares procedure it may also be interpreted as a maximum likelihood method in the context of a Gaussian noise model and the associated Whittle likelihood. If the transmitted signal possessed no unknown parameters, then the matched filter would, according to the Neyman–Pearson lemma, minimize the error probability. However, since the exact signal generally is determined by unknown parameters that effectively are estimated in the filtering process, the matched filter constitutes a generalized maximum likelihood statistic. The filtered time series may then be interpreted as the profile likelihood, the maximized conditional likelihood as a function of the time parameter.
This implies in particular that the error probability is not necessarily optimal.
What is commonly referred to as the signal-to-noise ratio (SNR), which is supposed to be maximized by a matched filter, in this context corresponds to $\sqrt{2\log(\mathcal{L})}$, where $\mathcal{L}$ is the maximized likelihood ratio.
The construction of the matched filter is based on a known noise spectrum. In practice, however, the noise spectrum is usually estimated from data and hence only known up to a limited precision. For the case of an uncertain spectrum, the matched filter may be generalized to a more robust iterative procedure with favourable properties also in non-Gaussian noise.
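In practice this often means weighting in the frequency domain by an estimated noise spectrum. The sketch below is purely illustrative: the moving-average noise model, template, and offset are assumptions, and the averaged-periodogram spectrum estimate is deliberately crude, mirroring the limited precision mentioned above:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 256

def colored_noise(size):
    # White noise through a short moving average: a stand-in for a noise
    # process whose spectrum is not known in closed form.
    w = rng.normal(size=size + 4)
    return np.convolve(w, np.ones(5) / 5, mode="valid")

# Estimate the noise power spectrum from noise-only segments by averaging
# periodograms; the estimate is only accurate up to limited precision.
segs = np.stack([colored_noise(n) for _ in range(200)])
P = np.mean(np.abs(np.fft.fft(segs, axis=1)) ** 2, axis=0)

# Template, and data containing it at an assumed position.
tmpl = np.sin(2 * np.pi * 0.3 * np.arange(16)) * np.hanning(16)
s = np.zeros(n)
s[100:100 + len(tmpl)] = tmpl
x = s + colored_noise(n)

# Frequency-domain matched filter: correlate after weighting by 1 / P(f).
y = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(s)) / P).real

print(int(np.argmax(y)))  # lag of the best match (0 when s is aligned with x)
```

Dividing by the estimated spectrum de-emphasizes frequency bands where the noise is strong, which is the frequency-domain counterpart of the $R_v^{-1}$ factor in the vector derivations.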