Entropy estimation
In various science and engineering applications, such as independent component analysis, image analysis, genetic analysis, speech recognition, manifold learning, and time delay estimation, it is useful to estimate the differential entropy of a system or process, given some observations.
The simplest and most common approach is histogram-based estimation, but other approaches have been developed and used, each with its own benefits and drawbacks. The main factor in choosing a method is often a trade-off between the bias and the variance of the estimate, although the nature of the data's distribution, the sample size, and the size of the distribution's alphabet may also be factors.
Histogram estimator
The histogram approach uses the idea that the differential entropy of a probability distribution \(f(x)\) for a continuous random variable \(X\),

\[
H(X) = -\int f(x) \log f(x) \, dx ,
\]

can be approximated by first approximating \(f(x)\) with a histogram of the observations, and then finding the discrete entropy of a quantization of \(f(x)\),

\[
\hat{H}(X) = -\sum_{i=1}^{k} p_i \log \frac{p_i}{w_i} ,
\]

with the bin probabilities \(p_i\) given by that histogram and \(w_i\) the width of the \(i\)-th bin. The histogram is itself a maximum-likelihood (ML) estimate of the discretized frequency distribution. Histograms can be quick to calculate, and simple, so this approach has some attraction. However, the estimate produced is biased, and although corrections can be made to the estimate, they may not always be satisfactory.
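As an illustration, a minimal NumPy sketch of this plug-in estimate (the function name and the default bin count are illustrative choices, not part of the method itself) might look like:

```python
import numpy as np

def histogram_entropy(samples, bins=30):
    """Plug-in histogram estimate of differential entropy, in nats.

    Approximates H(X) = -sum_i p_i * log(p_i / w_i), where p_i is the
    empirical probability of bin i and w_i is its width.
    """
    counts, edges = np.histogram(samples, bins=bins)
    widths = np.diff(edges)
    p = counts / counts.sum()
    nz = p > 0                      # empty bins contribute nothing to the sum
    return -np.sum(p[nz] * np.log(p[nz] / widths[nz]))
```

For a large standard normal sample this should approach the true value \(\tfrac{1}{2}\log(2\pi e) \approx 1.42\) nats, though the bias discussed above depends strongly on the choice of bin count.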
A method better suited for multidimensional probability density functions is to first make a pdf estimate with some method, and then compute the entropy from the pdf estimate. One useful pdf estimation method is Gaussian mixture modeling, where the expectation-maximization (EM) algorithm is used to find an ML estimate of a weighted sum of Gaussian pdfs that approximates the data pdf.
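A minimal sketch of this two-step approach, assuming scikit-learn's EM-based GaussianMixture (the component count and the number of Monte Carlo draws are arbitrary illustrative choices): since a Gaussian mixture has no closed-form entropy, \(H = -\mathbb{E}[\log f(X)]\) is approximated below by averaging the fitted log-density over samples drawn from the fitted model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_entropy(samples, n_components=5, n_mc=100_000, seed=0):
    """Entropy (nats) of a Gaussian mixture fitted to the data by EM."""
    samples = np.asarray(samples, dtype=float)
    if samples.ndim == 1:
        samples = samples.reshape(-1, 1)      # sklearn expects 2-D input
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(samples)                          # EM fit of the mixture
    x, _ = gmm.sample(n_mc)                   # draws from the fitted mixture
    return -np.mean(gmm.score_samples(x))     # score_samples = log density
```

The Monte Carlo step is one common workaround for the missing closed form; a quadrature over the fitted density would serve equally well in low dimensions.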
Estimates based on sample-spacings
If the data is one-dimensional, we can imagine taking all the observations and putting them in order of their value. The spacing between one value and the next then gives a rough idea of the probability density in that region: the closer together the values are, the higher the probability density. This is a very rough estimate with high variance, but it can be improved, for example by considering the spacing between a given value and the one m places away from it, where m is some fixed integer. The probability density estimated in this way can then be used to calculate the entropy estimate, in a similar way to that given above for the histogram, but with some slight corrections.
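Here is a minimal sketch of one well-known member of this family, Vasicek's m-spacing estimator; the default \(m \approx \sqrt{N}\) is a common heuristic rather than part of the estimator itself.

```python
import numpy as np

def vasicek_entropy(samples, m=None):
    """Vasicek m-spacing estimate of differential entropy, in nats.

    The density near each order statistic is inferred from the width of
    the window spanning m sorted neighbors on either side.
    """
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    if m is None:
        m = max(1, int(np.sqrt(n)))     # common heuristic spacing order
    idx = np.arange(n)
    hi = np.minimum(idx + m, n - 1)     # clamp at the extreme order statistics
    lo = np.maximum(idx - m, 0)
    spacings = x[hi] - x[lo]            # assumes distinct sample values
    return np.mean(np.log(n * spacings / (2 * m)))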
One of the main drawbacks of this approach is going beyond one dimension: the idea of lining the data points up in order falls apart in more than one dimension. However, analogous methods, based for example on nearest-neighbor distances, have been used to develop multidimensional entropy estimators.
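As a sketch of one such method, the following implements the Kozachenko-Leonenko nearest-neighbor estimator, which replaces the spacing to the m-th ordered value with the distance to the k-th nearest neighbor (the value of k and the use of SciPy's k-d tree are illustrative choices):

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma, gammaln

def knn_entropy(samples, k=3):
    """Kozachenko-Leonenko k-NN estimate of differential entropy, in nats.

    H ~= psi(N) - psi(k) + log(V_d) + (d/N) * sum_i log(eps_i), where
    eps_i is the distance from point i to its k-th nearest neighbor and
    V_d is the volume of the d-dimensional unit ball.
    """
    x = np.asarray(samples, dtype=float)
    n, d = x.shape
    # Query k + 1 neighbors because each point's nearest "neighbor"
    # is the point itself, at distance zero.
    eps = cKDTree(x).query(x, k=k + 1)[0][:, k]
    log_vd = (d / 2) * np.log(np.pi) - gammaln(d / 2 + 1)
    return digamma(n) - digamma(k) + log_vd + d * np.mean(np.log(eps))
```

Like the spacing estimator, this assumes distinct sample points (duplicate observations give zero distances and an undefined logarithm), and the choice of k trades bias against variance.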