Functional data analysis


Functional data analysis is a branch of statistics that analyses data providing information about curves, surfaces or anything else varying over a continuum. In its most general form, under an FDA framework, each sample element of functional data is considered to be a random function. The physical continuum over which these functions are defined is often time, but may also be spatial location, wavelength, probability, etc. Intrinsically, functional data are infinite dimensional. The high intrinsic dimensionality of these data brings challenges for theory as well as computation, where these challenges vary with how the functional data were sampled. However, the high or infinite dimensional structure of the data is a rich source of information and there are many interesting challenges for research and data analysis.

History

Functional data analysis has roots going back to work by Grenander and Karhunen in the 1940s and 1950s. They considered the decomposition of a square-integrable continuous-time stochastic process into eigencomponents, now known as the Karhunen–Loève decomposition. A rigorous analysis of functional principal component analysis was carried out in the 1970s by Kleffe, Dauxois and Pousse, including results about the asymptotic distribution of the eigenvalues. More recently, in the 1990s and 2000s, the field has focused more on applications and on understanding the effects of dense and sparse observation schemes. The term "Functional Data Analysis" was coined by James O. Ramsay.

Mathematical formalism

Random functions can be viewed as random elements taking values in a Hilbert space, or as a stochastic process. The former is mathematically convenient, whereas the latter is somewhat more suitable from an applied perspective. These two approaches coincide if the random functions are continuous and a condition called mean-squared continuity is satisfied.

Hilbertian random variables

In the Hilbert space viewpoint, one considers an $H$-valued random element $X$, where $H$ is a separable Hilbert space such as the space of square-integrable functions $L^2[0,1]$. Under the integrability condition that $\mathbb{E}\|X\| < \infty$, one can define the mean of $X$ as the unique element $\mu \in H$ satisfying
$$\mathbb{E}\langle X, h\rangle = \langle \mu, h\rangle, \quad h \in H.$$
This formulation is the Pettis integral, but the mean can also be defined as the Bochner integral $\mathbb{E}X$. Under the integrability condition that $\mathbb{E}\|X\|^2$ is finite, the covariance operator of $X$ is the linear operator $\mathcal{C} : H \to H$ that is uniquely defined by the relation
$$\mathcal{C}h = \mathbb{E}[\langle X - \mu, h\rangle (X - \mu)], \quad h \in H,$$
or, in tensor form, $\mathcal{C} = \mathbb{E}[(X - \mu) \otimes (X - \mu)]$. The spectral theorem allows one to decompose $X$ via the Karhunen–Loève decomposition
$$X = \mu + \sum_{i=1}^{\infty} \langle X - \mu, \varphi_i\rangle \varphi_i,$$
where the $\varphi_i$ are eigenvectors of $\mathcal{C}$, corresponding to the nonnegative eigenvalues of $\mathcal{C}$ in non-increasing order. Truncating this infinite series to a finite order underpins functional principal component analysis.

Stochastic processes

The Hilbertian point of view is mathematically convenient, but abstract; the above considerations do not necessarily even view $X$ as a function at all, since common choices of $H$ like $L^2[0,1]$ and Sobolev spaces consist of equivalence classes, not functions. The stochastic process perspective views $X$ as a collection of random variables
$$\{X(t)\}_{t \in [0,1]}$$
indexed by the unit interval. The mean and covariance functions are defined in a pointwise manner as
$$\mu(t) = \mathbb{E}[X(t)], \qquad \Sigma(s,t) = \operatorname{Cov}(X(s), X(t)), \quad s, t \in [0,1].$$
Under mean square continuity, $\mu$ and $\Sigma$ are continuous functions, and the covariance function $\Sigma$ defines a covariance operator $\mathcal{C} : L^2[0,1] \to L^2[0,1]$ given by
$$(\mathcal{C}f)(t) = \int_0^1 \Sigma(s,t) f(s)\, ds.$$
The spectral theorem applies to $\mathcal{C}$, yielding eigenpairs $(\lambda_j, \varphi_j)$, so that in tensor product notation $\mathcal{C}$ writes
$$\mathcal{C} = \sum_{j=1}^{\infty} \lambda_j \varphi_j \otimes \varphi_j.$$
Moreover, since $\mathcal{C}f$ is continuous for all $f \in L^2[0,1]$, all the $\varphi_j$ are continuous. Mercer's theorem then states that
$$\sup_{s,t \in [0,1]} \left| \Sigma(s,t) - \sum_{j=1}^{K} \lambda_j \varphi_j(s) \varphi_j(t) \right| \to 0 \quad \text{as } K \to \infty.$$
Finally, under the extra assumption that $X$ has continuous sample paths, namely that with probability one the random function $X$ is continuous, the Karhunen–Loève expansion above holds for $X$ and the Hilbert space machinery can subsequently be applied. Continuity of sample paths can be shown using the Kolmogorov continuity theorem.
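The eigendecomposition of the covariance operator and the Mercer approximation can be checked numerically by discretizing the operator on a grid. The following NumPy sketch uses standard Brownian motion (covariance $\Sigma(s,t) = \min(s,t)$, chosen here purely for illustration because its eigenpairs $\lambda_j = ((j - \tfrac{1}{2})\pi)^{-2}$, $\varphi_j(t) = \sqrt{2}\sin((j - \tfrac{1}{2})\pi t)$ are known in closed form):

```python
import numpy as np

# Midpoint discretization of [0,1] with uniform quadrature weight.
m = 500
t = (np.arange(m) + 0.5) / m
w = 1.0 / m

# Covariance of standard Brownian motion: Sigma(s,t) = min(s,t).
Sigma = np.minimum.outer(t, t)

# Discretized covariance operator: (Cf)(t) ~ sum_s Sigma(s,t) f(s) w.
# Eigendecomposition of the symmetric matrix Sigma * w approximates
# the eigenpairs (lambda_j, phi_j) of C.
evals, evecs = np.linalg.eigh(Sigma * w)
evals, evecs = evals[::-1], evecs[:, ::-1]   # non-increasing order
phis = evecs / np.sqrt(w)                    # normalize in L^2[0,1]

# Compare with the known analytic eigenvalues lambda_j = ((j-1/2)*pi)^{-2}.
j = np.arange(1, 6)
analytic = 1.0 / ((j - 0.5) * np.pi) ** 2
print(np.max(np.abs(evals[:5] - analytic)))  # small discretization error

# Mercer: the truncated sum_k lambda_k phi_k(s) phi_k(t) approximates Sigma
# uniformly; the sup-norm error shrinks as K grows.
K = 100
Sigma_K = (phis[:, :K] * evals[:K]) @ phis[:, :K].T
print(np.max(np.abs(Sigma - Sigma_K)))
```

The same discretize-then-eigendecompose recipe is what practical FPCA implementations apply to an *estimated* covariance function.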

Functional data designs

Functional data are considered as realizations of a stochastic process $X(t)$, $t \in [0,1]$, that is an $L^2$ process on a bounded and closed interval, with mean function $\mu(t) = \mathbb{E}(X(t))$ and covariance function $\Sigma(s,t) = \operatorname{Cov}(X(s), X(t))$. The realization of the process for the $i$-th subject is $X_i(\cdot)$, and the sample is assumed to consist of $n$ independent subjects. The sampling schedule may vary across subjects, denoted as $T_{i1}, \ldots, T_{iN_i}$ for the $i$-th subject. The corresponding observations are denoted as $X_{ij}$, where $X_{ij} = X_i(T_{ij})$. In addition, the measurement of $X_{ij}$ is assumed to carry random noise $\varepsilon_{ij}$ with $\mathbb{E}(\varepsilon_{ij}) = 0$ and $\operatorname{Var}(\varepsilon_{ij}) = \sigma^2$, independent across $i$ and $j$.

1. Fully observed functions without noise at arbitrarily dense grid

Measurements $Y_{it} = X_i(t)$ available for all $t \in [0,1]$.
Often unrealistic but mathematically convenient.
Real life example: Tecator spectral data.

2. Densely sampled functions with noisy measurements (dense design)

Measurements $Y_{ij} = X_i(T_{ij}) + \varepsilon_{ij}$, where the $T_{ij}$ are recorded on a regular grid
$T_{i1}, \ldots, T_{iN_i}$, and $N_i \to \infty$; this applies to typical functional data.
Real life example: Berkeley Growth Study data and stock data.

3. Sparsely sampled functions with noisy measurements (longitudinal data)

Measurements $Y_{ij} = X_i(T_{ij}) + \varepsilon_{ij}$, where the $T_{ij}$ are random times and their number $N_i$ per subject is random and finite.
Real life example: CD4 count data for AIDS patients.
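The contrast between the dense and sparse designs can be made concrete with a small simulation. The sketch below is illustrative: the process, noise level, and the range of observations per subject are all arbitrary choices, and the underlying curves are built from a two-term Karhunen–Loève truncation so they are cheap to evaluate.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100          # subjects
sigma = 0.1      # measurement-noise standard deviation

# Illustrative L^2 process: X_i(t) = A_i1 sqrt(2) sin(pi t) + A_i2 sqrt(2) cos(pi t)
def X(scores, t):
    return (scores[:, [0]] * np.sqrt(2) * np.sin(np.pi * t)
            + scores[:, [1]] * np.sqrt(2) * np.cos(np.pi * t))

A = rng.normal(0, [1.0, 0.5], size=(n, 2))   # independent subject scores

# Dense design: every subject observed on the same regular grid of N points,
# with additive noise eps_ij. Data form a complete n x N matrix.
N = 51
t_dense = np.linspace(0, 1, N)
Y_dense = X(A, t_dense) + rng.normal(0, sigma, size=(n, N))

# Sparse (longitudinal) design: each subject has a small random number N_i
# of observations at random times T_ij; data are stored as ragged lists.
T_sparse, Y_sparse = [], []
for i in range(n):
    N_i = rng.integers(2, 6)                 # 2 to 5 observations per subject
    T_ij = np.sort(rng.uniform(0, 1, N_i))
    T_sparse.append(T_ij)
    Y_sparse.append(X(A[i:i+1], T_ij)[0] + rng.normal(0, sigma, N_i))

print(Y_dense.shape)    # complete matrix
print(len(Y_sparse))    # ragged records, one per subject
```

The ragged storage in the sparse case is the practical reason sparse designs require different estimation machinery (e.g. pooling observations across subjects to estimate the covariance surface).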

Functional principal component analysis

Functional principal component analysis (FPCA) is the most prevalent tool in FDA, partly because FPCA facilitates dimension reduction of the inherently infinite-dimensional functional data to a finite-dimensional random vector of scores. More specifically, dimension reduction is achieved by expanding the underlying observed random trajectories $X_i(t)$ in a functional basis consisting of the eigenfunctions of the covariance operator of $X$. Consider the covariance operator $\mathcal{C}$ given by $(\mathcal{C}f)(t) = \int_0^1 \Sigma(s,t) f(s)\, ds$, which is a compact operator on the Hilbert space $L^2[0,1]$.
By Mercer's theorem, the kernel of $\mathcal{C}$, i.e., the covariance function $\Sigma(\cdot,\cdot)$, has the spectral decomposition $\Sigma(s,t) = \sum_{k=1}^{\infty} \lambda_k \varphi_k(s) \varphi_k(t)$, where the series convergence is absolute and uniform, the $\lambda_k$ are real-valued nonnegative eigenvalues in descending order, and the $\varphi_k$ are the corresponding orthonormal eigenfunctions. By the Karhunen–Loève theorem, the FPCA expansion of an underlying random trajectory is $X_i(t) = \mu(t) + \sum_{k=1}^{\infty} A_{ik} \varphi_k(t)$, where $A_{ik} = \int_0^1 (X_i(t) - \mu(t)) \varphi_k(t)\, dt$ are the functional principal components, sometimes referred to as scores.
The Karhunen–Loève expansion facilitates dimension reduction in the sense that the partial sum converges uniformly, i.e.,
$$\sup_{t \in [0,1]} \mathbb{E}\Big[X_i(t) - \mu(t) - \sum_{k=1}^{K} A_{ik} \varphi_k(t)\Big]^2 \to 0 \quad \text{as } K \to \infty,$$
and thus a partial sum with large enough $K$ yields a good approximation to the infinite sum. Thereby, the information in $X_i$ is reduced from infinite dimensional to the $K$-dimensional vector $A_i = (A_{i1}, \ldots, A_{iK})$, with the approximated process
$$X_i^{(K)}(t) = \mu(t) + \sum_{k=1}^{K} A_{ik} \varphi_k(t).$$
Other popular bases include spline, Fourier series and wavelet bases. Important applications of FPCA include the modes of variation and functional principal component regression.

Functional linear regression models

Functional linear regression models can be viewed as an extension of the traditional multivariate linear models that associate vector responses with vector covariates. The traditional linear model with scalar response $Y \in \mathbb{R}$ and vector covariate $X \in \mathbb{R}^p$ can be expressed as
$$Y = \beta_0 + \langle X, \beta\rangle + \varepsilon = \beta_0 + \sum_{j=1}^{p} X_j \beta_j + \varepsilon, \qquad (1)$$
where $\langle \cdot, \cdot\rangle$ denotes the inner product in Euclidean space, $\beta_0 \in \mathbb{R}$ and $\beta \in \mathbb{R}^p$ denote the regression coefficients, and $\varepsilon$ is a zero mean, finite variance random error. Functional linear models can be divided into two types based on the responses.

Functional regression models with scalar response

Replacing the vector covariate $X$ and the coefficient vector $\beta$ in the traditional linear model by a centered functional covariate $X^c(t) = X(t) - \mu(t)$ and a coefficient function $\beta(t)$ for $t \in [0,1]$, and replacing the inner product in Euclidean space by that in the Hilbert space $L^2$, one arrives at the functional linear model
$$Y = \beta_0 + \int_0^1 X^c(t) \beta(t)\, dt + \varepsilon. \qquad (2)$$
The simple functional linear model (2) can be extended to multiple functional covariates $\{X_j\}_{j=1}^{p}$, also including additional vector covariates $Z = (Z_1, \ldots, Z_q) \in \mathbb{R}^q$, by
$$Y = \beta_0 + \sum_{k=1}^{q} Z_k \theta_k + \sum_{j=1}^{p} \int_0^1 X_j^c(t) \beta_j(t)\, dt + \varepsilon, \qquad (3)$$
where $\theta_k$ is the regression coefficient for $Z_k$, the domain of $X_j$ is $[0,1]$, $X_j^c$ is the centered functional covariate given by $X_j^c(t) = X_j(t) - \mu_j(t)$, and $\beta_j$ is the regression coefficient function for $X_j^c(t)$, for $k = 1, \ldots, q$ and $j = 1, \ldots, p$. Models (2) and (3) have been studied extensively.
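A common way to fit the scalar-response model is to expand both $X^c$ and $\beta$ in the same basis, which turns the integral into an ordinary least-squares problem on the basis scores. The sketch below assumes a known two-function basis (so the true $\beta(t)$ is available for comparison); in practice the basis would be estimated eigenfunctions or splines:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 300, 101
t = np.linspace(0, 1, m)
dt = t[1] - t[0]

# Functional covariate built from two known basis functions (assumed
# here for illustration, so the true coefficient function is known).
phi1 = np.sqrt(2) * np.sin(np.pi * t)
phi2 = np.sqrt(2) * np.cos(np.pi * t)
A = rng.normal(0, [1.5, 0.8], size=(n, 2))
Xc = A[:, [0]] * phi1 + A[:, [1]] * phi2     # centered covariate curves

# True coefficient function and responses Y = beta0 + int Xc beta dt + eps.
beta0, b1, b2 = 0.5, 2.0, -1.0
beta = b1 * phi1 + b2 * phi2
Y = beta0 + Xc @ beta * dt + rng.normal(0, 0.1, n)

# Since int Xc beta dt = A1 b1 + A2 b2, regressing Y on the basis scores
# recovers beta(t) through its basis coefficients.
scores = Xc @ np.column_stack([phi1, phi2]) * dt   # quadrature approximation
design = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)
beta_hat = coef[1] * phi1 + coef[2] * phi2
print(coef[0])                                     # close to beta0 = 0.5
print(np.max(np.abs(beta_hat - beta)))             # small
```

Truncating to finitely many basis functions is what makes the problem well posed: regressing on the full curves directly is an ill-posed inverse problem.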

Functional regression models with functional response

Consider a functional response $Y(s)$ on $[0,1]$ and multiple functional covariates $X_j(t)$, $t \in [0,1]$, $j = 1, \ldots, p$. Two major models have been considered in this setup. One of these two models, generally referred to as the functional linear model, can be written as
$$Y(s) = \alpha_0(s) + \sum_{j=1}^{p} \int_0^1 \alpha_j(s,t) X_j^c(t)\, dt + \varepsilon(s), \quad s \in [0,1], \qquad (4)$$
where $\alpha_0(s)$ is the functional intercept; for $j = 1, \ldots, p$, $X_j^c(t)$ is a centered functional covariate on $[0,1]$ and $\alpha_j(s,t)$ is the corresponding functional slope with the same domain; and $\varepsilon(s)$ is usually a random process with mean zero and finite variance. In this case, at any given time $s$, the value of $Y$, i.e., $Y(s)$, depends on the entire trajectories of the $X_j^c$. Model (4) has been studied extensively.
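The function-on-function model can be fitted by reducing the covariate to basis scores and then solving a pointwise least-squares problem in $s$; the slope surface is rebuilt from the fitted score coefficients. A minimal sketch with one functional covariate and a known two-function basis (both assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 300, 51
s = np.linspace(0, 1, m)     # shared grid for t and s
dt = s[1] - s[0]

# Centered functional covariate from two basis functions (chosen so the
# true slope surface is known).
phi1 = np.sqrt(2) * np.sin(np.pi * s)
phi2 = np.sqrt(2) * np.cos(np.pi * s)
A = rng.normal(0, [1.0, 0.6], size=(n, 2))
Xc = A[:, [0]] * phi1 + A[:, [1]] * phi2

# True intercept alpha0(s) and slope surface alpha(s,t); alpha[k, :] is alpha(s_k, .).
alpha0 = np.sin(2 * np.pi * s)
alpha = np.outer(1 + s, phi1) + np.outer(s**2, phi2)
Y = alpha0 + Xc @ alpha.T * dt + rng.normal(0, 0.1, size=(n, m))

# Reduce the covariate to scores and fit pointwise in s:
#   Y(s) = alpha0(s) + A1 c1(s) + A2 c2(s) + eps(s),
# so that alpha(s,t) = c1(s) phi1(t) + c2(s) phi2(t).
scores = Xc @ np.column_stack([phi1, phi2]) * dt
design = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(design, Y, rcond=None)   # one fit per grid point of s
alpha0_hat = coef[0]
alpha_hat = np.outer(coef[1], phi1) + np.outer(coef[2], phi2)

print(np.max(np.abs(alpha0_hat - alpha0)))   # small
print(np.max(np.abs(alpha_hat - alpha)))     # small
```

This illustrates the key structural point of model (4): each value $Y(s)$ loads on the whole covariate trajectory through the integral, yet the fit reduces to finitely many score regressions.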

Function-on-scalar regression

In particular, taking each $X_j^c$ to be a constant function of $t$ yields a special case of the functional linear model with functional response, $Y(s) = \alpha_0(s) + \sum_{j=1}^{p} X_j \beta_j(s) + \varepsilon(s)$, which is a functional linear model with functional responses and scalar covariates.

Concurrent regression models

This model is given by
$$Y(s) = \beta_0(s) + \sum_{j=1}^{p} \beta_j(s) X_j(s) + \varepsilon(s), \quad s \in [0,1],$$
where $X_1, \ldots, X_p$ are functional covariates on $[0,1]$, $\beta_0, \beta_1, \ldots, \beta_p$ are coefficient functions defined on the same interval, and $\varepsilon(s)$ is usually assumed to be a random process with mean zero and finite variance. This model assumes that the value of $Y(s)$ depends on the current values of the $X_j(s)$ only, and not on their history or future values. Hence, it is a "concurrent regression model", which is also referred to as a "varying-coefficient" model. Further, various estimation methods have been proposed.
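Because the concurrent model links $Y(s)$ only to the covariate values at the same $s$, the simplest estimator is a separate ordinary least-squares fit at each grid point. The sketch below (simulated data, one covariate, no smoothing across $s$, which practical estimators would add) illustrates this:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 400, 51
s = np.linspace(0, 1, m)

# One functional covariate (p = 1); the true coefficients vary over s.
beta0 = np.sin(2 * np.pi * s)
beta1 = 1.0 + s
X1 = rng.normal(0, 1, size=(n, m)) + np.cos(np.pi * s)   # covariate curves
Y = beta0 + beta1 * X1 + rng.normal(0, 0.2, size=(n, m))

# Pointwise least squares: at each grid point s_k, regress Y(:, k) on X1(:, k).
beta0_hat = np.empty(m)
beta1_hat = np.empty(m)
for k in range(m):
    design = np.column_stack([np.ones(n), X1[:, k]])
    (beta0_hat[k], beta1_hat[k]), *_ = np.linalg.lstsq(design, Y[:, k], rcond=None)

print(np.max(np.abs(beta0_hat - beta0)))   # small
print(np.max(np.abs(beta1_hat - beta1)))   # small
```

The raw pointwise estimates are noisy curve-by-curve; varying-coefficient estimators typically smooth $\hat{\beta}_j(s)$ over $s$ to exploit the continuity of the coefficient functions.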

Functional nonlinear regression models

Direct nonlinear extensions of the classical functional linear regression models still involve a linear predictor, but combine it with a nonlinear link function, analogous to the idea of the generalized linear model extending the conventional linear model. Developments towards fully nonparametric regression models for functional data encounter problems such as the curse of dimensionality. In order to bypass the "curse" and the metric selection problem, nonlinear functional regression models have been proposed that are subject to some structural constraints but do not overly restrict flexibility. One desires models that retain polynomial rates of convergence while being more flexible than, say, functional linear models. Such models are particularly useful when diagnostics for the functional linear model indicate lack of fit, which is often encountered in real life situations. In particular, functional polynomial models, functional single and multiple index models, and functional additive models are three special cases of functional nonlinear regression models.