Light field


A light field, or lightfield, is a vector function that describes the amount of light flowing in every direction through every point in a space. The space of all possible light rays is given by the five-dimensional plenoptic function, and the magnitude of each ray is given by its radiance. Michael Faraday was the first to propose that light should be interpreted as a field, much like the magnetic fields on which he had been working. The term light field was coined by Andrey Gershun in a classic 1936 paper on the radiometric properties of light in three-dimensional space.
The term "radiance field" may also be used to refer to similar, or identical concepts. The term is used in modern research such as neural radiance fields.

The plenoptic function

For geometric optics—i.e., for incoherent light and for objects larger than the wavelength of light—the fundamental carrier of light is a ray. The measure for the amount of light traveling along a ray is radiance, denoted by L and measured in W·sr−1·m−2; i.e., watts per steradian per square meter. The steradian is a measure of solid angle, and square meters are used as a measure of cross-sectional area, as shown at right.
[Figure: Parameterizing a ray in 3D space by position and direction.]
The radiance along all such rays in a region of three-dimensional space illuminated by an unchanging arrangement of lights is called the plenoptic function. The plenoptic illumination function is an idealized function used in computer vision and computer graphics to express the image of a scene from any possible viewing position at any viewing angle at any point in time. It is not used in practice computationally, but is conceptually useful in understanding other concepts in vision and graphics. Since rays in space can be parameterized by three coordinates, x, y, and z and two angles θ and ϕ, as shown at left, it is a five-dimensional function, that is, a function over a five-dimensional manifold equivalent to the product of 3D Euclidean space and the 2-sphere.
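Written compactly, using the coordinate names above, the plenoptic function is the radiance as a function of these five coordinates:

$$L = L(x, y, z, \theta, \phi).$$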
[Figure: Summing the irradiance vectors D1 and D2 arising from two light sources I1 and I2 produces a resultant vector D having the magnitude and direction shown.]
The light field at each point in space can be treated as an infinite collection of vectors, one per direction impinging on the point, with lengths proportional to their radiances.
Integrating these vectors over any collection of lights, or over the entire sphere of directions, produces a single scalar value (the total irradiance at that point) and a resultant direction. The figure shows this calculation for the case of two light sources. In computer graphics, this vector-valued function of 3D space is called the vector irradiance field. The vector direction at each point in the field can be interpreted as the orientation a flat surface placed at that point should have in order to be most brightly illuminated.
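Expressed as an integral (a sketch in conventional notation; the symbols $\vec{D}$ for the light vector, $\omega$ for a unit direction, and $S^2$ for the sphere of directions are illustrative choices not named in the text):

$$\vec{D}(p) = \int_{S^2} L(p, \omega)\, \omega \, d\omega ,$$

whose magnitude gives the total irradiance at the point $p$ and whose direction gives the resultant direction described above.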

Higher dimensionality

Time, wavelength, and polarization angle can be treated as additional dimensions, yielding correspondingly higher-dimensional functions.
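For example, adding wavelength $\lambda$ and time $t$ alone already yields a seven-dimensional function (an illustrative notation):

$$L = L(x, y, z, \theta, \phi, \lambda, t).$$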

The 4D light field

If the region of interest of the plenoptic function contains a concave object, then light leaving one point on the object may travel only a short distance before another point on the object blocks it. No practical device could measure the function in such a region.
However, for locations outside the object's convex hull, the plenoptic function can be measured by capturing multiple images. In this case the function contains redundant information, because the radiance along a ray remains constant throughout its length. The redundant information is exactly one dimension, leaving a four-dimensional function variously termed the photic field, the 4D light field or lumigraph. Formally, the field is defined as radiance along rays in empty space.
The set of rays in a light field can be parameterized in a variety of ways. The most common is the two-plane parameterization. While this parameterization cannot represent all rays, for example rays parallel to the two planes if the planes are parallel to each other, it relates closely to the analytic geometry of perspective imaging. A simple way to think about a two-plane light field is as a collection of perspective images of the st plane, each taken from an observer position on the uv plane. A light field parameterized this way is sometimes called a light slab.
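In this parameterization, a ray is identified by its intersection $(u, v)$ with the first plane and $(s, t)$ with the second, so a light slab can be written (following the uv/st naming above) as:

$$L = L(u, v, s, t).$$

Fixing $(u, v)$ and varying $(s, t)$ then yields exactly one of the perspective images described above.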
[Figure: Some alternative parameterizations of the 4D light field, which represents the flow of light through an empty region of three-dimensional space. Left: points on a plane or curved surface and directions leaving each point. Center: pairs of points on the surface of a sphere. Right: pairs of points on two planes in general position.]

Sound analog

The analog of the 4D light field for sound is the sound field or wave field, as in wave field synthesis, and the corresponding parameterization is the Kirchhoff–Helmholtz integral, which states that, in the absence of obstacles, a sound field over time is given by the pressure on a plane. This amounts to two dimensions of information at any point in time, and over time, a 3D field.
This two-dimensionality, compared with the apparent four-dimensionality of light, arises because light travels in rays, while by the Huygens–Fresnel principle a sound wave front can be modeled as spherical waves: light moves in a single direction, while sound expands in every direction. However, light traveling in non-vacuous media may scatter in a similar fashion, and the irreversibility of, or information lost to, this scattering is discernible in the apparent loss of a dimension of the system.

Image refocusing

Because a light field provides both spatial and angular information, the position of the focal plane can be altered after exposure, a process often termed refocusing. The principle of refocusing is to obtain a conventional 2-D photograph from a light field through an integral transform. The transform takes a light field as its input and generates a photograph focused on a specific plane.
Assuming $L_F(s,t,u,v)$ represents a 4-D light field that records light rays traveling from position $(u,v)$ on the first plane to position $(s,t)$ on the second plane, where $F$ is the distance between the two planes, a 2-D photograph focused at depth $\alpha F$ can be obtained from the following integral transform:

$$E_{\alpha F}(s,t) = \frac{1}{\alpha^2 F^2} \iint L_F\!\left(u + \frac{s-u}{\alpha},\; v + \frac{t-v}{\alpha},\; u,\; v\right) du\, dv ,$$

or more concisely, writing $\mathbf{x} = (s,t)$ and $\mathbf{u} = (u,v)$,

$$E_{\alpha F}(\mathbf{x}) = \frac{1}{\alpha^2 F^2} \int L_F\!\left(\mathbf{u} + \frac{\mathbf{x}-\mathbf{u}}{\alpha},\, \mathbf{u}\right) d\mathbf{u} = \mathcal{P}_\alpha\!\left[L_F\right](\mathbf{x}),$$

where $\mathcal{P}_\alpha$ is the photography operator.
In practice, this formula cannot be used directly because a plenoptic camera usually captures only discrete samples of the light field $L_F$, and hence resampling (interpolation) is needed to evaluate $L_F$ at the sheared, off-grid coordinates. Another problem is high computational complexity: to compute an $N \times N$ 2-D photograph from an $N \times N \times N \times N$ 4-D light field, the complexity of the formula is $O(N^4)$.
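As an illustration of the direct approach and its $O(N^4)$ cost, the following sketch implements the photography operator by brute-force shift-and-sum over the sampled $(u,v)$ views. The array layout, the centered unit-spaced coordinates, and the use of SciPy's map_coordinates for the resampling step are all assumptions of this sketch, not part of the original description:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def refocus(light_field, alpha):
    """Direct O(N^4) refocusing via the photography operator.

    Assumptions of this sketch: the light field is a 4-D array indexed
    as [s, t, u, v] (one perspective image per (u, v) sample), with
    unit sample spacing and centered coordinates on both planes.
    alpha sets the synthetic focal plane at depth alpha * F.
    """
    n_s, n_t, n_u, n_v = light_field.shape
    # Centered spatial coordinates of the output photograph.
    s = np.arange(n_s) - (n_s - 1) / 2.0
    t = np.arange(n_t) - (n_t - 1) / 2.0
    S, T = np.meshgrid(s, t, indexing="ij")
    photo = np.zeros((n_s, n_t))
    for iu in range(n_u):
        for iv in range(n_v):
            u = iu - (n_u - 1) / 2.0
            v = iv - (n_v - 1) / 2.0
            # Sheared sample positions u + (s - u)/alpha from the
            # integral transform, converted back to array indices.
            ss = u + (S - u) / alpha + (n_s - 1) / 2.0
            tt = v + (T - v) / alpha + (n_t - 1) / 2.0
            # Bilinear resampling of this (u, v) view at the
            # off-grid positions (the "resampling" step in the text).
            photo += map_coordinates(light_field[:, :, iu, iv],
                                     [ss, tt], order=1, mode="nearest")
    # Average over aperture samples: the discrete stand-in for the
    # du dv integral and the 1/(alpha^2 F^2) normalization.
    return photo / (n_u * n_v)
```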

Fourier slice photography

One way to reduce the computational complexity is to adopt the Fourier slice theorem: the photography operator can be viewed as a shear followed by a projection, and the result is proportional to a dilated 2-D slice of the 4-D Fourier transform of the light field. More precisely, a refocused image can be generated from the 4-D Fourier spectrum of a light field by extracting a 2-D slice, applying an inverse 2-D transform, and scaling. The asymptotic complexity of the algorithm is $O(N^2 \log N)$.
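Concretely, substituting the sheared coordinates of the photography operator into the 4-D Fourier transform yields the slice relation below; the constant in front depends on the transform convention, so this should be read as a sketch rather than a normative statement:

$$\hat{E}_{\alpha F}(k_s, k_t) = \frac{1}{F^2}\, \hat{L}_F\bigl(\alpha k_s,\; \alpha k_t,\; (1-\alpha) k_s,\; (1-\alpha) k_t\bigr),$$

where $\hat{L}_F$ is the 4-D Fourier transform of the light field taken over $(s, t, u, v)$. Extracting this slice and applying an inverse 2-D transform costs $O(N^2 \log N)$ per photograph once the 4-D spectrum has been computed.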

Discrete focal stack transform

Another way to efficiently compute 2-D photographs is to adopt the discrete focal stack transform (DFST). DFST is designed to generate a collection of refocused 2-D photographs, or a so-called focal stack. This method can be implemented by the fast fractional Fourier transform (FrFT).
The discrete photography operator $\mathcal{P}_\alpha$ is defined as follows for a light field $L_F$ sampled on a 4-D grid with spacings $\Delta s$, $\Delta t$, $\Delta u$, $\Delta v$, replacing the integral above with a sum over the sampled $(u,v)$ positions:

$$\mathcal{P}_\alpha\!\left[L_F\right](s,t) = \frac{\Delta u\, \Delta v}{\alpha^2 F^2} \sum_{u} \sum_{v} L_F\!\left(u + \frac{s-u}{\alpha},\; v + \frac{t-v}{\alpha},\; u,\; v\right).$$
Because the sheared sample position $\left(u + \frac{s-u}{\alpha},\, v + \frac{t-v}{\alpha}\right)$ usually does not lie on the 4-D grid, DFST adopts trigonometric interpolation to compute the non-grid values.
The algorithm consists of these steps:
  • Sample the light field $L_F$ with sampling periods $\Delta s = \Delta t$ and $\Delta u = \Delta v$ to obtain the discretized light field.
  • Pad the discretized light field with zeros so that the signal length is sufficient for the FrFT without aliasing.
  • For every $(u,v)$, compute the discrete Fourier transform of the padded light field over its spatial coordinates $(s,t)$.
  • For every focal length in the stack, compute the fractional Fourier transform of the resulting spectrum, where the order of the transform depends on the refocusing parameter $\alpha$.
  • Compute the inverse discrete Fourier transform of the result.
  • Remove the marginal (zero-padded) pixels so that each 2-D photograph recovers its original size.
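As a point of comparison, a focal stack can also be produced by simply applying the direct refocusing operator once per focal plane. The sketch below does exactly that, reusing the shift-and-sum refocus function sketched in the image-refocusing section rather than the FrFT pipeline described above:

```python
import numpy as np

# Baseline focal stack for comparison with DFST: apply the direct
# refocusing operator once per focal plane. Cost is O(K * N^4) for K
# planes, which is exactly what DFST's FrFT pipeline is meant to avoid.

def focal_stack(light_field, alphas):
    """Stack of refocused photographs, one per alpha in `alphas`."""
    return np.stack([refocus(light_field, a) for a in alphas])

# Example: eleven focal planes bracketing the nominal depth (alpha = 1).
# stack = focal_stack(light_field, np.linspace(0.8, 1.2, 11))
```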

Methods to create light fields

In computer graphics, light fields are typically produced either by rendering a 3D model or by photographing a real scene. In either case, to produce a light field, views must be obtained for a large collection of viewpoints. Depending on the parameterization, this collection typically spans some portion of a line, circle, plane, sphere, or other shape, although unstructured collections are possible.
Devices for capturing light fields photographically may include a moving handheld camera or a robotically controlled camera, an arc of cameras, a dense array of cameras, microscopes, or other optical systems.
The number of images in a light field depends on the application. A light field capture of Michelangelo's statue of Night contains 24,000 1.3-megapixel images, which is considered large as of 2022. For light field rendering to completely capture an opaque object, images must be taken of at least the front and back. Less obviously, for an object that lies astride the st plane, finely spaced images must be taken on the uv plane.
The number and arrangement of images in a light field, and the resolution of each image, are together called the "sampling" of the 4D light field. Also of interest are the effects of occlusion, lighting and reflection.

Applications