Image sensor format
In digital photography, the image sensor format is the shape and size of the image sensor.
The image sensor format of a digital camera determines the angle of view of a particular lens when used with a particular sensor. Because the image sensors in many digital cameras are smaller than the 24 mm × 36 mm image area of full-frame 35 mm cameras, a lens of a given focal length gives a narrower field of view in such cameras.
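As a rough illustration of this relation, the angle of view across a sensor dimension d for a lens of focal length f (focused at distant subjects) is 2·arctan(d/2f); the sketch below assumes a 50 mm lens and typical full-frame and APS-C widths, purely as example values.

```python
import math

def angle_of_view(sensor_dim_mm, focal_length_mm):
    """Angle of view (degrees) across one sensor dimension for a given focal length."""
    return 2 * math.degrees(math.atan(sensor_dim_mm / (2 * focal_length_mm)))

# A 50 mm lens on a full-frame sensor (36 mm wide) versus an APS-C sensor (~23.6 mm wide)
print(angle_of_view(36.0, 50))   # ~39.6 degrees horizontally
print(angle_of_view(23.6, 50))   # ~26.6 degrees -- the same lens gives a narrower view on the smaller sensor
```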
Sensor size is often expressed as optical format in inches. Other measures are also used; see table of sensor formats and sizes below.
Lenses produced for 35 mm film cameras may mount well on the digital bodies, but the larger image circle of the 35 mm system lens allows unwanted light into the camera body, and the smaller size of the image sensor compared to 35 mm film format results in cropping of the image. This latter effect is known as field-of-view crop. The format size ratio is known as the field-of-view crop factor, crop factor, lens factor, focal-length conversion factor, focal-length multiplier, or lens multiplier.
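A minimal sketch of the crop-factor arithmetic, assuming the usual convention of taking the ratio of sensor diagonals against the 24 mm × 36 mm full-frame format; the sensor dimensions used are only illustrative:

```python
import math

def crop_factor(width_mm, height_mm):
    """Crop factor relative to full frame (24 mm x 36 mm), as the ratio of diagonals."""
    full_frame_diag = math.hypot(36.0, 24.0)          # ~43.3 mm
    return full_frame_diag / math.hypot(width_mm, height_mm)

print(crop_factor(23.6, 15.7))   # typical APS-C  -> ~1.53
print(crop_factor(17.3, 13.0))   # Four Thirds    -> ~2.0
```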
Sensor size and depth of field
Three possible depth-of-field comparisons between formats are discussed, applying the formulae derived in the article on depth of field. The depths of field of the three cameras may be the same, or different in either order, depending on what is held constant in the comparison.

Considering a picture with the same subject distance and angle of view for two different formats:

$$\frac{\mathrm{DOF}_2}{\mathrm{DOF}_1} \approx \frac{d_1}{d_2}$$

so the DOFs are in inverse proportion to the absolute aperture diameters $d_1$ and $d_2$.
Using the same absolute aperture diameter for both formats with the "same picture" criterion yields the same depth of field. It is equivalent to adjusting the f-number inversely in proportion to crop factor – a smaller f-number for smaller sensors. This condition of equal field of view, equal depth of field, equal aperture diameter, and equal exposure time is known as "equivalence".
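As a rough sketch of this bookkeeping (with illustrative numbers and a hypothetical helper name): scaling the focal length and f-number by the crop factor keeps the angle of view, aperture diameter and depth of field matched, and lowering the ISO by the square of the crop factor keeps the image lightness matched for the same exposure time.

```python
def equivalent_settings(focal_mm, f_number, iso, crop):
    """Scale focal length and f-number by the crop factor so that angle of view,
    aperture diameter and depth of field match the larger format; scale ISO by
    crop**2 to keep image lightness matched at the same exposure time."""
    return focal_mm / crop, f_number / crop, iso / crop**2

print(equivalent_settings(50, 2.8, 400, 2.0))   # -> (25.0, 1.4, 100.0) on a 2x-crop sensor
```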
And, we might compare the depth of field of sensors receiving the same photometric exposure – the f-number is fixed instead of the aperture diameter – the sensors are operating at the same ISO setting in that case, but the smaller sensor is receiving less total light, by the area ratio. The ratio of depths of field is then

$$\frac{\mathrm{DOF}_2}{\mathrm{DOF}_1} \approx \frac{l_1}{l_2}$$

where $l_1$ and $l_2$ are the characteristic dimensions of the format, and thus $l_1/l_2$ is the relative crop factor between the sensors. It is this result that gives rise to the common opinion that small sensors yield greater depth of field than large ones.
An alternative is to consider the depth of field given by the same lens in conjunction with different sized sensors. The change in depth of field is brought about by the requirement for a different degree of enlargement to achieve the same final image size. In this case the ratio of depths of field becomes

$$\frac{\mathrm{DOF}_2}{\mathrm{DOF}_1} \approx \frac{l_2}{l_1}$$
In practice, if a lens of fixed focal length and fixed aperture, designed for an image circle that meets the requirements of a large sensor, is adapted without changing its physical properties to smaller sensor sizes, then neither the depth of field nor the light gathering of the lens will change.
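The three comparisons can be summarised numerically; the snippet below simply encodes the ratios derived above, with a relative crop factor of 2 chosen as an illustrative value:

```python
def dof_ratio(case, crop=2.0):
    """Approximate DOF_small / DOF_large for the three comparison cases above,
    where 'crop' is the relative crop factor l_large / l_small."""
    if case == "same aperture diameter":   # same picture, same absolute aperture
        return 1.0
    if case == "same f-number":            # same photometric exposure
        return crop                        # smaller sensor gives more depth of field
    if case == "same lens":                # same focal length and f-number, cropped
        return 1.0 / crop                  # smaller sensor gives less depth of field
    raise ValueError(case)

for case in ("same aperture diameter", "same f-number", "same lens"):
    print(case, dof_ratio(case))
```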
Sensor size, noise and dynamic range
Discounting photo response non-uniformity and dark noise variation, which are not intrinsically sensor-size dependent, the noises in an image sensor are shot noise, read noise, and dark noise. The overall signal-to-noise ratio of a sensor (SNR), expressed as signal electrons relative to rms noise in electrons, observed at the scale of a single pixel, assuming shot noise from the Poisson distribution of signal electrons and dark electrons, is

$$\mathrm{SNR} = \frac{P Q_e t}{\sqrt{P Q_e t + D t + N_r^2}}$$

where $P$ is the incident photon flux (photons per second reaching the pixel), $Q_e$ is the quantum efficiency, $t$ is the exposure time, $D$ is the pixel dark current in electrons per second and $N_r$ is the pixel read noise in electrons rms.
Each of these noises has a different dependency on sensor size.
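The per-pixel SNR expression above translates directly into a small helper; the inputs in the example call are purely illustrative:

```python
import math

def pixel_snr(photon_flux, quantum_efficiency, exposure_s, dark_current_e_per_s, read_noise_e_rms):
    """Single-pixel SNR: signal electrons over rms noise electrons,
    with Poisson shot noise from signal and dark electrons plus read noise."""
    signal = photon_flux * quantum_efficiency * exposure_s          # P * Q_e * t, in electrons
    noise = math.sqrt(signal + dark_current_e_per_s * exposure_s + read_noise_e_rms**2)
    return signal / noise

# e.g. 10,000 photons/s, Q_e = 0.5, 1/100 s exposure, 1 e-/s dark current, 3 e- rms read noise
print(pixel_snr(10_000, 0.5, 0.01, 1.0, 3.0))    # ~6.5
```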
Exposure and photon flux
Image sensor noise can be compared across formats for a given fixed photon flux per pixel area (the $P$ in the formula above); this analysis is useful for a fixed number of pixels with pixel area proportional to sensor area, and fixed absolute aperture diameter for a fixed imaging situation in terms of depth of field, diffraction limit at the subject, etc. Or it can be compared for a fixed focal-plane illuminance, corresponding to a fixed f-number, in which case $P$ is proportional to pixel area, independent of sensor area. The formulas above and below can be evaluated for either case.

Shot noise
In the above equation, the shot-noise SNR is given by

$$\mathrm{SNR}_{\text{shot}} = \frac{P Q_e t}{\sqrt{P Q_e t}} = \sqrt{P Q_e t}$$

Apart from the quantum efficiency, it depends on the incident photon flux and the exposure time, which together are equivalent to the exposure and the sensor area; the exposure is the integration time multiplied by the image-plane illuminance, and illuminance is the luminous flux per unit area. Thus for equal exposures, the signal-to-noise ratios of two different-size sensors of equal quantum efficiency and pixel count will be in proportion to the square root of the sensor area. If the exposure is constrained by the need to achieve some required depth of field, then the exposures will be in inverse relation to the sensor area, producing the interesting result that if depth of field is a constraint, image shot noise is not dependent on sensor area.

For identical f-number lenses the signal-to-noise ratio increases as the square root of the pixel area, or linearly with the pixel pitch. As typical f-numbers for cell-phone and DSLR lenses are in the same range, it is interesting to compare the performance of cameras with small and large sensors. A good 2018 cell-phone camera with a typical pixel size of 1.1 μm would have a shot-noise-limited SNR about 3 times worse than that of an interchangeable-lens camera with 3.7 μm pixels and 5 times worse than that of a full-frame camera with 6 μm pixels. Taking dynamic range into consideration makes the difference even more prominent. As such, the trend of increasing the number of "megapixels" in cell-phone cameras over the last ten years was driven more by a marketing strategy of selling "more megapixels" than by attempts to improve image quality.
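A sketch of the pixel-pitch comparison in the previous paragraph: at identical f-number and quantum efficiency the shot-noise SNR scales linearly with pixel pitch, so the ratios follow directly from the pitches quoted (treated here as illustrative values):

```python
pitches_um = {"1.1 um phone pixel": 1.1, "3.7 um ILC pixel": 3.7, "6 um full-frame pixel": 6.0}

# At a fixed f-number the photon flux per pixel scales with pixel area,
# so shot-noise SNR ~ sqrt(area) ~ pixel pitch.
reference = pitches_um["1.1 um phone pixel"]
for name, pitch in pitches_um.items():
    print(f"{name}: shot-noise SNR advantage over the phone pixel ~ {pitch / reference:.1f}x")
# -> 1.0x, ~3.4x and ~5.5x, matching the "about 3 times" and "5 times" figures quoted above
```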
Read noise
The read noise is the total of all the electronic noises in the conversion chain for the pixels in the sensor array. To compare it with photon noise, it must be referred back to its equivalent in photoelectrons, which requires the division of the noise measured in volts by the conversion gain of the pixel. This is given, for an active pixel sensor, by the voltage at the input of the read transistor divided by the charge which generates that voltage, $CG = V_{rt}/Q_{rt}$. This is the inverse of the capacitance of the read transistor gate, since capacitance $C = Q/V$; thus $CG = 1/C_{rt}$.

In general for a planar structure such as a pixel, capacitance is proportional to area, therefore the read noise scales down with sensor area, as long as pixel area scales with sensor area, and that scaling is performed by uniformly scaling the pixel.
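A small worked example of the conversion-gain relation, assuming an illustrative sense-node capacitance: a larger (uniformly scaled-up) pixel has a larger capacitance and hence a lower conversion gain, so a given voltage-domain noise corresponds to more electrons when referred back to the charge domain.

```python
ELEMENTARY_CHARGE_C = 1.602e-19   # coulombs per electron

def conversion_gain_uV_per_e(sense_node_capacitance_F):
    """Conversion gain in microvolts per electron, CG = q / C."""
    return ELEMENTARY_CHARGE_C / sense_node_capacitance_F * 1e6

print(conversion_gain_uV_per_e(1e-15))   # 1 fF node              -> ~160 uV per electron
print(conversion_gain_uV_per_e(4e-15))   # 4x larger (scaled) node -> ~40 uV per electron
```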
Considering the signal-to-noise ratio due to read noise at a given exposure, the signal will scale with the sensor area, as will the read noise, and therefore the read-noise SNR will be unaffected by sensor area. In a depth-of-field constrained situation, the exposure of the larger sensor will be reduced in proportion to the sensor area, and therefore the read-noise SNR will be reduced likewise.
Dark noise
Dark current contributes two kinds of noise: dark offset, which is only partly correlated between pixels, and the shot noise associated with dark offset, which is uncorrelated between pixels. Only the shot-noise component $Dt$ is included in the formula above, since the uncorrelated part of the dark offset is hard to predict, and the correlated or mean part is relatively easy to subtract off. The mean dark current contains contributions proportional both to the area and to the linear dimension of the photodiode, with the relative proportions and scale factors depending on the design of the photodiode. Thus in general the dark noise of a sensor may be expected to rise as the size of the sensor increases. However, in most sensors the mean pixel dark current at normal temperatures is small, lower than 50 e− per second, so for typical photographic exposure times the dark current and its associated noises may be discounted. At very long exposure times, however, it may be a limiting factor. And even at short or medium exposure times, a few outliers in the dark-current distribution may show up as "hot pixels". Typically, for astrophotography applications, sensors are cooled to reduce dark current in situations where exposures may be measured in several hundreds of seconds.
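To put numbers on why dark current matters only for long exposures, compare the dark shot noise √(D·t) for a typical exposure and for an astrophotography exposure; the 25 e−/s dark current below is an illustrative figure under the <50 e−/s bound quoted above:

```python
import math

def dark_shot_noise_e(dark_current_e_per_s, exposure_s):
    """RMS dark shot noise in electrons, sqrt(D * t)."""
    return math.sqrt(dark_current_e_per_s * exposure_s)

print(dark_shot_noise_e(25, 1 / 100))   # ~0.5 e-  : negligible for a typical exposure
print(dark_shot_noise_e(25, 300))       # ~87 e-   : significant unless the sensor is cooled
```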
Dynamic range

Dynamic range is the ratio of the largest and smallest recordable signal, the smallest being typically defined by the 'noise floor'. In the image sensor literature, the noise floor is taken as the readout noise, so

$$\mathrm{DR} = \frac{Q_{\max}}{\sigma_{\mathrm{readout}}}$$
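A brief sketch of this definition, assuming an illustrative full-well capacity and read noise:

```python
import math

def dynamic_range(full_well_e, read_noise_e_rms):
    """Dynamic range as largest over smallest recordable signal, in dB and in stops."""
    ratio = full_well_e / read_noise_e_rms
    return 20 * math.log10(ratio), math.log2(ratio)

db, stops = dynamic_range(full_well_e=40_000, read_noise_e_rms=3.0)
print(f"{db:.0f} dB, {stops:.1f} stops")   # ~82 dB, ~13.7 stops
```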
Sensor size and diffraction

The resolution of all optical systems is limited by diffraction. One way of considering the effect that diffraction has on cameras using different sized sensors is to consider the modulation transfer function (MTF). Diffraction is one of the factors that contribute to the overall system MTF. Other factors are typically the MTFs of the lens, anti-aliasing filter and sensor sampling window. The spatial cut-off frequency due to diffraction through a lens aperture is

$$\xi_{\mathrm{cutoff}} = \frac{1}{\lambda N}$$

where λ is the wavelength of the light passing through the system and N is the f-number of the lens. If that aperture is circular, as are most photographic apertures, then the MTF is given by
$$\mathrm{MTF}\left(\frac{\xi}{\xi_{\mathrm{cutoff}}}\right) = \frac{2}{\pi}\left(\varphi - \cos\varphi\,\sin\varphi\right) \quad \text{for } \frac{\xi}{\xi_{\mathrm{cutoff}}} < 1, \qquad \text{and } 0 \text{ for } \frac{\xi}{\xi_{\mathrm{cutoff}}} \geq 1,$$

where $\varphi = \cos^{-1}\left(\frac{\xi}{\xi_{\mathrm{cutoff}}}\right)$.
The diffraction-based factor of the system MTF will therefore scale according to $\xi_{\mathrm{cutoff}}$ and in turn according to $1/(\lambda N)$.
In considering the effect of sensor size, and its effect on the final image, the different magnification required to obtain the same size image for viewing must be accounted for, resulting in an additional scale factor of $1/C$, where $C$ is the relative crop factor, making the overall scale factor $1/(\lambda N C)$. Considering the three cases above:
For the 'same picture' conditions (same angle of view, subject distance and depth of field), the f-numbers are in the ratio $1/C$, so the scale factor for the diffraction MTF is 1, leading to the conclusion that the diffraction MTF at a given depth of field is independent of sensor size.
In both the 'same photometric exposure' and 'same lens' conditions, the f-number is not changed, and thus the spatial cutoff and the resultant MTF on the sensor are unchanged, leaving the MTF in the viewed image to be scaled with the magnification, or inversely with the crop factor.
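The diffraction MTF and its crop-factor scaling described above can be checked numerically; the wavelength, f-numbers and crop factor below are illustrative values:

```python
import math

def diffraction_cutoff_cyc_per_mm(wavelength_mm, f_number):
    """Spatial cut-off frequency 1 / (lambda * N), in cycles per mm at the sensor."""
    return 1.0 / (wavelength_mm * f_number)

def diffraction_mtf(spatial_freq, cutoff):
    """Diffraction MTF of a circular aperture; zero at and beyond the cut-off."""
    if spatial_freq >= cutoff:
        return 0.0
    phi = math.acos(spatial_freq / cutoff)
    return (2 / math.pi) * (phi - math.cos(phi) * math.sin(phi))

wavelength_mm = 550e-6                                           # green light, 550 nm
cutoff_ff = diffraction_cutoff_cyc_per_mm(wavelength_mm, 8)      # full frame at f/8      -> ~227 c/mm
cutoff_crop = diffraction_cutoff_cyc_per_mm(wavelength_mm, 4)    # 2x-crop sensor at f/4  -> ~455 c/mm

print(diffraction_mtf(100, cutoff_ff))   # MTF at 100 cycles/mm on the full-frame sensor, ~0.46

# Same picture, same depth of field: the f-number scales by 1/C, so after the 2x extra
# enlargement of the crop image the cut-offs referred to the final image coincide.
print(cutoff_ff, cutoff_crop / 2)        # both ~227 cycles/mm referred to the final image
```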