Rendering (computer graphics)


Rendering is the process of generating a photorealistic or non-photorealistic image from input data such as 3D models. The word "rendering" originally meant the task performed by an artist when depicting a real or imaginary thing. Today, to "render" commonly means to generate an image or video from a precise description using a computer program.
A software application or component that performs rendering is called a rendering engine, render engine, rendering system, graphics engine, or simply a renderer.
A distinction is made between real-time rendering, in which images are generated and displayed immediately, and offline rendering, in which images, or film or video frames, are generated for later viewing. Offline rendering can use a slower and higher-quality renderer. Interactive applications such as games must primarily use real-time rendering, although they may incorporate pre-rendered content.
Rendering can produce images of scenes or objects defined using coordinates in 3D space, seen from a particular viewpoint. Such 3D rendering uses knowledge and ideas from optics, the study of visual perception, mathematics, and software engineering, and it has applications such as video games, simulators, visual effects for films and television, design visualization, and medical diagnosis. Realistic 3D rendering requires modeling the propagation of light in an environment, e.g. by applying the rendering equation.
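In one common form, the rendering equation expresses the outgoing radiance L_o leaving a surface point x in direction ω_o as the emitted radiance plus the incoming radiance from every direction ω_i over the hemisphere Ω, weighted by the surface's bidirectional reflectance distribution function (BRDF) f_r and by the cosine factor ω_i · n, where n is the surface normal:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i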
Real-time rendering uses high-performance rasterization algorithms that process a list of shapes and determine which pixels are covered by each shape. When more realism is required, slower pixel-by-pixel algorithms such as ray tracing are used instead. A type of ray tracing called path tracing is currently the most common technique for photorealistic rendering. Path tracing is also popular for generating high-quality non-photorealistic images, such as frames for 3D animated films. Both rasterization and ray tracing can be sped up by specially designed microprocessors called GPUs.
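At its core, a rasterizer must decide, for every candidate pixel, whether that pixel is covered by a shape. A minimal sketch of such a coverage test for a single 2D triangle using edge functions (the function and variable names are illustrative, not taken from any particular renderer):

    def edge(ax, ay, bx, by, px, py):
        # Signed area term: positive when (px, py) lies to the left of the edge a -> b.
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

    def covers(triangle, px, py):
        (ax, ay), (bx, by), (cx, cy) = triangle
        w0 = edge(ax, ay, bx, by, px, py)
        w1 = edge(bx, by, cx, cy, px, py)
        w2 = edge(cx, cy, ax, ay, px, py)
        # The pixel center is inside the triangle when all three edge functions agree in sign.
        return (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0)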
Rasterization algorithms are also used to render images containing only 2D shapes such as polygons and text. Applications of this type of rendering include digital illustration, graphic design, 2D animation, desktop publishing and the display of user interfaces.
Historically, rendering was called image synthesis, but today this term is likely to mean AI image generation. The term "neural rendering" is sometimes used when a neural network is the primary means of generating an image but some degree of control over the output image is provided. Neural networks can also assist rendering without replacing traditional algorithms, e.g. by removing noise from path traced images.

Features

Photorealistic rendering

A large proportion of computer graphics research has worked towards producing images that resemble photographs. Fundamental techniques that make this possible were invented in the 1980s, but at the end of the decade, photorealism for complex scenes was still considered a distant goal. Today, photorealism is routinely achievable for offline rendering, but remains difficult for real-time rendering.
In order to produce realistic images, rendering must simulate how light travels from light sources, is reflected, refracted, and scattered by objects in the scene, passes through a camera lens, and finally reaches the film or sensor of the camera. The physics used in these simulations is primarily geometrical optics, in which particles of light follow lines called rays, but in some situations the wave nature of light must be taken into account.
Effects that may need to be simulated include:
  • Shadows, including both shadows with sharp edges and soft shadows with umbra and penumbra
  • Reflections in mirrors and smooth surfaces, as well as rough or rippled reflective surfaces
  • Refraction: the bending of light when it crosses a boundary between two transparent materials such as air and glass. The amount of bending varies with the wavelength of the light, which may cause colored fringes or "rainbows" to appear.
  • Volumetric effects: absorption and scattering when light travels through partially transparent or translucent substances
  • Caustics: bright patches, sometimes with distinct filaments and a folded or twisted appearance, resulting when light is reflected or refracted before illuminating an object.
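As a concrete illustration of the refraction effect in the list above, the refracted direction at a boundary can be computed from Snell's law in vector form. This sketch assumes unit-length vectors and a single refractive-index ratio eta = n1 / n2, ignoring the wavelength dependence that produces colored fringes:

    import math

    def refract(d, n, eta):
        # d: incoming unit direction; n: unit surface normal pointing against d;
        # eta: ratio of refractive indices n1 / n2 across the boundary.
        cos_i = -(d[0] * n[0] + d[1] * n[1] + d[2] * n[2])
        k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
        if k < 0.0:
            return None  # total internal reflection: no refracted ray exists
        s = eta * cos_i - math.sqrt(k)
        return tuple(eta * d[i] + s * n[i] for i in range(3))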
In realistic scenes, objects are illuminated both by light that arrives directly from a light source, and light that has bounced off other objects in the scene. The simulation of this complex lighting is called global illumination. In the past, indirect lighting was often faked by placing additional hidden lights in the scene, but today path tracing is used to render it accurately.
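The core of path tracing is a Monte Carlo estimate of the rendering equation: incoming light is sampled along randomly chosen directions and averaged. A minimal, self-contained sketch for a single diffuse surface under a uniform "sky" (the albedo, sky radiance, and sample count are illustrative assumptions):

    import math, random

    def sample_hemisphere():
        # Uniformly sample a direction on the hemisphere above the surface normal (the z-axis).
        u, v = random.random(), random.random()
        z = u
        r = math.sqrt(max(0.0, 1.0 - z * z))
        phi = 2.0 * math.pi * v
        return (r * math.cos(phi), r * math.sin(phi), z), 1.0 / (2.0 * math.pi)  # direction, pdf

    def reflected_radiance(albedo=0.5, sky_radiance=1.0, samples=10000):
        brdf = albedo / math.pi                       # Lambertian (ideal diffuse) BRDF
        total = 0.0
        for _ in range(samples):
            direction, pdf = sample_hemisphere()
            cos_theta = direction[2]                  # cosine of the angle to the normal
            total += brdf * sky_radiance * cos_theta / pdf
        return total / samples                        # converges to albedo * sky_radiance

    print(reflected_radiance())   # approximately 0.5

A full path tracer repeats this sampling recursively at every surface a ray hits, which is how interreflection between objects emerges naturally.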
For true photorealism, the camera used to take the photograph must be simulated. The thin lens approximation allows combining perspective projection with depth of field emulation. Camera lens simulations can be made more realistic by modeling the way light is refracted by the components of the lens. Motion blur is often simulated if film or video frames are being rendered. Simulated lens flare and bloom are sometimes added to make the image appear subjectively brighter.
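A minimal sketch of how the thin lens approximation produces depth of field, assuming a camera looking along the +z axis; the parameter names (aperture_radius, focus_distance) are illustrative:

    import math, random

    def thin_lens_ray(pinhole_dir, aperture_radius, focus_distance):
        # pinhole_dir: unit direction of the ideal pinhole ray for this pixel, with pinhole_dir[2] > 0.
        # Find where the pinhole ray crosses the plane of focus.
        t = focus_distance / pinhole_dir[2]
        focus_point = tuple(t * c for c in pinhole_dir)
        # Pick a random point on the circular lens aperture.
        r = aperture_radius * math.sqrt(random.random())
        phi = 2.0 * math.pi * random.random()
        origin = (r * math.cos(phi), r * math.sin(phi), 0.0)
        # Aim the new ray at the focus point; averaging many such rays per pixel blurs
        # everything that does not lie on the plane of focus.
        d = tuple(f - o for f, o in zip(focus_point, origin))
        length = math.sqrt(sum(c * c for c in d))
        return origin, tuple(c / length for c in d)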
Realistic rendering uses mathematical descriptions of how different surface materials reflect light, called reflectance models or bidirectional reflectance distribution functions (BRDFs). Rendering materials such as marble, plant leaves, and human skin requires simulating an effect called subsurface scattering, in which a portion of the light travels into the material, is scattered, and then travels back out again. The way color and properties such as roughness vary over a surface can be represented efficiently using texture mapping.
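A minimal sketch of a texture lookup with bilinear filtering, assuming (u, v) coordinates in [0, 1] and a texture stored as a nested list of RGB tuples (real renderers add wrapping modes, mipmapping, and color-space handling):

    def sample_bilinear(texture, u, v):
        h, w = len(texture), len(texture[0])
        x, y = u * (w - 1), v * (h - 1)
        x0, y0 = int(x), int(y)
        x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
        fx, fy = x - x0, y - y0

        def lerp(a, b, t):
            return tuple(a[i] + (b[i] - a[i]) * t for i in range(3))

        # Blend the four nearest texels, first horizontally, then vertically.
        top = lerp(texture[y0][x0], texture[y0][x1], fx)
        bottom = lerp(texture[y1][x0], texture[y1][x1], fx)
        return lerp(top, bottom, fy)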

Other styles of 3D rendering

For some applications, simplified rendering styles such as wireframe rendering may be appropriate, particularly when the material and surface details have not been defined and only the shape of an object is known. Games and other real-time applications may use simpler and less realistic rendering techniques as an artistic or design choice, or to allow higher frame rates on lower-end hardware.
Orthographic and isometric projections can be used for a stylized effect or to ensure that parallel lines are depicted as parallel in CAD rendering.
Non-photorealistic rendering uses techniques like edge detection and posterization to produce 3D images that resemble technical illustrations, cartoons, or other styles of drawing or painting.
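A minimal sketch of posterization, one of the techniques mentioned above: each color channel is quantized to a small number of levels, producing the flat bands of color typical of cartoon-style images (the level count here is an arbitrary example):

    def posterize(color, levels=4):
        # color: (r, g, b) with channels in 0-255.
        step = 255 / (levels - 1)
        return tuple(int(round(round(c / step) * step)) for c in color)

    print(posterize((37, 130, 220)))   # -> (0, 170, 255)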

2D rendering

In 2D computer graphics the positions and sizes of shapes are specified using 2D coordinates instead of 3D coordinates. 2D rendering APIs often use a resolution-independent coordinate system, with a viewport determining how to convert coordinates to pixel indexes called device coordinates. Transformations such as scaling, translation, and rotation may be applied before rendering the shapes. These affine transformations are often represented by 3 × 3 matrices, allowing easier composition of transformations.
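A minimal sketch of how 2D affine transformations can be represented as 3 × 3 matrices acting on homogeneous coordinates (x, y, 1), so that translation, rotation, and scaling compose by matrix multiplication:

    import math

    def translate(tx, ty):
        return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

    def scale(sx, sy):
        return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

    def rotate(angle):
        c, s = math.cos(angle), math.sin(angle)
        return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

    def multiply(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

    def apply(m, x, y):
        # Transform the point (x, y, 1) and drop the homogeneous coordinate.
        return (m[0][0] * x + m[0][1] * y + m[0][2],
                m[1][0] * x + m[1][1] * y + m[1][2])

    # Compose "rotate by 90 degrees, then translate"; the matrix nearest the point acts first.
    transform = multiply(translate(10, 0), rotate(math.pi / 2))
    print(apply(transform, 1, 0))   # approximately (10, 1)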
Higher-quality 2D rendering engines such as SVG renderers usually implement anti-aliasing to reduce the jagged appearance of rasterized lines and shape edges. When rendering overlapping shapes, renderers commonly use a "painter's model" in which the shapes are drawn in some determined order, or their contributions to each pixel are composited using blending operations that may depend on the order of the inputs. Renderers may allow giving shapes a "z index" or "stacking order" to specify the rendering or blending order.
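A minimal sketch of the painter's model combined with source-over alpha compositing, assuming colors are stored with premultiplied alpha; the shape representation is an illustrative placeholder:

    def over(src, dst):
        # Standard source-over operator for premultiplied-alpha RGBA colors.
        sr, sg, sb, sa = src
        dr, dg, db, da = dst
        return (sr + dr * (1 - sa),
                sg + dg * (1 - sa),
                sb + db * (1 - sa),
                sa + da * (1 - sa))

    # Shapes as (z_index, premultiplied RGBA color covering this pixel),
    # drawn from the lowest to the highest stacking order.
    shapes = [(2, (0.5, 0.0, 0.0, 0.5)), (1, (0.0, 0.0, 1.0, 1.0))]
    pixel = (0.0, 0.0, 0.0, 0.0)   # start from a fully transparent pixel
    for _, color in sorted(shapes, key=lambda s: s[0]):
        pixel = over(color, pixel)
    print(pixel)   # opaque blue with half-transparent red over it: (0.5, 0.0, 0.5, 1.0)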
2D rendering typically does not simulate light propagation. Effects such as drop shadows and transparency are defined by mathematical functions with no physical basis.
2D rendering for print output may need to support very high resolutions, e.g. 600 or 1200 DPI for a typical laser printer, or 2400 DPI or higher for an imagesetter or platesetter. Grayscale and color images require halftones and color separations. A rendering engine called a raster image processor converts input data such as PDF files into the high-resolution bitmap images used by the printer.
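A minimal sketch of one simple halftoning approach, ordered dithering with a 4 × 4 Bayer threshold matrix; production raster image processors use much more sophisticated screening:

    BAYER_4 = [[ 0,  8,  2, 10],
               [12,  4, 14,  6],
               [ 3, 11,  1,  9],
               [15,  7, 13,  5]]

    def halftone(brightness, width, height):
        # brightness: function mapping (x, y) to a gray level in [0, 1]; returns a 0/1 bitmap.
        bitmap = []
        for y in range(height):
            row = []
            for x in range(width):
                threshold = (BAYER_4[y % 4][x % 4] + 0.5) / 16.0
                row.append(1 if brightness(x, y) > threshold else 0)
            bitmap.append(row)
        return bitmap

    # Example: a horizontal brightness ramp rendered as a dot pattern.
    ramp = halftone(lambda x, y: x / 15.0, 16, 8)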

Inputs

Before a 3D scene or 2D image can be rendered, it must be described in a way that the rendering software can understand. Historically, inputs for both 2D and 3D rendering were usually text files, which are easier than binary files for humans to edit and debug. For 3D graphics, text formats have largely been supplanted by more efficient binary formats, and by APIs which allow interactive applications to communicate directly with a rendering component without generating a file on disk.
Traditional rendering algorithms use geometric descriptions of 3D scenes or 2D images. Applications and algorithms that render visualizations of data scanned from the real world, or scientific simulations, may require different types of input data.
The PostScript format provides a standardized, interoperable way to describe 2D graphics and page layout. The Scalable Vector Graphics format is also text-based, and the PDF format uses the PostScript language internally. In contrast, although many 3D graphics file formats have been standardized, different rendering applications typically use formats tailored to their needs, and this has led to a proliferation of proprietary and open formats, with binary files being more common.

2D vector graphics

A vector graphics image description may include:
  • Coordinates and curvature information for line segments, arcs, and Bézier curves
  • Center coordinates, width, and height of basic shapes such as rectangles, circles and ellipses
  • Color, width and pattern for rendering lines
  • Colors, patterns, and gradients for filling shapes
  • Bitmap image data along with scale and position information
  • Text to be rendered
  • Clipping information, if only part of a shape or bitmap image should be rendered
  • Transparency and compositing information for rendering overlapping shapes
  • Color space information, allowing the image to be rendered consistently on different displays and printers
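A minimal sketch of what such a description might look like as an in-memory data structure; the field names and values are illustrative and do not follow any particular file format:

    image = {
        "size": (200, 100),                      # width and height in user units
        "shapes": [
            {"type": "rect", "center": (50, 50), "width": 80, "height": 60,
             "fill": "#d0e0ff", "stroke": {"color": "#000000", "width": 2.0}},
            {"type": "path",
             "segments": [("move", (100, 20)),
                          ("line", (180, 20)),
                          ("cubic", (180, 80), (100, 80), (100, 20))],
             "fill": {"gradient": ("#0000ff", "#00ff00")}, "opacity": 0.5},
            {"type": "text", "position": (20, 90), "content": "Label", "fill": "#000000"},
        ],
    }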