Texture mapping
Texture mapping is a term used in computer graphics to describe how 2D images are projected onto 3D models. The most common variant is the UV unwrap, which can be described as an inverse paper cutout, where the surfaces of a 3D model are cut apart so that the model can be unfolded into a 2D coordinate space.
Semantics
Texture mapping can variously refer to the task of unwrapping a 3D model, to the application of a 2D texture map onto the surface of a 3D model, and to the 3D software algorithm that performs both tasks. A texture map refers to a 2D image that adds visual detail to a 3D model. The image can be stored as a raster graphic. A texture map that stores a specific property, such as bumpiness, reflectivity, or transparency, is named for that property, for example a bump map or roughness map.
The 2D coordinate space into which a 3D model's surface is mapped for sampling from the texture map is variously called UV space, UV coordinates, or texture space.
Algorithm
The following is a simplified explanation of how such an algorithm could render an image (a minimal code sketch follows the list):
- For each pixel of the output image, trace the screen coordinates into the 3D scene.
- If a 3D model is hit, or more precisely a polygon of a 3D model, determine the 2D UV coordinates at the point that was hit.
- Use the UV coordinates to read the color from the texture map and apply it to the pixel.
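As an illustration, the following is a minimal sketch of this loop in C++. The screen-aligned triangle, the tiny checkerboard standing in for the texture map, the barycentric weights used in place of a real ray intersection test, and nearest-neighbour sampling are all simplifying assumptions.

// Minimal sketch of the per-pixel loop described above. The triangle's screen
// vertices are (0,0), (8,0) and (0,8), so its barycentric weights reduce to
// px/8 and py/8; a real renderer would perform a full ray or coverage test.
#include <cstdio>
#include <cstdint>

struct Vec2 { float u, v; };

// A 2x2 checkerboard stands in for the texture map image.
const uint8_t kTexture[2][2] = {{255, 0}, {0, 255}};

// Nearest-neighbour lookup at UV coordinates in [0,1].
uint8_t SampleTexture(Vec2 uv) {
    int x = uv.u < 0.5f ? 0 : 1;
    int y = uv.v < 0.5f ? 0 : 1;
    return kTexture[y][x];
}

int main() {
    // UV coordinates assigned to the three triangle vertices.
    const Vec2 uv[3] = {{0, 0}, {1, 0}, {0, 1}};

    for (int py = 0; py < 8; ++py) {        // for each pixel of the image...
        for (int px = 0; px < 8; ++px) {
            // ...trace it into the scene: here, a barycentric inside test.
            float w1 = px / 8.0f, w2 = py / 8.0f, w0 = 1.0f - w1 - w2;
            if (w0 < 0.0f) { printf("  "); continue; }  // pixel misses the polygon
            // Interpolate the UV coordinates at the hit point...
            Vec2 hit = {w0 * uv[0].u + w1 * uv[1].u + w2 * uv[2].u,
                        w0 * uv[0].v + w1 * uv[1].v + w2 * uv[2].v};
            // ...and read the colour from the texture map for this pixel.
            printf("%c ", SampleTexture(hit) ? '#' : '.');
        }
        printf("\n");
    }
    return 0;
}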
History
Texture mapping originally referred to diffuse mapping, a method that simply mapped pixels from a texture to a 3D surface. In recent decades, the advent of multi-pass rendering, multitexturing, mipmaps, and more complex mappings such as height mapping, bump mapping, normal mapping, displacement mapping, reflection mapping, specular mapping, occlusion mapping, and many other variations on the technique has made it possible to simulate near-photorealism in real time by vastly reducing the number of polygons and lighting calculations needed to construct a realistic and functional 3D scene.
Texture maps
A texture map is an image applied to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. Texture maps may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles. They may have one to three dimensions, although two dimensions are most common for visible surfaces. For use with modern hardware, texture map data may be stored in swizzled or tiled orderings to improve cache coherency. Rendering APIs typically manage texture map resources as buffers or surfaces, and may allow 'render to texture' for additional effects such as post processing or environment mapping.
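As a sketch of what a swizzled ordering can look like, the following computes Morton (Z-order) indices, one common way of interleaving texel coordinates so that neighbouring texels stay close in memory; actual hardware layouts are vendor specific and this example is only illustrative.

// Computes a Morton (Z-order) index for a texel coordinate. Interleaving the
// bits of x and y keeps nearby texels close together in memory, which is one
// way of improving cache coherency; real hardware layouts are vendor specific.
#include <cstdio>
#include <cstdint>

// Spread the lower 16 bits of v apart so a zero bit separates each bit.
uint32_t Part1By1(uint32_t v) {
    v &= 0x0000FFFF;
    v = (v | (v << 8)) & 0x00FF00FF;
    v = (v | (v << 4)) & 0x0F0F0F0F;
    v = (v | (v << 2)) & 0x33333333;
    v = (v | (v << 1)) & 0x55555555;
    return v;
}

// Interleave the bits of x and y to form the swizzled memory offset.
uint32_t MortonIndex(uint32_t x, uint32_t y) {
    return Part1By1(x) | (Part1By1(y) << 1);
}

int main() {
    // Print the memory offset of each texel in a 4x4 texture stored in Z-order.
    for (uint32_t y = 0; y < 4; ++y) {
        for (uint32_t x = 0; x < 4; ++x)
            printf("%2u ", MortonIndex(x, y));
        printf("\n");
    }
    return 0;
}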
Texture maps usually contain RGB color data, and sometimes an additional channel for alpha blending, especially for billboards and decal overlay textures. It is also possible to use the alpha channel for other purposes, such as specularity.
Multiple texture maps may be combined for control over specularity, normals, displacement, or subsurface scattering, e.g. for skin rendering.
Multiple texture images may be combined in texture atlases or array textures to reduce state changes on modern hardware. Modern hardware often supports cube map textures with multiple faces for environment mapping.
Creation
Texture maps may be acquired by scanning or digital photography, designed in image manipulation software such as GIMP or Photoshop, or painted directly onto 3D surfaces in a 3D paint tool such as Mudbox or ZBrush.
Texture application
This process is akin to applying patterned paper to a plain white box. Every vertex in a polygon is assigned a texture coordinate. This may be done through explicit assignment of vertex attributes, manually edited in a 3D modelling package through UV unwrapping tools. It is also possible to associate a procedural transformation from 3D space to texture space with the material. This might be accomplished via planar projection or, alternatively, cylindrical or spherical mapping. More complex mappings may consider the distance along a surface to minimize distortion. These coordinates are interpolated across the faces of polygons to sample the texture map during rendering. Textures may be repeated or mirrored to extend a finite rectangular bitmap over a larger area, or they may have a one-to-one unique "injective" mapping from every piece of a surface.
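The following is a minimal sketch of one such procedural transformation, a planar projection that assigns UV coordinates to vertices by dropping one axis; the choice of plane, the bounding box, and the vertex data are assumptions made for illustration.

// Planar projection: UV coordinates are assigned to vertices by dropping the
// Z axis and scaling the remaining X and Y into [0,1]. The plane choice,
// bounding box, and vertex data are illustrative assumptions.
#include <cstdio>
#include <vector>

struct Vertex { float x, y, z, u, v; };

void PlanarProjectXY(std::vector<Vertex>& verts,
                     float minX, float maxX, float minY, float maxY) {
    for (Vertex& p : verts) {
        p.u = (p.x - minX) / (maxX - minX);   // map X into [0,1]
        p.v = (p.y - minY) / (maxY - minY);   // map Y into [0,1]
    }
}

int main() {
    std::vector<Vertex> verts = {
        {-1, -1, 0}, {1, -1, 2}, {1, 1, 0}, {-1, 1, 2}};
    PlanarProjectXY(verts, -1, 1, -1, 1);
    for (const Vertex& p : verts)
        printf("vertex (%g, %g, %g) -> uv (%g, %g)\n", p.x, p.y, p.z, p.u, p.v);
    return 0;
}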
Texture space
Texture mapping maps the model surface into texture space; in this space, the texture map is visible in its undistorted form. UV unwrapping tools typically provide a view in texture space for manual editing of texture coordinates. Some rendering techniques, such as subsurface scattering, may be performed approximately by texture-space operations.
Multitexturing
Multitexturing is the use of more than one texture at a time on a polygon. For instance, a light map texture may be used to light a surface as an alternative to recalculating that lighting every time the surface is rendered. Microtextures or detail textures are used to add higher frequency details, and dirt maps add weathering and variation; this can greatly reduce the apparent periodicity of repeating textures. Modern graphics may use more than 10 layers, which are combined using shaders, for greater fidelity. Another multitexture technique is bump mapping, which allows a texture to directly control the facing direction of a surface for the purposes of its lighting calculations; it can give a very good appearance of a complex surface that takes on lighting detail in addition to the usual detailed coloring. Bump mapping has become popular in video games as graphics hardware has become powerful enough to accommodate it in real time.
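As a sketch of how two texture layers might be combined, the following modulates a colour map texel by a light map texel; the multiply blend and the example values are illustrative, and real engines typically perform such combinations in shaders.

// Multitexturing sketch: modulate a colour map texel by a light map texel.
// The values and the multiply blend are illustrative; engines typically
// perform this combination in a fragment shader across many layers.
#include <cstdio>
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Combine one texel from each layer: colour * light / 255 per channel.
RGB Modulate(RGB colour, uint8_t light) {
    return {static_cast<uint8_t>(colour.r * light / 255),
            static_cast<uint8_t>(colour.g * light / 255),
            static_cast<uint8_t>(colour.b * light / 255)};
}

int main() {
    RGB brick = {180, 90, 60};   // texel from the colour (diffuse) map
    uint8_t lightmap = 128;      // texel from the precomputed light map
    RGB lit = Modulate(brick, lightmap);
    printf("lit texel: %u %u %u\n", lit.r, lit.g, lit.b);
    return 0;
}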
Texture filtering
The way that samples are calculated from the texels is governed by texture filtering. The cheapest method is nearest-neighbour interpolation, but bilinear interpolation and trilinear interpolation between mipmaps are two commonly used alternatives which reduce aliasing or jaggies. If a texture coordinate falls outside the texture, it is either clamped or wrapped. Anisotropic filtering better eliminates directional artefacts when viewing textures from oblique viewing angles.
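The following is a small sketch of bilinear filtering with clamped addressing, blending the four texels nearest to the sample point; the texture contents and sizes are assumptions for illustration.

// Bilinear filtering sketch: sample a single-channel texture by blending the
// four texels nearest to the sample point, with coordinates clamped to the
// texture's edges. The texture contents here are illustrative.
#include <cstdio>
#include <cmath>
#include <algorithm>

const int W = 4, H = 4;
const float kTexels[H][W] = {
    {0, 0, 1, 1}, {0, 0, 1, 1}, {1, 1, 0, 0}, {1, 1, 0, 0}};

float Texel(int x, int y) {
    x = std::clamp(x, 0, W - 1);          // clamp addressing mode
    y = std::clamp(y, 0, H - 1);
    return kTexels[y][x];
}

float SampleBilinear(float u, float v) {
    // Convert UV in [0,1] to texel space, measured from texel centres.
    float x = u * W - 0.5f, y = v * H - 0.5f;
    int x0 = (int)std::floor(x), y0 = (int)std::floor(y);
    float fx = x - x0, fy = y - y0;       // fractional position between texels
    float top    = Texel(x0, y0)     * (1 - fx) + Texel(x0 + 1, y0)     * fx;
    float bottom = Texel(x0, y0 + 1) * (1 - fx) + Texel(x0 + 1, y0 + 1) * fx;
    return top * (1 - fy) + bottom * fy;
}

int main() {
    printf("%.3f\n", SampleBilinear(0.5f, 0.25f));
    return 0;
}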
Texture streaming
Texture streaming is a means of loading texture data on demand, where each texture is available in two or more resolutions; which resolution is loaded into memory and used is determined by the distance from the viewer and the amount of memory available for textures. Texture streaming allows a rendering engine to use low-resolution textures for objects far away from the viewer's camera, and resolve those into more detailed textures, read from a data source, as the point of view nears the objects.
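A minimal sketch of the selection step might look like the following, choosing which resolution of a texture to keep resident based on distance from the camera; the thresholds and resolutions are illustrative assumptions.

// Texture streaming sketch: pick which resolution of a texture to keep in
// memory based on distance from the camera. The distance thresholds and the
// set of available resolutions are illustrative assumptions.
#include <cstdio>

// Returns the texture width (in texels) to stream in for a given distance.
int SelectResolution(float distance) {
    if (distance < 10.0f) return 2048;   // close: full-resolution texture
    if (distance < 50.0f) return 512;    // mid-range: reduced resolution
    return 128;                          // far away: low-resolution texture
}

int main() {
    const float distances[] = {5.0f, 30.0f, 200.0f};
    for (float d : distances)
        printf("distance %.0f -> stream %d x %d texture\n",
               d, SelectResolution(d), SelectResolution(d));
    return 0;
}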
Baking
As an optimization, it is possible to render detail from a complex, high-resolution model or an expensive process into a surface texture. This technique is called baking and is most commonly used for light maps, but may also be used to generate normal maps and displacement maps. Some computer games have used this technique; the original Quake software engine used on-the-fly baking to combine light maps and colour maps in a process called surface caching. Baking can be used as a form of level of detail generation, where a complex scene with many different elements and materials may be approximated by a single element with a single texture, which is then algorithmically reduced for lower rendering cost and fewer draw calls. It is also used to take high-detail models from 3D sculpting software and point cloud scanning and approximate them with meshes more suitable for realtime rendering.
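As a sketch of the baking idea, the following evaluates a simple lighting term once per texel of a flat quad and stores the result in a light map; the quad layout, light position, and falloff are assumptions, and a real baker would also trace rays for shadows and occlusion.

// Baking sketch: evaluate lighting once per texel of a flat quad and store it
// in a light map, so the result can be reused cheaply at render time. The
// quad layout, point-light position, and falloff are illustrative assumptions.
#include <cstdio>

const int W = 8, H = 8;
float lightmap[H][W];

int main() {
    // A point light hovering above the centre of a unit quad in the XY plane.
    const float lx = 0.5f, ly = 0.5f, lz = 0.5f;

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Surface point corresponding to this texel (texel centre).
            float px = (x + 0.5f) / W, py = (y + 0.5f) / H;
            float dx = lx - px, dy = ly - py, dz = lz;
            float distSq = dx * dx + dy * dy + dz * dz;
            // Store a simple inverse-square falloff; a real baker would also
            // trace rays for shadows and occlusion before writing the texel.
            lightmap[y][x] = 1.0f / (1.0f + distSq);
        }
    }
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) printf("%.2f ", lightmap[y][x]);
        printf("\n");
    }
    return 0;
}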
Rasterisation algorithms
Various techniques have evolved in software and hardware implementations. Each offers different trade-offs in precision, versatility, and performance.
Affine texture mapping
Affine texture mapping linearly interpolates texture coordinates across a surface, making it the fastest form of texture mapping. Some software and hardware project vertices in 3D space onto the screen during rendering and linearly interpolate the texture coordinates in screen space between them. This may be done by incrementing fixed-point UV coordinates or by an incremental error algorithm akin to Bresenham's line algorithm. Unless the polygon is perpendicular to the viewer, this leads to noticeable distortion with perspective transformations, especially as primitives approach the camera. This distortion can be reduced by subdividing polygons into smaller polygons.
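The following is a minimal sketch of this incremental approach for a single horizontal span, stepping 16.16 fixed-point UV coordinates once per pixel; the fixed-point format and endpoint values are assumptions for illustration.

// Affine texture mapping sketch: step fixed-point UV coordinates across one
// horizontal span of pixels. The 16.16 fixed-point format and the endpoint
// values are illustrative assumptions.
#include <cstdio>
#include <cstdint>

int main() {
    const int spanLength = 8;
    // UV at the span's left endpoint and right endpoint, in 16.16 fixed point.
    int32_t u = 0, v = 0;
    const int32_t uEnd = 7 << 16, vEnd = 14 << 16;
    // Per-pixel increments are constant across the span, which is what makes
    // affine mapping fast; there is no per-pixel divide, so it is not
    // perspective correct.
    const int32_t du = (uEnd - u) / (spanLength - 1);
    const int32_t dv = (vEnd - v) / (spanLength - 1);

    for (int x = 0; x < spanLength; ++x) {
        printf("pixel %d samples texel (%d, %d)\n", x, u >> 16, v >> 16);
        u += du;
        v += dv;
    }
    return 0;
}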
Using quad primitives for rectangular objects can look less incorrect than if those rectangles were split into triangles. However, since interpolating four points adds complexity to the rasterization, most early implementations preferred triangles only. Some hardware, such as the forward texture mapping used by the Nvidia NV1, offered efficient quad primitives. With perspective correction, triangles become equivalent to quad primitives and this advantage disappears.
For rectangular objects that are at right angles to the viewer, such as floors and walls, the perspective only needs to be corrected in one direction across the screen rather than both. For example, the correct perspective mapping can be calculated at the left and right edges of a floor span, and affine interpolation across that horizontal span will look correct, because every pixel along that line is the same distance from the viewer.
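The following sketch illustrates this for a simple floor: depth is computed once per scanline, the perspective-correct texture coordinate is evaluated only at the span's edges, and affine interpolation fills the pixels in between; the camera height, focal scale, and span width are assumptions.

// Floor-rendering sketch: for each horizontal scanline of a floor, the depth
// is constant, so perspective-correct texture coordinates are computed only
// at the span's left and right edges, and affine interpolation fills the rest.
// The camera height, focal scale, and span width are illustrative assumptions.
#include <cstdio>

int main() {
    const float cameraHeight = 2.0f;   // height of the viewer above the floor
    const int spanWidth = 4;           // pixels per scanline (tiny for output)

    for (int scanline = 1; scanline <= 3; ++scanline) {
        // Depth of the floor at this scanline, from similar triangles
        // (8.0 is an assumed focal scale); one divide per row, not per pixel.
        float z = cameraHeight * 8.0f / scanline;
        // Perspective-correct texture coordinate at the left and right edges.
        float uLeft = -z, uRight = z;
        for (int x = 0; x < spanWidth; ++x) {
            float t = x / (float)(spanWidth - 1);
            float u = (1 - t) * uLeft + t * uRight;   // affine across the span
            printf("%6.1f ", u);
        }
        printf("  (scanline %d, depth %.1f)\n", scanline, z);
    }
    return 0;
}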
Perspective correctness
Perspective correctness accounts for the vertices' positions in 3D space rather than simply interpolating coordinates in 2D screen space. While achieving the correct visual effect, perspective correct texturing is more expensive to calculate. To perform perspective correction of the texture coordinates u and v, with z being the depth component from the viewer's point of view, it is possible to take advantage of the fact that the values 1/z, u/z, and v/z are linear in screen space across the surface being textured. In contrast, the original u, v, and z, before the division, are not linear across the surface in screen space. It is therefore possible to linearly interpolate these reciprocals across the surface, computing corrected values at each pixel, to produce a perspective correct texture mapping.
To do this, the reciprocals at each vertex of the geometry are calculated first: for vertex n these are u_n/z_n, v_n/z_n, and 1/z_n. Linear interpolation of these reciprocals between the vertices then gives interpolated values across the surface. At a given point, this yields the interpolated u/z and v/z. However, as the division by z altered their coordinate system, these cannot be used directly as texture coordinates. To correct back to u, v space, the corrected depth is calculated by taking the reciprocal once again: z_correct = 1/(1/z). This is then used to correct the coordinates: u = (u/z) · z_correct and v = (v/z) · z_correct.
This correction makes it so that the difference from pixel to pixel between texture coordinates is smaller in parts of the polygon that are closer to the viewer and is larger in parts that are farther away.
Affine texture mapping directly interpolates a texture coordinate u_α between two endpoints u_0 and u_1:
u_α = (1 − α) u_0 + α u_1, where 0 ≤ α ≤ 1.
Perspective correct mapping interpolates after dividing by depth z, then uses the interpolated reciprocal 1/z to recover the correct coordinate:
u_α = ((1 − α) u_0/z_0 + α u_1/z_1) / ((1 − α)/z_0 + α/z_1)
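The following is a small numerical sketch comparing the two formulas for a pair of endpoints with assumed texture coordinates and depths; it shows how the affine and perspective correct results diverge when the depths differ.

// Compares affine and perspective-correct interpolation of a texture
// coordinate u between two endpoints. The endpoint coordinates and depths
// are illustrative assumptions.
#include <cstdio>

int main() {
    // Endpoint texture coordinates and depths.
    float u0 = 0.0f, z0 = 1.0f;   // near endpoint
    float u1 = 1.0f, z1 = 4.0f;   // far endpoint

    for (float a = 0.0f; a <= 1.001f; a += 0.25f) {
        // Affine: linear interpolation of u in screen space.
        float affine = (1 - a) * u0 + a * u1;
        // Perspective correct: interpolate u/z and 1/z, then divide.
        float u_over_z   = (1 - a) * (u0 / z0) + a * (u1 / z1);
        float one_over_z = (1 - a) * (1 / z0) + a * (1 / z1);
        float correct = u_over_z / one_over_z;
        printf("alpha=%.2f  affine u=%.3f  perspective-correct u=%.3f\n",
               a, affine, correct);
    }
    return 0;
}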
3D graphics hardware typically supports perspective correct texturing.
Various techniques for rendering texture mapped geometry into images have evolved, with different quality and precision trade-offs, applicable to both software and hardware. Classic software texture mappers generally performed only simple texture mapping with at most one lighting effect, and perspective correctness was about 16 times more expensive.