Deep Learning Super Sampling


Deep Learning Super Sampling (DLSS) is a suite of real-time deep learning image enhancement and upscaling technologies developed by Nvidia that are available in a number of video games. The goal of these technologies is to allow the majority of the graphics pipeline to run at a lower resolution for increased performance, and then infer a higher-resolution image that approximates the level of detail the image would have had if it had been rendered at that higher resolution. This allows for higher graphical settings or frame rates at a given output resolution, depending on user preference.
All generations of DLSS are available on all RTX-branded cards from Nvidia in supported titles. However, the Frame Generation feature is only supported on GeForce RTX 40 series GPUs or newer, and Multi Frame Generation is only available on RTX 50 series GPUs.

History

Nvidia advertised DLSS as a key feature of the GeForce 20 series cards when they launched in September 2018. At that time, the results were limited to a few video games, namely Battlefield V and Metro Exodus, because the algorithm had to be trained specifically on each game to which it was applied, and the results were usually not as good as simple resolution upscaling. In 2019, the video game Control shipped with real-time ray tracing and an image processing algorithm that approximated DLSS but did not use the Tensor Cores.
In April 2020, Nvidia advertised and shipped an improved version of DLSS named DLSS 2.0 with driver version 445.75. DLSS 2.0 was available for a few existing games, including Control and Wolfenstein: Youngblood, and would later be added to many newly released games and game engines such as Unreal Engine and Unity. Nvidia stated that DLSS 2.0 again made use of the Tensor Cores, and that the AI no longer needed to be trained specifically on each game. Despite sharing the DLSS branding, the two iterations differ significantly and are not backwards-compatible.
In January 2025, Nvidia stated that there are over 540 games and apps supporting DLSS, and that over 80% of Nvidia RTX users activate DLSS.
In March 2025, more than 100 games supported DLSS 4, according to Nvidia; by May 2025, over 125 games supported it.
The first video game console to use DLSS, the Nintendo Switch 2, was released on June 5, 2025.
Nvidia announced DLSS 4.5 at CES 2026.
In January 2026, Nvidia stated that over 250 games and applications support Multi Frame Generation.

Release history

Quality presets

When using DLSS, depending on the game, users have access to various quality presets in addition to the option to set the internally rendered, upscaled resolution manually. The commonly used presets and their approximate internal render resolutions, per axis and relative to the output resolution, are:

DLAA: 100% (native resolution, anti-aliasing only)
Quality: 66.7%
Balanced: 58%
Performance: 50%
Ultra Performance: 33.3%
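The internal render resolution follows from a preset's per-axis scale factor by simple multiplication. The sketch below illustrates this using the commonly cited factors listed above; the exact factors are set by Nvidia and may differ between titles and DLSS versions, so the values here are an approximation rather than an official specification.

```python
# Illustrative only: approximate per-axis render-scale factors for common DLSS presets.
# The exact factors are defined by Nvidia and may vary by title and DLSS version.
PRESET_SCALE = {
    "DLAA": 1.000,               # native resolution, anti-aliasing only
    "Quality": 0.667,
    "Balanced": 0.580,
    "Performance": 0.500,
    "Ultra Performance": 0.333,
}

def internal_resolution(output_w: int, output_h: int, preset: str) -> tuple[int, int]:
    """Return the internally rendered resolution for a given output resolution and preset."""
    scale = PRESET_SCALE[preset]
    return round(output_w * scale), round(output_h * scale)

# Example: a 4K output with the Performance preset is rendered internally at 1920x1080.
print(internal_resolution(3840, 2160, "Performance"))  # (1920, 1080)
```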

Implementation

DLSS 1.0

The first iteration of DLSS is a predominantly spatial image upscaler with two stages, both relying on convolutional auto-encoder neural networks. The first stage is an image enhancement network that uses the current frame and motion vectors to perform edge enhancement and spatial anti-aliasing. The second stage is an image upscaling step that uses the single raw, low-resolution frame to upscale the image to the desired output resolution. Because only a single frame is used for upscaling, the neural network itself must generate a large amount of new information to produce the high-resolution output; this can result in slight hallucinations, such as leaves that differ in style from the source content.
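Nvidia has not published the DLSS 1.0 network architecture, so the following PyTorch sketch only illustrates the general shape of a convolutional auto-encoder that upscales a single low-resolution frame; the layer sizes, the 2x scale factor, and the name ToyUpscaler are arbitrary choices for illustration rather than details of the actual model.

```python
# A minimal, illustrative convolutional auto-encoder upscaler (not Nvidia's actual model).
# It encodes a single low-resolution frame and decodes it at twice the input resolution.
import torch
import torch.nn as nn

class ToyUpscaler(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress the low-resolution frame into a feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=1, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),  # 1/2 resolution
        )
        # Decoder: expand the features back up, ending at 2x the original input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # back to 1x
            nn.ConvTranspose2d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(),  # 2x
            nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, low_res_frame: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(low_res_frame))

# Example: upscale a 960x540 frame to 1920x1080.
frame = torch.rand(1, 3, 540, 960)
print(ToyUpscaler()(frame).shape)  # torch.Size([1, 3, 1080, 1920])
```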
The neural networks are trained on a per-game basis by generating a "perfect frame" using traditional supersampling at 64 samples per pixel, together with the motion vectors for each frame. The collected data must be as comprehensive as possible, covering as many levels, times of day, graphical settings, resolutions, and so on as possible. This data is also augmented using common augmentations such as rotations, colour changes, and random noise to help the trained network generalize. Training is performed on Nvidia's Saturn V supercomputer.
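The exact training pipeline is not public; as a rough sketch of the kinds of augmentations described above (rotations, colour changes, random noise), assuming frames are stored as floating-point arrays in the range [0, 1]:

```python
# Generic data augmentations of the kind described above; not Nvidia's actual pipeline.
import numpy as np

def augment(frame: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """frame: H x W x 3 array with values in [0, 1]."""
    # Random 90-degree rotation.
    frame = np.rot90(frame, k=rng.integers(0, 4), axes=(0, 1))
    # Random per-channel colour scaling (a simple colour change).
    frame = frame * rng.uniform(0.8, 1.2, size=(1, 1, 3))
    # Additive Gaussian noise.
    frame = frame + rng.normal(0.0, 0.01, size=frame.shape)
    return np.clip(frame, 0.0, 1.0)

rng = np.random.default_rng(0)
augmented = augment(rng.random((1080, 1920, 3)), rng)
```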
This first iteration received a mixed response, with many criticizing the often soft appearance, along with artifacts and glitches in certain situations. These issues were likely a side effect of the limited data from using only a single frame as input to neural networks that could not be trained to perform optimally in every scenario and edge case. Nvidia also demonstrated that the auto-encoder networks could learn to recreate depth of field and motion blur, although this functionality has never been included in a publicly released product.

DLSS 2.0

DLSS 2.0 is a temporal anti-aliasing upsampling (TAAU) implementation that makes extensive use of data from previous frames, combined with sub-pixel jittering, to resolve fine detail and reduce aliasing. The data DLSS 2.0 collects includes the raw low-resolution input, motion vectors, depth buffers, and exposure/brightness information. It can also be used as a simpler TAA implementation in which the image is rendered at 100% resolution rather than being upsampled; Nvidia brands this mode as DLAA (Deep Learning Anti-Aliasing).
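Nvidia does not document exactly how DLSS generates its camera jitter, but a low-discrepancy sequence such as the Halton sequence is a common choice for sub-pixel jitter in TAA-style techniques. The following sketch, with an arbitrarily chosen phase length, shows how such per-frame offsets can be produced:

```python
# Sub-pixel camera jitter offsets from a Halton(2, 3) low-discrepancy sequence,
# a common choice for temporal anti-aliasing and TAAU-style upsampling.
def halton(index: int, base: int) -> float:
    """Return the index-th element (1-based) of the Halton sequence for the given base."""
    result, fraction = 0.0, 1.0
    while index > 0:
        fraction /= base
        result += fraction * (index % base)
        index //= base
    return result

def jitter_offset(frame_index: int, phase_length: int = 16) -> tuple[float, float]:
    """Per-frame (x, y) jitter in pixels, centred on zero, repeating every phase_length frames."""
    i = (frame_index % phase_length) + 1
    return halton(i, 2) - 0.5, halton(i, 3) - 0.5

for f in range(4):
    print(jitter_offset(f))
```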
TAA is used in many modern video games and game engines; however, all previous implementations have used some form of manually written heuristics to prevent temporal artifacts such as ghosting and flickering. One example is neighborhood clamping, which forcefully prevents samples collected in previous frames from deviating too much from nearby pixels in the current frame. This helps to identify and fix many temporal artifacts, but deliberately removing fine detail in this way is analogous to applying a blur filter, so the final image can appear blurry when using this method.
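As a concrete sketch of neighborhood clamping as used in hand-written TAA resolves (a generic heuristic, not DLSS code), the colour carried over from previous frames is clamped to the minimum and maximum of the current frame's 3x3 neighborhood around each pixel before blending:

```python
# Generic neighborhood clamping as used in hand-written TAA resolves (not DLSS code):
# the color carried over from previous frames is clamped to the per-pixel min/max of the
# current frame's 3x3 neighborhood, which suppresses ghosting at the cost of fine detail.
import numpy as np

def neighborhood_clamp(history: np.ndarray, current: np.ndarray) -> np.ndarray:
    """history, current: H x W x 3 color buffers; returns the clamped history buffer."""
    h, w, _ = current.shape
    padded = np.pad(current, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # Gather the 3x3 neighborhood of every pixel and take the per-pixel min and max.
    neighbors = np.stack([padded[dy:dy + h, dx:dx + w] for dy in range(3) for dx in range(3)])
    return np.clip(history, neighbors.min(axis=0), neighbors.max(axis=0))

def taa_resolve(history: np.ndarray, current: np.ndarray, blend: float = 0.1) -> np.ndarray:
    """Blend the clamped history with the current frame (exponential accumulation)."""
    return blend * current + (1.0 - blend) * neighborhood_clamp(history, current)
```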
DLSS 2.0 uses a convolutional auto-encoder neural network trained to identify and fix temporal artifacts, instead of manually programmed heuristics as mentioned above. Because of this, DLSS 2.0 can generally resolve detail better than other TAA and TAAU implementations, while also removing most temporal artifacts. This is why DLSS 2.0 can sometimes produce a sharper image than rendering at higher, or even native resolutions using traditional TAA. However, no temporal solution is perfect, and artifacts are still visible in some scenarios when using DLSS 2.0.
Because temporal artifacts occur in most art styles and environments in broadly the same way, the neural network that powers DLSS 2.0 does not need to be retrained when used in different games. Nvidia nevertheless frequently ships new minor revisions of DLSS 2.0 with new titles, which could suggest that minor training optimizations are performed as games are released, although Nvidia does not provide changelogs for these revisions to confirm this. The main advancements compared to DLSS 1.0 are significantly improved detail retention, a generalized neural network that does not need to be retrained per game, and roughly half the overhead.
Forms of TAAU such as DLSS 2.0 are not upscalers in the same sense as techniques such as ESRGAN or DLSS 1.0, which attempt to create new information from a low-resolution source; instead, TAAU recovers data from previous frames rather than creating new data. In practice, this means low-resolution textures in a game will still appear low-resolution when using current TAAU techniques. For this reason, Nvidia recommends that game developers apply a mip-map bias when DLSS 2.0 is enabled, so that higher-resolution texture mips are used than would normally be selected at the internal rendering resolution.
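A common rule of thumb for such a bias in TAAU-style upscalers is the base-2 logarithm of the ratio between the internal render width and the output width; whether a given title or SDK adds a further constant offset on top of this is an implementation choice, so the sketch below shows only the basic form:

```python
# Texture mip-map (LOD) bias commonly applied when rendering below the output resolution,
# so that texture mip selection matches the upscaled output rather than the lower internal
# resolution. Negative values select sharper (higher-resolution) mip levels.
import math

def mip_bias(render_width: int, output_width: int) -> float:
    return math.log2(render_width / output_width)

# Example: a 1080p internal render upscaled to 4K output gives a bias of -1.0,
# i.e. textures are sampled one mip level sharper than the render resolution alone implies.
print(mip_bias(1920, 3840))  # -1.0
```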

DLSS 3.0

DLSS 3.0 augments DLSS 2.0 with improved image quality and introduces a new motion interpolation feature called Frame Generation. The DLSS Frame Generation algorithm takes two rendered frames from the rendering pipeline and generates a new frame that smoothly transitions between them, so that for every frame rendered, one additional frame is generated. DLSS 3.0 makes use of a new-generation Optical Flow Accelerator (OFA) included in the Ada Lovelace architecture of GeForce RTX 40 series GPUs and is therefore exclusive to that series. The new OFA is said to be faster and more accurate than the one available in the earlier Turing and Ampere RTX GPUs.
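Nvidia has not published the Frame Generation algorithm itself; the following sketch only illustrates the general idea of optical-flow-based frame interpolation, using nearest-neighbour backward warping and a plain average in place of whatever learned blending the real system performs:

```python
# Conceptual sketch of optical-flow-based frame interpolation (not Nvidia's algorithm):
# warp the two rendered frames halfway towards each other along the flow field and blend.
import numpy as np

def warp(frame: np.ndarray, flow: np.ndarray, t: float) -> np.ndarray:
    """Backward-warp `frame` by a fraction t of the per-pixel flow.
    frame: H x W x 3, flow: H x W x 2 in pixels (x displacement, y displacement)."""
    h, w, _ = frame.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_x = np.clip(xs - t * flow[..., 0], 0, w - 1).round().astype(int)
    src_y = np.clip(ys - t * flow[..., 1], 0, h - 1).round().astype(int)
    return frame[src_y, src_x]  # nearest-neighbour sampling for simplicity

def interpolate_midpoint(prev_frame, next_frame, flow_prev_to_next):
    """Generate one frame halfway between two rendered frames."""
    from_prev = warp(prev_frame, flow_prev_to_next, 0.5)   # pull prev content half a step forward
    from_next = warp(next_frame, -flow_prev_to_next, 0.5)  # pull next content half a step back
    return 0.5 * (from_prev + from_next)
```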

DLSS 3.5

DLSS 3.5 adds Ray Reconstruction, replacing multiple denoising algorithms with a single AI model trained on five times more data than DLSS 3. Ray Reconstruction is available on all RTX GPUs and first targeted games with path tracing, including Cyberpunk 2077's Phantom Liberty DLC, Portal with RTX, and Alan Wake 2.

DLSS 4.0

The fourth generation of DLSS was unveiled alongside the GeForce RTX 50 series. DLSS 4 upscaling uses a new vision transformer-based model for enhanced image quality, with reduced ghosting and greater image stability in motion compared to the previous convolutional neural network model. DLSS 4 also allows a greater number of frames to be generated and interpolated per traditionally rendered frame. This form of frame generation, called Multi Frame Generation, is exclusive to the GeForce RTX 50 series, while the GeForce RTX 40 series is limited to one interpolated frame per traditionally rendered frame. Nvidia claims that the DLSS 4 Frame Generation model uses 30% less video memory; for example, Warhammer 40,000: Darktide uses 400 MB less memory at 4K resolution with Frame Generation enabled. Nvidia claims that 75 games will integrate DLSS 4 Multi Frame Generation at launch, including Alan Wake 2, Cyberpunk 2077, Indiana Jones and the Great Circle, and Star Wars Outlaws.
DLSS 4 feature support by GPU generation:

Transformer model: GeForce RTX 20 series and newer
2× Frame Generation: GeForce RTX 40 series and newer
3–4× Multi Frame Generation: GeForce RTX 50 series only
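The generation models themselves are proprietary, but the difference between 2× Frame Generation and Multi Frame Generation can be summarized as evaluating the interpolator at several intermediate timesteps between two rendered frames rather than only at the midpoint, as in this purely conceptual sketch:

```python
# Conceptual only: Multi Frame Generation inserts several generated frames between
# consecutive rendered frames instead of a single midpoint frame.
def generated_timesteps(frames_between: int) -> list[float]:
    """Fractional positions of generated frames between two rendered frames:
    frames_between=1 corresponds to 2x mode, frames_between=3 to 4x mode."""
    return [i / (frames_between + 1) for i in range(1, frames_between + 1)]

print(generated_timesteps(1))  # [0.5]              -> 2x Frame Generation
print(generated_timesteps(3))  # [0.25, 0.5, 0.75]  -> 4x Multi Frame Generation
```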

DLSS 4.5

DLSS 4.5 introduces Dynamic Multi Frame Generation, which can dynamically generate up to six times as many frames on GeForce RTX 50 series GPUs, and upgrades the transformer model used for upscaling to a second-generation model with better temporal stability, reduced ghosting, and improved anti-aliasing.
On the RTX 30 series and older, DLSS 4.5 is around five times more computationally demanding, because Nvidia leverages the FP8 precision supported by RTX 40 and RTX 50 series cards to lessen the performance impact on those newer cards, while older architectures lack hardware support for FP8.