'It is basically DLSS. That’s the way graphics ought to be': Nvidia's Jensen Huang has a clear vision for the future of its gaming GPUs, and it's all about neural rendering

Perhaps to the surprise of no one, Nvidia's time at CES 2026 was all about one thing: AI. That said, PC gaming wasn't entirely ignored, as DLSS 4.5 was ninja-launched with the promise of '4K 240 Hz path traced gaming'. However, DLSS is still AI-based, and in a Q&A session with members of the press, CEO Jensen Huang made it clear that artificial intelligence isn't just for improving performance; it's how graphics needs to be done in the future.

This much we already know, as Nvidia banged its neural rendering drum starting at last year's CES and then throughout 2025, and it wasn't the only graphics company to do so. Microsoft announced the addition of cooperative vectors to Direct3D, which are pretty much required to implement neural rendering in games, and AMD's FSR Redstone is as AI-based as anything from Intel and Nvidia.

So, when PC World's Adam Patrick Murray asked Huang, "Is the RTX 5090 the fastest GPU that gamers will ever see in traditional rasterization? And what does an AI gaming GPU look like in the future?", it wasn't surprising that Nvidia's co-founder avoided the first question entirely and skipped straight to the topic of AI.

"I think that the answer is hard to predict. Maybe another way of saying it is that the future is neural rendering. It is basically DLSS. That’s the way graphics ought to be."

He then expanded with some examples of what he meant by this: "I would expect that the ability for us to generate imagery of almost any style from photo realism, extreme photo realism, basically a photograph interacting with you at 500 frames a second, all the way to cartoon shading, if you like."

Nvidia's neural rendering demo Zorah at GDC 2025 (Image credit: Future)

The keyword here is generate. If one wishes to be pedantic, all graphics are generated, either through rasterization or neural networks. It's all just a massive heap of mathematics, broken down into logic operations on GPUs, crunching through endless streams of binary values. But there is one important difference with neural rendering: it requires far less input data to generate the same graphical output as rasterization.

Fire up the original Crysis from 2007, and all those beautiful visuals are generated from lists of vertices, piles of texture maps, and a veritable mountain of resources that are created during the process of rendering (e.g. depth buffers, G-buffers, render targets, and so on). That's still the case almost 20 years on, and the size and quantity of those resources are now truly massive.

As DLSS Super Resolution proves, though, they don't need to be in the era of AI graphics. Nvidia's upscaling system renders each frame at a reduced resolution and then uses a neural network to scale it back up to the target resolution, cleaning up artefacts along the way. One idea behind neural rendering is to take that a step further and use lower resolution assets in the first place, generating higher quality detail as and when required.
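To make that render-small-then-upscale flow concrete, here's a minimal, illustrative sketch in Python. Everything in it is hypothetical: render_scene and neural_upscale are placeholders (the 'network' is just nearest-neighbour pixel repetition), not Nvidia's actual DLSS code, which runs a trained model on the GPU and also feeds it motion vectors and previous frames.

```python
import numpy as np

# Conceptual sketch of a DLSS-style flow: render at a fraction of the
# display resolution, then let an upscaler fill in the missing pixels.
# The "neural network" below is a trivial nearest-neighbour stand-in.

RENDER_SCALE = 0.5          # render at half the target resolution
TARGET_RES = (2160, 3840)   # 4K output (height, width)

def render_scene(height, width):
    """Stand-in for the rasterizer: returns a low-resolution RGB frame."""
    return np.random.rand(height, width, 3).astype(np.float32)

def neural_upscale(low_res_frame, target_res):
    """Placeholder for the upscaling network (nearest-neighbour here)."""
    factor_h = target_res[0] // low_res_frame.shape[0]
    factor_w = target_res[1] // low_res_frame.shape[1]
    return np.repeat(np.repeat(low_res_frame, factor_h, axis=0), factor_w, axis=1)

# Per-frame loop: render small, upscale to the display resolution.
low_h = int(TARGET_RES[0] * RENDER_SCALE)
low_w = int(TARGET_RES[1] * RENDER_SCALE)
low_res = render_scene(low_h, low_w)
output = neural_upscale(low_res, TARGET_RES)
print(low_res.shape, "->", output.shape)  # (1080, 1920, 3) -> (2160, 3840, 3)
```

The point of the sketch is simply that the expensive rendering step only ever touches a quarter of the final pixel count; the rest is inferred, which is where the performance headroom comes from.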

The original 2007 Crysis (top) still looks outstanding compared to the 2020 remaster (bottom) (Image credit: Crytek via Jonathan Bolding and Filip_7)

Does it ultimately matter how a game's graphics are produced, as long as they look good and run smoothly? I dare say most people will say 'no', but we don't have any games right now that use neural rendering for any part of the graphics pipeline other than upscaling and/or frame generation. Everything else is still rasterization (even if ray tracing is used, raster is still there behind the scenes).


That means GeForce GPUs of the future, both near and far, will still need to progress in rasterization to ensure games of tomorrow look and run as intended. But with Nvidia being dead set on neural rendering (I don't think Huang said "That’s the way graphics ought to be" lightly), have RTX graphics cards reached a plateau in that respect?

Does the company now expect that all generational performance increments will come from better DLSS? Will GPUs of the future be nothing more than ASICs for AI? How would such chips process older graphics routines? Is PC gaming heading backwards in time to the era when you needed a new GPU for every major new game, because previous chips didn't support the tech inside?

Answers that generate more questions than they resolve certainly aren't a bad thing, but in this case, I wish Nvidia would give us a much clearer picture of its roadmap for gaming GPUs and how it plans to support games of the past, present, and future.
