Is there an eli5 on how “ai upscaling” is less (or even equally) technologically demanding than just putting in better hardware?
The game is rendered at a lower resolution, which saves a lot of resources. This isn’t a linear thing: lowering the resolution reduces the performance needed by a lot more than you’d think, not just in processing power but also in bandwidth and memory requirements. Then dedicated AI cores, or even special AI scaler chips, are used to upscale the image back to the requested resolution. This is a fixed cost and can be done with little power, since the components are designed specifically for this task.
My TV for example has an AI scaler chip, which is pretty nice (especially after tuning) for showing old content on a large high-res screen. In games, applying AI upscaling to old textures also does wonders.
Now even though this gets the AI label slapped on, it is nothing like the LLMs such as ChatGPT. These are expert systems trained and designed to do exactly one thing. This is the good kind of AI that’s actually useful, instead of the BS AI like LLMs. These systems have their limitations, but for games the trade-off between details and framerate can be worth it, especially since our bad eyes and mediocre screens wouldn’t really show the difference anyway.
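The non-linear savings come largely from pixel count: resolution grows in two dimensions at once, so the shading work grows much faster than the numbers on the box suggest. A quick back-of-envelope sketch (pixel counts only; the real savings are bigger once bandwidth and memory traffic are included):

```python
# Rough pixel-count arithmetic behind "lowering the resolution saves a lot".
# Illustrative only; real rendering cost also depends on bandwidth and memory.
resolutions = {
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K":    (3840, 2160),
}

pixels = {name: w * h for name, (w, h) in resolutions.items()}

# Shading work grows at least linearly with pixel count, so rendering at
# 1080p instead of 4K touches only a quarter of the pixels per frame.
ratio_4k = pixels["4K"] / pixels["1080p"]        # 4.0
ratio_1440p = pixels["1440p"] / pixels["1080p"]  # ~1.78

print(f"4K has {ratio_4k:.1f}x the pixels of 1080p")
print(f"1440p has {ratio_1440p:.2f}x the pixels of 1080p")
```

So dropping the render resolution from 4K to 1080p cuts the per-frame pixel work to a quarter before the upscaler even enters the picture.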
lol, trying to hedge against downvotes from the anti-AI crowd?
I get that much. Or at least, I get that’s the intention.
This is the part I struggle to believe/understand. I’m roughly aware of how resource intensive upscaling is on locally hosted models. The tech/resources needed to do that at 4K+ in real time (120+ fps) seem at least as expensive as just rendering at that resolution in the first place, if not more so. Are these “scaler chips” really that much more advanced/efficient?
Further questions aside, I appreciate the explanation. Thanks!
Rendering a 3D scene is much more intensive and complicated than running a simple scaler. The scaler isn’t advanced at all; it’s actually very simple. And it can’t be compared with running a large model locally: these are expert systems, not large models. They are very good at one thing and can do only that thing.
Like I said, the cost is fixed: if the scaler can handle upscaling 1080p to 2K at 120fps, then it can always handle that. It doesn’t matter how complex or simple the image is; it will always use the same amount of power. It reads the image, does the calculation, and outputs the resulting image.
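You can see the fixed-cost property in even the crudest upscaler. The toy below uses nearest-neighbour sampling as a stand-in for the real thing (actual AI scalers run a small fixed neural network, which this sketch does not attempt); the point is that the amount of work depends only on the output resolution, never on what’s in the frame:

```python
# Toy stand-in for a scaler: a nearest-neighbour upscale does exactly the
# same number of operations per frame no matter what the image contains.
def upscale_nearest(frame, scale):
    """frame is a list of rows of pixel values; returns a frame
    with each dimension enlarged by `scale`."""
    h = len(frame)
    w = len(frame[0])
    out = []
    for y in range(h * scale):
        # Each output pixel reads one input pixel: fixed work per pixel.
        row = [frame[y // scale][x // scale] for x in range(w * scale)]
        out.append(row)
    return out

# A flat frame and a busy frame take identical work to upscale:
flat = [[0, 0], [0, 0]]
busy = [[1, 7], [3, 9]]
print(upscale_nearest(busy, 2))  # 4x4 output, same loop count as for `flat`
```

A real AI scaler replaces the `y // scale` lookup with a small learned filter, but the loop structure, and therefore the fixed per-frame cost, is the same idea.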
Rendering a 3D scene is much, much more complex and power intensive. The amount of power highly depends on the complexity of the scene, and there is a lot more involved: it needs the GPU, CPU, memory, and sometimes even storage, plus all the bandwidth and latency in between.
Upscaling isn’t like that; it’s a lot simpler. So if the hardware is there, like the AI cores on a GPU or a dedicated upscaler chip, it will always work. And since that hardware is normally not heavily used, the rest of the components are still available for the game. A dedicated scaler is the most efficient, but the cores on the GPU aren’t bad either. That’s why something like DLSS doesn’t just work on any hardware; it needs specialized components, and different generations and parts have different limitations.
Say your system can render a game at 1080p at a good solid 120fps, but you have a 2K monitor, so you want the game to run at 2K. This demands a lot more from the system, so the computer struggles to run the game at 60fps and has annoying dips in demanding parts. With upscaling, you run the game at 1080p at 120fps, and the upscaler takes that image stream and converts it into 2K at a smooth 120fps. Now the scaler may not get all the details right compared to running native 2K, and it may make some small mistakes. But our eyes are pretty bad, and when we’re playing games our brains aren’t looking for those details; they’re focused on gameplay. So the output is probably pretty good, and unless you compare it with native 2K side by side, you probably won’t even notice the difference. It’s a way of getting that excellent performance without shelling out a thousand bucks for better hardware.
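The numbers in that scenario roughly work out, assuming “2K” means 2560x1440 and that shading cost scales with pixel count (a simplification, but a useful one):

```python
# Back-of-envelope for the 1080p-at-120fps vs. native-2K scenario.
# Assumes "2K" = 2560x1440 and cost proportional to pixels (simplified).
budget_120fps_ms = 1000 / 120                  # ~8.33 ms per frame at 120fps
work_ratio = (2560 * 1440) / (1920 * 1080)     # ~1.78x the pixels at 2K

# If a 1080p frame fills the whole 8.33 ms budget, a native 2K frame takes:
native_2k_frame_ms = budget_120fps_ms * work_ratio
native_2k_fps = 1000 / native_2k_frame_ms

print(f"native 2K frame time: {native_2k_frame_ms:.1f} ms -> ~{native_2k_fps:.0f} fps")
# With upscaling, the game keeps rendering 1080p frames in ~8.3 ms, and the
# scaler's fixed per-frame cost runs on its own dedicated hardware.
```

That lands in the high 60s fps territory, which matches the “struggles around 60fps with dips” description above.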
There are limitations of course. Not all games play to what the scaler is good at: it usually does well with realistic scenes but can struggle with more abstract styles, producing annoying halos and weird artifacts. There are also limits to how much bandwidth it can push, so for example not every GPU can do 4K at a high framerate. If the game also uses the AI cores for other things, that can become an issue. And if the jump in resolution is too big, the result becomes very noticeable and unplayable. Often there’s also an option to use previous frames to generate intermediate frames, boosting the framerate at little cost. In my experience this doesn’t work well and just makes the game feel like it’s ghosting and smearing.
But when used properly, it can give a nice boost basically for free. I’ve even seen cases where a game could run at native resolution and a high framerate on a lower quality setting, but actually looked better rendered at a lower resolution on a higher quality setting and then upscaled. The extra effects outweighed the small loss of fidelity.
That is interesting. Thanks for the extra info!
It started as good tech to make GPUs last longer, but it’s now a crutch that even top-notch hardware like a 4090 needs to actually achieve playable performance with ray tracing at high resolutions. And that hardware is already way overpriced; imagine the price of something that could do it natively.