This may be nothing more than some kind of morbid curiosity, but I really want to know what these “neural rendering capabilities” turn out to be. Maybe it’s cool stuff.
It’s not a secret, Nvidia publishes white papers about what their technologies are and how they work:
https://research.nvidia.com/labs/rtr/neural_appearance_models/
It seems like everyone in this thread thinks it’s like that AI-generated Minecraft demo. Though I can’t blame them too much, since the article is complete shit as well.
Thank you so much! Looks very interesting indeed. Definitely gonna watch the video when I’m on WiFi again.
I’ve seen some early demonstrations of this, like this one from three years ago that makes GTA V look photoreal:
https://www.youtube.com/watch?v=P1IcaBn3ej0
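For anyone curious, the core idea in that demo is a learned per-frame post-process: a convolutional network maps each rendered frame (plus auxiliary G-buffer data) to an enhanced, more photoreal frame. A toy sketch of that per-frame idea, with a single hand-written 3x3 convolution standing in for the trained network (the function name and kernel here are just illustrative, not from the actual system):

```python
import numpy as np

def enhance_frame(frame: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Toy stand-in for a learned image-to-image enhancement pass.

    A real system runs a deep convolutional network over each rendered
    frame; here a single 3x3 kernel applied per channel stands in for
    that network. frame is HxWx3 with values in [0, 1].
    """
    h, w, _ = frame.shape
    # pad edges so the output keeps the input's spatial size
    padded = np.pad(frame, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(frame, dtype=np.float64)
    # accumulate the 9 shifted-and-weighted copies of the frame
    for dy in range(3):
        for dx in range(3):
            out += weights[dy, dx] * padded[dy:dy + h, dx:dx + w, :]
    return np.clip(out, 0.0, 1.0)

# sanity check: an identity kernel leaves the frame unchanged
identity = np.zeros((3, 3))
identity[1, 1] = 1.0
frame = np.random.default_rng(0).random((4, 4, 3))
assert np.allclose(enhance_frame(frame, identity), frame)
```

In the published work the network also consumes intermediate rendering buffers (depth, normals, material IDs), which is a big part of why it stays temporally stable compared to naive video-to-video translation.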
The potential of this tech is enormous. Imagine an alternative to RTX Remix that turns any game into a near-photoreal experience. Simulations and racing games would be ideal candidates at first: those genres tend to attempt photorealism rather than creative art styles anyway, primarily feature inanimate objects (sidestepping most of the uncanny valley), and could be transformed with models trained on relatively limited data.