Good design is obvious: you recognize it when you see it. In building 3D configurators, it is especially important. What is less obvious is the trial and error behind the process of achieving that result. Designers, manufacturers, and other creative professionals have to try out several variants of an idea. Every time you render an image, you examine, adjust, and vary it, iterating through as many variations as you need to reach the result.
The more time you have to iterate, the better the end result. Of course, time is money, and deadlines mean you can't work on a project forever.
Nvidia founder and CEO Jensen Huang showed today at the GPU Technology Conference how Nvidia accelerates the interactive design process by using artificial intelligence (AI) to accurately predict final ray-traced renderings.
The raytracing process produces very realistic images, but it is computationally intensive and can leave a certain amount of noise in an image. Removing this noise while preserving sharp edges and texture details is known in the industry as denoising. With Nvidia Iray, Huang demonstrated how Nvidia became the first manufacturer to enable high-quality, real-time denoising by combining deep-learning prediction algorithms with Pascal-architecture-based Nvidia Quadro GPUs.
It is a complete game changer for graphics-intensive industries such as entertainment, product design, manufacturing, architectural engineering and many others.
This technique can be applied to various types of raytracing systems. Nvidia already integrates deep learning techniques into its own rendering products, starting with Iray.
How Iray Interactive Denoising Works
Existing algorithms for high-quality denoising take seconds to minutes per frame, making them impractical for interactive applications.
By predicting the final image from only partially completed results, Iray uses artificial intelligence (AI) to deliver accurate, photorealistic models without waiting for the final image to finish rendering.
Designers can iterate and finalize images up to 4x faster, gaining a much quicker understanding of the final scene or model. The cumulative time savings can significantly accelerate a company's time to market.
To achieve this, Nvidia researchers and engineers turned to a class of neural networks known as autoencoders. Autoencoders are used for image super-resolution, video compression, and many other image-processing tasks.
With the Nvidia DGX-1 AI supercomputer, the team trained a neural network to transform a noisy image into a clean reference image. In less than 24 hours, the network was trained on 15,000 image pairs with varying noise levels from 3,000 different scenes. Once trained, the network takes only a fraction of a second to remove the noise from almost any image, even those not included in the original training set.
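The core idea, training a network on pairs of noisy and clean images so that it learns to map one to the other, can be illustrated with a minimal toy sketch. The code below is an assumption-laden simplification, not Nvidia's implementation: it uses a single-hidden-layer autoencoder on tiny synthetic 8x8 "images" with artificial noise, whereas the real Iray denoiser is a far deeper convolutional network trained on actual renderings.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64   # flattened 8x8 "image"
H = 32   # hidden (bottleneck) size
N = 256  # number of (noisy, clean) training pairs

# Synthetic "clean" images: smooth low-rank patterns (stand-ins for
# converged renderings); "noisy" versions simulate raytracing noise.
basis = rng.normal(size=(8, D))
clean = np.tanh(rng.normal(size=(N, 8)) @ basis)
noisy = clean + 0.3 * rng.normal(size=(N, D))

# Encoder and decoder weights of the toy autoencoder.
W1 = rng.normal(scale=0.1, size=(D, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, D)); b2 = np.zeros(D)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # encode
    return h, h @ W2 + b2      # decode (linear output)

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

_, out = forward(noisy)
loss_before = mse(out, clean)

# Plain gradient descent on the noisy -> clean reconstruction error.
lr = 0.01
for _ in range(300):
    h, out = forward(noisy)
    grad_out = 2.0 * (out - clean) / N          # dLoss/d_out
    gW2 = h.T @ grad_out; gb2 = grad_out.sum(0)
    grad_h = (grad_out @ W2.T) * (1.0 - h ** 2)  # backprop through tanh
    gW1 = noisy.T @ grad_h; gb1 = grad_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, out = forward(noisy)
loss_after = mse(out, clean)
print(f"reconstruction error: {loss_before:.4f} -> {loss_after:.4f}")
```

After training, the network maps noisy inputs closer to their clean targets, which is the same principle Iray applies at much larger scale: once trained, inference is a single fast forward pass per frame.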
With Iray, you don't have to worry about how deep learning works. Nvidia has already trained the network and applies GPU-accelerated inference to Iray's output. Creative professionals can work interactively at the touch of a button and benefit from improved image quality with any graphics processor.
The Iray Deep Learning feature is integrated into the Iray SDK, which is delivered to software companies. Nvidia also plans to add an AI mode to Mental Ray. Various renderers are expected to adopt this technology. The basics of this technology were published at the ACM SIGGRAPH 2017 Computer Graphics Conference in July.