If you've only been working with 3D for a short time, you may have wondered what exactly is meant by rendering (an important step in creating a 3D configurator).
A mathematical and scientific analysis of the term would go beyond the scope of this article, so in the following we will focus on the role rendering plays in computer graphics.
The process has analogies to film development.
Rendering is the most technically complex aspect of 3D production, but it is easy to understand through an analogy: just as a film photographer has to develop and print photos before they can be displayed, computer graphics artists face a similar necessity.
When a computer graphics artist works on a 3D scene, the models being manipulated are actually a mathematical representation of points and surfaces (more precisely, vertices and polygons) in three-dimensional space.
The term rendering refers to the calculations performed by the render engine of a 3D software package to translate the scene from this mathematical approximation into a finished 2D image. The spatial, textural and lighting information of the entire scene is combined to determine the color value of each pixel in the flattened image.
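To make the idea of "flattening" concrete, here is a minimal sketch (in Python, not tied to any particular 3D package) of the simplest part of that translation: projecting a single 3D vertex onto a 2D pixel grid using a pinhole-camera model. The function name and parameters are purely illustrative.

```python
# Illustrative sketch only: projecting a 3D vertex in camera space
# onto a 2D image plane with a simple pinhole-camera model.

def project(vertex, focal_length=1.0, width=640, height=480):
    """Map a 3D point (x, y, z) in camera space to 2D pixel coordinates."""
    x, y, z = vertex
    if z <= 0:
        raise ValueError("vertex must lie in front of the camera (z > 0)")
    # Perspective divide: more distant points move toward the image centre.
    u = focal_length * x / z
    v = focal_length * y / z
    # Map normalized coordinates to pixel space (origin at top-left).
    px = int((u + 1) * 0.5 * width)
    py = int((1 - v) * 0.5 * height)
    return px, py

print(project((0.0, 0.0, 5.0)))  # point on the optical axis -> (320, 240), the image centre
```

A real render engine performs this kind of transformation for every vertex of every polygon, then fills in the pixels between them using the scene's lighting and texture data.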
The following image illustrates the computer-aided reproduction of a Bentley:
There are two different types of rendering.
The main difference between the two is the speed at which images are calculated and delivered.
Real-time rendering is most commonly used in games and interactive graphics, where images must be computed from 3D information at extremely high speeds.
- Interactivity: Since it is impossible to predict exactly how a player will interact with the game environment, images must be rendered in “real time”.
- Speed: For motion to appear fluid, at least 18-20 frames per second must be displayed on the screen; anything less looks choppy.
- The methods: Real-time rendering is drastically accelerated by dedicated graphics hardware (GPUs) and by precomputing as much information as possible. Much of the lighting information in a game environment is calculated in advance and baked directly into the environment's texture files to increase rendering speed.
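The frame-rate requirements above translate directly into a time budget per image. A quick back-of-the-envelope calculation:

```python
# At a target frame rate, how many milliseconds may the renderer
# spend on a single image before motion stops looking fluid?

def frame_budget_ms(fps):
    """Time budget per frame, in milliseconds, at the given frame rate."""
    return 1000.0 / fps

for fps in (20, 30, 60):
    print(f"{fps:3d} fps -> {frame_budget_ms(fps):.1f} ms per frame")
# At 60 fps the engine has under 17 ms to produce each complete image.
```

This is why real-time engines lean so heavily on GPUs and precomputed lighting: every shading decision must fit inside that budget, while an offline renderer can take hours for the same frame.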
Offline rendering or pre-rendering: Offline rendering is used in situations where speed is less critical; calculations are typically performed on multi-core CPUs rather than dedicated graphics hardware.
- Predictability: Offline rendering is most commonly used in animation and effects work, where visual complexity and photorealism are held to a much higher standard. Since there is no unpredictability about what will appear in each frame, large studios have been known to spend up to 90 hours of rendering time on individual frames.
- Photorealism: Since offline rendering takes place within an open-ended time frame, more realistic images can be produced than with real-time rendering. Characters, environments and their associated textures and lights typically use higher polygon counts and texture files at 4K resolution or higher.
There are three different rendering techniques.
As a rule, three different rendering techniques are used in practice, which are presented below. Each has its own advantages and disadvantages, so each of the three is the best choice in certain situations.
Scanline rendering is a good choice when images must be produced as quickly as possible. Instead of rendering an image pixel by pixel, scanline renderers calculate on a polygon-by-polygon basis. Scanline techniques combined with precalculated (baked) lighting can achieve speeds of 60 frames per second or better on a high-end graphics card.
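To illustrate what "calculating on a polygon basis" means, here is a hypothetical, heavily simplified sketch of scanline triangle filling: the image is walked row by row, and only the span between the triangle's edges on each row is filled. Real scanline renderers also interpolate depth, color and texture along each span.

```python
# Hypothetical, simplified sketch of the scanline idea: fill a triangle
# row by row instead of testing every pixel of the image against it.

def fill_triangle_scanline(v0, v1, v2):
    """Return the set of (x, y) pixels covered by a triangle with integer vertices."""
    vs = sorted([v0, v1, v2], key=lambda p: p[1])   # sort vertices by y
    (x0, y0), (x1, y1), (x2, y2) = vs

    def edge_x(a, b, y):
        # x-coordinate where the (non-horizontal) edge a-b crosses scanline y
        return a[0] + (b[0] - a[0]) * (y - a[1]) / (b[1] - a[1])

    pixels = set()
    for y in range(y0, y2 + 1):
        # The long edge (top vertex to bottom vertex) bounds one side of the
        # span; the other side switches edges at the middle vertex.
        xa = edge_x(vs[0], vs[2], y) if y2 != y0 else x0
        if y < y1:
            xb = edge_x(vs[0], vs[1], y) if y1 != y0 else x0
        else:
            xb = edge_x(vs[1], vs[2], y) if y2 != y1 else x1
        for x in range(int(min(xa, xb)), int(max(xa, xb)) + 1):
            pixels.add((x, y))
    return pixels

covered = fill_triangle_scanline((0, 0), (4, 0), (0, 4))
print(len(covered))  # 15 pixels for this small right triangle
```

Because each polygon touches only the scanlines it actually crosses, the work scales with visible geometry rather than with total pixel count, which is a large part of why this approach is fast enough for real-time use.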
In raytracing, one or more rays of light are traced from the camera to the nearest 3D object for each pixel of the scene. The ray is then followed through a fixed number of bounces, which can include reflection or refraction depending on the materials in the 3D scene. The color of each pixel is computed algorithmically based on the ray's interaction with the objects along its traced path. Raytracing can produce far greater photorealism than scanline rendering, but is dramatically slower.
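The first step of that per-pixel process, finding where a ray hits the nearest object, can be sketched for a single sphere as follows (an illustrative toy, not any engine's actual code):

```python
# Illustrative toy: the core geometric test of a raytracer, a ray-sphere
# intersection. A real raytracer repeats this for every object, picks the
# nearest hit, and then spawns reflection/refraction rays from that point.

import math

def intersect_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None if the ray misses.
    The direction vector is assumed to be normalized."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    b = 2 * (ox * direction[0] + oy * direction[1] + oz * direction[2])
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * c          # discriminant of the quadratic (a == 1)
    if disc < 0:
        return None               # the ray passes the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None   # only hits in front of the camera count

# Ray from the camera at the origin, straight down the z-axis,
# toward a sphere of radius 1 centred 5 units away.
print(intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

The "exponential" cost of raytracing comes from the bounces: each hit can spawn new reflection and refraction rays, and each of those must repeat this intersection test against the scene.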
In contrast to raytracing, radiosity is calculated independently of the camera and is surface-oriented rather than pixel-oriented. The primary function of radiosity is to simulate surface color more accurately by taking indirect illumination (bounced diffuse light) into account. Radiosity is typically characterized by soft, graduated shadows and color bleeding, where light from colored objects "bleeds" onto nearby surfaces.
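The core of radiosity can be shown with a toy example: each surface patch's brightness B equals its own emission E plus the light it reflects from the other patches, weighted by "form factors" F that describe how much the patches see of each other. All numbers below are made up purely for illustration:

```python
# Toy radiosity illustration (all values hypothetical): solve
# B = E + rho * F @ B by simple fixed-point iteration.

# Three patches: a light source and two reflective walls facing it.
E = [1.0, 0.0, 0.0]             # only patch 0 emits light
rho = [0.0, 0.5, 0.5]           # diffuse reflectivity of each patch
F = [                           # form factors: fraction of light patch i
    [0.0, 0.2, 0.2],            # receives from patch j (made-up values)
    [0.2, 0.0, 0.1],
    [0.2, 0.1, 0.0],
]

B = E[:]                        # start from direct emission only
for _ in range(50):             # iterate until the bounced light settles
    B = [E[i] + rho[i] * sum(F[i][j] * B[j] for j in range(3))
         for i in range(3)]

print([round(b, 3) for b in B])  # [1.0, 0.105, 0.105]
```

Note that the walls end up slightly brighter than their direct illumination alone would make them, because each also receives light bounced off the other; this patch-to-patch exchange is exactly the mechanism behind radiosity's soft shadows and color bleeding, and it depends only on the surfaces, not on where the camera is.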
In practice, radiosity and raytracing are often used in combination. On this basis, impressive and photorealistic renderings can be created.
The most widely used rendering engines.
Although rendering is based on incredibly sophisticated calculations, today's software offers easy-to-understand parameters, so a user does not have to deal with the underlying mathematics to achieve photorealistic results. A render engine is included in every 3D software package, and most of them ship with material and lighting tools that make impressive photorealism achievable.
Mental Ray (Autodesk Maya) is incredibly versatile, relatively fast and probably the best renderer for character images that require subsurface scattering. Mental Ray uses a combination of raytracing and global illumination (radiosity).
V-Ray, on the other hand, is typically used with 3ds Max. This combination is absolutely unrivaled for architectural visualizations and environmental renderings. The main advantages of V-Ray over the alternatives are the lighting tools and the extensive material library for arch-viz.
This was just a brief overview of the basics of rendering. It's a technical topic, but it can be very interesting to take a closer look at some of the common techniques.
Thank you very much for reading.