In this article we introduce the two different rendering methods used to create an image. One method is micropolygon rendering and the other is raytracing. Terragen (TG) can be described as a hybrid renderer, since the two rendering techniques are usually used together to render a scene. This article gives background information about the two methods and how they are used in TG.
Micropolygon rendering.
Micropolygon rendering is a technique that takes a surface and divides it into small polygons, or micropolygons, that are smaller than the pixels in the rendered image. These micropolygons are then displaced and shaded. Shading is the process of calculating the color of a micropolygon, taking into account factors such as surface color and lighting.
Micropolygon rendering is particularly well suited to displaying procedural data. Procedural data is data that is calculated as needed using a mathematical formula. The prime example of something procedural is a fractal. If you have ever used a fractal viewing program, you know that you can zoom into the fractal almost infinitely. This is because the fractal is created mathematically and a new view of it is calculated every time you zoom in. The amount of detail is effectively unlimited.
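The idea that detail is computed on demand can be sketched with a tiny fractal (fBm) height function in Python. This is purely an illustrative sketch, not TG's actual noise; the hashing constants, octave count, and function names are arbitrary assumptions:

```python
import math

def value_noise_1d(x, seed=0):
    """Cheap deterministic value noise in [0, 1] (illustrative only)."""
    def hash_val(i):
        n = (i * 374761393 + seed * 668265263) & 0xFFFFFFFF
        n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
        return (n & 0xFFFF) / 0xFFFF
    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)                 # smoothstep interpolation
    return hash_val(i) * (1 - t) + hash_val(i + 1) * t

def fbm(x, octaves=8):
    """Fractal (fBm) height: each octave adds finer detail on demand."""
    total, amplitude, frequency = 0.0, 1.0, 1.0
    for _ in range(octaves):
        total += amplitude * value_noise_1d(x * frequency)
        amplitude *= 0.5    # each octave is half as strong...
        frequency *= 2.0    # ...and twice as fine
    return total

# "Zooming in" just means evaluating the same formula at finer x values;
# adding octaves reveals more detail without storing any data.
height = fbm(1.23)
```

Because the height is a pure function of position, any point can be evaluated at any time without a stored heightfield, which is what makes effectively unlimited detail possible.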
Displaying procedural data is at the core of what TG does. Procedural data allows TG to display complex surfaces with very high levels of detail without having to store masses of data. Micropolygon rendering is ideal for displaying procedural data because it helps limit the amount of data that has to be rendered. If you have procedural data with effectively infinite levels of detail, you need to find a balance between rendering too much detail, which would be slow, and too little, which would look bad. The ideal level of detail is a polygon that is slightly smaller than a pixel in the final image. This also means that an appropriate level of detail is used depending on the distance from the camera: areas further from the camera can be displayed with less detail. A pixel covering an area close to the camera may span a few millimeters in world space, while a pixel covering an area far from the camera may span ten, a hundred, or even a thousand meters. Micropolygon rendering is an effective technique for breaking surfaces down into polygons with an appropriate amount of detail.
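The "slightly smaller than a pixel" rule can be illustrated with a short Python sketch that picks a subdivision level for a square terrain patch based on its distance from the camera. The function and its parameters are hypothetical, assuming a simple pinhole camera, square pixels, and power-of-two subdivision:

```python
import math

def subdivision_level(distance, patch_size, fov_deg=60.0, image_width=1920):
    """Choose how many times to split a square patch so each
    micropolygon projects to roughly one pixel (hypothetical helper)."""
    # World-space size covered by one pixel at this distance.
    pixel_world_size = (2 * distance * math.tan(math.radians(fov_deg) / 2)
                        / image_width)
    # Number of splits so that patch_size / 2**level <= pixel_world_size.
    if patch_size <= pixel_world_size:
        return 0
    return math.ceil(math.log2(patch_size / pixel_world_size))

# A nearby patch needs many splits; a distant one needs far fewer,
# which is exactly the distance-dependent level of detail described above.
near = subdivision_level(distance=10.0, patch_size=100.0)
far = subdivision_level(distance=10000.0, patch_size=100.0)
```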
Another great advantage of a micropolygon renderer is that it can work efficiently with displacement. Massive displacements are one of TG's strengths. Once a surface has been divided into micropolygons, each micropolygon can be moved in any direction in 3D space. Moving the micropolygons in this way is called displacement.
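Conceptually, displacement is simple once a surface has been diced into micropolygons: each vertex is moved, typically along its surface normal, by a procedurally computed height. A minimal Python sketch, where the names and height function are illustrative rather than TG's API:

```python
def displace(vertices, normals, height_fn, amplitude=1.0):
    """Move each vertex along its normal by a procedural height.
    `height_fn(x, y)` stands in for any displacement shader."""
    displaced = []
    for (x, y, z), (nx, ny, nz) in zip(vertices, normals):
        h = amplitude * height_fn(x, y)
        displaced.append((x + nx * h, y + ny * h, z + nz * h))
    return displaced

# A flat patch facing up (+z) gains real 3D relief from displacement.
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
norms = [(0.0, 0.0, 1.0)] * 2
bumpy = displace(verts, norms, height_fn=lambda x, y: x * 0.5)
```

Because the moved vertices are real geometry, the silhouette of the surface changes too, unlike bump mapping, which only fakes relief in the shading.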
Micropolygon rendering is used by default to represent terrain, water, and sky.
Raytracing.
Raytracing is the other rendering technique used by TG. You may already be familiar with what raytracing does, as it is a common technique used in other renderers. It works by projecting lines or rays into the scene. When one of these rays hits an element in the scene, the renderer calculates the shading of the scene element. Rays can also collect shading information on their way through the scene, e.g. when walking through clouds.
There are two main types of rays. The first is the primary ray. Primary rays start at the camera and are projected through the pixels of the image into the scene. Imagine holding a piece of screen door mesh in front of your face. A primary ray would start at your eye and travel out through one of the holes in the mesh into the world. The ray ends where it meets an object in the scene. Higher quality is achieved by using multiple rays for each pixel of the rendered image.
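Generating a primary ray can be sketched in a few lines of Python for an idealized pinhole camera sitting at the origin and looking down the -z axis. This is a generic illustration of the technique, not TG's implementation; real renderers also jitter several rays per pixel for anti-aliasing:

```python
import math

def primary_ray(px, py, width, height, fov_deg=60.0):
    """Normalized direction of the primary ray through pixel (px, py)
    for a pinhole camera at the origin looking down -z (sketch only)."""
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2)
    # Map pixel coordinates to the image plane at z = -1.
    x = (2 * (px + 0.5) / width - 1) * aspect * scale
    y = (1 - 2 * (py + 0.5) / height) * scale
    z = -1.0
    length = math.sqrt(x * x + y * y + z * z)
    return (x / length, y / length, z / length)

# The half-pixel offset lands exactly on the image center here,
# so this ray looks straight ahead along -z.
center = primary_ray(959.5, 539.5, 1920, 1080)
```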
The other type of ray is the secondary ray. Secondary rays begin where a primary ray meets an element in the scene. A good example of a secondary ray is a reflection ray. Suppose a primary ray hits a reflective object in the scene. We need to find out what the reflective surface actually reflects at that point, and therefore what color it should show. This is done by sending a secondary ray out into the scene to see what it hits. Secondary rays are also used to calculate lighting and shadows.
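The direction of a reflection ray follows from the standard mirror formula r = d - 2(d·n)n, where d is the incoming ray direction and n is the surface normal, both unit length. A minimal Python sketch:

```python
def reflect(d, n):
    """Reflect incoming direction d about unit surface normal n:
    r = d - 2 (d . n) n. The result is traced as a secondary ray
    to find out what the surface mirrors."""
    dot = d[0] * n[0] + d[1] * n[1] + d[2] * n[2]
    return (d[0] - 2 * dot * n[0],
            d[1] - 2 * dot * n[1],
            d[2] - 2 * dot * n[2])

# A ray heading straight down onto an upward-facing floor
# bounces straight back up.
r = reflect((0.0, 0.0, -1.0), (0.0, 0.0, 1.0))
```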
You can come across node parameters with names like “Enable secondary” or “Visible to other rays”. These parameters typically affect how a node interacts with secondary rays. Suppose you had an Object Node and you cleared the Visible to other rays check box. This would mean that although the object is still visible to the camera, it is not hit by secondary rays. One consequence of this is that the object does not appear in reflections.
It is actually possible to render a scene in TG using only raytracing. You can do this by enabling the Ray Trace Everything parameter on the Extra tab of the Render node. However, this is not generally recommended. An important reason is that the raytracer does not yet support displacement, or at least does not support it efficiently. If you try it, you will notice that the terrain is rendered in blocky chunks, even at a high detail level. There are other settings you can adjust to improve the results, such as the Ray detail multiplier in the Render node's Subdiv Settings, but they can significantly increase render time.
By default, TG renders in a hybrid fashion using both micropolygon rendering and raytracing. Micropolygon rendering is used to render terrain, water, and sky. Raytracing is mainly used for secondary rays for reflections, lighting, and shadows.
An important point is that raytracing is also used as the default method for rendering objects such as imported models. This is controlled by the Ray Trace Objects parameter in the Render node. Raytracing is used for objects because they are effectively static data and can be rendered efficiently with the raytracer. The raytracer can deliver higher visual quality at a given render quality setting than the micropolygon renderer. In short, raytracing makes objects look better and renders them faster.
The only disadvantage of using raytracing for objects is that the raytracer does not support displacement. It can convert displacement data to bump mapping data, but bump mapping does not provide the same visual quality as displacement. Think of a bark texture on a tree. With displacement, the bark can have a true 3D shape, and the silhouette of the trunk shows lumps and bumps. With bump mapping, the shape of the bark is only simulated through lighting effects. The underlying surface remains smooth but gives the impression of a real 3D shape. If you look at the silhouette of the trunk, however, you can see the smooth shape of the underlying geometry.
If your scene requires displacement on models to look good, you should disable the Ray Trace Objects parameter. Objects will then be rendered with the micropolygon renderer. You may also want to increase the Detail and Anti-aliasing settings of the Render node.
You can also enable raytracing for atmosphere rendering. This is done with the Ray Trace Atmosphere parameter in the Render node, which is disabled by default. Raytracing the atmosphere can give better results than the micropolygon renderer at lower detail settings, but the advantage is not as clear-cut as with object rendering. It may take some experimentation to get the best results.
When raytracing is used for objects and/or the atmosphere, quality and speed depend heavily on the anti-aliasing and sampling settings, much more so than with micropolygon rendering.