Game engines (ideal for creating a 3D configurator) do most of their shading work per pixel, or per fragment. But there is an alternative that has been popular in film for decades: Object Space Shading. Pixar's RenderMan, the best-known renderer in computer graphics, uses the Reyes rendering method, which is an object space shading method.
This blog post deals with an Object Space Shading method that works on Direct3D 11 class hardware. In particular, we shade in texture space, using the texture parameterization of the model. Shading in object space means the shading rate is already decoupled from the pixels, and it is also easy to decouple it temporally. We have used these decouplings to find ways to increase performance, but we will also discuss some future possibilities in this area.
Texture Space Shading.
When most people hear the term “Texture Space Shading”, they generally think of rasterizing the geometry in texture space. There are two difficulties with this approach: visibility and choosing the right resolution. Rasterizing in texture space means you have no visibility information from the camera view, so you end up shading texels that are not visible. Your resolution options are also limited, because you can only rasterize at one resolution at a time.
The choice of resolution is important because it drives your overall shading cost. Each additional mipmap level costs 4x the shading work of the one below it. So if you need 512×512 to match the pixel rate for one part of an object and 1024×1024 for another part, but you can only select one, which do you choose? If you select 512×512, part of the object will be undersampled. If you select 1024×1024, part of the object will cost 4x as much as necessary. This compounds with every level you span. For example, if you have an object that spans 4 mipmap levels, shading everything at a single resolution can cost up to 64x more than necessary to hit a given resolution target for the entire object.
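To make the arithmetic explicit: since each finer mip level holds 4x the texels of the level below it, an object spanning $n$ mip levels pays up to

$$\text{cost multiplier} = 4^{\,n-1}, \qquad n = 4 \;\Rightarrow\; 4^{3} = 64\times$$

for the parts that only needed the coarsest level.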
So let’s try a different approach. We rasterize in screen space instead, and rather than shading, we only record the texels we need as shading work. Hence the term “Texel Shading”. Since we rasterize from the camera perspective, we get the two pieces of information that are normally unavailable to texture space methods – visibility after an early depth test, and the screen-space derivatives for selecting the mipmap level. The unit of work itself is the texel.
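A minimal sketch of this first pass (buffer names, sizes, and the packing scheme are my own assumptions, not from the original): the pixel shader does no color work at all; it just figures out which tile of which mip level the fragment maps to and appends that as work.

```hlsl
// First geometry pass: rasterize from the camera as usual, but instead of
// shading, enqueue the texel tiles each fragment needs. Names illustrative.
AppendStructuredBuffer<uint> gTileRequests : register(u1);

static const float TEX_SIZE  = 1024.0; // mip 0 resolution (assumed square)
static const float MAX_MIP   = 10.0;
static const uint  TILE_SIZE = 8;      // we shade 8x8 texel tiles

struct PSIn
{
    float4 pos : SV_Position;
    float2 uv  : TEXCOORD0; // the unique texture parameterization
};

[earlydepthstencil] // occluded fragments generate no shading work
void MarkTexelsPS(PSIn input)
{
    // Screen-space derivatives pick the mip level, just as in
    // ordinary mipmapped texturing.
    float2 dx = ddx(input.uv) * TEX_SIZE;
    float2 dy = ddy(input.uv) * TEX_SIZE;
    float  mip = 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));
    uint   level = (uint)clamp(mip, 0.0, MAX_MIP);

    // Which 8x8 tile of that mip level does this fragment touch?
    float levelSize = TEX_SIZE / exp2((float)level);
    uint2 tile = (uint2)(input.uv * levelSize) / TILE_SIZE;

    // Pack (level, tile) into one ID; duplicates are filtered later
    // by the per-tile cache.
    gTileRequests.Append((level << 24) | (tile.y << 12) | tile.x);
}
```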
How Texel Shading works.
At a high level, the process is as follows:
1. Rasterize the scene from the camera as usual, but instead of shading each fragment, record which texels (at which mipmap level) it needs.
2. In a compute pass, shade the requested texels in 8×8 tiles into a texture, skipping tiles that were shaded recently enough.
3. Render the geometry again, and simply sample the texture of shaded results.
Note that you can opt into Texel Shading per object, or even per fragment. Fragment-level selection happens during the first geometry pass, so you still pay the cost of the other passes, but it lets you keep standard forward rendering. You can also split the shading work between Texel Shading and Pixel Shading.
One detail worth knowing is that we actually shade 8×8 tiles of texels. We keep a kind of cache with one entry per tile, which we use both to eliminate redundant texel shading and to track tile age for techniques that reuse shading from previous frames.
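One way such a cache could look (a sketch under my own assumptions, not necessarily the exact scheme): one slot per tile storing the frame number at which it was last shaded. A compare-exchange lets exactly one request win the right to shade the tile this frame, and the stored age supports reuse across frames.

```hlsl
// Per-tile cache: one entry per 8x8 tile holding the frame number at which
// the tile was last shaded. Layout and names are illustrative.
RWStructuredBuffer<uint>     gTileLastShaded : register(u2);
AppendStructuredBuffer<uint> gTilesToShade   : register(u3);

cbuffer FrameCB : register(b0)
{
    uint gFrameNumber;
    uint gMaxAge; // tiles older than this must be re-shaded
};

void RequestTile(uint tileId) // tileId: flattened tile index
{
    uint lastShaded = gTileLastShaded[tileId];

    // Recent enough? Reuse the shading from a previous frame.
    if (gFrameNumber - lastShaded <= gMaxAge)
        return;

    // Try to claim the tile for this frame. Only the first request to
    // get here enqueues shading work; later duplicates see the updated
    // frame number and drop out.
    uint prev;
    InterlockedCompareExchange(gTileLastShaded[tileId],
                               lastShaded, gFrameNumber, prev);
    if (prev == lastShaded)
        gTilesToShade.Append(tileId);
}
```

A compute pass would then consume gTilesToShade, e.g. shading one 8×8 tile per thread group.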
The second detail is that we need to interpolate vertex attributes in the compute shader. For this we have a map called the “Triangle Index Texture”, which tells us which triangle we need for shading a particular texel.
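Since there is no rasterizer to interpolate attributes for us in the compute pass, one plausible reconstruction looks like this (the buffer layout and barycentric math are my own sketch): look up which triangle covers the texel, fetch its three vertices, and solve for barycentric coordinates in UV space.

```hlsl
// Attribute interpolation for a texel in the compute pass. The Triangle
// Index Texture stores, per texel, which triangle covers it.
struct Vertex
{
    float3 position;
    float3 normal;
    float2 uv; // the unique parameterization
};

Texture2D<uint>          gTriangleIndexTex : register(t0);
StructuredBuffer<uint>   gIndexBuffer      : register(t1);
StructuredBuffer<Vertex> gVertexBuffer     : register(t2); // ideally pre-skinned

Vertex InterpolateAtTexel(uint2 texel, float2 texelUV)
{
    uint tri = gTriangleIndexTex[texel];
    Vertex v0 = gVertexBuffer[gIndexBuffer[3 * tri + 0]];
    Vertex v1 = gVertexBuffer[gIndexBuffer[3 * tri + 1]];
    Vertex v2 = gVertexBuffer[gIndexBuffer[3 * tri + 2]];

    // Barycentric coordinates of the texel center, solved in UV space
    // (the rasterizer would normally hand us this interpolation).
    float2 e0 = v1.uv - v0.uv;
    float2 e1 = v2.uv - v0.uv;
    float2 ep = texelUV - v0.uv;
    float  det = e0.x * e1.y - e0.y * e1.x;
    float  b1 = (ep.x * e1.y - ep.y * e1.x) / det;
    float  b2 = (e0.x * ep.y - e0.y * ep.x) / det;

    Vertex o;
    o.position = v0.position + b1 * (v1.position - v0.position)
                             + b2 * (v2.position - v0.position);
    o.normal   = normalize(v0.normal + b1 * (v1.normal - v0.normal)
                                     + b2 * (v2.normal - v0.normal));
    o.uv       = texelUV;
    return o;
}
```

This is also where the index and vertex buffers from the engine-integration section below come into play.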
The disadvantages.
By itself, none of this buys you anything. In fact, it adds cost in several ways:
- an extra geometry pass to record which texels need shading
- memory for the shaded-results texture and the Triangle Index Texture
- interpolating vertex attributes yourself in the compute shader, work the rasterizer would otherwise do for free
Advantages and potential.
Once you are shading in object space like this, new possibilities open up. We list some of them in this section. The first two, and a little of the third, are what we have tried so far; the rest we will try in the future.
Integration into existing engines.
We are not the first to try something like this. For example, the Nitrous engine uses a texture space shading technique suited to the Real Time Strategy (RTS) games it targets. They don’t have to worry as much about objects spanning mipmap levels or about occluded texels, so they took a different approach.
Texel Shading fits more naturally into a forward rendering pass than into a deferred one (as far as I have thought it through – although you could also do deferred texture space shading). It requires the object to have a unique texture parameterization, as a lightmap often does. It also requires space for a texture to hold the shaded results, and a decision about how high its resolution should be. That depends on how the texture will be used – not all shading needs to be done in texture space.
If there are textures that do not use the unique parameterization, additional derivatives are required for the mipmap level selection. These fill the role of the screen-space derivatives in standard mipmapping, but instead relate the sampling rate of the second texture to that of the first. However, since the mapping from one UV set to the other tends to stay fixed, it can be precalculated.
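One way to express this (a sketch; the function and parameter names are assumptions): chain the precomputable Jacobian of the unique-UV-to-secondary-UV mapping onto the footprint of one shaded texel, then take the usual log2 of the resulting footprint.

```hlsl
// Mip selection for a secondary texture sampled via a different UV set.
// In screen space we'd use ddx/ddy; here we chain the Jacobian of the
// uniqueUV -> secondaryUV mapping instead. Sketch only.
float SecondaryMipLevel(float2x2 dUV2_dUV1, // Jacobian, may be precalculated
                        float2   dUV1_dx,   // unique-UV footprint of one
                        float2   dUV1_dy,   //   shaded texel
                        float2   tex2Size)  // secondary texture resolution
{
    float2 dx = mul(dUV2_dUV1, dUV1_dx) * tex2Size;
    float2 dy = mul(dUV2_dUV1, dUV1_dy) * tex2Size;
    return 0.5 * log2(max(dot(dx, dx), dot(dy, dy)));
}
```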
As for engine integration, it would require the ability to bind the object’s index and vertex buffers during a compute shader pass – preferably a vertex buffer that has already been skinned, for performance reasons. Alternatively, you could pre-interpolate anything that doesn’t change dynamically, but that costs more memory, and we haven’t tried it.
Adopting Texel Shading is not an all-or-nothing choice. You can apply it to a single object. You can even make the selection per fragment within that object.
Conclusions.
We have described a way to do Object Space Shading on Direct3D 11 hardware. It is a texture space approach, but it rasterizes from the camera view to handle occlusion and to select the correct mipmap level for a particular view. We have explored some ways to reduce the shading load by decoupling the shading rate spatially and temporally, but there is much more to explore.