Game engines (ideal for creating a 3D configurator) do most of their shading work per pixel, or per fragment. But there is an alternative that has been popular in film for decades: Object Space Shading. Pixar's RenderMan, the best-known renderer in computer graphics, uses the Reyes rendering method, which is an object space shading method.
This blog post deals with an Object Space Shading method that works on Direct3D 11 class hardware. In particular, we shade in texture space, using the texture parameterization of the model. Shading in object space means the shading rate is already decoupled from the pixels, and it is also easy to decouple it temporally. We have used these decouplings to find ways to increase performance, but we will also discuss some future possibilities in this area.
Texture Space Shading.
When most people hear the term “Texture Space Shading”, they generally think of rasterizing the geometry in texture space. There are two difficulties with this approach: visibility and choosing the right resolution. Rasterizing in texture space means that you have no visibility information from the camera's point of view, so you end up shading texels that are never seen. Your resolution options are also limited, because you can only rasterize at one resolution at a time.
The choice of resolution matters because it drives your overall shading cost. Each finer mipmap level costs 4× the shading work of the one below it. So if you need 512×512 to match the pixel rate for one part of an object and 1024×1024 for another part, but you can only pick one, which do you choose? If you pick 512×512, part of the object will be undersampled. If you pick 1024×1024, part of the object will cost 4× more than necessary. This compounds with every level you span: an object that spans four mipmap levels can pay up to 64× the shading cost to hit a given resolution target across the whole object.
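To make the arithmetic explicit: forcing an object that spans n mipmap levels to shade entirely at the finest of them overpays by a factor of up to 4^(n−1) at the coarsest end, so n = 4 gives 4³ = 64×.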
So let's try a different approach. Instead, we rasterize in screen space, and rather than shading, we only record the texels we need as shading work. Hence the name we chose: “Texel Shading”. Since we rasterize from the camera's perspective, we get the two pieces of information that are normally unavailable to texture space methods – visibility after an early depth test, and screen-space derivatives to select the mipmap level. The unit of shading work is the texel.
How Texel Shading works.
At a high level, the process is as follows (a sketch of the record pass follows the list):
- Render the model and record the required texels.
- Shade the texels and write the results into a texture.
- Render the model a second time, looking up the shaded results.
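As a concrete illustration, here is a minimal HLSL sketch of the record pass. The resource names, the tile-ID packing, and the level clamp are our own illustrative choices for this post, not the actual implementation:

```hlsl
// Pass 1 (sketch): rasterize from the camera; instead of shading,
// record which texel tiles are needed.
AppendStructuredBuffer<uint> gTileQueue : register(u1); // tiles to shade

cbuffer PerObject : register(b0)
{
    float2 gBaseTextureSize; // resolution of the mip 0 shading texture
};

[earlydepthstencil] // only record texels that survive the depth test
float4 RecordPS(float4 svpos : SV_Position,
                float2 uv    : TEXCOORD0) : SV_Target
{
    // Screen-space derivatives pick the mip level, as in ordinary mipmapping.
    float2 dx  = ddx(uv) * gBaseTextureSize;
    float2 dy  = ddy(uv) * gBaseTextureSize;
    float  mip = 0.5f * log2(max(dot(dx, dx), dot(dy, dy)));
    uint   level = (uint)clamp(mip, 0.0f, 12.0f);

    // Identify the 8x8 texel tile this fragment touches at that level.
    uint2 texel = (uint2)(uv * gBaseTextureSize) >> level;
    uint2 tile  = texel >> 3;

    // Pack level and tile coordinates into one ID and enqueue it.
    // A real version would first consult the tile cache described below.
    gTileQueue.Append((level << 26) | (tile.y << 13) | tile.x);
    return 0; // color output unused in the record pass
}
```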
Note that Texel Shading can be applied per object, or even per fragment. Fragment-level selection happens at the time of the first geometry pass, so the cost of that extra pass remains either way, but it allows you to keep using standard forward rendering. You can also split the shading work between texel shading and pixel shading.
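As a sketch of what the second render could look like when the work is split, here diffuse lighting is read back from the texture space results while a specular term is still evaluated at pixel rate (all names are illustrative):

```hlsl
// Pass 2 (sketch): the forward pass looks up shading done in texture space.
Texture2D    gShadedResults : register(t0); // written by the compute pass
SamplerState gTrilinear     : register(s0);

cbuffer Lighting : register(b1)
{
    float3 gLightDir;  float pad0;
    float3 gSpecColor; float pad1;
};

float4 FinalPS(float4 svpos : SV_Position,
               float2 uv    : TEXCOORD0,
               float3 n     : NORMAL,
               float3 view  : VIEWDIR) : SV_Target
{
    // Low-frequency diffuse lighting was shaded at texel rate.
    float3 diffuse = gShadedResults.Sample(gTrilinear, uv).rgb;

    // High-frequency terms can still run at pixel rate.
    float3 h        = normalize(normalize(view) + gLightDir);
    float3 specular = gSpecColor * pow(saturate(dot(normalize(n), h)), 32.0f);

    return float4(diffuse + specular, 1.0f);
}
```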
One piece of information you should know is that we actually shade 8×8 tiles of texels. We keep a kind of cache with one entry per tile, which we use both to eliminate redundant texel shading and to track age for techniques that reuse shading from previous frames.
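The post does not spell out the cache's layout, but a plausible sketch is one 32-bit entry per tile holding the frame at which the tile was last shaded; that single value supports both duplicate elimination and age tracking:

```hlsl
// Tile cache (sketch): one uint per tile, storing the last-shaded frame.
RWStructuredBuffer<uint>     gTileLastShaded : register(u2);
AppendStructuredBuffer<uint> gTileQueue      : register(u1);

cbuffer Frame : register(b2)
{
    uint gFrameNumber;
    uint gMaxAge; // reuse results up to this many frames old (0 = every frame)
};

void RequestTile(uint tileId, uint tileIndex)
{
    // If the cached shading is fresh enough, reuse it (frame skipping).
    uint lastShaded = gTileLastShaded[tileIndex];
    if (gFrameNumber - lastShaded <= gMaxAge)
        return;

    // Otherwise, only the fragment that wins the compare-exchange
    // enqueues the tile, eliminating redundant shading requests.
    uint prev;
    InterlockedCompareExchange(gTileLastShaded[tileIndex],
                               lastShaded, gFrameNumber, prev);
    if (prev == lastShaded)
        gTileQueue.Append(tileId);
}
```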
The second is that we need to interpolate vertex attributes in the compute shader. For this we have a map called the “Triangle Index Texture”, which tells us which triangle we need when shading a particular texel.
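One way to build such a map – a standard approach, though the post does not describe the exact construction – is to rasterize the mesh in texture space, using each vertex's UV as its position and writing out the primitive ID:

```hlsl
// Triangle index texture (sketch): render the UV layout, store triangle IDs.
struct VSOut { float4 pos : SV_Position; };

VSOut UVLayoutVS(float2 uv : TEXCOORD0)
{
    VSOut o;
    // Map UV in [0,1] to clip space so the UV layout fills the target.
    // (A real version would flip V to match D3D texture conventions.)
    o.pos = float4(uv * 2.0f - 1.0f, 0.0f, 1.0f);
    return o;
}

uint TriangleIndexPS(VSOut i, uint primId : SV_PrimitiveID) : SV_Target
{
    return primId; // written to an R32_UINT target, once per mip level
}
```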
The disadvantages.
This alone does not gain you anything. In fact, it adds costs in several ways:
- It adds an extra geometry pass, although this can be combined with another geometry pass or with the final pass of the previous frame.
- The shading count starts out higher. In our experiments it is 20% to 100% more shading work, depending on the model. One reason is that we choose a conservative resolution to guarantee at least pixel rate. Another is that we actually work in 8×8 tiles, so some texels get shaded even when only a subset of the 8×8 tile is visible at the chosen resolution. Note the word “starts”: in the next section we look at ways to bring this back down.
- Overhead of shading in compute. Per-vertex operations move to per-texel operations, which is why it is worth considering pre-skinned vertices. You also have to compute barycentrics and do the interpolation yourself in shader code (a sketch follows after this list). There is still untapped potential here, too – in compute you have explicit access to neighbor information and could spread shading costs across the texel tile.
- Memory costs. Our method requires allocating enough texture memory to store the shaded results, plus the triangle index texture at a sufficiently high resolution.
- Filtering problems. We only experimented with point and bilinear filtering, which is not necessarily good enough for final shading. In fact, much smarter filtering is possible, which could turn this into an advantage. In the end, though, everything done in texture space has to go through texture filtering, which is far from ideal.
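To make the compute-side work concrete, here is a minimal sketch of shading one texel: look up the covering triangle, compute 2D barycentrics by hand, interpolate, and shade. The buffer layout, the assumed 1024×1024 level size, and the simple Lambert term are all illustrative:

```hlsl
// Texel shading in compute (sketch): manual attribute interpolation.
struct Vertex { float3 pos; float3 normal; float2 uv; };

Texture2D<uint>          gTriangleIndex : register(t0);
StructuredBuffer<Vertex> gVertices      : register(t1); // ideally pre-skinned
StructuredBuffer<uint>   gIndices       : register(t2);
RWTexture2D<float4>      gShadedResults : register(u0);

static const float2 kLevelSize = float2(1024.0f, 1024.0f); // assumed
static const float3 kLightDir  = normalize(float3(0.5f, 1.0f, 0.25f));

[numthreads(8, 8, 1)] // one thread group per 8x8 texel tile
void ShadeTileCS(uint3 tid : SV_DispatchThreadID)
{
    uint2 texel = tid.xy; // a real version offsets by the dequeued tile
    uint  tri   = gTriangleIndex[texel];

    Vertex v0 = gVertices[gIndices[3 * tri + 0]];
    Vertex v1 = gVertices[gIndices[3 * tri + 1]];
    Vertex v2 = gVertices[gIndices[3 * tri + 2]];

    // 2D barycentrics of the texel center within the triangle's UV layout.
    float2 p  = (float2(texel) + 0.5f) / kLevelSize;
    float2 e0 = v1.uv - v0.uv, e1 = v2.uv - v0.uv, ep = p - v0.uv;
    float  d  = e0.x * e1.y - e0.y * e1.x;
    float  b1 = (ep.x * e1.y - ep.y * e1.x) / d;
    float  b2 = (e0.x * ep.y - e0.y * ep.x) / d;
    float  b0 = 1.0f - b1 - b2;

    // Interpolate attributes by hand and shade (simple Lambert here).
    float3 n = normalize(b0 * v0.normal + b1 * v1.normal + b2 * v2.normal);
    float3 c = saturate(dot(n, kLightDir)).xxx;

    gShadedResults[texel] = float4(c, 1.0f);
}
```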
Advantages and potential.
Once you are shading in object space like this, new possibilities open up. We list some of them in this section. The first two, and a bit of the third, are what we have tried so far; the rest we plan to try in the future.
- Skip frames. Since shading takes place in object space, it is easier to change the temporal shading rate. Our implementation uses a cache that records how old the shaded results are and reuses results from previous frames if they are not too old.
- Variable and multi-rate shading. There is no reason the mipmap level selection has to match the pixel rate – you can choose a higher level for better shading antialiasing, or a lower one in low-variance regions for a greatly reduced shading rate, and this choice can be made per fragment. We experimented with a per-triangle mipmap level bias, checking the normal variance and biasing the mipmap level for lighting calculations when the triangle was relatively flat (see the sketch after this list). It is also possible to shade the same fragment at different rates – in a sense we did this by computing only the lighting in texture space and doing a texture lookup at fragment rate – but the work could be divided further, computing different parts at different resolutions at the same time.
- Decoupled geometry. In a forward renderer, shading efficiency drops rapidly for small triangles, and gets worse with multisample antialiasing (MSAA) when the samples in one pixel are covered by more than one triangle. Texel shading suffers from neither problem, because the shading rate is tied to texels rather than triangles. We found cases where texel shading beats standard forward rendering before we even skip frames or change the shading rate. And there are more possibilities still: the geometry used for shading can be completely different from the triangles rasterized in screen space, and can even differ between mipmap levels. This offers additional opportunities for geometric antialiasing that we have not explored yet.
- Object space temporal antialiasing. Instead of skipping frames to save shading costs, you could shade every frame and average the result with shading from previous frames, as is done in screen space temporal antialiasing. Doing this in object space avoids some of the artifacts and workarounds that screen space temporal antialiasing requires.
- Stochastic effects. An attractive feature of Object Space Shading is that you can reuse shading while applying depth of field or motion blur, without worrying about screen-space artifacts. In fact, this was an important motivation for the Reyes rendering system. There are also ways to find the right mipmap bias to save shading work when rendering a blurry object. We have not investigated this yet.
- Better filtering. Shading in object space, in compute, where we have access to adjacent texels, and in a kind of multi-resolution space, opens up further possibilities for shading filters. We have not researched this as much as we would like.
- Asynchronous shading and lighting. This was actually the original motivation for this research. Skipping frames is the simpler first step. But if you have a fallback for shading that is not ready in time, you may be able to compute shading entirely asynchronously while still updating the model's projection on screen at full rate, using the most recent shaded values. This is another area we want to explore.
- Stereo and other multi-view scenarios. It is also possible to reuse shading calculations across the two eyes for VR, with the caveat that many specular effects may not work well. With asynchronous shading or lighting, you could even share shading over a network between individual users viewing the same scene, as in a multiplayer game.
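As one illustration of the per-triangle bias mentioned above, a heuristic along these lines could map normal variance to a mipmap level bias; the thresholds and bias values here are invented for the sketch:

```hlsl
// Per-triangle mip bias (sketch): if a triangle's vertex normals are nearly
// parallel, its lighting varies slowly, so the lighting can be shaded at a
// coarser mip level. Thresholds are illustrative, not measured.
float MipBiasFromNormalVariance(float3 n0, float3 n1, float3 n2)
{
    float flatness = min(dot(n0, n1), min(dot(n1, n2), dot(n2, n0)));
    if (flatness > 0.999f) return 2.0f; // nearly flat: two levels coarser
    if (flatness > 0.99f)  return 1.0f; // fairly flat: one level coarser
    return 0.0f;                        // curved: shade at full rate
}
```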
Integration into existing engines.
We are not the first to try something like this. For example, the Nitrous engine uses a texture space shading technique suited to the real-time strategy (RTS) games it targets. It does not have to worry as much about objects spanning mipmap levels or about hidden texels, so it takes a different approach.
Texel Shading fits more naturally into a forward rendering pass than into a deferred one (as far as I have thought it through – although you could also do deferred texture space shading). It requires the object to have a unique texture parameterization, as is often already the case for a lightmap. It also requires space for the shaded-results texture, and a decision about how high its resolution should be. That depends on how the texture will be used – not all shading needs to happen in texture space.
If there are textures that do not use the unique parameterization, additional derivatives are required for the mipmap level selection. These basically fill the role of the screen-space derivatives in standard mipmapping, but compare the rate of change of the second texture's UVs against the first. Since the mapping from one UV set to the other tends to stay fixed, it can be precomputed.
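A sketch of that precomputed lookup, assuming a per-triangle Jacobian d(uv1)/d(uv0) computed offline; the function mirrors the usual derivative-based mip formula:

```hlsl
// Mip selection for a second UV set (sketch). J is the precomputed
// d(uv1)/d(uv0) Jacobian for the triangle, texelStep0 is the uv0 distance
// between adjacent shaded texels, and tex1Size is the second texture's size.
float MipLevelForSecondUV(float2x2 J, float texelStep0, float2 tex1Size)
{
    // Footprint of one shading step, measured in second-texture texels.
    float2 dx = mul(float2(texelStep0, 0.0f), J) * tex1Size;
    float2 dy = mul(float2(0.0f, texelStep0), J) * tex1Size;
    return 0.5f * log2(max(dot(dx, dx), dot(dy, dy)));
}
```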
As for engine integration, it requires the ability to bind the object's index and vertex buffers during a compute pass – preferably a vertex buffer that is already skinned, for performance reasons. Alternatively, you can pre-interpolate anything that does not change dynamically, but that costs more memory and is something we have not tried.
Adopting Texel Shading is not an all-or-nothing choice. You can apply it to a single object. You can even make the choice per fragment within that object.
Conclusions.
We have described a way to do Object Space Shading on Direct3D 11 hardware. It is a texture space approach, but it uses camera-view rasterization to help with occlusion and with selecting the right mipmap level for a given view. We have explored some ways to reduce the shading load by decoupling the shading rate spatially and temporally, but there is much more to explore.