What you should know about Subdivision Surfaces and Displacement Mapping in Takua.

Two standard features that every modern renderer supports are subdivision surfaces and various forms of displacement mapping. As we will discuss later in this article, these two features are usually closely related, both in use and in implementation. Subdivision and displacement are crucial tools for representing detail in computer graphics. From both a technical and an authoring point of view, being able to render more detail than is actually present in a mesh is advantageous: applying detail at render time lets geometry take up far less storage than it would if all detail were baked into the geometry, and artists often like being able to keep broad shapes separate from high-frequency detail.

Both subdivision and displacement originally came from the world of rasterization rendering, where on-the-fly geometry generation has historically been both easier to implement and more practical to use. Rasterization streams geometry through the renderer and draws it to the screen, so each individual piece of geometry can be subdivided, tessellated, displaced, rendered into the framebuffer, and then discarded to free up memory. REYES-era RenderMan was efficient at rendering subdivision surfaces and displacement for exactly this reason. In naive raytracing, however, rays can intersect geometry at any time and in any order. Subdividing and displacing geometry on the fly for each ray and then discarding the result is extremely expensive compared to processing geometry once per frame. The simplest solution to this problem is to subdivide, displace, and keep everything in memory for the duration of the raytrace. Historically, however, simply caching everything was never a practical solution because computers did not have enough memory to hold that much data. As a result, a considerable amount of past research went into smarter raytracing architectures that made on-the-fly subdivision and displacement affordable again.

Over the past five years, however, the story of raytraced displacement has changed. Much more powerful machines are now available (at some studios, render farm nodes with 256 GB of memory or more are no longer unusual). As a result, raytraced renderers no longer have to be as clever about managing displaced geometry; a combination of camera-adaptive tessellation and a simple geometry cache with a least-recently-used eviction strategy is often enough to make raytraced displacement practical. Today, efficient displacement is common in production path tracers, including Arnold, RenderMan/RIS, V-Ray, Corona, Hyperion, Manuka, and others. With this in mind, we tried to keep subdivision and displacement in Takua as simple as possible.

Takua has no concept of a tessellation cache with an eviction strategy for displaced geometry; the hope is simply to be as efficient as possible without completely blowing through memory. Since Takua is admittedly a hobby renderer and we personally render on machines with 48 GB of memory, we have not worried much about cases where things overflow memory. Instead of tessellating on the fly per ray or anything like that, we simply pre-subdivide and pre-displace everything up front during the initial scene load. Meshes are loaded, subdivided, and displaced in parallel. If Takua finds that the fully subdivided and displaced geometry does not fit within the allocated memory budget, the renderer simply aborts.
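
As a very rough illustration of this load-time flow, the sketch below subdivides and displaces every mesh up front and simply aborts if a memory budget is exceeded. The types and functions here (Mesh, subdivide, displace, the budget value) are hypothetical placeholders rather than Takua’s actual API, and the loop would be parallelized across meshes in practice.

```cpp
// Hypothetical sketch of a pre-tessellation pass at scene-load time.
#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

struct Mesh {
    // vertex/face/primvar data would live here
    std::size_t byteSize = 0;
};

// Placeholder stand-ins for the real subdivision (e.g. via OpenSubdiv) and
// displacement steps.
Mesh subdivide(const Mesh& base, int level) { return base; }
Mesh displace(const Mesh& subdivided) { return subdivided; }

void pretessellate(std::vector<Mesh>& meshes, std::size_t memoryBudgetBytes) {
    // In a real renderer this loop would be parallelized across meshes.
    std::size_t totalBytes = 0;
    for (Mesh& m : meshes) {
        Mesh displaced = displace(subdivide(m, /*level=*/2));
        totalBytes += displaced.byteSize;
        if (totalBytes > memoryBudgetBytes) {
            // Rather than paging geometry in and out, simply give up.
            throw std::runtime_error("Subdivided/displaced geometry exceeds memory budget");
        }
        m = std::move(displaced);
    }
}
```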

Note that Takua’s scene format distinguishes between a mesh and a geom: a mesh is the raw vertex/face/primvar data that makes up a surface, while a geom is an object that references a mesh along with transformation matrices, shader bindings, and so on. This separation between mesh data and geometric objects enables some useful features in the subdivision/displacement system. Takua’s scene file format allows subdivision and displacement modifiers to be bound either at the shader level or per geom. Geom-level bindings override shader-level bindings, which is useful for authoring: a number of objects can share the same shader but still have individual specializations for different subdivision rates and displacement maps. When loading a scene, Takua analyzes which subdivisions/displacements are required for which meshes by which geoms, and then de-duplicates and aggregates all cases where different geoms want the same subdivision or displacement applied to the same mesh. This de-duplication even works with instances.
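
As a small illustration of the binding rules described above, here is a hedged sketch of how geom-level subdivision/displacement settings might override shader-level ones; the types and field names are invented for this example and are not Takua’s actual scene-graph structures.

```cpp
// Illustrative sketch: geom-level subdivision/displacement bindings override
// shader-level ones.
#include <optional>
#include <string>

struct SubdivDispSettings {
    int subdivisionLevel = 0;
    std::string displacementMap;   // empty string means "no displacement"
    float displacementScale = 1.0f;
};

struct Shader {
    std::optional<SubdivDispSettings> subdivDisp;  // shader-level binding
};

struct Geom {
    const Shader* shader = nullptr;
    std::optional<SubdivDispSettings> subdivDisp;  // geom-level override
};

// Geom-level settings win; otherwise fall back to the shader-level binding.
std::optional<SubdivDispSettings> resolveSettings(const Geom& g) {
    if (g.subdivDisp) return g.subdivDisp;
    if (g.shader && g.shader->subdivDisp) return g.shader->subdivDisp;
    return std::nullopt;
}
```

The resolved settings, paired with the mesh they apply to, also make a natural key for the de-duplication step: two geoms that resolve to the same mesh and the same settings can share a single subdivided/displaced result.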

Once Takua has compiled a list of all meshes that require subdivision, the meshes are subdivided in parallel. For Catmull-Clark subdivision, we use OpenSubdiv to compute subdivision stencil tables, evaluate the stencils, and perform the final tessellation. As far as we can tell, the stencil computation in OpenSubdiv is single-threaded, so it can be quite slow for really heavy meshes. The stencil evaluation and final tessellation, however, are very fast, because OpenSubdiv provides a number of parallel evaluators that can run with a variety of backends, ranging from TBB on the CPU to CUDA or OpenGL compute shaders on the GPU. Takua currently uses OpenSubdiv’s TBB evaluator. A nice property of OpenSubdiv’s stencil implementation is that the stencil computation depends only on the topology of the mesh and not on individual primvars, so a single stencil computation can be reused to interpolate many different primvars such as positions, normals, UVs, and more.
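
The sketch below shows roughly what uniform Catmull-Clark subdivision through OpenSubdiv’s stencil tables looks like, using OpenSubdiv’s public Far API. It is a simplified example rather than Takua’s actual code: it uses the serial StencilTable::UpdateValues() path for brevity (Takua uses the parallel TBB evaluator) and omits error handling, face-varying primvars, and extraction of the refined face topology.

```cpp
// Minimal sketch: uniform Catmull-Clark subdivision via OpenSubdiv stencil tables.
#include <opensubdiv/far/topologyDescriptor.h>
#include <opensubdiv/far/stencilTableFactory.h>
#include <vector>

using namespace OpenSubdiv;

// Simple 3-float vertex type satisfying the interface UpdateValues() expects.
struct Vertex {
    float p[3] = {0.0f, 0.0f, 0.0f};
    void Clear(void* = nullptr) { p[0] = p[1] = p[2] = 0.0f; }
    void AddWithWeight(const Vertex& src, float w) {
        p[0] += w * src.p[0]; p[1] += w * src.p[1]; p[2] += w * src.p[2];
    }
};

std::vector<Vertex> subdivideUniform(const Far::TopologyDescriptor& desc,
                                     const std::vector<Vertex>& basePoints,
                                     int level) {
    // Build a topology refiner for Catmull-Clark subdivision.
    Sdc::Options sdcOptions;
    sdcOptions.SetVtxBoundaryInterpolation(Sdc::Options::VTX_BOUNDARY_EDGE_ONLY);
    Far::TopologyRefiner* refiner =
        Far::TopologyRefinerFactory<Far::TopologyDescriptor>::Create(
            desc, Far::TopologyRefinerFactory<Far::TopologyDescriptor>::Options(
                      Sdc::SCHEME_CATMARK, sdcOptions));
    refiner->RefineUniform(Far::TopologyRefiner::UniformOptions(level));

    // The stencil table depends only on topology, so it can be reused to
    // interpolate positions, normals, UVs, or any other primvar.
    Far::StencilTableFactory::Options stencilOptions;
    stencilOptions.generateIntermediateLevels = false;  // only the last level
    const Far::StencilTable* stencils =
        Far::StencilTableFactory::Create(*refiner, stencilOptions);

    // Evaluate the stencils to produce the refined (last-level) points.
    std::vector<Vertex> refined(stencils->GetNumStencils());
    stencils->UpdateValues(basePoints.data(), refined.data());

    delete stencils;
    delete refiner;
    return refined;
}
```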

No write-up of subdivision surfaces is complete without an image of a cube subdividing into a sphere, so the figure below shows a cube at multiple subdivision levels. Each subdivided cube is rendered with a procedural wireframe texture that we implemented to visualize what the subdivision did.
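
For reference, a common way to implement such a procedural wireframe texture is to shade a hit point as “wire” whenever its smallest barycentric coordinate falls below a threshold, i.e. when the point lies close to a triangle edge. The snippet below is a generic sketch of that trick and is not necessarily how Takua’s wireframe texture is implemented.

```cpp
// Sketch of a barycentric-threshold wireframe texture for triangles.
#include <algorithm>

struct Color { float r, g, b; };

Color wireframeShade(float baryU, float baryV, float wireWidth,
                     const Color& wireColor, const Color& fillColor) {
    float baryW = 1.0f - baryU - baryV;
    float minBary = std::min({baryU, baryV, baryW});
    // Near an edge, one barycentric coordinate approaches zero.
    return (minBary < wireWidth) ? wireColor : fillColor;
}
```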

Each subdivided mesh is stored as a new mesh. Base meshes that require multiple subdivision levels for multiple different geoms get a new subdivided mesh per subdivision level. After all subdivided meshes are ready, Takua then carries out displacement. Displacement is parallelized both across meshes and within each mesh. Takua supports both on-the-fly and fully cached displacement, which can be specified per shader or per geom. When a mesh is marked for full caching, the mesh is fully displaced, stored as a separate mesh from the undisplaced subdivided mesh, and a BVH is then built for the displaced mesh. When a mesh is marked for on-the-fly displacement instead, the displacement system computes each displaced face, computes the bounds of that face, and then discards the face. The displaced bounds are then used to build a tight BVH for the displaced mesh without the displaced mesh itself ever having to be stored; only a reference to the undisplaced subdivided mesh needs to be kept. When a ray traverses the BVH for an on-the-fly displacement mesh, each BVH leaf node specifies which triangles of the undisplaced mesh must be displaced to produce the final polygons for intersection, and the displaced polygons are then intersected and discarded. For the scenes in this article, on-the-fly displacement seems to be about twice as slow as fully cached displacement, which is to be expected, but if the same mesh is displaced in several different ways, the memory savings can be correspondingly large. After all displacement has been computed, Takua goes back, analyzes which base meshes and undisplaced subdivision meshes are no longer needed, and frees those meshes to reclaim memory.
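
To make the on-the-fly path a bit more concrete, the sketch below shows one way a BVH leaf for an on-the-fly displaced mesh could be intersected: the leaf stores triangle indices into the undisplaced subdivided mesh, and each referenced triangle is re-displaced, intersected, and immediately discarded. All of the types and helper functions here are hypothetical placeholders rather than Takua’s actual data structures.

```cpp
// Sketch of intersecting a BVH leaf that references undisplaced triangles.
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };
struct Ray { Vec3 origin, direction; float tMax; };
struct Triangle { Vec3 v0, v1, v2; };
struct Hit { float t = 1e30f; uint32_t primId = 0; };

// Stand-in for the undisplaced subdivided mesh (positions, UVs, normals).
struct SubdividedMesh {};

// Placeholder: re-apply the displacement map to one triangle of the
// undisplaced mesh by sampling the texture at each vertex.
Triangle displaceTriangle(const SubdividedMesh&, uint32_t) { return Triangle{}; }

// Placeholder: a real implementation would use, e.g., a watertight
// ray/triangle test.
bool intersectTriangle(const Ray&, const Triangle&, Hit*) { return false; }

// A leaf of the BVH built over *displaced* bounds; it only stores indices
// into the undisplaced mesh, never the displaced triangles themselves.
struct OnTheFlyLeaf {
    std::vector<uint32_t> triIndices;
};

bool intersectLeaf(const Ray& ray, const OnTheFlyLeaf& leaf,
                   const SubdividedMesh& mesh, Hit* closestHit) {
    bool hitAnything = false;
    for (uint32_t triIndex : leaf.triIndices) {
        // Displace on demand; the displaced triangle lives only on the stack
        // and is discarded as soon as the intersection test is done.
        Triangle displaced = displaceTriangle(mesh, triIndex);
        Hit hit;
        if (intersectTriangle(ray, displaced, &hit) && hit.t < closestHit->t) {
            *closestHit = hit;
            hitAnything = true;
        }
    }
    return hitAnything;
}
```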

We have implemented support for both scalar displacement via regular grayscale texture maps and vector displacement from OpenEXR textures. The ocean renders use a vector displacement map applied to a single flat plane.

For both ocean renders, the vector displacement OpenEXR texture comes from Autodesk, which generously provides it as part of an article on vector displacement in Arnold. The renders are lit with a skydome using the HDRI Sky 193 texture.

For both scalar and vector displacement, the displacement amount can be controlled by a single scale value applied to the displacement texture. Vector displacement maps are assumed to be in a local tangent space; which axis is used as the basis for the tangent space can be specified per displacement map. The figure below shows three tacos with different scale values. The left taco has a displacement scale of 0, which effectively disables displacement. The middle taco has a displacement scale of 0.5 times the native displacement values in the vector displacement map. The right taco has a displacement scale of 1.0, i.e. it simply uses the native displacement values from the vector displacement map.
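
The displacement scale itself is just a uniform multiplier on the sampled displacement. The sketch below shows the usual formulas: scalar displacement moves a point along its normal by the scaled height, while tangent-space vector displacement re-expresses the sampled vector in the local tangent frame before scaling. The Vec3 type and function names are illustrative only.

```cpp
// Sketch of applying a displacement scale for scalar and vector displacement.
struct Vec3 { float x, y, z; };

Vec3 operator+(const Vec3& a, const Vec3& b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(float s, const Vec3& v) { return {s * v.x, s * v.y, s * v.z}; }

// Scalar displacement: move the point along its shading normal by the
// (scaled) height sampled from a grayscale map. A scale of 0 disables
// displacement; a scale of 1 uses the native values.
Vec3 displaceScalar(const Vec3& p, const Vec3& n, float height, float scale) {
    return p + (scale * height) * n;
}

// Tangent-space vector displacement: interpret the sampled vector in the
// local (tangent, bitangent, normal) frame and scale it uniformly. Which
// component maps to which axis would be configurable per displacement map.
Vec3 displaceVector(const Vec3& p, const Vec3& t, const Vec3& b, const Vec3& n,
                    const Vec3& d, float scale) {
    Vec3 offset = d.x * t + d.y * b + d.z * n;
    return p + scale * offset;
}
```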

The figure below shows a close-up of the right taco from the figure above. The base mesh for the taco is relatively low resolution, but subdivision and displacement add a large amount of geometric detail to the render. In this case, the taco is tessellated to the point where each micropolygon is subpixel-sized. The displacement map and other textures for the taco come from Quixel’s Megascans library.

A major challenge in displacement mapping is cracking. Cracking occurs when neighboring polygons displace shared vertices differently per polygon. This can happen if the normals across a surface are not continuous, or if there is a discontinuity either in how the displacement texture is mapped onto the surface or in the displacement texture itself. We have implemented an optional, somewhat crude solution to displacement cracking. When crack removal is enabled, Takua analyzes the mesh at displacement time and records how many different ways each vertex in the mesh is displaced by different faces, along with which faces displace that vertex. After an initial displacement pass, the crack remover then goes back and, for each vertex that is displaced in more than one way, averages all displacements into a single displacement and updates all faces that use that vertex to share the same averaged result. This approach requires a fair amount of bookkeeping and pre-analysis of the displaced mesh, but it seems to work well.
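
The core of this crack-removal pass is the averaging step, sketched below: collect the displaced position that each face produces for a shared vertex, then write the per-vertex average back to every face corner so that all faces agree on where that vertex ends up. The data layout is an invented simplification, not Takua’s actual bookkeeping.

```cpp
// Sketch of the per-vertex displacement averaging used to close cracks.
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

// One displaced position per (face, corner) that references a shared vertex.
struct CornerDisplacement {
    uint32_t vertexIndex;    // index into the shared (undisplaced) vertex list
    Vec3 displacedPosition;  // position produced by that face's displacement
};

void removeCracks(std::vector<CornerDisplacement>& corners) {
    // First pass: accumulate per-vertex sums and counts.
    struct Accum { Vec3 sum; uint32_t count = 0; };
    std::unordered_map<uint32_t, Accum> perVertex;
    for (const CornerDisplacement& c : corners) {
        Accum& a = perVertex[c.vertexIndex];
        a.sum.x += c.displacedPosition.x;
        a.sum.y += c.displacedPosition.y;
        a.sum.z += c.displacedPosition.z;
        a.count++;
    }
    // Second pass: write the averaged position back to every corner, so all
    // faces sharing a vertex agree on where that vertex ends up.
    for (CornerDisplacement& c : corners) {
        const Accum& a = perVertex.at(c.vertexIndex);
        float inv = 1.0f / static_cast<float>(a.count);
        c.displacedPosition = {a.sum.x * inv, a.sum.y * inv, a.sum.z * inv};
    }
}
```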

In most cases, the crack removal system seems to work quite well. The system is not perfect, however; sometimes stretching artifacts can occur, especially on surfaces with a textured base color. This stretching occurs because the crack removal system essentially stretches micropolygons to cover what would otherwise be a crack.

Takua automatically recalculates normals for subdivided/displaced polygons. By default, Takua simply uses the geometric normals as the shading normals for displaced polygons, but there is also an option to compute smooth shading normals. We chose geometric normals as the default in the expectation that, with subpixel subdivision and displacement, separate shading normals should not be necessary.
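
For the optional smooth shading normals, a standard approach (and only an assumption about what Takua does) is to accumulate area-weighted face normals onto the vertices of the displaced mesh and then normalize, as sketched below.

```cpp
// Sketch of area-weighted smooth vertex normals for a triangle mesh.
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3 { float x = 0.0f, y = 0.0f, z = 0.0f; };

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

std::vector<Vec3> computeSmoothNormals(const std::vector<Vec3>& positions,
                                       const std::vector<uint32_t>& indices) {
    std::vector<Vec3> normals(positions.size());
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const Vec3& p0 = positions[indices[i]];
        const Vec3& p1 = positions[indices[i + 1]];
        const Vec3& p2 = positions[indices[i + 2]];
        // The cross product's length is proportional to the triangle's area,
        // so summing it weights larger faces more heavily.
        Vec3 faceNormal = cross(sub(p1, p0), sub(p2, p0));
        for (int k = 0; k < 3; ++k) {
            Vec3& n = normals[indices[i + k]];
            n.x += faceNormal.x; n.y += faceNormal.y; n.z += faceNormal.z;
        }
    }
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}
```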

In the future, we may implement our own subdivision library, and we should probably also think more about what kind of combined tessellation cache and displacement strategy would make sense for better memory efficiency. For now, however, everything seems to work well and renders relatively efficiently. The non-ocean renders in this article all use subpixel subdivision, with millions of polygons, and each took several hours to render at 4K resolution (3840×2160) on a machine with two Intel Xeon X5675 CPUs (12 cores total). The two ocean renders were run overnight at 1080p and took longer to converge, mainly due to depth of field. All renders in this article were shaded with a new, greatly improved shading system, so Takua can now handle much more complexity than before.
