Eye Dome Lighting (EDL) is a non-photorealistic shading technique designed to improve depth perception in scientific visualization images. It relies on efficient post-processing passes implemented on the GPU with GLSL shaders to achieve interactive rendering. Only the projected depth information is required to compute the shading, which is then applied to the colored scene image. EDL can therefore be applied to any type of data, regardless of its geometric characteristics, except for data that requires transparent rendering.
In this article we first briefly describe EDL and then give some details about how it has been integrated into ParaView. EDL was developed by Christian Boucheny during his doctoral thesis. The original goal was to improve depth perception in the visualization of large 3D data sets representing complex industrial plants or equipment for Electricite de France (EDF). In fact, EDF is a large European electrical company where engineers visualize complex data on a daily basis, such as 3D scans of power plants or results from numerical multiphysics simulations.
What is Eye Dome Lighting?
Shading occupies a special place among the visual mechanisms used to perceive complex scenes. Global lighting models, including a physically inspired Ambient Occlusion term, are often used to emphasize the relief of surfaces and clarify spatial relationships. However, such models remain costly, as they often require elaborate precomputations, and are therefore unsuitable for an exploratory process in scientific visualization. On the other hand, image-based techniques, such as edge enhancement or halos derived from depth differences, provide useful cues for understanding complex scenes. Subtle spatial relationships that are not visible with realistic lighting models can be enhanced with these non-photorealistic techniques.
EDL, the non-photorealistic shading technique presented here, is based on the following guiding principles:
Image-based Lighting: This method is inspired by Ambient Occlusion and Skydome Lighting techniques, with the addition of view dependency. In contrast to the standard application of these techniques, in our approach the calculations are performed in image space, using only the depth buffer information, as in Crytek's Screen-Space Ambient Occlusion. These techniques require no representation in object space, so no knowledge of the geometry of the visualized data and no pre-processing steps are required.
Locality: The shading of a particular pixel should be based primarily on its immediate vicinity in image space, since long-range interactions are not what viewers perceive first.
Interactivity: Our main concern is to avoid costly operations that would slow down interactive exploration and thus limit understanding of the data. Given the evolution of graphics hardware, performing a limited number of operations per fragment appears to be the most efficient approach.
The basic principle of the EDL algorithm is to consider a hemisphere (the dome) centered on each pixel p. This dome is bounded by a “horizontal plane” perpendicular to the viewing direction at p. The shading is a function of how much of this dome is visible at p or, conversely, is determined by how much of this dome is hidden by the neighbors of p. In other words, a neighboring pixel reduces the illumination at p if its depth is smaller than that of p. This defines a shading amount that depends solely on the depth values of the close neighbors. To obtain shading that also takes more distant neighbors into account, a multiscale approach is used: the same shading function is applied at lower resolutions, the resulting images are filtered with a cross bilateral filter to limit the aliasing caused by the lower resolution, and they are then merged with the shaded image at full resolution.
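To make the shading term concrete, here is a minimal, single-scale CPU sketch of the per-pixel function described above, written in C++. The function name edlShade, the strength parameter and its default value, and the 8-neighbor sampling pattern are illustrative assumptions; the actual implementation runs as a GLSL fragment shader on the GPU and adds the multiscale and filtering steps on top.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// CPU sketch of the single-scale EDL shading term for one pixel (x, y).
// The depth buffer is row-major; smaller depth means closer to the viewer.
float edlShade(const std::vector<float>& depth,
               int width, int height,
               int x, int y,
               float strength = 100.0f) // illustrative value
{
  const float zp = depth[y * width + x];
  float obscurance = 0.0f;

  // Accumulate how much the 8 neighbors rise above p, i.e. how much of
  // the dome centered on p they hide (closer neighbors contribute).
  for (int dy = -1; dy <= 1; ++dy)
  {
    for (int dx = -1; dx <= 1; ++dx)
    {
      if (dx == 0 && dy == 0) continue;
      const int nx = std::clamp(x + dx, 0, width - 1);
      const int ny = std::clamp(y + dy, 0, height - 1);
      const float zn = depth[ny * width + nx];
      obscurance += std::max(0.0f, zp - zn); // neighbor closer => occludes p
    }
  }

  // The shading factor decreases with the accumulated obscurance and is
  // multiplied into the colored scene image.
  return std::exp(-strength * obscurance);
}
```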
Compiling and using EDL in ParaView.
Eye Dome Lighting shading is implemented in ParaView as a plugin. The code can be found in the ParaView source tree under /Plugins/EyeDomeLighting. Before building, the variable PARAVIEW_BUILD_PLUGIN_EyeDomeLighting must be set to ON in the CMake interface to enable the plugin.
Once ParaView has been built, the plugin can be loaded via Manage Plugins in the Tools menu. The dynamic library of the EDL plugin is currently called libRenderPassEyeDomeLightingView. After loading, a new view type named “Render View + Eye Dome Lighting” appears in the list of available views. Simply select it, and all 3D data loaded in the view will be shaded with EDL. Note that, due to the ParaView-specific material pipeline, the EDL shading is currently superimposed on the classic rendering; changing that pipeline would make it possible to define different shadings more flexibly.
Plugin architecture in ParaView.
The EDL algorithm is implemented as a vtkImageProcessingPass. This allows us to call the algorithm from the plugin in the following way:
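The sketch below illustrates such a call, assuming a view class named vtkPVRenderViewWithEDL that installs the pass in its constructor; the class name and the constructor placement are illustrative, while SetImageProcessingPass and SetUseDepthBuffer are the methods discussed below.

```cpp
#include "vtkEDLShading.h"
#include "vtkObjectFactory.h"
#include "vtkPVRenderView.h"
#include "vtkPVSynchronizedRenderer.h"

// Hypothetical view class, for illustration only: a vtkPVRenderView that
// installs the EDL pass on its synchronized renderer when constructed.
class vtkPVRenderViewWithEDL : public vtkPVRenderView
{
public:
  static vtkPVRenderViewWithEDL* New();
  vtkTypeMacro(vtkPVRenderViewWithEDL, vtkPVRenderView);

protected:
  vtkPVRenderViewWithEDL()
  {
    vtkEDLShading* edlPass = vtkEDLShading::New();

    // Insert the EDL image-processing pass at the right place in the
    // visualization pipeline ...
    this->SynchronizedRenderers->SetImageProcessingPass(edlPass);

    // ... and request the depth buffer, which EDL needs (off by default).
    this->SynchronizedRenderers->SetUseDepthBuffer(true);

    edlPass->Delete();
  }
};

vtkStandardNewMacro(vtkPVRenderViewWithEDL);
```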
If we look at this code, we can already describe some important details of the implementation.
The plugin itself consists of a vtkPVRenderView, a ParaView view. The view provides a new method, SetImageProcessingPass, which inserts our algorithm at the right place in the visualization pipeline. This framework for post-processing image passes was recently added for EDF R&D. However, the position where the image-processing pass is inserted does not currently allow transparency to be handled correctly. This is due to the more complex pipeline design required for transparent rendering, which relies on depth peeling; further development in VTK could address this as needed.
The SetUseDepthBuffer method has been added to vtkPVSynchronizedRenderer to enable the use of the depth buffer. EDL needs the depth buffer, but most algorithms do not, and capturing it unconditionally would slow down the system when it is not needed. To avoid this, SetUseDepthBuffer lets the user activate the depth buffer when their algorithm requires it; the default value is off.
The use of the depth buffer by EDL was one of the main challenges of integrating EDL into ParaView: the plugin has to work in standalone, client, server, and parallel server modes, and tiled displays are also taken into account. Some development was done to allow parallel compositing of the depth buffer with IceT, and exposing this IceT functionality to the render passes was the main requirement for a correct implementation of EDL.
Shading algorithm.
The vtkEDLShading class is based on another class, vtkDepthImageProcessingPass. This class contains general methods that are not specific to the EDL algorithm and can be used to implement other algorithms; for example, we have implemented an image-based Ambient Occlusion shading algorithm and a ParaView view based on it. Any user can derive a class from vtkDepthImageProcessingPass to implement such an algorithm.
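As a rough illustration of that extension point, a custom pass might be structured as in the skeleton below; the class name vtkMyDepthShading and the empty Render body are placeholders, and vtkEDLShading itself is the real, complete example to study.

```cpp
#include "vtkDepthImageProcessingPass.h"
#include "vtkObjectFactory.h"
#include "vtkRenderState.h"

// Hypothetical skeleton of a custom depth-based shading pass.
class vtkMyDepthShading : public vtkDepthImageProcessingPass
{
public:
  static vtkMyDepthShading* New();
  vtkTypeMacro(vtkMyDepthShading, vtkDepthImageProcessingPass);

  // Entry point of the render pass: render the delegate scene, fetch its
  // color and depth textures, apply a custom GLSL post-processing pass,
  // and composite the result back into the frame.
  void Render(const vtkRenderState* s) override
  {
    (void)s;
    // Shading-specific GPU work would go here, using the general helpers
    // inherited from vtkDepthImageProcessingPass.
  }

protected:
  vtkMyDepthShading() = default;
  ~vtkMyDepthShading() override = default;
};

vtkStandardNewMacro(vtkMyDepthShading);
```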
Acknowledgements.
The EDL algorithm is the result of joint work by Electricite de France, CNRS, College de France, and Universite J. Fourier, carried out during the doctoral thesis of Christian Boucheny.