Volume Ray Casting, sometimes referred to as Volumetric Ray Casting, Volumetric Ray Tracing or Volume Ray Marching, is an image-based volume rendering technique. Ray Casting plays an important role in the creation process of a 3D configurator. It calculates 2D images from 3D volume data sets (3D scalar fields). Volume Ray Casting, which processes volume data, must not be confused with ray casting in the sense of ray tracing, which processes surface data. In the volumetric variant, the calculation does not stop at the surface of an object but steps through it, sampling the volume along the ray. In contrast to ray tracing, Volume Ray Casting does not generate secondary rays. When the context is clear, some authors speak simply of ray casting. Because ray marching does not require an exact solution for ray intersections and collisions, it is suitable for real-time computation in many applications for which ray tracing is unsuitable.


Classification.

The technique of Volume Ray Casting can be derived directly from the rendering equation, and it provides results of very high quality. Volume Ray Casting is classified as an image-based volume rendering technique because the calculation starts from the pixels of the output image rather than from the input volume data, as is the case with object-based techniques.
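To make the image-order idea concrete, here is a minimal sketch in Python. The helpers `camera.ray_through_pixel` and `cast_ray` are illustrative assumptions standing in for steps 1 to 4 of the basic algorithm below, not part of any particular API:

```python
import numpy as np

def render(volume, camera, width, height):
    """Image-order rendering: the outer loops run over the pixels of the
    output image, and each pixel casts one ray into the volume. An
    object-order technique would instead iterate over the input voxels."""
    image = np.zeros((height, width, 3))
    for y in range(height):
        for x in range(width):
            origin, direction = camera.ray_through_pixel(x, y)  # hypothetical camera helper
            image[y, x] = cast_ray(volume, origin, direction)   # hypothetical; see steps 1-4
    return image
```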

Basic algorithm.

In its basic form, the Volume Ray Casting algorithm consists of four steps:

  1. Ray Casting. For each pixel of the final image, a viewing ray is shot (“cast”) through the volume. At this stage it is useful to assume that the volume is enclosed in a bounding primitive, a simple geometric object – usually a cuboid – that is used to intersect the viewing ray with the volume (a ray–box intersection sketch follows this list).
  2. Sampling. Along the part of the viewing ray that lies within the volume, equidistant sampling points (samples) are selected. In general, the volume is not aligned with the viewing ray, and the sampling points usually lie between voxels. For this reason, the sample values must be interpolated from the surrounding voxels, usually with trilinear interpolation (see the interpolation sketch after this list).
  3. Shading. For each sampling point, a transfer function maps the scalar value to an RGBA material color, and the gradient of the scalar field is computed to serve as the illumination normal. The gradient represents the orientation of the local surfaces within the volume. The samples are then shaded, i.e. colored and illuminated, according to their surface orientation and the position of the light source in the scene (see the shading sketch after this list).
  4. Compositing. After all sampling points have been shaded, they are composited along the viewing ray, yielding the final color value for the pixel currently being processed. Compositing is derived directly from the rendering equation and is comparable to stacking acetate sheets on an overhead projector. It can work back-to-front, i.e. the calculation starts with the sample farthest from the viewer and ends with the sample closest to the viewer; this order ensures that occluded parts of the volume cannot affect the resulting pixel. The front-to-back order can be computationally more efficient: the remaining ray energy decreases as the ray travels away from the camera, so the contribution to the rendering integral shrinks, and more aggressive trade-offs between speed and quality can be made (increasing the distance between samples along the ray is one such trade-off; a front-to-back compositing sketch with early ray termination follows this list).
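For step 1, a common way to find where the viewing ray enters and leaves the bounding cuboid is the slab method for ray–box intersection. The following is a minimal sketch, assuming an axis-aligned box and NumPy arrays for all arguments:

```python
import numpy as np

def intersect_box(origin, direction, box_min, box_max):
    """Slab method: intersect a ray with an axis-aligned bounding box.
    Returns (t_near, t_far) along the ray, or None if the ray misses."""
    inv_dir = 1.0 / direction                  # NumPy yields +/-inf for zero components,
    t0 = (box_min - origin) * inv_dir          # which the min/max logic below tolerates
    t1 = (box_max - origin) * inv_dir
    t_near = float(np.max(np.minimum(t0, t1)))
    t_far = float(np.min(np.maximum(t0, t1)))
    if t_near > t_far or t_far < 0.0:
        return None                            # the ray misses the volume entirely
    return max(t_near, 0.0), t_far
```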
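For step 2, the scalar value at an arbitrary sample position is reconstructed from the eight surrounding voxels. A minimal trilinear interpolation sketch, assuming `volume` is a 3D NumPy array indexed as `volume[x, y, z]` and `p` a position in voxel coordinates:

```python
import numpy as np

def sample_trilinear(volume, p):
    """Interpolate the scalar value at continuous position p = (x, y, z)
    from the 2x2x2 block of voxels surrounding it."""
    i = np.clip(np.floor(p).astype(int), 0, np.array(volume.shape) - 2)
    fx, fy, fz = p - i                               # fractional offsets inside the cell
    c = volume[i[0]:i[0] + 2, i[1]:i[1] + 2, i[2]:i[2] + 2].astype(float)
    c = c[0] * (1 - fx) + c[1] * fx                  # collapse the x axis
    c = c[0] * (1 - fy) + c[1] * fy                  # then the y axis
    return c[0] * (1 - fz) + c[1] * fz               # then the z axis
```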
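For step 3, one widely used choice (an assumption here, not mandated by the basic algorithm) is to estimate the gradient with central differences and apply a simple Lambertian lighting term. `transfer_function` is any user-supplied mapping from a scalar value to an RGBA tuple, and `sample_trilinear` is the sketch above:

```python
import numpy as np

def shade_sample(volume, p, transfer_function, light_dir):
    """Classify one sample via the transfer function, then light it using
    the negated, normalized gradient of the scalar field as the normal."""
    rgba = np.asarray(transfer_function(sample_trilinear(volume, p)), dtype=float)
    offsets = np.eye(3)                              # central differences, one voxel apart
    grad = np.array([sample_trilinear(volume, p + o) - sample_trilinear(volume, p - o)
                     for o in offsets])
    n = np.linalg.norm(grad)
    if n > 1e-6:
        normal = -grad / n                           # gradient points toward increasing density
        diffuse = max(float(np.dot(normal, light_dir)), 0.0)
        rgba[:3] *= 0.2 + 0.8 * diffuse              # small ambient plus Lambertian diffuse
    return rgba
```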
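For step 4, the standard “over” operator in front-to-back form looks as follows. The sketch assumes the samples are already shaded (for example by `shade_sample` above) and ordered from the camera outward; the early-exit test is one of the speed/quality optimizations the front-to-back order enables:

```python
import numpy as np

def composite_front_to_back(shaded_samples):
    """Accumulate RGBA samples along one ray, nearest sample first."""
    color = np.zeros(3)
    alpha = 0.0
    for rgba in shaded_samples:
        weight = (1.0 - alpha) * rgba[3]       # remaining transmittance times sample opacity
        color += weight * np.asarray(rgba[:3])
        alpha += weight
        if alpha > 0.99:                       # early ray termination: the ray is nearly opaque
            break
    return color, alpha
```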

Advanced adaptive algorithms.

The adaptive sampling strategy drastically shortens rendering time for high-quality output – the higher the quality and/or the larger the data set, the greater the advantage over a regular, evenly spaced sampling strategy. However, adaptive ray casting over the projection plane and adaptive sampling along each individual ray do not map well onto the SIMD architecture of modern GPUs. Multi-core CPUs, however, are a perfect fit for this technique, making them suitable for interactive volumetric rendering of the highest quality.
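To illustrate one possible shape of such a scheme (the callback `opacity_at` and the step factors are illustrative assumptions, not a specific published algorithm):

```python
def march_adaptive(opacity_at, t_near, t_far, base_step=0.5):
    """Sketch of adaptive sampling along one ray: fine steps where the
    volume contributes, coarser steps through nearly empty space."""
    positions = []
    t = t_near
    while t < t_far:
        positions.append(t)
        # Take a 4x coarser step where the current sample is nearly transparent.
        t += base_step if opacity_at(t) > 0.05 else 4.0 * base_step
    return positions
```

The per-sample branch is exactly the kind of data-dependent control flow that causes divergence on SIMD hardware, while costing next to nothing on a multi-core CPU.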
