Cone tracing and beam tracing are derivatives of the ray-tracing algorithm used in 3D configurators: they replace rays, which have no thickness, with thick beams.

This is done for two reasons:

**From the perspective of the physics of light transport.**

The energy reaching a pixel comes from the entire solid angle through which the eye sees that pixel in the scene, not from a single central sample. This gives rise to the key concept of the pixel footprint on surfaces or in texture space: the back-projection of the pixel onto the scene.
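
As a rough sketch, the width of a pixel's back-projection onto a surface can be estimated with a small-angle approximation (the function and parameter names below are assumptions for illustration, not from the article):

```python
import math

def pixel_footprint(fov_y_deg, res_y, distance, cos_incidence=1.0):
    """Approximate width of a pixel's back-projection onto a surface.

    Small-angle approximation: each pixel subtends roughly fov/res
    radians, so at `distance` it covers about distance * angle,
    stretched by 1/cos(incidence angle) on tilted surfaces.
    """
    pixel_angle = math.radians(fov_y_deg) / res_y
    return distance * pixel_angle / cos_incidence
```

The footprint grows linearly with distance, which is why a single central sample becomes less and less representative of the pixel's true solid angle deeper into the scene.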

The above description corresponds to the simplified pinhole-camera optics classically used in computer graphics. Note that this approach can also represent a lens-based camera, and thus depth-of-field effects, by using a cone whose cross-section decreases from the lens size to zero at the focal plane and then increases again.
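
Under a thin-lens model, that cone's cross-section radius at a given depth can be sketched as follows (the function name and parameters are illustrative assumptions):

```python
def thin_lens_cone_radius(lens_radius, focus_dist, z):
    """Radius of the depth-of-field cone at depth z along the view ray.

    The cone shrinks linearly from the lens aperture to zero at the
    focal plane, then grows again beyond it.
    """
    return lens_radius * abs(1.0 - z / focus_dist)
```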

In addition, due to diffraction and imperfections, a real optical system does not focus light onto exact points. This can be modeled as a point spread function (PSF) weighted over a solid angle larger than the pixel.

**From a signal processing perspective.**

Ray-traced images suffer from strong aliasing because the “projected geometric signal” contains very high frequencies, far above the Nyquist–Shannon maximum frequency that can be represented at the pixel sampling rate. The input signal must therefore be low-pass filtered, i.e. integrated over a solid angle around the pixel center.

Note that, contrary to intuition, the filter should not be the pixel footprint, since a box filter has poor spectral properties. Conversely, the ideal sinc function is not practical because it has infinite support and takes negative values. A Gaussian or a Lanczos filter is considered a good compromise.
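
These filters can be compared directly. The kernel definitions below are standard; the Gaussian width and the Lanczos window parameter `a = 2` are assumptions chosen for illustration:

```python
import math

def box(x):
    """Box filter over the pixel footprint: poor spectral properties."""
    return 1.0 if abs(x) <= 0.5 else 0.0

def gaussian(x, sigma=0.5):
    """Gaussian filter: smooth, all-positive, effectively finite support."""
    return math.exp(-x * x / (2.0 * sigma * sigma))

def sinc(x):
    """Ideal low-pass kernel: infinite support, negative lobes."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def lanczos(x, a=2):
    """Windowed sinc: finite support [-a, a], small negative lobes."""
    if abs(x) >= a:
        return 0.0
    return sinc(x) * sinc(x / a)
```

Evaluating `lanczos(1.5)` gives a small negative value, showing that the Lanczos compromise keeps a mild negative lobe while truncating the sinc's infinite support.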

**Computer graphics models.**

Cone tracing and beam tracing rely on different simplifications: the former considers a circular cross-section and treats its intersection with various possible shapes; the latter traces an exact pyramidal beam through the pixel and along a complex path, but this works only for polyhedral shapes.
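
The circular-cross-section case can be sketched as a conservative cone-vs-sphere overlap check, a common bounding-volume test; the specific algorithm below is an assumption for illustration, not taken from the article:

```python
import math

def cone_intersects_sphere(apex, axis, half_angle, centre, radius):
    """Conservative cone-vs-sphere overlap test (sketch).

    `axis` must be a unit vector and `half_angle` is the cone's
    half-angle in radians. Returns True when the sphere may touch
    the (infinite) cone.
    """
    v = tuple(c - a for c, a in zip(centre, apex))
    along = sum(x * y for x, y in zip(v, axis))   # depth along the axis
    if along < -radius:
        return False                              # fully behind the apex
    # distance from the axis minus the cone radius at that depth,
    # projected onto the cone's lateral surface normal
    perp = math.sqrt(max(sum(x * x for x in v) - along * along, 0.0))
    d = (perp - along * math.tan(half_angle)) * math.cos(half_angle)
    return d < radius
```

Even this simple test only answers "hit or miss"; computing the exact covered solid angle for partial overlaps is where the combinatorial variety of cases arises.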

Cone tracing solves certain sampling and aliasing problems that affect traditional ray-tracing techniques. However, cone tracing creates a number of problems of its own. For example, just intersecting a cone with the scene geometry leads to an enormous variety of possible results. For this reason, cone tracing has mostly remained unpopular. In recent years, Monte Carlo algorithms such as distributed ray tracing (i.e. stochastic explicit integration over the pixel) have been used much more than cone tracing because their results are accurate when enough samples are used. Convergence, however, is so slow that a lot of time is needed to avoid noise even in offline rendering. Recent work has focused on eliminating this noise through machine-learning techniques.
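
The stochastic explicit pixel integration behind distributed ray tracing can be sketched with stratified ("jittered") sampling; the one-pixel scene with a vertical edge below is an invented example, not from the article:

```python
import random

def pixel_coverage(radiance, n=64, seed=0):
    """Estimate a pixel's value by stratified sampling: one jittered
    sample in each cell of an n x n grid over the unit pixel."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(n):
        for j in range(n):
            x = (i + rng.random()) / n
            y = (j + rng.random()) / n
            total += radiance(x, y)
    return total / (n * n)

# hypothetical scene: a vertical edge covering 30% of the pixel
edge = lambda x, y: 1.0 if x < 0.3 else 0.0
```

With enough samples the estimate converges to the true coverage of 0.3, but the error only shrinks slowly with sample count, which is exactly the noise problem described above.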

Differential cone tracing, which considers a differential angular neighborhood around a ray, avoids the complexity of exact geometry intersections but requires a LOD representation of the geometry and appearance of the objects. Mipmapping is an approximation of this, limited to integrating the surface texture within the cone footprint. Differential ray tracing extends it to textured surfaces viewed through complex paths of cones reflected or refracted by curved surfaces.
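
The mipmapping approximation can be illustrated by choosing a mip level whose texel size roughly matches the cone footprint at the hit point; the formula below is a common heuristic, assumed here for illustration:

```python
import math

def mip_level(cone_half_angle, hit_distance, texel_size):
    """Choose a mip level so that one texel at that level roughly
    matches the cone's footprint diameter at the hit point.

    Each mip level doubles the texel size, hence the log2.
    """
    footprint = 2.0 * hit_distance * math.tan(cone_half_angle)
    return max(0.0, math.log2(footprint / texel_size))
```

Sampling the texture at this level performs the footprint integration as a cheap precomputed lookup instead of an explicit integral over the cone.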

Acceleration structures for cone tracing, such as bounding volume hierarchies and kd-trees, were investigated by Wiche.
