When a scene contains opaque objects and surfaces, we cannot see the objects that lie behind objects closer to the eye. These hidden surfaces must be removed to produce a realistic screen image, for example in a 3D configurator. Identifying and removing these surfaces is called the hidden-surface problem. There are two broad approaches to it – object-space and image-space methods. Object-space methods work in the physical (world) coordinate system, image-space methods in the screen coordinate system.

If we want to display a 3D object on a 2D screen, we need to identify those parts of the scene that are visible from a chosen viewing position.

Depth Buffer (Z Buffer) method.

This method was developed by Catmull. It is an image space approach. The basic idea is to test the Z depth of each surface to determine the nearest (visible) surface.

In this procedure, each surface is processed individually, one pixel position across the surface at a time. The depth values at each pixel are compared, and the nearest surface determines the color to be displayed in the frame buffer (with the normalization used below, the nearest surface has the largest z).

It is applied very efficiently to polygon surfaces, and the surfaces can be processed in any order. To ensure that nearer polygons overwrite farther ones, two buffers are used: a frame buffer and a depth buffer.

The depth buffer is used to store a depth value for each position (x, y) as surfaces are processed (0 ≤ depth ≤ 1).

The frame buffer is used to store the intensity (color) value at each position (x, y).

The z coordinates are usually normalized to the range [0, 1]. A z value of 0 corresponds to the back clipping plane and a z value of 1 to the front clipping plane.

Algorithm.

Step 1 – Initialize the buffer values:

Depth buffer (x, y) = 0

Frame buffer (x, y) = background color

Step 2 – Process each polygon (one at a time):

For each projected (x, y) pixel position of a polygon, calculate the depth z.

If z > depth buffer (x, y), then:

calculate the surface color,

set depth buffer (x, y) = z,

set frame buffer (x, y) = surface color (x, y).
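The two steps above can be sketched in plain Python. This is a minimal illustration, not a production implementation: the rectangle "scene", the buffer sizes, and the color labels are invented for the example, and constant-depth rectangles stand in for projected polygons. As in the text, z is normalized to [0, 1] and a larger z means closer to the viewer.

```python
WIDTH, HEIGHT = 8, 8

def render(rects, background="bg"):
    # Step 1: initialize both buffers.
    depth_buffer = [[0.0] * WIDTH for _ in range(HEIGHT)]
    frame_buffer = [[background] * WIDTH for _ in range(HEIGHT)]
    # Step 2: process each polygon, one pixel position at a time.
    for x0, y0, x1, y1, z, color in rects:
        for y in range(y0, y1):
            for x in range(x0, x1):
                if z > depth_buffer[y][x]:   # nearer than the stored depth?
                    depth_buffer[y][x] = z
                    frame_buffer[y][x] = color
    return frame_buffer

# Two overlapping rectangles, listed in arbitrary order: the nearer one
# (z = 0.8) hides the farther one (z = 0.3) wherever they overlap.
frame = render([
    (0, 0, 6, 6, 0.3, "far"),
    (3, 3, 8, 8, 0.8, "near"),
])
```

Note that the result does not depend on the order of the rectangles in the list, which is exactly the property that lets the z-buffer process surfaces in any order.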

Advantages.

• It is easy to implement.
• It reduces the speed problem when implemented in hardware.
• It processes one object at a time.

Disadvantages.

• It requires a large amount of memory.
• It is a time-consuming process.

Scan-Line method.

It is an image-space method for identifying visible surfaces. This method keeps depth information for only a single scan line at a time. To have the depth values for one scan line available, we must group and process all polygons intersecting a given scan line together before processing the next scan line.

Edge Table – It contains the coordinate endpoints of each edge in the scene, the inverse slope of each edge, and pointers into the polygon table to connect edges to surfaces.

Polygon Table – It contains the plane coefficients, surface material properties, and other surface data, and may contain pointers back to the edge table.

To make it easier to find the surfaces crossing a given scan line, an active list of edges is formed. The active list stores only those edges that cross the current scan line, sorted in order of increasing x. In addition, a flag is kept for each surface that indicates whether a position along a scan line is inside or outside the surface.

Pixel positions along each scan line are processed from left to right. At the left intersection with a surface, the surface flag is turned on; at the right intersection, it is turned off. Depth calculations are needed only when the flags of multiple surfaces are on at a given scan-line position.
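The flag logic can be sketched as follows, under a deliberate simplification: each surface's intersection with the current scan line is assumed to be already known as a span (x_left, x_right) with a constant depth, whereas a real implementation would use the active edge list above and interpolate depth along the span. The span data and color labels are invented for the example.

```python
def scan_line(width, spans, background="bg"):
    # spans: list of (x_left, x_right, depth, color); larger depth = closer.
    line = [background] * width
    for x in range(width):
        # Surfaces whose flag would be "on" at this pixel position.
        active = [s for s in spans if s[0] <= x < s[1]]
        if len(active) == 1:
            line[x] = active[0][3]                        # no depth test needed
        elif len(active) > 1:
            line[x] = max(active, key=lambda s: s[2])[3]  # nearest surface wins
    return line

# One scan line crossed by two surfaces that overlap between x = 4 and x = 6.
line = scan_line(10, [(1, 6, 0.3, "far"), (4, 9, 0.8, "near")])
```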

Area subdivision method.

The area-subdivision method takes advantage of area coherence in a scene by locating those view areas that represent part of a single surface. The total viewing area is divided into smaller and smaller rectangles until each small area is the projection of part of a single visible surface or contains no surface at all.

This process continues until the subdivisions can easily be analyzed as belonging to a single surface, or until they are reduced to the size of a single pixel. An easy way to do this is to divide the area into four equal parts at each step. There are four possible relationships that a surface can have with a given area boundary.

• Surrounding Surface – One that completely surrounds the area.
• Overlapping Surface – One that is partially inside and partially outside the area.
• Inside Surface – One that is completely inside the area.
• Outside Surface – One that is completely outside the area.

The tests for determining surface visibility within an area can be stated in terms of these four classifications. No further subdivision of a given area is needed if one of the following conditions is true:

• All surfaces are outside surfaces with respect to the area.
• There is only one inside, overlapping, or surrounding surface in the area.
• A surrounding surface obscures all other surfaces within the area boundaries.
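A minimal sketch of the recursive quadrant subdivision. Rectangular screen-space "surfaces", the overlap test, and the pixel-size cutoff are all simplifications invented for the example; a real implementation would use the four surface classifications above rather than a simple count.

```python
def overlaps(area, surf):
    # Axis-aligned rectangle overlap test; rectangles are (x0, y0, x1, y1).
    ax0, ay0, ax1, ay1 = area
    sx0, sy0, sx1, sy1 = surf
    return ax0 < sx1 and sx0 < ax1 and ay0 < sy1 and sy0 < ay1

def subdivide(area, surfaces, min_size=1):
    x0, y0, x1, y1 = area
    covering = [s for s in surfaces if overlaps(area, s)]
    # Stop when the area is simple (at most one surface) or pixel-sized.
    if len(covering) <= 1 or (x1 - x0) <= min_size:
        return [area]
    # Otherwise split into four equal quadrants and recurse.
    mx, my = (x0 + x1) // 2, (y0 + y1) // 2
    return (subdivide((x0, y0, mx, my), covering)
            + subdivide((mx, y0, x1, my), covering)
            + subdivide((x0, my, mx, y1), covering)
            + subdivide((mx, my, x1, y1), covering))

# Two disjoint surfaces: one split is enough to separate them.
areas = subdivide((0, 0, 8, 8), [(0, 0, 3, 3), (5, 5, 8, 8)])
```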

Back-face detection.

A quick and easy object-space method for identifying the back faces of a polyhedron is based on inside-outside tests. A point (x, y, z) is "inside" a polygon surface with plane parameters A, B, C, and D if Ax + By + Cz + D < 0. If an inside point lies along the line of sight to the surface, the polygon must be a back face (we are inside that face and cannot see its front from our viewing position).

We can simplify this test by considering the normal vector N of the polygon surface, which has Cartesian components (A, B, C).

In general, if V is a vector in the viewing direction from the eye position, then this polygon is a back face if

V · N > 0

If object descriptions are converted to projection coordinates and the viewing direction is parallel to the viewing z-axis, then

V = (0, 0, Vz) and V · N = VzC

so that we only need to consider the sign of C, the z component of the normal vector N.

In a right-handed viewing system with the viewing direction along the negative zv axis, the polygon is a back face if C < 0. We also cannot see any face whose normal has z component C = 0, since the viewing direction grazes that polygon. Thus, in general, we can label any polygon as a back face if its normal vector has a z-component value satisfying

C <= 0

Similar methods can be used in packages that employ a left-handed viewing system. In these packages, the plane parameters A, B, C, and D can be calculated from polygon vertex coordinates specified in a clockwise direction (instead of the counterclockwise direction used in a right-handed system). Back faces then have normal vectors that point away from the viewing position and are identified by C >= 0 when the viewing direction is along the positive zv axis. By examining parameter C for the different planes that define an object, we can immediately identify all back faces.
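The sign test can be sketched as follows for a right-handed system with the view along the negative z axis, assuming counterclockwise vertex order as described above. The `normal` helper and the triangle data are invented for the example.

```python
def normal(p0, p1, p2):
    # N = (p1 - p0) x (p2 - p0); for counterclockwise vertex order the
    # components of N are the plane parameters (A, B, C).
    ux, uy, uz = (p1[i] - p0[i] for i in range(3))
    vx, vy, vz = (p2[i] - p0[i] for i in range(3))
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

def is_back_face(p0, p1, p2):
    # Only the sign of C (the z component of N) matters.
    return normal(p0, p1, p2)[2] <= 0

# A triangle facing the viewer (normal along +z) ...
front = is_back_face((0, 0, 0), (1, 0, 0), (0, 1, 0))
# ... and the same triangle with reversed winding (normal along -z).
back = is_back_face((0, 0, 0), (0, 1, 0), (1, 0, 0))
```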

A-Buffer method.

The A-buffer method is an extension of the depth-buffer method. It is a visibility-detection method developed at Lucasfilm for the rendering system REYES (Renders Everything You Ever Saw).

The A-buffer extends the depth-buffer method to support transparency. The key data structure in the A-buffer is the accumulation buffer.

Each position in the A-buffer has two fields –

• Depth field – it stores a positive or negative real number.
• Intensity field – it stores surface intensity information or a pointer value.

If the depth is >= 0, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area. The intensity field then stores the RGB components of the surface color at that point and the percentage of pixel coverage.

If the depth is < 0, it indicates multiple-surface contributions to the pixel intensity. The intensity field then stores a pointer to a linked list of surface data. Each surface entry in this list contains –

• RGB intensity components
• Opacity parameter (percent of transparency)
• Depth
• Percent of area coverage
• Surface identifier

The algorithm proceeds just like the depth-buffer algorithm. The depth and opacity values are then used to determine the final color of each pixel.
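A toy sketch of this two-field pixel entry, with an invented tuple layout standing in for the real accumulation-buffer structures (opacity, coverage, and surface identifiers are omitted for brevity). A pixel starts in the single-surface case and switches to a fragment list as soon as a second surface contributes.

```python
def add_fragment(cell, depth, color):
    # cell is (depth_field, intensity_field); depth >= 0 means a single
    # surface, depth < 0 means the intensity field holds a fragment list.
    d, data = cell
    if d is None:                       # empty pixel: single-surface case
        return (depth, color)
    if d >= 0:                          # a second surface arrives: switch
        frags = [(d, data)]             # to the linked-list representation
    else:
        frags = data
    frags.append((depth, color))
    frags.sort(key=lambda f: f[0], reverse=True)   # nearest fragment first
    return (-1.0, frags)

cell = (None, None)
cell = add_fragment(cell, 0.4, "red")    # one surface: stored directly
cell = add_fragment(cell, 0.9, "blue")   # two surfaces: depth flips negative
```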

Depth sorting method.

The depth-sorting method uses both image-space and object-space operations. It performs two basic functions:

First, the surfaces are sorted in order of decreasing depth.

Second, the surfaces are scan-converted in order, starting with the surface of greatest depth.

The scan conversion of the polygon surfaces is performed in image space. This approach to the hidden-surface problem is often referred to as the painter's algorithm.

The algorithm begins by sorting by depth. For example, the initial "depth estimate" of a polygon can be taken to be the closest z value of any vertex of the polygon.

Let us take the polygon P at the end of the list. Consider all polygons Q whose z extents overlap P's. Before drawing P, we perform the following tests. If any one of these tests succeeds, we can assume that P can be drawn before Q.

• Do the x extents not overlap?
• Do the y extents not overlap?
• Is P entirely on the opposite side of Q's plane from the viewpoint?
• Is Q entirely on the same side of P's plane as the viewpoint?
• Do the projections of the polygons not overlap?

If all the tests fail, we split either P or Q using the plane of the other. The newly cut polygons are inserted into the depth order and the process continues. Theoretically, this partitioning could generate O(n²) individual polygons, but in practice the number of polygons is far smaller.
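The sorting-and-painting core can be sketched as follows, leaving out the overlap tests and polygon splitting described above. The one-row span representation is an invented simplification; the point is only the back-to-front painting order.

```python
def paint(width, polys):
    # polys: (x_left, x_right, z, color) spans on one raster row;
    # smaller z = farther from the viewer, matching the convention above.
    frame = ["bg"] * width
    # Sort by increasing z, so the deepest polygon is painted first and
    # nearer polygons overwrite it where they overlap.
    for x0, x1, z, color in sorted(polys, key=lambda p: p[2]):
        for x in range(x0, x1):
            frame[x] = color
    return frame

# The "near" polygon is listed first but painted last.
row = paint(8, [(2, 7, 0.9, "near"), (0, 5, 0.2, "far")])
```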

Binary Space Partition (BSP) Trees.

Binary space partitioning is used to calculate visibility. To build a BSP tree, start with the polygons and label all their edges. Processing one edge at a time, extend each edge so that it divides the plane into two halves. Place the first edge in the tree as the root. Add subsequent edges depending on whether they lie in front of or behind the edges already in the tree. Edges that cross the extension of an edge already in the tree are split into two, and both halves are added to the tree.
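A sketch of this construction for 2D line segments, under the simplifying assumption that "front" is the positive side of each extended line; the segment data and helper names are invented for the example.

```python
def side(line, pt, eps=1e-9):
    # Sign of pt relative to the extended line through the segment `line`.
    (x0, y0), (x1, y1) = line
    d = (x1 - x0) * (pt[1] - y0) - (y1 - y0) * (pt[0] - x0)
    return 0 if abs(d) < eps else (1 if d > 0 else -1)

def split_point(line, seg):
    # Intersection of seg with the extended line (assumes they intersect).
    (x0, y0), (x1, y1) = line
    (a, b), (c, d) = seg
    nx, ny = y1 - y0, x0 - x1                  # normal of the splitting line
    t = (nx * (x0 - a) + ny * (y0 - b)) / (nx * (c - a) + ny * (d - b))
    return (a + t * (c - a), b + t * (d - b))

class Node:
    def __init__(self, seg):
        self.seg, self.front, self.back = seg, None, None

def insert(node, seg):
    if node is None:                 # first edge at this position: new root
        return Node(seg)
    s0, s1 = side(node.seg, seg[0]), side(node.seg, seg[1])
    if s0 >= 0 and s1 >= 0:          # entirely in front of this edge's line
        node.front = insert(node.front, seg)
    elif s0 <= 0 and s1 <= 0:        # entirely behind it
        node.back = insert(node.back, seg)
    else:                            # straddles: split and add both halves
        mid = split_point(node.seg, seg)
        insert(node, (seg[0], mid))
        insert(node, (mid, seg[1]))
    return node

root = insert(None, ((0, 0), (1, 0)))        # root edge on the x-axis
root = insert(root, ((0, 1), (1, 1)))        # entirely in front (y > 0)
root = insert(root, ((0.5, -1), (0.5, 1)))   # straddles the root: gets split
```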

We hope that this has given you a first brief overview of the topic. If you have any questions or suggestions, please contact our experts in our forum.

Thank you very much for your visit.