
Guide for beginners: What is Scanline Rendering?

Scanline rendering is the preferred method for creating most computer graphics in moving images, for example for 3D configurators. A specialized implementation called REYES is so popular that it has almost become the standard in this industry. Scanline rendering is also the method used by video games and most scientific and technical visualization programs, and scanline algorithms are implemented cost-effectively in many hardware solutions.

In scanline rendering, drawing is achieved by iterating over the component parts of the scene's geometric primitives. If the number of output pixels stays constant, render time tends to grow linearly with the number of primitives. OpenGL and Photorealistic RenderMan are two examples of scanline renderers.
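To make the loop structure concrete, here is a minimal Python sketch (not any real renderer's API; the rectangle primitives and the `rasterize_into` helper are invented for the example). Because the outer loop runs once per primitive, cost grows with the primitive count rather than with a per-pixel scene search:

```python
def rasterize_into(framebuffer, prim):
    # For this sketch a "primitive" is just an axis-aligned
    # rectangle: (x0, y0, x1, y1, color). A real renderer would
    # handle triangles, polygons, or micropolygons here.
    x0, y0, x1, y1, color = prim
    for y in range(y0, y1):
        for x in range(x0, x1):
            framebuffer[y][x] = color

def render(primitives, width, height):
    # Start from a black framebuffer, then draw each primitive in
    # turn; only the pixels a primitive covers are touched.
    framebuffer = [[(0, 0, 0)] * width for _ in range(height)]
    for prim in primitives:
        rasterize_into(framebuffer, prim)
    return framebuffer

fb = render([(1, 1, 3, 3, (255, 0, 0))], 4, 4)
```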


The following video provides a visual explanation of Scanline rendering:


Before drawing, a Z (depth) buffer with as many entries as the output buffer has pixels is allocated and initialized. The Z buffer acts like a height field facing the camera: it tracks which part of the scene geometry is closest to the camera at each pixel, making hidden-surface removal straightforward. Additional per-pixel attributes can be stored in the Z buffer itself or in separate buffers. Unless the primitives are sorted and painted back to front and have no pathological depth overlaps, a Z buffer is mandatory.
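A hedged sketch of this setup in Python (the function names are invented for illustration): the buffer starts at "infinitely far away", and a fragment is only written when it is closer than what is already stored, so the final image is correct regardless of the order the fragments arrive in.

```python
import math

def make_zbuffer(width, height):
    # One depth value per output pixel, initialized to infinity so
    # that any actual surface is closer than the initial value.
    return [[math.inf] * width for _ in range(height)]

def depth_test(zbuffer, framebuffer, x, y, depth, color):
    # Write the pixel only if this fragment is closer to the
    # camera than whatever was drawn there before.
    if depth < zbuffer[y][x]:
        zbuffer[y][x] = depth
        framebuffer[y][x] = color

# Usage: the nearer fragment wins regardless of draw order.
fb = [["bg"] * 2 for _ in range(2)]
zb = make_zbuffer(2, 2)
depth_test(zb, fb, 0, 0, 5.0, "far")
depth_test(zb, fb, 0, 0, 2.0, "near")
depth_test(zb, fb, 0, 0, 9.0, "farther")
```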

Each primitive is either simple enough to draw directly or is split into parts that are. Triangles or polygons small enough to fit within a single screen pixel are called micropolygons and represent the smallest size to which a polygon needs to be split for drawing.
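This splitting step can be sketched as dicing a parametric patch into a regular grid of small quads, in the style of REYES. The following Python example is a simplification with invented names; a production renderer would choose the dicing rate so that each resulting quad projects to roughly one pixel on screen:

```python
def dice(patch, rate):
    # patch(u, v) -> (x, y, z) evaluates a parametric surface;
    # rate controls how finely it is subdivided. A REYES-style
    # renderer picks rate so each quad is about one pixel wide.
    grid = [[patch(u / rate, v / rate) for u in range(rate + 1)]
            for v in range(rate + 1)]
    micropolygons = []
    for v in range(rate):
        for u in range(rate):
            # Each cell of the grid becomes one micropolygon quad.
            micropolygons.append((grid[v][u], grid[v][u + 1],
                                  grid[v + 1][u + 1], grid[v + 1][u]))
    return micropolygons

# A flat unit square diced at rate 4 yields a 4x4 grid of quads.
quads = dice(lambda u, v: (u, v, 0.0), 4)
```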


Assigning colors to output pixels from these polygons is called rasterization. After determining which image pixel positions the corners of a polygon occupy, the polygon is scanned into a series of horizontal or vertical strips. As each scanline is traversed pixel by pixel, various attributes of the polygon are interpolated so that each pixel can be colored correctly. These include the surface normal, scene position, Z-buffer depth, and the polygon's s, t (texture) coordinates. If the depth of a polygon pixel is closer to the camera than the value stored for that screen pixel in the Z buffer, the Z buffer is updated and the pixel is colored. Otherwise, the polygon pixel is discarded and the next one is tried.
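The whole process above can be sketched for a single triangle in Python. This is a simplified illustration, not any renderer's actual code: it interpolates only depth (a real rasterizer would also interpolate normals and s, t coordinates the same way), uses barycentric weights per pixel, and applies the Z-buffer test before writing each pixel.

```python
import math

def edge(ax, ay, bx, by, px, py):
    # Signed-area helper: the sign tells which side of edge a->b
    # the point p lies on; the magnitudes act as barycentric weights.
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def raster_triangle(framebuffer, zbuffer, tri, color):
    # tri holds three (x, y, z) vertices already projected to
    # screen space; z is the depth used for the Z-buffer test.
    (x0, y0, z0), (x1, y1, z1), (x2, y2, z2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)
    if area == 0:           # degenerate triangle covers no pixels
        return
    ymin, ymax = int(min(y0, y1, y2)), int(max(y0, y1, y2))
    xmin, xmax = int(min(x0, x1, x2)), int(max(x0, x1, x2))
    for y in range(ymin, ymax + 1):      # one pass per scanline
        for x in range(xmin, xmax + 1):
            # Barycentric weights at the pixel centre.
            w0 = edge(x1, y1, x2, y2, x + 0.5, y + 0.5)
            w1 = edge(x2, y2, x0, y0, x + 0.5, y + 0.5)
            w2 = edge(x0, y0, x1, y1, x + 0.5, y + 0.5)
            # Inside the triangle when all weights share a sign.
            if (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
               (w0 <= 0 and w1 <= 0 and w2 <= 0):
                # Interpolate depth across the triangle.
                z = (w0 * z0 + w1 * z1 + w2 * z2) / area
                if z < zbuffer[y][x]:    # Z-buffer test
                    zbuffer[y][x] = z
                    framebuffer[y][x] = color

# Usage: the nearer triangle hides the farther one even though it
# is drawn second.
fb = [[None] * 8 for _ in range(8)]
zb = [[math.inf] * 8 for _ in range(8)]
raster_triangle(fb, zb, ((0, 0, 5.0), (7, 0, 5.0), (0, 7, 5.0)), "far")
raster_triangle(fb, zb, ((0, 0, 2.0), (7, 0, 2.0), (0, 7, 2.0)), "near")
```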
