Since the earliest days of real-time 3D, the triangle has been the brush with which scenes are drawn. Although modern GPUs can perform all sorts of eye-catching effects to cover this up, underneath all the shading, triangles are still the medium in which they work. The graphics pipeline that OpenGL implements reflects this: the host program fills OpenGL-managed memory buffers with arrays of vertices; these vertices are projected into screen space, assembled into triangles, and rasterized into pixel-sized fragments; finally, the fragments are assigned color values and drawn to the framebuffer. Modern GPUs get their flexibility by delegating the "project into screen space" and "assign color values" stages to uploadable programs, so-called shaders. Let's take a closer look at the stages:
The vertex and element arrays.
A render job begins its journey through the pipeline in a set of one or more vertex buffers, which are filled with arrays of vertex attributes. These attributes are used as input to the vertex shader. Common vertex attributes include the location of the vertex in 3D space and one or more sets of texture coordinates that map the vertex to a sample point on one or more textures. The set of vertex buffers supplying data to a render job is collectively referred to as the vertex array. When a render job is submitted, we supply an additional element array, an array of indexes into the vertex array that selects which vertices are fed into the pipeline. The order of the indexes also controls how the vertices are later assembled into triangles.
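As a concrete sketch, here is how a host program might fill a vertex buffer and an element array buffer in C, assuming a context and headers that expose the OpenGL 1.5+ buffer API. The 2D-position-only attribute layout and the `make_buffer` helper are illustrative, not part of any fixed API:

```c
#include <GL/gl.h>

/* Four 2D vertex positions, and the indexes that select them. */
static const GLfloat vertex_data[] = {
    -1.0f, -1.0f,  /* vertex 0 */
     1.0f, -1.0f,  /* vertex 1 */
    -1.0f,  1.0f,  /* vertex 2 */
     1.0f,  1.0f   /* vertex 3 */
};
static const GLushort element_data[] = { 0, 1, 2, 3 };

/* Allocate a buffer object and upload an array into it. */
static GLuint make_buffer(GLenum target, const void *data, GLsizeiptr size)
{
    GLuint buffer;
    glGenBuffers(1, &buffer);      /* reserve a buffer name */
    glBindBuffer(target, buffer);  /* make it the active buffer for `target` */
    glBufferData(target, size, data, GL_STATIC_DRAW);  /* copy the array in */
    return buffer;
}

/* Usage:
 *   GLuint vertex_buffer  = make_buffer(GL_ARRAY_BUFFER,
 *                                       vertex_data, sizeof(vertex_data));
 *   GLuint element_buffer = make_buffer(GL_ELEMENT_ARRAY_BUFFER,
 *                                       element_data, sizeof(element_data));
 */
```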
Uniform state and textures.
A render job also has a uniform state, which provides the shaders at each programmable stage of the pipeline with a set of shared, read-only values. This allows a shader program to take parameters that don't change between vertices or fragments. The uniform state includes textures, which are one-, two-, or three-dimensional arrays that can be sampled by shaders. As their name suggests, textures are commonly used to map texture images onto surfaces. They can also be used as lookup tables for precalculated functions or as datasets for various kinds of effects.
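To make this concrete, here is a sketch of setting uniform state before a draw call. It assumes a linked program object with two hypothetical uniforms, a float `fade_factor` and a `sampler2D` named `texture_sampler`, plus an already-created texture object:

```c
/* `program` and `texture` are assumed to have been created earlier. */
GLint fade_loc = glGetUniformLocation(program, "fade_factor");
GLint tex_loc  = glGetUniformLocation(program, "texture_sampler");

glUseProgram(program);
glUniform1f(fade_loc, 0.5f);  /* a plain read-only shader parameter */

glActiveTexture(GL_TEXTURE0);           /* bind the texture to unit 0... */
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(tex_loc, 0);                /* ...and point the sampler at unit 0 */
```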
The Vertex Shader.
The GPU begins by reading each selected vertex out of the vertex array and running it through the vertex shader, a program that takes a set of vertex attributes as inputs and outputs a new set of attributes, called varying values, that get fed to the rasterizer. At a minimum, the vertex shader calculates the projected position of the vertex in screen space. The vertex shader can also generate other varying outputs, such as a color or texture coordinates, for the rasterizer to blend across the surface of the triangles connecting the vertex.
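A minimal vertex shader, written in GLSL 1.10 and embedded here as a C string, might look like the sketch below. The `position` attribute and `texcoord` varying are illustrative names:

```c
static const char *vertex_shader_source =
    "#version 110\n"
    "attribute vec2 position;\n"  /* per-vertex input from the vertex array */
    "varying vec2 texcoord;\n"    /* output interpolated by the rasterizer */
    "void main()\n"
    "{\n"
    "    gl_Position = vec4(position, 0.0, 1.0);\n"  /* projected position */
    "    texcoord = position * 0.5 + 0.5;\n"         /* a derived varying */
    "}\n";
```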
Triangle assembly.
The GPU then connects the projected vertices into triangles. The vertices are taken in the order defined by the element array and grouped into sets of three. The vertices can be grouped in different ways:
- Take every three elements as an independent triangle.
- Create a triangle strip by reusing the last two vertices of each triangle as the first two vertices of the next triangle.
- Create a triangle fan in which every triangle shares the first element, combining it with each subsequent pair of elements.
The diagram shows how the three different modes behave. Strips and fans both need only one new index per triangle in the element array after the first three, trading the flexibility of independent triangles for extra storage efficiency in the element array.
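Assuming the four-element index array from the earlier sketch, the three modes could be selected like this; with a strip or fan, four indexes yield two triangles, while independent triangles would need six:

```c
/* One independent triangle from the first three indexes. */
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_SHORT, (void*)0);

/* Two triangles sharing an edge: (0,1,2) and (2,1,3). */
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, (void*)0);

/* Two triangles sharing vertex 0: (0,1,2) and (0,2,3). */
glDrawElements(GL_TRIANGLE_FAN, 4, GL_UNSIGNED_SHORT, (void*)0);
```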
Rasterization.
The rasterizer takes each triangle, clips it, discarding the parts that lie outside the screen, and breaks the remaining visible parts into pixel-sized fragments. As mentioned above, the vertex shader's varying outputs are also interpolated across the rasterized surface of each triangle, assigning a smooth gradient of values to each fragment. For example, if the vertex shader assigns a color value to each vertex, the rasterizer will blend those colors across the pixelated surface.
The Fragment Shader.
The generated fragments then pass through another program, the fragment shader. The fragment shader receives as input the varying values output by the vertex shader and interpolated by the rasterizer. It outputs color and depth values, which are then drawn into the framebuffer. Common fragment shader operations include texture mapping and lighting. Since the fragment shader runs independently for every pixel drawn, it can perform the most sophisticated special effects; it is also the most performance-sensitive part of the graphics pipeline.
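A matching fragment shader for the earlier vertex shader sketch, again in GLSL 1.10 as a C string, could sample the hypothetical `texture_sampler` uniform at the interpolated coordinate:

```c
static const char *fragment_shader_source =
    "#version 110\n"
    "uniform sampler2D texture_sampler;\n"
    "varying vec2 texcoord;\n"  /* interpolated by the rasterizer */
    "void main()\n"
    "{\n"
    "    gl_FragColor = texture2D(texture_sampler, texcoord);\n"  /* texture mapping */
    "}\n";
```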
Framebuffers, testing, and blending.
A framebuffer is the final destination of a render job's output. In addition to the default framebuffer OpenGL provides for drawing to the screen, most modern OpenGL implementations let you create framebuffer objects that draw into offscreen renderbuffers or textures. Those textures can then be used as inputs to other render jobs. A framebuffer is more than a single 2D image: in addition to one or more color buffers, a framebuffer can have a depth buffer and/or a stencil buffer, both of which optionally filter fragments before they are drawn into the framebuffer. Depth testing discards fragments from objects that are behind ones already drawn, and stencil testing uses shapes drawn into the stencil buffer to constrain the drawable part of the framebuffer, "stenciling" the render job. Fragments that survive these tests have their alpha value blended with the color value they are about to overwrite, and the final color, depth, and stencil values are drawn into the appropriate buffers.
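As a sketch of that test-and-blend state, conventional OpenGL calls to enable depth testing and standard alpha blending look like this:

```c
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);  /* keep fragments nearer than what's already drawn */

glEnable(GL_BLEND);
/* Mix each surviving fragment with the framebuffer using its alpha value. */
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
```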
This is the process your data goes through, from vertex buffer to framebuffer, when you make a single "draw" call in OpenGL. Rendering a scene usually involves multiple draw jobs, switching out textures, other uniform state, or shaders between passes, and using the framebuffer's depth and stencil buffers to combine the results of each pass. Now that we've covered the general dataflow of 3D rendering, we can write a simple program to see how OpenGL makes it all happen.
Thanks for reading.