How to get started with the OpenGL graphics pipeline.

OpenGL has been around for a long time, and when you read the documentation collected on the internet, it's not always clear which parts are outdated and which parts are still useful and supported on modern graphics hardware. With this in mind, we have decided to write a new beginner's guide to OpenGL (important for creating 3D configurators). In this article we will only cover material that is still up to date today.

What is OpenGL?

Wikipedia gives a good overview of the purpose and history of OpenGL, but we will give a short summary here. In its modern form, OpenGL is a cross-platform library for interfacing with programmable GPUs in order to render real-time 3D graphics. Its use is common in games, CAD, and data visualization applications. It began in the early 1990s as a cross-platform standardization of SGI's proprietary Graphics Library (GL), which powered the graphics hardware in SGI's high-end workstations. A few years later, GLQuake and 3dfx's Voodoo graphics accelerators pushed 3D acceleration into the mainstream, and OpenGL became the standard for controlling graphics accelerators in consumer PCs alongside Microsoft's proprietary Direct3D library. In recent years, the Khronos Group has taken responsibility for the OpenGL standard, developing it further to support the features of modern programmable GPUs and to enable use on the web and mobile devices. In addition, obsolete features that weighed down earlier versions of the library have been removed.

Another recent development is the introduction of general-purpose GPU (GPGPU) libraries, including Nvidia's CUDA and Khronos' OpenCL. These libraries implement dialects of C with additional data-parallelism features, so that the GPU can be used for general computation without having to work within the graphics-oriented framework of OpenGL. These GPGPU frameworks do not replace OpenGL, however; since their primary purpose is not graphics programming, they simply provide access to the computing units of a GPU without regard to its graphics-specific hardware. They can, however, act as companions to OpenGL: both CUDA and OpenCL can share buffers of GPU memory with OpenGL and exchange data between GPGPU programs and the graphics pipeline. We will not go into GPGPU here; we will concentrate on using OpenGL for graphics tasks.

In order to follow the further explanations, you should have programming knowledge in C, but you do not have to be familiar with OpenGL or graphics programming. Basic algebra and geometry will serve you well. We will cover OpenGL 2.0 and avoid discussing API features that have become deprecated or were removed in OpenGL 3 or OpenGL ES. In addition to OpenGL, we will use two auxiliary libraries: GLUT (GL Utility Toolkit), which provides a cross-platform interface between the window system and OpenGL, and GLEW (GL Extension Wrangler), which simplifies dealing with different versions of OpenGL and their extensions.
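
To give a feel for how these two libraries fit together, here is a minimal sketch of a program skeleton that opens a window with GLUT and checks for OpenGL 2.0 with GLEW. The window title and size are arbitrary placeholders, and a real program will register more callbacks than this.

    #include <stdio.h>
    #include <GL/glew.h>
    #ifdef __APPLE__
    #  include <GLUT/glut.h>
    #else
    #  include <GL/glut.h>
    #endif

    /* Display callback: clear the window and show the result. */
    static void render(void)
    {
        glClear(GL_COLOR_BUFFER_BIT);
        glutSwapBuffers();
    }

    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);                       /* hand GLUT the command line */
        glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE); /* double-buffered RGB window */
        glutInitWindowSize(400, 300);
        glutCreateWindow("Hello OpenGL");            /* creates the OpenGL context */
        glutDisplayFunc(render);

        /* GLEW must be initialized after the context exists. */
        if (glewInit() != GLEW_OK || !GLEW_VERSION_2_0) {
            fprintf(stderr, "OpenGL 2.0 is not available\n");
            return 1;
        }

        glutMainLoop();                              /* hand control to GLUT */
        return 0;
    }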

Where can I get OpenGL, GLUT and GLEW?

OpenGL comes standard in some form on MacOS X, Windows, and most Linux distributions. If you want to follow this tutorial, you need to make sure that your OpenGL implementation supports at least version 2.0. MacOS X's OpenGL implementation always supports OpenGL 2.0, at least in software, even if the graphics card driver does not. Under Windows, you will need video card drivers that provide OpenGL 2.0 or higher. You can use RealTech's free OpenGL Extensions Viewer to see which OpenGL version your driver supports. The Nvidia and AMD OpenGL drivers support at least OpenGL 2.0 on all graphics cards released in the last four years. Users of Intel onboard graphics or older graphics cards have less luck. As a fallback, Mesa 3D offers an open source, cross-platform OpenGL 2.1 implementation that works on Windows and almost all Unix platforms.

Mesa is also the most widely used OpenGL implementation on Linux, where it works together with the X server to connect OpenGL to the graphics hardware using Direct Rendering Infrastructure (DRI) drivers. You can see whether your particular DRI driver supports OpenGL 2.0 by running the glxinfo command from an xterm. If OpenGL 2.0 is not supported on your hardware, you can disable the driver and fall back to Mesa's software implementation. Nvidia also provides its own proprietary OpenGL implementation for Linux aimed at its own GPUs. This implementation should provide OpenGL 2.0 or higher on any current Nvidia card.

To install GLUT and GLEW, look for the binary packages on their respective sites. MacOS X comes with GLUT preinstalled. Most Linux distributions have GLUT and GLEW available through their package systems, although GLUT may require you to enable your distribution's optional “non-free” package repositories, since its license is technically not open source. There is an open source GLUT clone called OpenGLUT, if you are an advocate of such things.

If you are an experienced C programmer, you should be able to install these libraries and get them running in your development environment without much trouble. But before we dig deeper, here are a few big-picture concepts. In this article, we will walk through the graphics pipeline of a typical render job.

The graphics pipeline.

Since the early days of real-time 3D, the triangle has been the brush with which scenes are drawn. Although modern GPUs can perform all sorts of eye-catching effects to disguise this, underneath all the shading, triangles are still the medium in which they work. The graphics pipeline that OpenGL implements reflects this: the host program fills OpenGL-managed memory buffers with arrays of vertices; these vertices are projected into screen space, assembled into triangles, and rasterized into pixel-sized fragments; finally, the fragments are assigned color values and drawn into the framebuffer. Modern GPUs get their flexibility by delegating the “project into screen space” and “assign color values” stages to uploadable programs, so-called shaders.
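
In host-code terms, issuing one such render job boils down to binding the buffers and shader program involved and making a draw call. The helper below is only an illustrative sketch: it assumes the program, buffers, and attribute location were created elsewhere, and the single two-component position attribute is made up for this example.

    #include <GL/glew.h>

    /* Sketch of one render job; all handles are assumed to be set up beforehand. */
    static void draw_batch(GLuint program, GLuint position_attrib,
                           GLuint vertex_buffer, GLuint element_buffer,
                           GLsizei element_count)
    {
        glUseProgram(program);                        /* vertex + fragment shaders */

        glBindBuffer(GL_ARRAY_BUFFER, vertex_buffer); /* the vertex array */
        glVertexAttribPointer(position_attrib, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);
        glEnableVertexAttribArray(position_attrib);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, element_buffer); /* the element array */
        glDrawElements(GL_TRIANGLES, element_count, GL_UNSIGNED_SHORT, (void*)0);

        glDisableVertexAttribArray(position_attrib);
    }

With this overall flow in mind, let's take a closer look at the individual stages.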

The vertex and element arrays.

A render job begin its journey through the pipeline in a set of one or more vertex buffers filled with arrays of vertex attributes. These attributes are used as input for the vertex shader. Common vertex attributes include the location of the vertex in 3D space and one or more sets of texture coordinates that map the vertex to a sample point on one or more textures. The set of vertex buffers that deliver data to a render job is referred to collectively as the vertex array. When a render job is passed, we provide an additional element array, an array of indexes, in the vertex array that selects which vertices are fed into the pipeline. The order of the indexes also controls how the vertices are later assembled into triangles.
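
As a concrete sketch, here is how a vertex buffer and an element buffer might be created and filled with OpenGL 2.0 calls; the four 2D positions and the helper function name are just examples, not part of any fixed API.

    #include <GL/glew.h>

    /* Example data: four 2D vertex positions and the indexes that select them. */
    static const GLfloat vertex_data[] = {
        -1.0f, -1.0f,
         1.0f, -1.0f,
        -1.0f,  1.0f,
         1.0f,  1.0f
    };
    static const GLushort element_data[] = { 0, 1, 2, 3 };

    /* Allocate a buffer object, bind it to the given target, and upload the data. */
    static GLuint make_buffer(GLenum target, const void *data, GLsizeiptr size)
    {
        GLuint buffer;
        glGenBuffers(1, &buffer);
        glBindBuffer(target, buffer);
        glBufferData(target, size, data, GL_STATIC_DRAW);
        return buffer;
    }

    /* Usage:
       GLuint vertex_buffer  = make_buffer(GL_ARRAY_BUFFER, vertex_data, sizeof(vertex_data));
       GLuint element_buffer = make_buffer(GL_ELEMENT_ARRAY_BUFFER, element_data, sizeof(element_data));
    */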

Uniform state and textures.

A render job also has a uniform state, which provides the shaders at each programmable stage of the pipeline with a set of shared, read-only values. This allows the shader program to take parameters that do not change between vertices or fragments. The uniform state includes textures, which are one-, two-, or three-dimensional arrays that can be sampled by shaders. As the name suggests, textures are most often used to map texture images onto surfaces. They can also be used as lookup tables for precalculated functions or as datasets for various kinds of effects.
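
In code, uniforms and texture samplers are supplied through a linked program object. The sketch below assumes a program with a float uniform named "fade_factor" and a sampler named "texture_sampler"; both names, and the texture handle, are invented for this example.

    #include <GL/glew.h>

    /* Sketch: feed uniform state and a texture to a hypothetical program object. */
    static void set_uniform_state(GLuint program, GLuint texture_object)
    {
        GLint fade_loc    = glGetUniformLocation(program, "fade_factor");
        GLint sampler_loc = glGetUniformLocation(program, "texture_sampler");

        glUseProgram(program);
        glUniform1f(fade_loc, 0.5f);      /* a plain read-only float parameter */

        glActiveTexture(GL_TEXTURE0);     /* select texture unit 0 */
        glBindTexture(GL_TEXTURE_2D, texture_object);
        glUniform1i(sampler_loc, 0);      /* point the sampler at unit 0 */
    }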

The Vertex Shader.

The GPU begins by reading each selected vertex out of the vertex array and running it through the vertex shader, a program that takes a set of vertex attributes as inputs and outputs a new set of attributes, called varying values, that is fed to the rasterizer. At a minimum, the vertex shader calculates the projected position of the vertex in screen space. The vertex shader can also generate other varying outputs, such as a color or texture coordinates, for the rasterizer to blend across the surface of the triangles connecting the vertex.
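
Shaders are written in GLSL and uploaded from the host program as source strings. A minimal sketch of a GLSL 1.10 vertex shader, stored as a C string, might look like this; the attribute and varying names are invented for this example.

    /* A minimal vertex shader: projects a 2D position and passes a texture
       coordinate on to the rasterizer as a varying value. */
    static const char *vertex_shader_source =
        "#version 110\n"
        "attribute vec2 position;\n"
        "varying vec2 texcoord;\n"
        "void main()\n"
        "{\n"
        "    gl_Position = vec4(position, 0.0, 1.0);\n"
        "    texcoord = position * vec2(0.5) + vec2(0.5);\n"
        "}\n";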

Triangle assembly.

The GPU then connects the projected vertices into triangles. It does this by taking the vertices in the order specified by the element array and grouping them into sets of three. The vertices can be grouped in different ways:

  • Take every three elements as an independent triangle.
  • Create a triangle strip, reusing the last two vertices of each triangle as the first two vertices of the next.
  • Create a triangle fan, connecting the first element to every subsequent pair of elements.

The diagram shows how the three different modes behave. Strips and fans both need only one new index per triangle in the element array after the first three, trading the flexibility of independent triangles for extra storage efficiency in the element array.
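
To make the storage difference concrete, here is a sketch of element arrays describing two triangles over the same four vertices (indexes 0 to 3) in the three modes; the variable names are arbitrary.

    #include <GL/glew.h>

    /* Two triangles out of four vertices: six indexes as independent
       triangles, but only four as a strip or a fan. */
    static const GLushort as_triangles[] = { 0, 1, 2,  2, 1, 3 }; /* GL_TRIANGLES      */
    static const GLushort as_strip[]     = { 0, 1, 2, 3 };        /* GL_TRIANGLE_STRIP */
    static const GLushort as_fan[]       = { 0, 1, 2, 3 };        /* GL_TRIANGLE_FAN   */

    /* e.g. glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_SHORT, (void*)0); */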

Rasterization.

The rasterizer takes each triangle, clips it, discarding the parts that lie outside the screen, and splits the remaining visible parts into pixel-sized fragments. As mentioned earlier, the vertex shader's varying outputs are also interpolated across the rasterized surface of each triangle, assigning a smooth gradient of values to each fragment. For example, if the vertex shader assigns a color value to each vertex, the rasterizer blends those colors across the pixelated surface.

The Fragment Shader.

The generated fragments then pass through another program, the fragment shader. The fragment shader receives as input the varying values output by the vertex shader and interpolated by the rasterizer. It outputs color and depth values, which are then drawn into the framebuffer. Common fragment shader operations include texture mapping and lighting. Because the fragment shader runs independently for every pixel drawn, it can perform the most sophisticated special effects, but it is also the most performance-sensitive part of the graphics pipeline.
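
A matching fragment shader for the vertex shader sketched above could look like the following; again the sampler and varying names are only illustrative.

    /* A minimal fragment shader: samples a 2D texture at the interpolated
       texture coordinate and writes the result as the fragment's color. */
    static const char *fragment_shader_source =
        "#version 110\n"
        "uniform sampler2D texture_sampler;\n"
        "varying vec2 texcoord;\n"
        "void main()\n"
        "{\n"
        "    gl_FragColor = texture2D(texture_sampler, texcoord);\n"
        "}\n";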

Framebuffers, testing and blending.

A framebuffer is the final destination of a render job's output. In addition to the default framebuffer OpenGL provides for drawing to the screen, most modern OpenGL implementations let you create framebuffer objects that draw into offscreen renderbuffers or textures. Those textures can then be used as inputs to other render jobs. A framebuffer is more than a single 2D image: in addition to one or more color buffers, a framebuffer can have a depth buffer and/or a stencil buffer, both of which optionally filter fragments before they are drawn into the framebuffer. Depth testing discards fragments from objects that lie behind objects already drawn, and stencil testing uses shapes drawn into the stencil buffer to limit the drawable part of the framebuffer, “stenciling” the render job. Fragments that survive these tests have their color blended, based on their alpha value, with the color value they are about to overwrite, and the final color, depth, and stencil values are drawn into the corresponding buffers.
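
Depth testing and blending are configured on the host side with a few state calls. A minimal sketch, assuming the default on-screen framebuffer, could look like this:

    #include <GL/glew.h>

    /* Enable the depth test and standard alpha blending, then clear both
       the color buffer and the depth buffer before drawing a new frame. */
    static void prepare_framebuffer(void)
    {
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LESS);                              /* keep nearer fragments */

        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* standard alpha blend */

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }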

Conclusions.

This is the process your data goes through, from vertex buffer to framebuffer, when you make a single “draw” call in OpenGL. Rendering a scene usually involves multiple draw jobs, switching textures, other uniform state, or shaders between passes and using the framebuffer's depth and stencil buffers to combine the results of each pass. Now that we have covered the general data flow of 3D rendering, we can write a simple program to see how OpenGL makes it all happen.

Thanks for reading.

3DMaster