Learning how to write graphics shaders (important in the creation process of 3D configurators) means harnessing the power of the GPU, with its thousands of cores all running in parallel. It's a kind of programming that requires a different way of thinking, but unlocking its potential is worth the initial effort.

Virtually every modern graphics simulation you see is powered in some way by code written for the graphics processor, from the realistic lighting effects in modern AAA games to 2D post-processing effects and fluid simulations.

Aim of this guide.

Shader programming sometimes comes across as mysterious black magic and is often misunderstood. There are many code examples that show you how to create incredible effects but offer little or no explanation. This guide is intended to fill that gap. I will focus on the basics of writing and understanding shader code, so that you can easily customize, combine, or write your own shaders from scratch.

This is a general guide.

What is a shader?

A shader is simply a program that runs in the graphics pipeline and tells the computer how to render each pixel. These programs are called shaders because they are often used to control light and shadow effects, but there's no reason why they can't handle other special effects as well.

Shaders are written in a special shading language. Don't worry, you don't have to learn a whole new language: we will use GLSL, a C-like language. (There are a number of shading languages for different platforms, but since they are all adapted for execution on the GPU, they are very similar.)

Let's get started.

We will use ShaderToy for this tutorial, so you can start programming shaders directly in your browser without having to bother with setup. Creating an account is optional, but handy for storing your code.

Note: ShaderToy is currently in beta. Some small UI/syntax details may differ slightly.

The little black arrow below is what you click to compile your code.

What is happening here?

The following explains how shaders work in a single sentence. Let's start now.

The only purpose of a shader is to return four numbers: r, g, b, and a.

That's all it does or can do. The function you see in front of you runs for every pixel on the screen. It returns four color values, and that becomes the color of the pixel. This is a so-called pixel shader (sometimes also called a fragment shader).

With this in mind, let's try to fill the entire screen with red. The rgba values range from 0 to 1, so all we have to return is r, g, b, a = 1, 0, 0, 1. ShaderToy expects the final pixel color to be stored in fragColor.

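A minimal version of this shader, using ShaderToy's mainImage entry point, might look like this:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // Return solid, fully opaque red for every pixel.
    fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```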

This is our first working shader. Congratulations!

Challenge: Can you change it to a monochrome gray color?

vec4 is just a data type, so we could have declared our color as a variable first:

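With the color pulled out into a variable, a sketch of the same shader might look like this:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec4 solidRed = vec4(1.0, 0.0, 0.0, 1.0); // vec4 is just a data type
    fragColor = solidRed;
}
```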

That's not very exciting. We have the power to run code on hundreds of thousands of pixels in parallel, and we set them all to the same color.

Let's try rendering a gradient across the screen. Without knowing more about the pixel we are affecting, such as its position on the screen, we can't go much further.

Shader inputs.

The pixel shader is passed a few variables that you can use. The most useful one for us is fragCoord, which contains the pixel's x and y coordinates. Let's try making all pixels on the left half of the screen black and all pixels on the right half red:

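A sketch matching that description might look like the following. Note the hardcoded threshold of 300.0 pixels, an assumed value that only splits the screen correctly at one particular width:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec4 color = vec4(0.0, 0.0, 0.0, 1.0); // start with black
    if (fragCoord.x > 300.0) {             // hardcoded pixel threshold
        color.r = 1.0;                     // right side becomes red
    }
    fragColor = color;
}
```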

Note: For each vec4 you can access its components via obj.x, obj.y, obj.z, and obj.w, or via obj.r, obj.g, obj.b, and obj.a. They are equivalent; the second set is just a more convenient naming that makes code more readable, so that others understand obj represents a color when they see obj.r.

Do you see a problem with the above code? Try clicking the “Go Fullscreen” button at the bottom right of the preview window.

The proportion of red on the screen depends on the size of the screen. To make sure that exactly half of the screen is red, we need to know how big our screen is. The screen size is not a built-in variable, but is usually set by the programmer who created the application. In this case, it is the ShaderToy developers who determine the screen size.

If something is not a built-in variable, you can send the information from the CPU to the GPU yourself. ShaderToy does that for us: you can see all variables passed to the shader in the Shader Inputs tab. Variables passed this way from the CPU to the GPU are called uniforms in GLSL.

Let's adjust our code above to correctly find the center of the screen. For this we need the shader input iResolution.

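A sketch using iResolution to normalize the pixel coordinates before comparing:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord.xy / iResolution.xy; // now in the range 0..1
    vec4 color = vec4(0.0, 0.0, 0.0, 1.0);
    if (uv.x > 0.5) {                        // exactly half the screen
        color.r = 1.0;
    }
    fragColor = color;
}
```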

If you enlarge the preview window this time, the colors should still divide the screen into two perfect halves.

From a split to a gradient.

Turning this into a gradient should be pretty easy: our color values go from 0 to 1, and our normalized coordinates now also go from 0 to 1.

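One way to sketch a horizontal gradient is to use the normalized x coordinate directly as the red channel:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    fragColor = vec4(uv.x, 0.0, 0.0, 1.0); // red fades in from left to right
}
```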

And voila!

Challenge: Can you turn this into a vertical gradient? What about diagonals? How about a gradient with more than one color?

If you play around with it enough, you will see that the upper left corner has the coordinates (0, 1), not (0, 0). This is important to keep in mind.

Draw images.

Playing around with colors is fun, but if we want to do something impressive, our shader must be able to take an image as input and alter it. This way we can create a shader that affects our entire game screen (like an underwater fluid effect or color correction) or affects only certain objects based on the input (like a realistic lighting system).

If we were programming on a normal platform, we would have to send our image (or texture) to the GPU as a uniform, just as we would send the screen resolution. ShaderToy does that for us. Below the code are four input channels:

Click on iChannel0 and select any texture (image).

Once this is done, you have an image that is passed to your shader. But there is a problem: there is no DrawImage() function. Remember, the pixel shader can only change the color of a single pixel.

So if we can only return one color, how do we draw our texture on the screen? We have to somehow map the current pixel our shader is working on to the corresponding pixel in the texture:

We can do this with the texture(textureData, coordinates) function, which takes texture data and an (x, y) coordinate pair as input and returns the color of the texture at those coordinates as a vec4.

You can map the coordinates to the screen however you like. You can draw the entire texture on a quarter of the screen (by skipping pixels, effectively shrinking it) or draw only part of the texture.

For our purposes, we just want to see the image, so we map the pixels 1:1:

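A sketch of the 1:1 mapping, assuming the texture was loaded into iChannel0:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    fragColor = texture(iChannel0, uv); // sample the texture at this pixel
}
```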

This is our first picture.


Now the data is drawn correctly from the texture and can be manipulated as you wish. You can stretch and scale it or play with its colors.

Let's try to combine this with a gradient, similar to what we did above:

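One way to sketch this is to multiply the sampled texture color by the horizontal gradient (the exact combination is a matter of taste):

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 tex = texture(iChannel0, uv);
    fragColor = vec4(tex.rgb * uv.x, 1.0); // darken towards the left edge
}
```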

Congratulations, you just made your first post-processing effect.

Challenge: Can you write a shader that converts an existing image to black and white?

You should note that although it is a static image, everything you see in front of you happens in real time. You can see this for yourself by replacing the static image with a video: click the iChannel0 input again and select one of the videos.

Adding motion.

Until now, all our effects have been static. We can do much more interesting things by using the inputs ShaderToy gives us. iTime (formerly iGlobalTime) is a continuously increasing variable that we can use as a seed for periodic effects. Let's try playing around with the colors a bit.

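A sketch that pulses the image brightness over time, using iTime as the seed:

```glsl
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec4 tex = texture(iChannel0, uv);
    // abs keeps the color values non-negative as sin oscillates
    fragColor = vec4(tex.rgb * abs(sin(iTime)), 1.0);
}
```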

GLSL has built-in sine and cosine functions, as well as many other useful functions, such as getting the length of a vector or the distance between two vectors. Colors should not be negative, so we make sure we get the absolute value by using the abs function.

Challenge: Can you create a shader that switches an image from black and white to color?

A hint for debugging shaders.

While you may be used to stepping through your code and printing the values of everything to see what's going on, that isn't really possible when writing shaders. You may find some debugging tools tailored to your platform, but in general it's best to set the value you're testing to something graphical that you can see.

Conclusions.

These are just the basics of working with shaders, but if you become familiar with these basics, you can do much more. Browse through the effects on ShaderToy and see if you can understand or replicate some of them.

One thing that hasn't been mentioned in this article is vertex shaders. They are written in the same language, except that they run on every vertex instead of every pixel, and they return a position as well as a color. Vertex shaders are usually responsible for projecting a 3D scene onto the screen. Pixel shaders are responsible for many of the advanced effects we see, which is why they are our focus.

Last challenge: Can you write a shader that removes the green screen on the videos on ShaderToy and adds another video as background to the first one?

That's it for this post. We look forward to your feedback and questions. If you would like to learn more, please leave a comment.