The code is quite simple. We declare two global variables, the GL context and the canvas object, which we need for the basic initialization and for continuous drawing in the render function. Next, we set the window's onload event to init.
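For reference, a minimal sketch of these declarations might look like this (the names `canvas` and `gl` are assumptions, not necessarily those used elsewhere):

```javascript
// Global handles to the canvas element and its WebGL context.
var canvas;
var gl;

// Start initialization once the page has loaded.
window.onload = init;
```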
The initialization function retrieves the HTML canvas element and creates a WebGL context from it. We set the width and height of the canvas and then initialize the so-called viewport of the WebGL context. Finally, a first call to the render function is made to actually draw something.
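A sketch of such an initialization function, assuming a canvas element with the id `canvas` and an arbitrarily chosen 640×480 resolution:

```javascript
function init() {
    // Grab the canvas element and create a WebGL context from it.
    canvas = document.getElementById("canvas");
    gl = canvas.getContext("webgl");

    // Fix the size of the drawing buffer and map it to the viewport.
    canvas.width = 640;
    canvas.height = 480;
    gl.viewport(0, 0, canvas.width, canvas.height);

    // Kick off the first frame.
    render();
}
```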
The render function first requests the next animation frame to ensure continuous rendering. Then the clear color is set to red with full opacity and gl.clear is called. The constant passed to gl.clear specifies that only the color buffer should be affected.
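A minimal render function along these lines might read:

```javascript
function render() {
    // Schedule the next frame so rendering continues indefinitely.
    window.requestAnimationFrame(render);

    // Clear the color buffer to opaque red.
    gl.clearColor(1.0, 0.0, 0.0, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT);
}
```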
So far this is not too exciting. To get something more useful and something to play around with, we need the concept of a shader. Shaders are small programs that typically run in parallel on the graphics processor. The two shader types relevant here are the so-called vertex shader and the fragment shader. In the case of WebGL, they are written in a dialect of GLSL.
The 3D graphics pipeline works as follows: a 3D model is loaded, which is essentially a set of points in three-dimensional space together with a description of how these points form polygons. This model, and in particular its set of vertices, is then transformed from a local, natural description into one that can be used directly to draw the polygons. The individual transformations do not play a major role here, but it is worth noting that scaling, rotation and perspective correction are all performed during this transformation of a mesh from object space to screen space. Most of these calculations can be performed vertex by vertex with a vertex shader. However, we will only use a trivial vertex shader here that performs no transformations, so you can ignore most of the details in this paragraph.
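Such a trivial pass-through vertex shader, written in WebGL's GLSL dialect, might look as follows (the attribute name `position` is an assumption):

```glsl
// Trivial vertex shader: passes each vertex through untransformed.
attribute vec2 position;

void main() {
    gl_Position = vec4(position, 0.0, 1.0);
}
```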
With this normalized information, the rendering engine can then use relatively simple algorithms to rasterize the polygons pixel by pixel. This is where the fragment shader comes in. You can think of fragments as the pixels covered by a polygon. For each fragment, the fragment shader is called; it receives at least the coordinates of the fragment as input and returns the color to be rendered as output.
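For illustration, here is a minimal fragment shader that colors every fragment it is called for with a single constant color:

```glsl
// Minimal fragment shader: every covered fragment becomes solid green.
precision mediump float;

void main() {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}
```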
Fragment shaders are usually used for lighting and post-processing. The idea for a simple experiment, however, is as follows: we draw two polygons (triangles) that do nothing but cover the entire visible screen. Using a fragment shader, we can then determine the color of each pixel on the screen. In other words, we can treat the fragment shader as a function from screen coordinates to colors and experiment with such functions. This turns out to be a very fun toy; in fact, what you can draw this way is limited only by your curiosity. Entire ray-tracing engines have been written as fragment shaders, for example. But we'll start with something simpler. Let's begin with some code that sets up the framework for our experiments.
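As a first taste of such a function from coordinates to colors, here is a sketch that maps each fragment's position to a color gradient. The `resolution` uniform, carrying the canvas size in pixels, is an assumption and would have to be supplied from the JavaScript side:

```glsl
// A fragment shader as a function from screen coordinates to colors:
// gl_FragCoord holds the window coordinates of the current fragment.
precision mediump float;

// Assumed uniform carrying the canvas size in pixels.
uniform vec2 resolution;

void main() {
    // Normalize to [0, 1] and use the position directly as a color ramp.
    vec2 uv = gl_FragCoord.xy / resolution;
    gl_FragColor = vec4(uv.x, uv.y, 0.0, 1.0);
}
```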
First, we need to draw a primitive onto the cleared screen: a quad consisting of two triangles that fills the whole screen and serves as a canvas for our fragment shader, as described above. For this purpose, we introduce a new global variable that contains its description:
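One possible description, as a flat array of two-dimensional clip-space coordinates (the variable name `quadVertices` is an assumption):

```javascript
// Two triangles that together cover the whole clip-space square [-1, 1]².
var quadVertices = new Float32Array([
    -1.0, -1.0,
     1.0, -1.0,
    -1.0,  1.0,

    -1.0,  1.0,
     1.0, -1.0,
     1.0,  1.0
]);
```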