Preface
In modern times the need to generate robust procedural content has become more prevalent than ever. With advancements in CPU/GPU processing power, creating dynamic content on the fly is now a real option. What started out as a means to produce simple representations of natural processes has grown into a multifaceted field, ranging from pseudo-random procedural content to synthesized textures and models constructed from reference data. Whole worlds can be crafted from a single simple seed. Using methods that are often simplified from real-world physics and systems from nature, a user can try to control the creation process to mold a certain result.
The main complication is control, due to the inherent properties of the “random” or “noise” functions that are used to create the data samples. As the artist/developer, it is our goal to understand how we can manipulate these seemingly uncontrollable processes to better suit our needs and produce content that is within the scope of expectations. We can attempt to gain control by introducing sets of parameters that manipulate the underlying structure of our functions or filter the results.
Introduction
First off, let's get some things straight. I am in no way a math wizard, or even conventionally trained in programming, so all of the information presented here is based on my interpretations of advanced topics that I probably have no business explaining to someone else. Do not take any of the concepts I discuss as verbatim fact, but use them as a basis, if you have none, to try to reach a level of understanding of your own. The main point of this article or tutorial (not sure what this would be… a research log?) is to document a layman's interpretation of the works of geniuses like Ken Perlin, Edwin Catmull, Steven Worley…
I recently got my hands on the third edition of the publication TEXTURING & MODELING – A Procedural Approach [1], which is a great resource, though somewhat dated in the languages it uses. I am going to review the concepts presented in this wonderful resource and tailor the script examples to work with webGL. webGL 2.0 currently supports GLSL ES 3.00 and will be the focus of this article; if you have any questions about this please review the webGL specifications. I will also be using the BABYLON.JS library to handle the webGL environment, as opposed to having to worry about all the buffer binding and extra steps we would otherwise need to take. There is also the assumption that if you are reading this you have a basic understanding of HTML, CSS and Javascript; otherwise this is not the tutorial for you.
Setting up the Environment
Before anything can be created we will need to set up an easy development environment. To do this we are going to create a basic html page, include the babylon library, make a few styling rules, then finally create a scene that will allow us to develop GLSL code and test different effects easily. Though we won't be ray-marching anytime soon, the set-up described by the legendary Iñigo Quilez in the presentation “Rendering Worlds with Two Triangles with ray-tracing on the GPU in 4096 bytes” [2] is the same set-up we will use for outputting our initial tests. Later we will look at deploying the same effects on a 3d object, then start introducing lighting (I am dreading the lighting part). To save time please follow along with http://doc.babylonjs.com/ and follow the directions there to get the basic scene running.
Once we have our basic scene going, we are going to reorder and restructure some of the elements, and drop unnecessary ones like the light object for now. You can follow along here if you are unfamiliar with BJS; alternatively, if you just want to get started, skip this section and download this page.
Basic Scene
I assume you know how to create a standard web page with a header and body section, as follows:
Introduction - Environment Setup
or you can copy and paste this into your IDE. Right away we are going to get rid of overflow, padding and margin, and make the content of the page full width/height, because for most purposes the scene we are working on will take up the whole screen. Then in our head section we need to include the reference to BJS.
...
This should effectively give us all the elements we need to start developing; we just need to create our initial scene in order to get the webGL context initialized and outputting. To do this, in the body section of the page we create a canvas element, give it a unique identifier and some simple css rules, then pass that canvas element over to BJS for the webGL initialization.
...
...
...
The touch-action rule is for future proofing in case the scene needs mobile touch support (which will most likely never come into play with what we are doing); the other rules make the canvas inherit its size from its parent container. When we later initialize the scene we will fire a function that sets the canvas size from its innerWidth and innerHeight values, as described here [3]. Luckily BJS handles all the resizing as long as we remember to bind the function to fire when the window is resized, but we will cover that when we set up our scene function.
Now it's time to get the scene running; we do this inside a script element after the body is created. We should also wrap it in a DOMContentLoaded callback to prevent the creation of the scene from bogging down the page load. Then we create a function that will initialize the bare minimum BJS requires for a scene to compile (a camera).
...
Then inside the createScene function let's add one last line to set the background color of the scene/canvas.
...
scene.clearColor = new BABYLON.Color3(1,0,0);
return scene;
}
If everything is set up correctly you can now load the page and should see an entirely red screen. If you are having trouble, review this and see where the differences are. What we have done here is create the engine and scene objects, start the render loop, and bind the resize callback to the window resize event. From here we have all the basic elements together to set up a test environment.
Getting Output
With the webGL context initialized and our scene running its render loop, it's time to create a method of outputting the data we will be creating. To simplify things, at first we will create a single geometric plane, also known as a quad, that will always take up the whole context of our viewport, then create a simple GLSL shader to control its color. There are multiple ways to create the quad, but for learning purposes I think it is prudent to write a custom mesh function that creates and binds the buffers for the object manually instead of using a built-in BJS method for plane/ground creation. The reason we will do it this way is that it gives us complete control over the data and an understanding of how BJS creates its geometry with its built-in methods.
First let's create the function and its constructor, so in the script area before our DOMContentLoaded event binding we make something like the following:
...
The most important argument for this function is the scene reference, so that we have access to it within the scope of the function. Alternatively you could have the scene on the global scope, but that creates vulnerabilities and is not advised. The other benefit of passing the scene as an argument is that when we start working with more advanced projects that use multiple scenes, we can easily reuse this function.
Now that we have the function declared we can work on its procedure and return. The method for creating custom geometry in BJS is as follows:
Create a blank Mesh Object
var createOutput = function(scene){
var mesh = new BABYLON.Mesh('output', scene);
return mesh;
};
Then in our createScene function add these two lines:
...
var output = createOutput(scene);
console.log(output);
return scene;
}
If everything is correct, when we reload the page and check the dev console we should see:
BJS – [timestamp]: Babylon.js engine (v3.1.1) launched
[object]
Create/Bind Buffers
Now that there is a blank mesh to work with, we have to create the buffers that tell webGL where the positions of the vertices are, how our indices are organized, and what our uv values are. With those three arrays/buffers applied to our blank mesh, we should be able to produce the geometry that we will use as our output. Initially we will hard-code the size values, then go back and revise the function to adjust for viewport size.
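Before wiring the buffers up, it helps to see what makes the three arrays consistent: positions and normals come in xyz triples, uvs in xy pairs, and every index must point at an existing vertex. A minimal plain-JavaScript sketch of that invariant (the `validateQuadData` helper is hypothetical, not part of BJS):

```javascript
// Hypothetical helper: checks that positions, uvs, normals and indices
// describe a consistent mesh. Positions/normals are flat xyz triples,
// uvs are flat xy pairs, indices reference vertices by number.
function validateQuadData(positions, uvs, normals, indices) {
  var vertexCount = positions.length / 3;
  if (uvs.length / 2 !== vertexCount) return false;      // one uv per vertex
  if (normals.length / 3 !== vertexCount) return false;  // one normal per vertex
  if (indices.length % 3 !== 0) return false;            // triangles only
  return indices.every(function (i) { return i >= 0 && i < vertexCount; });
}

// The quad we are about to build: 4 vertices, 2 triangles.
var ok = validateQuadData(
  [-0.5, 0.5, 0,  0.5, 0.5, 0,  0.5, -0.5, 0,  -0.5, -0.5, 0],
  [0, 1,  1, 1,  1, 0,  0, 0],
  [0, 0, 1,  0, 0, 1,  0, 0, 1,  0, 0, 1],
  [2, 1, 0,  3, 2, 0]
);
```

If a custom mesh renders as nothing at all, a mismatch between these array lengths is one of the first things worth checking.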
...
var createOutput = function(scene){
var mesh = new BABYLON.Mesh('output', scene);
var vDat = new BABYLON.VertexData();
vDat.positions =
[
-0.5, 0.5, 0,//0
0.5, 0.5, 0,//1
0.5, -0.5, 0,//2
-0.5, -0.5, 0 //3
];
vDat.uvs =
[
0.0, 1.0, //0
1.0, 1.0, //1
1.0, 0.0, //2
0.0, 0.0 //3
];
vDat.normals =
[
0.0, 0.0, 1.0,//0
0.0, 0.0, 1.0,//1
0.0, 0.0, 1.0,//2
0.0, 0.0, 1.0 //3
];
vDat.indices =
[
2,1,0,
3,2,0
];
vDat.applyToMesh(mesh);
return mesh;
};
If done correctly, when the page is refreshed there should be a large black section. The next step is to have the size of the mesh created dynamically instead of hard-coded, so that it can work with a resize function. The solution behind this is not my own; you can see the discussion that led up to it here. Modifying createOutput to reflect the solution is very simple: we add a few lines to define our width and height values and then multiply our width and height position values by the respective results.
...
var c = scene.activeCamera;
var fov = c.fov;
var aspectRatio = scene._engine.getAspectRatio(c);
var d = c.position.length();
var h = 2 * d * Math.tan(fov / 2);
var w = h * aspectRatio;
vDat.positions =
[
w*-0.5, h*0.5, 0,//0
w*0.5, h*0.5, 0,//1
w*0.5, h*-0.5, 0,//2
w*-0.5, h*-0.5, 0 //3
];
Now when the page is refreshed it should be solid black; this is because our mesh now takes up the entire camera frustum and there is no light to make the mesh show up, hence it is black. A light is of little use for our purposes right now; later we will try to implement lighting. Another thing we will ignore for now is the response to a resize for the output. Later, as we get more of our development environment set up, we will come back to this.
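The math above is worth unpacking: for a camera at distance d from the plane with a vertical field of view fov, the visible height of the frustum slice is h = 2 · d · tan(fov / 2), and the width is just that height scaled by the aspect ratio. A standalone sketch of the calculation, with no BJS involved (`frustumSizeAt` is just an illustrative name):

```javascript
// Height/width of the slice of the view frustum at distance d from the
// camera, for a vertical field of view `fov` given in radians.
function frustumSizeAt(fov, aspectRatio, d) {
  var h = 2 * d * Math.tan(fov / 2);
  return { width: h * aspectRatio, height: h };
}

// A 90-degree fov at distance 1 sees a slab 2 units tall,
// since tan(45 degrees) = 1.
var size = frustumSizeAt(Math.PI / 2, 16 / 9, 1);
```

Multiplying the quad's ±0.5 corner positions by these width/height values is exactly what makes it hug the viewport edges.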
First Shader
Creating a blank screen is all fine and dandy, but not very handy… So now would be the time to set up our shaders, which will be the main program responsible for most of the procedural content methods we will be developing. Unfortunately webGL 2.0 does not support geometry shaders, only vertex and fragment, limiting the GPU to texture creation or simulations, not models. For any geometric procedural process we will need to rely on the CPU.
The process for creating shaders to work with BABYLON is extremely easy: we simply construct some literal strings and store them in a DOM-accessible object, then have BJS work its magic with the binding of all the buffers. You can read more about fragment and vertex shaders on the BJS website and through a great article written by BABYLON's original author on Building Shaders with webGL.
Let's take a look at what we are going to need to develop our first procedural texture. First we need some sort of reference unit; for this we will use the UV of our mesh, transposed from a 0 to 1 range to a -1 to 1 range. Using the UV is advantageous when working with 2D content; if we add the third dimension into the process it becomes more relevant to sample the position as the reference point. With this idea in mind, the basic shader program becomes similar to this:
...
var vx =
`precision highp float;
//Attributes
attribute vec3 position;
attribute vec2 uv;
// Uniforms
uniform mat4 worldViewProjection;
//Varyings
varying vec2 vUV;
void main(void) {
vec4 p = vec4( position, 1. );
gl_Position = worldViewProjection * p;
vUV = uv;
}`;
var fx =
`precision highp float;
//Varyings
varying vec2 vUV;
void main(void) {
vec3 color = vec3(1.,1.,1.);
gl_FragColor = vec4(color, 1.0);
}`;
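The transposition from the 0-to-1 UV range to a -1-to-1 range mentioned above is a single multiply-add, `uv * 2.0 - 1.0` in GLSL. The same remap in plain JavaScript, handy for checking values on paper before putting it in the fragment shader:

```javascript
// Transpose a coordinate from the [0, 1] uv range to [-1, 1], so that
// (0.5, 0.5) -- the center of the quad -- lands on the origin.
function remap01to11(u) {
  return u * 2 - 1;
}
```

With the origin at the center of the quad, radial and symmetric patterns become much simpler to express.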
Then we store these literal strings in the BJS ShadersStore object, which allows the library to construct and bind the program through its methods. If we were doing this through raw webGL this would add a bunch of steps, but thanks to this amazing library most of the busy work is eliminated.
...
BABYLON.Effect.ShadersStore['basicVertexShader'] = vx;
BABYLON.Effect.ShadersStore['basicFragmentShader'] = fx;
Lastly we use the ShaderMaterial creation function and assign the result to the material of our output mesh. For right now this is done inside the createScene function, after the createOutput call.
...
var shader = new BABYLON.ShaderMaterial("shader", scene, {
vertex: "basic",
fragment: "basic",
},{
attributes: ["position", "normal", "uv"],
uniforms: ["world", "worldView", "worldViewProjection", "view", "projection"]
});
output.material = shader;
If everything is done right, when we refresh the page we should now see a fully white page! WOWZERS, so amazing… red, black, then white… we are realllly cooking now -_-… Well, at least we now have all the elements we need to start making some procedural content. If you are having trouble at this point you can always reference here, or just download it if you are lazy.
Refining the Environment
At this point it would be smart to reorder our elements and create a container object that will hold all the important parameters and functions associated with whatever content we are trying to create. This way we can make sure the scope is self-contained and that we could have multiple instances of our environment on the same page without them conflicting with each other. Object prototyping is very useful for this, as we can construct the object and reference it later through whatever variable we assigned the result of the constructor to.
The Container Object
If you have never made a JS object then this might be a little strange. Those familiar with prototyping and object scope should have no problem with this part. In order to organize our data and make this a valid development process we have to create some sort of wrapper object, like so:
SM = function(args, scene){
this.scene = scene;
args = args || {};
this.uID = 'shader'+Date.now();
return this;
}
This adds a new constructor for our container object to the window scope. To call it we simply write “new SM({}, scene);” wherever we have access to the scene variable. If we assign this to a variable, that variable now holds the instance of the object, with whatever variables are assigned to the “this” scope contained within that instance. With this object constructor in place we can extend it by prototyping some functions and variables into its scope. If you are unfamiliar with this, please review the information presented here [4].
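For anyone new to this pattern, a stripped-down sketch (a stand-in `Box` constructor, not the real SM object) shows what “contained within that instance” means: each `new` call gets its own `this`, so per-instance state never collides, while prototype methods are shared:

```javascript
// Minimal constructor in the same style as SM: per-instance state on `this`.
function Box(args) {
  args = args || {};
  this.label = args.label || 'default';
  return this;
}
// Shared method on the prototype, available to every instance.
Box.prototype.describe = function () {
  return 'Box: ' + this.label;
};

var a = new Box({ label: 'first' });
var b = new Box({});
// a and b hold independent state but share one describe() function.
```

This is exactly why we can later run multiple SM instances on one page without them stepping on each other.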
The first thing we will add to the prototype is the space for the shaders that our shaderManager (aka shaderMonkey, since I'm Pryme8 ^_^) will reference whenever it needs to rebuild and bind the program/shader.
...
SM.prototype = {
shaders:{
/*----TAB RESET FOR LITERALS-----*/
vx:
`precision highp float;
//Attributes
attribute vec3 position;
attribute vec2 uv;
// Uniforms
uniform mat4 worldViewProjection;
//Varyings
varying vec2 vUV;
void main(void) {
vec4 p = vec4( position, 1. );
gl_Position = worldViewProjection * p;
vUV = uv;
}`,
fx :
`precision highp float;
//Varyings
varying vec2 vUV;
void main(void) {
vec3 color = vec3(1.,1.,1.);
gl_FragColor = vec4(color, 1.0);
}`
}//End Shaders
}
Now that we have the shaders wrapped under the ‘this’ scope of the object, we can start migrating some of the elements from our environment set-up to be contained inside the object as well. The main elements are the construction of the mesh and the storing and binding of the shader.
...
SM.prototype = {
storeShader : function(){
BABYLON.Effect.ShadersStore[this.uID+'VertexShader'] = this.shaders.vx;
BABYLON.Effect.ShadersStore[this.uID+'FragmentShader'] = this.shaders.fx;
},
shaders:{
...
}//End Shaders
}
This method simply sets the ShadersStore value on the DOM for BJS to reference when it builds. After adding it to the object it's simple to integrate it into the initialization of the object. We can also take this time to define a response to the user including custom arguments when they construct the object's instance, which will overwrite the default shaders that we hard-coded into the SM object.
SM = function(args, scene){
this.scene = scene;
args = args || {};
this.shaders.vx = args.vx || this.shaders.vx;
this.shaders.fx = args.fx || this.shaders.fx;
this.uID = 'shader'+Date.now();
this.shader = null;
this.storeShader();
return this;
}
As this object is created it checks the argument object for variables assigned to vx & fx respectively; if an argument is not present, it keeps the corresponding default shader. We set it up this way so that as we start making different scenes that use our SM object we do not have to change any of the object's internal scripting, but can just use arguments or fire built-in methods to manipulate it.
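One caveat worth flagging with the `args.vx || this.shaders.vx` idiom: because `shaders` lives on the shared prototype, `this.shaders.vx = ...` writes through to the one object every instance sees, so a custom shader passed to one instance could leak into the next. A hypothetical sketch of the safer variant, copying the defaults into a fresh per-instance object (names here are illustrative, not the actual SM code):

```javascript
// Default shader sources, shared read-only by all instances.
var DEFAULTS = { vx: 'default-vertex-src', fx: 'default-fragment-src' };

function ShaderHolder(args) {
  args = args || {};
  // Build a fresh per-instance object instead of mutating a shared one,
  // applying the same default-or-override idiom the SM constructor uses.
  this.shaders = {
    vx: args.vx || DEFAULTS.vx,
    fx: args.fx || DEFAULTS.fx
  };
}

var custom = new ShaderHolder({ fx: 'custom-fragment-src' });
var plain = new ShaderHolder({});
// `plain` keeps the defaults even though `custom` overrode its fragment.
```

For a single SM instance per page the difference never shows; it only matters once multiple instances with different shaders coexist.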
Now we need to bind the shader to the webGL context so that it is accessible/usable. This process is fairly simple once we add another method to our object.
SM = function(args, scene){
...
this.storeShader();
this.buildShader();
return this;
};
SM.prototype = {
buildShader : function(){
var scene = this.scene;
var uID = this.uID;
var shader = new BABYLON.ShaderMaterial("shader", scene, {
vertex: uID,
fragment: uID,
},{
attributes: ["position", "normal", "uv"],
uniforms: ["world", "worldView", "worldViewProjection", "view", "projection"]
});
if(this.shader){this.shader.dispose()}
this.shader = shader;
if(this.output){
this.output.material = this.shader;
}
},
storeShader : function(){
...
},
shaders:{
...
}//End Shaders
}
When our object is initialized now, it builds/binds the shader and assigns it to its this.shader variable, which is accessible under the object's scope. It then checks if the object has an output mesh and, if it does, assigns the shader to it. Each time this method is fired it checks to see if there is a shader already compiled, and if there is, it disposes of it to save on overhead. The last step is to migrate the createOutput function to become a method of our SM object, with simple modifications so that it effectively does the same dispose-and-rebuild process as our buildShader method, to conserve resources.
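The dispose-then-replace step in buildShader is a general resource-management pattern, not BJS-specific. A sketch with a stand-in resource makes the behavior observable without a GPU (the `FakeShader` counter object is purely illustrative, nothing like it exists in BJS):

```javascript
// Stand-in for a GPU resource: records whether dispose() was called on it.
function FakeShader() { this.disposed = false; }
FakeShader.prototype.dispose = function () { this.disposed = true; };

function Manager() { this.shader = null; }
// Mirrors buildShader: release the old resource before holding the new one,
// so stale shaders never pile up across rebuilds.
Manager.prototype.rebuild = function () {
  if (this.shader) { this.shader.dispose(); }
  this.shader = new FakeShader();
};

var m = new Manager();
m.rebuild();           // first build: nothing to dispose yet
var first = m.shader;
m.rebuild();           // second build: the first shader gets disposed
```

The same guard in buildOutput (`if(this.output){this.output.dispose()}`) follows the identical logic for the mesh.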
SM = function(args, scene){
...
this.storeShader();
this.buildShader();
this.buildOutput();
return this;
};
SM.prototype = {
buildOutput : function(){
if(this.output){this.output.dispose()}
var scene = this.scene;
var mesh = new BABYLON.Mesh('output', scene);
var vDat = new BABYLON.VertexData();
var c = scene.activeCamera;
var fov = c.fov;
var aspectRatio = scene._engine.getAspectRatio(c);
var d = c.position.length();
var h = 2 * d * Math.tan(fov / 2);
var w = h * aspectRatio;
vDat.positions =
[
w*-0.5, h*0.5, 0,//0
w*0.5, h*0.5, 0,//1
w*0.5, h*-0.5, 0,//2
w*-0.5, h*-0.5, 0 //3
];
vDat.uvs =
[
0.0, 1.0, //0
1.0, 1.0, //1
1.0, 0.0, //2
0.0, 0.0 //3
];
vDat.normals =
[
0.0, 0.0, 1.0,//0
0.0, 0.0, 1.0,//1
0.0, 0.0, 1.0,//2
0.0, 0.0, 1.0 //3
];
vDat.indices =
[
2,1,0,
3,2,0
];
vDat.applyToMesh(mesh);
this.output = mesh;
if(this.shader){
this.output.material = this.shader;
}
},
buildShader : function(){
...
storeShader : function(){
...
},
shaders:{
...
}//End Shaders
}
That should effectively be it. If we make a couple of modifications to our createScene function, we can test the results.
...
var sm;
window.addEventListener('DOMContentLoaded', function(){
...
var createScene = function(){
var scene = new BABYLON.Scene(engine);
var camera = new BABYLON.FreeCamera('camera1', new BABYLON.Vector3(0, 0, -1), scene);
camera.setTarget(BABYLON.Vector3.Zero());
scene.clearColor = new BABYLON.Color3(1,0,0);
/*----TAB RESET FOR LITERALS-----*/
sm = new SM(
{
fx :
`precision highp float;
//Varyings
varying vec2 vUV;
void main(void) {
vec3 color = vec3(0.,0.,1.);
gl_FragColor = vec4(color, 1.0);
}`
},scene);
console.log(sm);
return scene;
}
If everything is 100% you should now see a fully blue screen when you refresh; otherwise please reference here. Now would be a good time to add the resize response as well, which is why we defined our sm var outside of the DOMContentLoaded callback. There are better ways to do this, but for our purposes it will do. Simply calling the buildOutput method of the SM object when the window is resized should handle things nicely.
...
window.addEventListener('resize', function(){
engine.resize();
sm.buildOutput();
});
Finally we have gotten to the point where we can start developing more than a solid color. We will at some point need to create methods for binding samplers and uniforms on the shader, but we can tackle that later. Feel free to play around with this set-up, break or add things, and try to understand the scope of the object and how everything is associated. In the next section we will examine the principles of sampling space and how we can manipulate it.
Continue… to Section I
References
1. TEXTURING & MODELING – A Procedural Approach (third edition)
2. Rendering Worlds with Two Triangles with raytracing on the GPU in 4096 bytes
3. https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html
4. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/prototype