Procedural Investigations in webGL – Section IV

Section IV
Getting Noisy in Here

Finally, the part I have been wanting to get to! One of the power players in the procedural world is a family of methods called noise. Noise is a random (in our case pseudorandom) distribution of values over a range. Normally these values range from -1 to 1, but other ranges are possible. We use these predictably random functions to control our methods. The simplest, yet least useful, is a white noise function, which some of you should be familiar with. Picture your old analog TV set to a blank channel, or the scene from the movie Poltergeist.

Example
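(If you need a stand-in for that white noise example, a common folklore one-liner works; the constants here are the usual magic numbers people pass around, not anything canonical:)

precision highp float;
varying vec2 vUV;
//common folklore hash: returns a pseudorandom 0..1 value per coordinate
float whiteNoise(vec2 p){
	return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
}
void main(void) {
	gl_FragColor = vec4(vec3(whiteNoise(vUV)), 1.0);
}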

There has been quite a bit of advancement in the generation of random values, notably through the works of Steven Worley and Ken Perlin. We can use a combination of their methods to achieve some really interesting results. The two main types of noise we will be covering are lattice (value) based and gradient based methods.

Lattice Noise/Value Noise

There is a little bit of confusion in the procedural world over what is value based noise and what is gradient based. This is demonstrated by the article we will be referencing [1], which calls the process we will be reviewing Perlin noise when it is actually value noise.

Value noise starts from a lattice of n-dimensional points at a set frequency, each holding a value; we then interpolate between those points to get our output values. There are multiple ways to interpolate, and most do it smoothly, but to help you understand the concept let's temporarily use linear interpolation.

Take for example a 1D noise with our lattice set at every integer value and a linear interpolation; we get a graph similar to this:

If we were to sample any point between lattice points we would get a value between the values of the closest points. On this 1D lattice that means the two closest points, on a 2D lattice the 4 corners of the containing cell, and on a 3D lattice the 8 corners (for 2D/3D you can sample more, but these are the minimum neighborhoods). So if we were to sample this 1D noise at the coordinate x=1.5 we would end up with a value of 0.55 (unless my math sucks).
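To make that concrete, here is a hedged worked example; the lattice values n(1)=0.3 and n(2)=0.8 are assumed purely for illustration (they happen to reproduce the 0.55 above):

//assumed lattice values: n(1) = 0.3, n(2) = 0.8
float sampleLinear1D(float x){	//called with x = 1.5
	float n1 = 0.3;	//value at lattice point x = 1
	float n2 = 0.8;	//value at lattice point x = 2
	float f = fract(x);	//0.5, how far past the left point we are
	return mix(n1, n2, f);	//0.3*(1.0-0.5) + 0.8*0.5 = 0.55
}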

If we use this process and mix together value noises of increasing frequency and decreasing amplitude we can make some interesting results. Another parameter we can introduce for control is persistence, which also has some confusion around its "official" definition. The term was first used by Mandelbrot while developing fractal systems. The simplest way to describe it is as the weighting of each octave's contribution to the sum of the noise functions.
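As a rough sketch of that mixing (noise1D here is an assumed lattice-noise lookup like the one we build below), the classic loop doubles the frequency each octave and scales the amplitude by persistence:

float noise1D(float x);	//assumed: a lattice noise returning -1..1, defined elsewhere

float fbm(float x, float persistence, int octaves){
	float total = 0.0;
	float frequency = 1.0;
	float amplitude = 1.0;
	for(int i = 0; i < octaves; i++){
		total += noise1D(x * frequency) * amplitude;
		frequency *= 2.0;	//each octave samples the lattice twice as often
		amplitude *= persistence;	//persistence weights each octave's contribution
	}
	return total;
}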

Random Function

In order to get our noise functions rolling we first need a random number generation method. Here is a section of pseudo-code presented in [1]:

function IntNoise(32-bit integer: x)
    x = (x<<13) ^ x;
    return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0);
end IntNoise function

Right away you should notice that this is very close to the code we used for the white noise generator above. There are many ways to generate a random number; we will convert this one first and then test other methods to see which are more effective.
The GLSL version of this code would be:

float rValueInt(int x){
    //scramble the bits so neighboring inputs diverge
    x = (x >> 13) ^ x;
    //integer hash with large primes; the mask keeps the low 31 bits (always positive)
    int xx = (x * (x * x * 60493 + 19990303) + 1376312589) & 0x7fffffff;
    //xx is 0..2^31-1; dividing by 2^30 gives 0..2, so this maps to roughly -1..1
    return 1.0 - (float(xx) / 1073741824.0);
}

This function requires our input value to be an integer (hence making it a lattice); we then use bit-wise operators as explained in the GLSL specs [2]. In short it is an integer hash: the shift and XOR scramble the bits so neighboring inputs give unrelated results, the polynomial of large primes mixes them further, and the final mask keeps the value positive before it is scaled into the -1 to 1 range. The constants are primes like those in Hugo's example [1]. You can change these numbers all you want, just keep them prime to reduce visible graphic artifacts. From here we just need to decide how we want to interpolate the values between points.

Its all up to Interpolation…

The simplest way to interpolate is linearly, like we did above, represented by this code:

Mock Code:

function Linear_Interpolate(a, b, x)
	return  a*(1-x) + b*x
  end of function

GLSL Code:

float linearInterp(float a, float b, float x){
	return a*(1.-x) + b*x;
}

This is ok if we want sharp elements, but if we want smoother transitions we can use a cosine interpolation.

function Cosine_Interpolate(a, b, x)
    ft = x * 3.1415927
    f = (1 - cos(ft)) * .5
    return  a*(1-f) + b*f
end of function
float cosInterp(float a, float b, float x){
	float ft = x*3.1415927;
	float f = (1.0 - cos(ft)) * .5;
	return a*(1.-f) + b*f;
}

There is also cubic interpolation, but we will skip that for now and focus on linear and cosine. The last thing we will want to do, in order to make our noise transitions smoother, is introduce (you guessed it) a smoothing function. This function is optional and can be expanded to however many dimensions you need. The smoothing helps reduce the appearance of block artifacts when rendering out to 2+ dimensions. Here is a snippet of pseudo-code from [1].


//1-dimensional Smooth Noise
  function Noise(x)
    ...
  end function

  function SmoothNoise_1D(x)
    return Noise(x)/2  +  Noise(x-1)/4  +  Noise(x+1)/4
  end function

//2-dimensional Smooth Noise
function Noise(x, y)
...
end function

function SmoothNoise_2D(x, y)

    corners = ( Noise(x-1, y-1)+Noise(x+1, y-1)+Noise(x-1, y+1)+Noise(x+1, y+1) ) / 16
    sides   = ( Noise(x-1, y)  +Noise(x+1, y)  +Noise(x, y-1)  +Noise(x, y+1) ) /  8
    center  =  Noise(x, y) / 4

    return corners + sides + center

end function

Later in this section we will look at the differences between smoothed and non-smoothed noise. Now we need to start taking all these elements and putting them together. Here is the pseudo-code as presented in [1]:


function Noise1(integer x)
    x = (x<<13) ^ x;
    return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0);
  end function


  function SmoothedNoise_1(float x)
    return Noise(x)/2  +  Noise(x-1)/4  +  Noise(x+1)/4
  end function


  function InterpolatedNoise_1(float x)

      integer_X    = int(x)
      fractional_X = x - integer_X

      v1 = SmoothedNoise_1(integer_X)
      v2 = SmoothedNoise_1(integer_X + 1)

      return Interpolate(v1 , v2 , fractional_X)

  end function


  function PerlinNoise_1D(float x)

      total = 0
      p = persistence
      n = Number_Of_Octaves - 1

      loop i from 0 to n

          frequency = 2^i
          amplitude = p^i

          total = total + InterpolatedNoise_i(x * frequency) * amplitude

      end of i loop

      return total

  end function

I decided to make a few structural changes to this for the GLSL conversion. In the above example they use four functions; we are going to do it with three. I think it will also be relevant to add uniforms (or defines, depending on your preference) to control things like octaves, persistence and a smoothness toggle. I will also be using strictly the cosine interpolation; this is personal choice, any method can be used. So, following the structure of our SM object, we set up the shader argument as follows:


sm = new SM(
{
size : new BABYLON.Vector2(512, 512),
hasTime : false,
//timeFactor : 0.1,
uniforms:{
	octaves:{
		type : 'float',
		value : 4,
		min : 0,
		max : 124,
		step: 1,
		hasControl : true
	},
	persistence:{
		type : 'float',
		value : 0.5,
		min : 0.001,
		max : 1.0,
		step: 0.001,
		hasControl : true
	},
	smoothed:{
		type : 'float',
		value : 1.0,
		min : 0,
		max : 1.0,
		step: 1,
		hasControl : true
	},
	zoom:{
		type : 'float',
		value : 1,
		min : 0.001,
		step: 0.1,
		hasControl : true
	},
	offset:{
		type : 'vec2',
		value : new BABYLON.Vector2(0, 0),
		step: new BABYLON.Vector2(1, 1),
		hasControl : true
	},
},
fx :
`precision highp float;
//Varyings
varying vec2 vUV;
varying vec2 tUV;
/*----- UNIFORMS ------*/
uniform float time;
uniform vec2 tSize;
uniform float octaves;
uniform float persistence;
uniform float smoothed;
uniform float zoom;
uniform vec2 offset;

This will set up all of our uniforms and their defaults. You could do these as defines, but if you want the ability to manipulate them on the fly they should be uniforms. I also added a uniform (tSize) that we will never manipulate directly; instead, the size of the canvas/texture sets this value when the shader is compiled. With this we need to make some changes to our SM object to accommodate the new uniform.

SM = function(args, scene){	
	...
	this.buildGUI();
	
	this.setSize(args.size);
	
	return this;
}

SM.prototype = {
...
setSize : function(size){
	var canvas = this.scene._engine._gl.canvas;
	size = size || new BABYLON.Vector2(canvas.width, canvas.height);
	this._size = size;	
	var pNode = canvas.parentNode;
	pNode.style.width = size.x+'px';
	pNode.style.height = size.y+'px';
	this.scene._engine.resize();
	this.buildOutput();
	this.buildShader();
}
...

Now the shader will always know the size of the texture. Because we have made this an inherent feature of the SM object, we need to add the tSize uniform to the default fragment shader that is built in, so that if the default shader gets bound it will still validate and compile. From here we need to include our random number function, our interpolation function and the noise function itself. I am going to include a lerp function as well in case you want to use linear interpolation instead of cosine.

//Methods
//1D Random Value from INT;
float rValueInt(int x){
	x = (x >> 13) ^ x;
	int xx = (x * (x * x * 60493 + 19990303) + 1376312589) & 0x7fffffff;
	return 1. - (float(xx) / 1073741824.0);
}
//float Lerp
float linearInterp(float a, float b, float x){
	return a*(1.-x) + b*x;
}
//float Cosine_Interp
float cosInterp(float a, float b, float x){
	float ft = x*3.1415927;
	float f = (1.0 - cos(ft)) * .5;
	return a*(1.-f) + b*f;
}
//1d Lattice Noise
float valueNoise1D(float x, float persistence, float octaves, float smoothed){
	float t = 0.0;
	float p = persistence;
	float frequency, amplitude, tt, v1, v2, fx;
	int ix;
	for(float i=1.0; i<=octaves; i++){
		//note: unlike the pseudo-code's 2^i and p^i, these ramp linearly per octave
		frequency = i*2.0;
		amplitude = p*i;

		ix = int(x*frequency);	//lattice cell (integer part)
		fx = fract(x*frequency);	//position within the cell

		if(smoothed > 0.0){
			v1 = rValueInt(ix)/2.0 + rValueInt(ix-1)/4.0 + rValueInt(ix+1)/4.0;
			v2 = rValueInt(ix+1)/2.0 + rValueInt(ix)/4.0 + rValueInt(ix+2)/4.0;
			tt = cosInterp(v1, v2, fx);
		}else{
			tt = cosInterp(rValueInt(ix), rValueInt(ix+1), fx);
		}

		t += tt*amplitude;
	}
	t /= octaves;
	return t;
}

So now we have a GLSL function to generate some 1D noise! It has four arguments; the last one, smoothed, can be omitted if you please, but I like having it. It's a fairly simple function and most of our noise functions will have a similar structure. We could also add a #define to control the interpolation, but for simplicity I am just using the cosine method. From here it is as simple as setting up the main function of our shader program to use this noise. To do this we decide on our sampling space and pass that to the x value of the noise function along with the other uniforms we have already set up.

void main(void) {
	vec2 tPos = ((vUV*tSize)+offset)/zoom;
	//shift the -1..1 noise into the 0..1 range: (v + 1) / 2
	float v = (valueNoise1D(tPos.x, persistence, octaves, smoothed)+1.0)/2.0;
	vec3 color = vec3(mix(vec3(0.0), vec3(1.0), v));
	gl_FragColor = vec4(color, 1.0);
}

Super easy, right!? The sampling space we use is the 0-1 UV multiplied by the size of the texture, which effectively shifts us into texel space. The choice to use vUV instead of tUV was made because, for some reason, the negative values were creating an artifact, as seen here:

I could try to troubleshoot that, but it's just easier to use the 0-1 UV range and move on.

Next we add an offset, which is also in texel space; you could do it as a percentage of the texture's size, but that is user preference. We then divide the whole thing by a zoom value. That gives us a nice sampling space, which we pass to our noise function with our other arguments. Because the noise function returns a number between -1 and 1, we shift it to a 0-1 range by adding one and then dividing the sum by two.

A New Dimension

One dimensional noise is cool and has its uses, but we need more room for activities. Before we develop more noises and look at different generation methods, it is pretty important to understand how to extend the noise to n dimensions. For general purposes all the calculations stay the same, you just have to make a couple more of them. Now that we are working with larger dimensions it would also be smart to add a support function for smoothing the values before interpolation. The main modification to the noise function is changing some of the variables from floats and integers to vectors of the same types. The last function to add is a random number generator that takes the two dimensions into consideration.
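The heart of the 2D extension is interpolating across the four corners of the containing cell instead of a single pair; a minimal sketch of that pattern (using the built-in mix, though any of our interpolators slots in):

//bilinear pattern: blend the two x-pairs, then blend those results in y
//v00/v10 are the bottom corners, v01/v11 the top, f the fractional position
float bilinear(float v00, float v10, float v01, float v11, vec2 f){
	float i1 = mix(v00, v10, f.x);
	float i2 = mix(v01, v11, f.x);
	return mix(i1, i2, f.y);
}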

//2D Random Value from INT vec2;
float rValueInt(ivec2 p){
	int x = p.x, y = p.y;
	int n = x + y*57;	//fold the two coordinates into a single integer
	n = (n >> 13) ^ n;
	int nn = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff;
	return 1. - (float(nn) / 1073741824.0);
}

float smoothed2dVN(ivec2 pos){
	return (( rValueInt(pos+ivec2(-1)) + rValueInt(pos+ivec2(1, -1)) + rValueInt(pos+ivec2(-1, 1)) + rValueInt(pos+ivec2(1, 1)) ) / 16.) + //corners
	       (( rValueInt(pos+ivec2(-1, 0)) + rValueInt(pos+ivec2(1, 0)) + rValueInt(pos+ivec2(0, -1)) + rValueInt(pos+ivec2(0, 1)) ) / 8.) + //sides
	       (rValueInt(pos) / 4.); //center
}

//2d Lattice Noise
float valueNoise(vec2 pos, float persistence, float octaves, float smoothed){
	float t = 0.0;
	float p = persistence;
	float frequency, amplitude, tt, v1, v2, v3, v4;
	vec2 fpos;
	ivec2 ipos;
	for(float i=1.0; i<=octaves; i++){
		frequency = i*2.0;
		amplitude = p*i;

		ipos = ivec2(int(pos.x*frequency), int(pos.y*frequency));
		fpos = vec2(fract(pos.x*frequency), fract(pos.y*frequency));

		if(smoothed > 0.0){
			//sample the four corners of the containing cell, smoothed
			v1 = smoothed2dVN(ipos);
			v2 = smoothed2dVN(ipos+ivec2(1, 0));
			v3 = smoothed2dVN(ipos+ivec2(0, 1));
			v4 = smoothed2dVN(ipos+ivec2(1, 1));

			//bilinear pattern: interpolate in x, then in y
			float i1 = cosInterp(v1, v2, fpos.x);
			float i2 = cosInterp(v3, v4, fpos.x);
			tt = cosInterp(i1, i2, fpos.y);
		}else{
			float i1 = cosInterp(rValueInt(ipos), rValueInt(ipos+ivec2(1,0)), fpos.x);
			float i2 = cosInterp(rValueInt(ipos+ivec2(0,1)), rValueInt(ipos+ivec2(1,1)), fpos.x);
			tt = cosInterp(i1, i2, fpos.y);
		}

		t += tt*amplitude;
	}
	t /= octaves;
	return t;
}

There we have it. There are definitely some problems with this method that could be fixed if we took the time to refine it: artifacts as the noise crosses from a positive to a negative coordinate range (more apparent the further you zoom in), and noticeable circular patterns the closer we get to 0,0. To fix this quickly, and essentially "ignore" the problem, we just add a large offset to the noise initially and skew our coordinates far away from the artifacts.
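In practice that "ignore it" fix is a one-line change to main(); the constant below is arbitrary, anything large enough to push the visible window away from the origin:

void main(void) {
	//hypothetical large offset keeping the sample window clear of the 0,0 artifacts
	vec2 tPos = ((vUV*tSize)+offset)/zoom + vec2(100000.0);
	float v = (valueNoise(tPos, persistence, octaves, smoothed)+1.0)/2.0;
	gl_FragColor = vec4(vec3(v), 1.0);
}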

As a challenge see if you can change the interpolation function to be cubic. Read the section on it here [3].

You can also see a live example of the 2d Lattice Noise here.

Better Noise from Gradients

References

1. Hugo Elias's "Perlin noise" implementation. Value noise, mislabeled as Perlin noise.
2. https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.1.20.pdf
3. C# Noise

Procedural Investigations in webGL – Section II

Section II:
Uniforms and UI


With the ability to create the likeness of a brick wall, we can now start adding controls that allow testing various parameter values in real time. There are a multitude of ways to handle this, the simplest being HTML DOM elements. If you are feeling froggy you could attempt to use the BABYLON.GUI system, which is GPU accelerated. The first step will be to extend our SM object so we can add controls quickly.

A Uniform Argument

Right away we go to where the SM object is constructed, go to the argument object, and add a new entry for the uniform. It is here we will define the uniform's name, type, value, and any constraints that will be used later with the UI.

...
sm = new SM(
{
uniforms:{
	brickCounts : {
		type : 'vec2',
		value : new BABYLON.Vector2(6,12),
		min : new BABYLON.Vector2(1,1),
		step : new BABYLON.Vector2(1,1),
		hasControl : true
	}
},
fx :...

Then we need to give the SM object instructions on what to do with this new argument.

SM = function(args, scene){	
...
    this.shaders.fx = args.fx || this.shaders.fx;	
    this.uniforms = args.uniforms || {};			
    this.uID = 'shader'+Date.now();
...
    return this;
}

Now that the argument is stored on the object, we need to modify some of the object's methods to accommodate the new uniforms. The function most affected by this change is the buildShader method.
We need to make sure that when we bind our shader we include the new uniforms and then set their default values.

...
buildShader : function(){
...
    var _uniforms = ["world", "worldView", "worldViewProjection", "view", "projection"];
    _uniforms =  _uniforms.concat(this.getUniformsArray());
	
    var shader = new BABYLON.ShaderMaterial("shader", scene, {
        vertex: uID,
        fragment: uID,
        },{
        attributes: ["position", "normal", "uv"],
        uniforms: _uniforms
});
...

Then, to make these changes work, we need to define a method for grabbing the array of uniform names that are assigned. We could simply use Object.keys(this.uniforms); every time we wanted that array of names, but that is a little ugly and redundant.

...
SM.prototype = {
getUniformsArray : function(){
var keys = Object.keys(this.uniforms);
return keys;
},
buildOutput :...

Before we go much farther, it would be prudent to modify the fragment shader being passed in the arguments to accommodate the new uniform; otherwise, when we try to set its default value the shader will not compile. We also have no need for #define XBRICKS and #define YBRICKS, with the new uniform effectively replacing these variables.

...
fx :
`precision highp float;
//Varyings
...
//Methods
...
/*----- UNIFORMS ------*/
uniform vec2 brickCounts;

#define MWIDTH 0.1

void main(void) {	
    vec3 brickColor = vec3(1.,0.,0.);
    vec3 mortColor = vec3(0.55);	
	
    vec2 brickSize = vec2(
        1.0/brickCounts.x,
        1.0/brickCounts.y
    );
	
    vec2 pos = vUV/brickSize;
	
    vec2 mortSize = 1.0-vec2(MWIDTH*(brickCounts.x/brickCounts.y), MWIDTH);
	
    pos += mortSize*0.5;	
	
    if(fract(pos.y * 0.5) > 0.5){
        pos.x += 0.5;   
    }	

    pos = fract(pos);	
	
    vec2 brickOrMort = step(pos, mortSize);
	
    vec3 color =  mix(mortColor, brickColor, brickOrMort.x * brickOrMort.y);

    gl_FragColor = vec4(color, 1.0);
}...

If you are following along and were to refresh the page now, you would most likely see a solid grey page. This is because the shader binds without a problem (there should not be any errors), but with the brick counts still set to 0 the math fails. We solve this by doing a little more work on the SM object so it sets the default values of the uniforms after the shader is bound.

...
SM.prototype = {
getUniformsArray : ...
},
setUniformDefaults : function(){
    var shader = this.shader;
    var keys = this.getUniformsArray();
    for(var i=0; i<keys.length; i++){
        var u = this.uniforms[keys[i]];
        var type = u.type;
        var v = u.value;		
        shader[this.type2Method(type)](keys[i], v);	 // <== shader.setType( uniform, value );	
    }
},
type2Method : function(type){
    var m;
    switch(type){
        case 'float': m ='setFloat'; break;
        case 'vec2': m ='setVector2'; break;
        case 'vec3': m ='setVector3'; break;
    }
    return m;
},
buildOutput :...
},
buildShader : function(){
...
    this.shader = shader;	
    this.setUniformDefaults();
...
}

This may look a little intimidating, but it's really not. First we get the key values of the uniforms (the names). Then we iterate through these keys and grab each default value and type. Once we have the type we look up the associated method BJS has for setting that kind of uniform on the shader. In this situation the line "shader[this.type2Method(type)](keys[i], v);" essentially becomes shader.setVector2('brickCounts', BABYLON.Vector2(#,#)); If everything is correct, when we refresh the page we should see whatever number of bricks we set as the default values in the constructor's arguments. Feel free to change these numbers and refresh the page to verify everything is working. You can look HERE for reference or to download this step.

With everything lined up and working, it's now time to get the UI elements constructed. Eventually you might want to develop your own user interface components, but the process I am about to show you should cover most cases. For ease of understanding I am going to write out some sections of code that repeat with little variation; normally you would want a function to handle these repeated sections, but it is easier to follow initially doing it longhand. The creation of the UI can easily be expanded upon in the future, but to get started we create another method on our SM object, then call it after the creation of the output on initialization. Now would also be a good time to define a quick support method to return the current "this.shader".


SM = function(args, scene){	
    ...
    this.buildOutput();
	
    this.buildGUI();
	
    return this;
}

SM.prototype = {
...
getShader : function(){
return this.shader;
},
buildGUI : function(){
    this.ui = {
        mainBlock : document.createElement('div'),
        inputs : [],
    };
	
    this.ui.mainBlock.classList.add('ui-block');
	
    var keys = this.getUniformsArray();	
	
},
buildOutput:...

The purpose of this method will be to iterate through the SM object's uniform object keys, create all the appropriate DOM elements, append them, and then set a function up to respond to change events. So we set up a new container object for the UI elements, then grab the uniform keys with our getUniformsArray method. Once we have our keys to iterate through we proceed to parse the uniforms object data.

...
    var keys = this.getUniformsArray();	
    for(var i=0; i<keys.length; i++){
        var u = this.uniforms[keys[i]];		
        if(!u.hasControl){continue;}		
		
        var _block = document.createElement('div');
        _block.classList.add('ui-item');
		
        var _title = document.createElement('span');
        _title.innerHTML = keys[i]+":";
        _block.appendChild(_title);
        
        var _inBlock = document.createElement('span');
        _inBlock.classList.add('ui-in-block');
	var _inputs = [];
	var _in;
		
	if(u.type == 'float'){
	    _in = document.createElement('input');	
	    _in.setAttribute('type', 'number');
	    _in.setAttribute('id', keys[i]);
	    _in.classList.add('ui-in-'+u.type, 'ui-in');
	    _in.value = u.value;
	    if(u.min){
	            _in.setAttribute('min', u.min.x);
	    }
	    if(u.max){
	        _in.setAttribute('max', u.max.x);
	    }
	    if(u.step){
	        _in.setAttribute('step', u.step.x);
	    }	
	    _inputs.push(_in);
	}
		
	if(u.type == 'vec2' || u.type == 'vec3'){
	    _in = document.createElement('input');	
	    _in.setAttribute('type', 'number');
	    _in.setAttribute('id', keys[i]+":x");
	    _in.classList.add('ui-in-'+u.type, 'ui-in');
	    _in.value = u.value.x;			
	    if(u.min){
	        _in.setAttribute('min', u.min.x);
	    }
	    if(u.max){
	        _in.setAttribute('max', u.max.x);
	    }
	    if(u.step){
	        _in.setAttribute('step', u.step.x);
	    }
	    _inputs.push(_in);
	    _in = document.createElement('input');	
	    _in.setAttribute('type', 'number');
	    _in.setAttribute('id', keys[i]+":y");
	    _in.classList.add('ui-in-'+u.type, 'ui-in');
	    _in.value = u.value.y;			
	    if(u.min){
	        _in.setAttribute('min', u.min.y);
	    }
	    if(u.max){
	        _in.setAttribute('max', u.max.y);
	    }
	    if(u.step){
	        _in.setAttribute('step', u.step.y);
	    }
	    _inputs.push(_in);
	}
	if(u.type == 'vec3'){
	    _in = document.createElement('input');	
	    _in.setAttribute('type', 'number');
	    _in.setAttribute('id', keys[i]+":z");
	    _in.classList.add('ui-in-'+u.type, 'ui-in');
	    _in.value = u.value.z;			
	    if(u.min){
		_in.setAttribute('min', u.min.z);
	    }
	    if(u.max){
		_in.setAttribute('max', u.max.z);
	    }
	    if(u.step){
		_in.setAttribute('step', u.step.z);
	    }
	    _inputs.push(_in);
	}
		
	for(var j=0; j<_inputs.length; j++){
            _inBlock.appendChild(_inputs[j]);
	}
		
	_block.appendChild(_inBlock);
		
	var _input = {
	    block : _block,
	    inputs : _inputs
	};
	    this.ui.inputs.push(_input);
	    this.ui.mainBlock.appendChild(_input.block);
    }	
    document.body.appendChild(this.ui.mainBlock);
...
}

With this added into our method, we can now (hopefully) support the creation of DOM inputs for float, vector2 and vector3 parameters. I have not tested any of it yet and am kind of writing all of this as we go, so bear with me if there are any bugs and you are reading a version that is not finalized/debugged. But as far as I can tell right now this should work. If we were to refresh the page you would not see any changes unless you look at the source. In order to see the changes we will need to provide some CSS. You can simply copy this next section and modify it however you want.

<style>
    html, body {
        overflow: hidden;
        width   : 100%;
        height  : 100%;
        margin  : 0;
        padding : 0;
        -webkit-box-sizing: border-box;
        -moz-box-sizing: border-box;
        box-sizing: border-box;
    }
		
    *, *:before, *:after {
        -webkit-box-sizing: inherit;
        -moz-box-sizing: inherit;
        box-sizing: inherit;
    }

    #renderCanvas {
        width   : 100%;
        height  : 100%;
        touch-action: none;
    }
		
    .ui-block{
        display:block;
        position:absolute;
        left:0;
        top:0;
        z-index:10001;
        background:rgba(150,150,150,0.5);
        width:240px;
        font-size:16px;
        font-family: Arial, Helvetica, sans-serif;
    }
		
    .ui-item{
        display:block;
        position:relative;
        padding:0.2em 0.5em;
    }
		
    .ui-in-block{
        display:inline-block;
        width:60%;
        white-space:nowrap;
    }
		
    .ui-in{
        display:inline-block;
        width:100%;
    }
		
    .ui-in-vec2{
        width:50%;
    }
		
   .ui-in-vec3{
        width:32.5%;
   }
		
</style>

Upon a refresh now, we should see our UI elements for the brickCounts Uniform on the top left of our page. Then we go back to our buildGUI method in order to script the responses to change events on the ui block.

...
document.body.appendChild(this.ui.mainBlock);
    
    var self = this;

    function updateShaderValue(id, value){
        var u = self.uniforms[id[0]];
        if(id.length>1){
            //id is [uniformName, component], e.g. ['brickCounts','x']
            u.value[id[1]] = parseFloat(value);
            //dispatch on the stored uniform type, not the component name
            if(u.type=='vec2'){
                (self.getShader()).setVector2(id[0], u.value);
            }else if(u.type=='vec3'){
                (self.getShader()).setVector3(id[0], u.value);
            }
        }else{
            u.value = parseFloat(value);
            (self.getShader()).setFloat(id[0], u.value);
        }
    }
	
//BINDINGS//	
    this.ui.mainBlock.addEventListener('change', (e)=>{
	var target = e.target;
	var id = target.getAttribute('id').split(':');		
	var value = target.value;
	updateShaderValue(id, value);		
    }, false);
	
}

Voilà, it is done… partially. Upon refreshing the page and changing one of the values in our inputs, we should instantly see the values in our shader being updated (affecting the output). Now to go back and add support for a few more parameters, like the colors and the mortar width. If we set everything up correctly we can now just edit our arguments and change the fragment shader slightly.

...
uniforms:{
	brickCounts : {
		type : 'vec2',
		value : new BABYLON.Vector2(6,12),
		min : new BABYLON.Vector2(1,1),
		step : new BABYLON.Vector2(1,1),
		hasControl : true
	},
	mortarSize : {
		type: 'float',
		value : 0.1,
		min: 0.0001,
		max: 0.9999,
		step: 0.0001,
		hasControl: true
	},
	brickColor : {
		type: 'vec3',
		value : new BABYLON.Vector3(0.8, 0.1, 0.1),
		min: new BABYLON.Vector3(0, 0, 0),
		max: new BABYLON.Vector3(1, 1, 1),
		step: new BABYLON.Vector3(0.001, 0.001, 0.001),
		hasControl: true
	},
	mortColor : {
		type: 'vec3',
		value : new BABYLON.Vector3(0.35, 0.35, 0.35),
		min: new BABYLON.Vector3(0, 0, 0),
		max: new BABYLON.Vector3(1, 1, 1),
		step: new BABYLON.Vector3(0.001, 0.001, 0.001),
		hasControl: true
	},
},
fx :
`precision highp float;
//Varyings
varying vec2 vUV;
varying vec2 tUV;

//Methods
float pulse(float a, float b, float v){
return step(a,v) - step(b,v);
}
float pulsate(float a, float b, float v, float x){
return pulse(a,b,mod(v,x)/x);
}
float gamma(float g, float v){
	return pow(v, 1./g);
}
float bias(float b, float v){
	return pow(v, log(b)/log(0.5));
}
float gain(float g, float v){
	if(v < 0.5){
	return bias(1.0-g, 2.0*v)/2.0;
	}else{
	return 1.0 - bias(1.0-g, 2.0 - 2.0*v)/2.0;
	}
}

/*----- UNIFORMS ------*/
uniform vec2 brickCounts;
uniform float mortarSize;
uniform vec3 brickColor;
uniform vec3 mortColor;

void main(void) {	
	
	vec2 brickSize = vec2(
		1.0/brickCounts.x,
		1.0/brickCounts.y
	);
	
	vec2 pos = vUV/brickSize;
	
	vec2 mortSize = 1.0-vec2(mortarSize*(brickCounts.x/brickCounts.y), mortarSize);
	
	pos += mortSize*0.5;	
	
	if(fract(pos.y * 0.5) > 0.5){
     	pos.x += 0.5;   
    }
	
	pos = fract(pos);	
	
	vec2 brickOrMort = step(pos, mortSize);
	
	vec3 color =  mix(mortColor, brickColor, brickOrMort.x * brickOrMort.y);

    gl_FragColor = vec4(color, 1.0);
}`...

Now that is procedural! Take a little bit of time to mess around with this and experiment with different values now that you can see the changes instantly. If you are having trouble getting to this point you can review and/or download the source here.

Exporting/Saving

All these parameters and options for bricks are cool and all… but now we should make the changes necessary to make the texture exportable, which is the easiest way to take this texture from mock-up to production. Eventually a good end goal would be to include these procedural processes in your project and have them compile at runtime, which, used correctly, could save tons of space when saving/serving the project to a client. But that is much, much later; for now let's focus on setting the texture's size and then saving it from the browser. Because the HTML canvas object is treated by the browser, for all intents and purposes, as an image, we can simply right click on the canvas and save it! After a couple quick changes to our SM object and a small change to the DOM structure, we can add one additional argument to set the size of the canvas to a specific unit.


//DOM CHANGES
...
<body>
<div id='output-block'>
<canvas id="renderCanvas"></canvas>
</div>
...

//SM OBJECT CHANGES
SM = function(args, scene){	
	...
	this.buildGUI();
	
	if(args.size){this.setSize(args.size);}
	
	return this;
}

SM.prototype = {
setSize : function(size){
	var canvas = this.scene._engine._gl.canvas;
	var pNode = canvas.parentNode;
	pNode.style.width = size.x+'px';
	pNode.style.height = size.y+'px';
	this.scene._engine.resize();
	this.buildOutput();		
},
...

//ADD ARGUMENT
sm = new SM(
{
size : new BABYLON.Vector2(512, 512),
uniforms:{...

The one thing we need to make sure of when we change the size of the canvas manually is to fire the engine's resize function, to get the gl context into the same dimensions. We then rebuild the output just to be safe. Now we have a useful brick wall generator that we can export textures from for later use. Here is the final source for this section and a live example of the generator we just created.

Continue to Section III

Procedural Investigations in webGL – Section I

Section I:
Sampling Space and Manipulations


Now that we have a basic development environment set up, it would be prudent to review different methods for sampling and manipulating the coordinate system that dictates the output of our procedural processes. We will basically be reviewing built-in GLSL functions that will help us control our sampling space.

What is sampling space? You can think of it as the coordinate system/space whose values we feed to our noise/procedural algorithms. This can be effectively anything we want, from a single value to an n-dimensional location… In most of our cases we will be using the vPosition or the vUV as our coordinate space, though there are special situations that may dictate you use a different system. You can review this concept starting on page 24 of [1], where they explain coordinate space with these points:

• The current space is the one in which shading calculations are normally done.
In most renderers, current space will turn out to be either camera space or
world space, but you shouldn’t depend on this.

• The world space is the coordinate system in which the overall layout of your
scene is defined. It is the starting point for all other spaces.

• The object space is the one in which the surface being shaded was defined. For
instance, if the shader is shading a sphere, the object space of the sphere is the
coordinate system that was in effect when the RiSphere call was made to create
the sphere. Note that an object made up of several surfaces all using the same
shader might have different object spaces for each of the surfaces if there are
geometric transformations between the surfaces.

• The shader space is the coordinate system that existed when the shader was
invoked (e.g., by an RiSurface call). This is a very useful space because it can be
attached to a user-defined collection of surfaces at an appropriate point in the
hierarchy of the geometric model so that all of the related surfaces share the
same shader space.

Which, if you ask me, is overkill on the explanation. It all boils down to what values you choose to reference when feeding our procedural algorithms. If we have a 3D noise, for example, we would most likely use vPosition, which is an xyz value for that pixel's location local to the mesh; if you used gl_FragCoord, I believe that would be global (do not quote me on this). We make some quick changes to our page, changing the fragment shader argument that we initialize the object with to something like this:

precision highp float;
//Varyings
varying vec2 vUV;
	
void main(void) {
    vec3 color = vec3(vUV.x, vUV.y, vUV.y);
    gl_FragColor = vec4(color, 1.0);
}

With everything in its place we can see now when we refresh the page, a gradient that should look like this:

What this is showing us is that our UV is set up correctly: the lower left corner is black where vUV.x & vUV.y == 0; white where they are both 1; red where x is 1 & y is 0; and finally cyan where y is 1 and x is 0. We are directly affecting the color with the UV values: our very first procedural (explicit) texture!

Modulate

Now that we have established our coordinate space, how can we manipulate it to do our bidding?
There is a collection of methods available to us in GLSL; let's take a look at the ones [1] mentions starting on page 27.

step


Description:
step generates a step function by comparing x to edge.
For element i of the return value, 0.0 is returned if x[i] < edge[i], and 1.0 is returned otherwise.

Example
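Since the live example may not be available here, a minimal fragment shader showing step might look like this (everything left of x = 0.5 comes out black, everything right of it white):

precision highp float;
varying vec2 vUV;
void main(void) {
	float v = step(0.5, vUV.x);	//0.0 while vUV.x < 0.5, then 1.0
	gl_FragColor = vec4(vec3(v), 1.0);
}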

We can also define a method that uses the step function to generate what is known as a pulse by doing the following:

float pulse(float a, float b, float v){
return step(a,v) - step(b,v);
}

This makes everything inside the range from a to b come up as 1 and anything outside as 0, giving us the ability to effectively create a rectangle over whatever range we decide.

Example
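A minimal sketch of the pulse in use, producing a white vertical band between x = 0.25 and x = 0.75:

precision highp float;
varying vec2 vUV;
float pulse(float a, float b, float v){
	return step(a,v) - step(b,v);
}
void main(void) {
	float v = pulse(0.25, 0.75, vUV.x);	//1.0 only inside the band
	gl_FragColor = vec4(vec3(v), 1.0);
}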


clamp


Description:
clamp returns the value of x constrained to the range minVal to maxVal. The returned value is computed as min(max(x, minVal), maxVal).

Example
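A minimal sketch of clamp on our UV gradient; values below 0.25 are pinned to 0.25 and values above 0.75 to 0.75, flattening both ends of the ramp:

precision highp float;
varying vec2 vUV;
void main(void) {
	float v = clamp(vUV.x, 0.25, 0.75);	//plateaus at the edges, linear between
	gl_FragColor = vec4(vec3(v), 1.0);
}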


The next method does not have much use unless you are using a coordinate system that has negative values. Normally for sampling coordinates you will want to work in a -1 to 1 range and not a 0 to 1 range, so let's adjust the default vertex shader to add a new varying variable with the UV transposed to this range.

...
vx:
`precision highp float;
//Attributes
attribute vec3 position;
attribute vec2 uv;	
// Uniforms
uniform mat4 worldViewProjection;
//Varyings
varying vec2 vUV;
varying vec2 tUV;

void main(void) {
    vec4 p = vec4( position, 1. );
    gl_Position = worldViewProjection * p;
    vUV = uv;
    tUV = uv*2.0-1.0;
}`,
...

abs


Description:
abs returns the absolute value of x.

Example
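With the tUV varying above in place, a minimal sketch of abs; the negative half of the range is mirrored, giving a V-shaped gradient that is white at both edges and black down the middle:

precision highp float;
varying vec2 tUV;	//the -1 to 1 varying added above
void main(void) {
	float v = abs(tUV.x);	//mirrors the negative half of the range
	gl_FragColor = vec4(vec3(v), 1.0);
}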


smoothstep


Description:
smoothstep performs smooth Hermite interpolation between 0 and 1 when edge0 < x < edge1. This is useful in cases where a threshold function with a smooth transition is desired. smoothstep is equivalent to:

genType t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
return t * t * (3.0 - 2.0 * t);

Results are undefined if edge0 ≥ edge1.

Example
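A minimal sketch of smoothstep; compared to the step example this eases from black to white between the two edges instead of switching instantly:

precision highp float;
varying vec2 vUV;
void main(void) {
	float v = smoothstep(0.25, 0.75, vUV.x);	//soft Hermite ramp between the edges
	gl_FragColor = vec4(vec3(v), 1.0);
}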


mod


Description:
mod returns the value of x modulo y. This is computed as x - y * floor(x/y).

Example
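A minimal sketch of mod, folding the 0-1 range into four repeating ramps:

precision highp float;
varying vec2 vUV;
void main(void) {
	float v = mod(vUV.x, 0.25) / 0.25;	//four repeating 0..1 ramps across the screen
	gl_FragColor = vec4(vec3(v), 1.0);
}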

With the mod method and the pulse function that we created, we can now build a "pulsate" function.

Example
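This is the same pulsate that shows up in the Section II fragment shader; tiling the pulse across the range gives repeating stripes:

precision highp float;
varying vec2 vUV;
float pulse(float a, float b, float v){
	return step(a,v) - step(b,v);
}
float pulsate(float a, float b, float v, float x){
	return pulse(a,b,mod(v,x)/x);	//repeat the pulse every x units of v
}
void main(void) {
	float v = pulsate(0.25, 0.75, vUV.x, 0.2);	//a stripe in every 0.2-wide tile
	gl_FragColor = vec4(vec3(v), 1.0);
}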



gamma


Description:
The zero and one end points of the interval are mapped to themselves. Other values are shifted upward toward one if gamma is greater than one, and shifted downward toward zero if gamma is between zero and one.[1]

Example
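The implementation carried in the Section II fragment shader:

float gamma(float g, float v){
	return pow(v, 1./g);	//g > 1 lifts values toward one, g < 1 pushes them toward zero
}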


bias


Description:
Perlin and Hoffert (1989) use a version of the gamma correction function that they call the bias function.[1]

Example
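Again as it appears in the Section II fragment shader; bias(b, 0.5) returns b, which is the handle the function gives you:

float bias(float b, float v){
	return pow(v, log(b)/log(0.5));	//remaps so that v = 0.5 lands on b
}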


gain


Description:
Regardless of the value of g, all gain functions return 0.5 when x is 0.5. Above and below 0.5, the gain function consists of two scaled-down bias curves forming an S-shaped curve. Figure 2.20 shows the shape of the gain function for different choices of g.[1]

Example
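And the gain from the Section II fragment shader, built from two scaled bias curves around 0.5:

float bias(float b, float v){
	return pow(v, log(b)/log(0.5));
}
float gain(float g, float v){
	if(v < 0.5){
		return bias(1.0-g, 2.0*v)/2.0;	//lower half: scaled-down bias curve
	}else{
		return 1.0 - bias(1.0-g, 2.0 - 2.0*v)/2.0;	//upper half: mirrored
	}
}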


There are quite a few more (sin, cos, tan, etc.), but we will cover those later; if you want to explore more now, check out: http://www.shaderific.com/glsl-functions/. The ones I have presented here should be enough to start making more dynamic sampling spaces. With all this at our disposal, what is something useful we could make? A pretty standard texture in the procedural world is a brick or checkerboard pattern, so let's start there.

Oh Bricks

Shamelessly, this is a reproduction of the bricks presented in [1] (page 39) with a few changes. Before creating anything, let's take a look at what identifiable elements we are trying to produce: a brick, of course, and its size in relation to the whole element; the mortar or padding around it; and its offset in relation to the other rows. To get started, let's define a few variables (in the fx fragment we are passing to the SM object) for the number of bricks we wish to see. This is different than our reference script, but I feel it is easier to understand, and we can derive all our other size numbers from these. Plus there is an added advantage: doing it this way, we can make sure the texture is repeatable on all sides.

...
#define XBRICKS 2.
#define YBRICKS 4.
#define MWIDTH 0.1

void main(void) {	
...

Super simple, right? We define the counts as floats for simplicity, because who likes working with integers and having to convert them every time you want to do quick maths, heh…. Now that we have the basic numbers to base everything off of, let's set up some colors and get our sampling space into scope.

...
void main(void) {	
    vec3 brickColor = vec3(1.,0.,0.);
    vec3 mortColor = vec3(0.55);
	
    vec2 brickSize = vec2(
    	1.0/XBRICKS,
    	1.0/YBRICKS
    );
	
    vec2 pos = vUV/brickSize;
	
    if(fract(pos.y * 0.5) > 0.5){
     	pos.x += 0.5;   
    }
	
    pos = fract(pos);
	
    float x = pos.x;
    float y = pos.y;
    vec3 color = vec3(x, y, y);
    gl_FragColor = vec4(color, 1.0);
}

We have set up our sampling space by first dividing our max coordinate unit by the brick counts. Then we divide the coordinate space we are using by the brick size. This gives us a coordinate space with values ranging from 0 to the number of bricks we asked for. By checking whether the fractional half of the vertical position is over 0.5 we can identify the alternating rows, which we then offset in x by half of the coordinate space. Then we take the fractional part of the position, because the only thing we care about in this range is the fractional section of the values, not the whole number. The mortar size we will take into account after the bricks are in place, so we can keep the padding around the bricks constant using some ratio calculations.

If you are following along and refresh your page now you should see an image similar to:

Now with this basic grid set up, we can take the position of our mortar around the bricks into consideration and start coloring everything. A quick way to figure this out is to define a vec2 with our mortar size, then do a quick step calculation on our coordinate space to see whether each point is brick or not. We then mix the brick and mortar colors together, with the mix value set to the product of the two components of the step result. The cool thing about the step multiplication is that it turns the mix value to 0 anytime either component falls outside the brick area.

...
void main(void) {	
    vec3 brickColor = vec3(1.,0.,0.);
    vec3 mortColor = vec3(0.55);
	
    vec2 brickSize = vec2(
    	1.0/XBRICKS,
    	1.0/YBRICKS
    );
	
    vec2 pos = vUV/brickSize;
    vec2 mortSize = vec2(MWIDTH);

    if(fract(pos.y * 0.5) > 0.5){
     	pos.x += 0.5;   
    }
	
    pos = fract(pos);

    vec2 brickOrMort = step(pos, mortSize);
	
    vec3 color =  mix(mortColor, brickColor, brickOrMort.x * brickOrMort.y);

    gl_FragColor = vec4(color, 1.0);
}

If we refresh now, we see it is close but no cigar… why is this? A quick hint would be that the mortar value must be off, and I know this is a crap explanation, but just by looking at it I knew the solution was to invert the value, so our mortSize line becomes:

...
    vec2 mortSize = 1.0-vec2(MWIDTH);
...

With a page reload we now see something like this:

Getting closer! The first thing we notice is that the mortar is thicker in its height than its width. This is super easy to fix by manipulating mortSize to reflect the same x:y ratio as the bricks, making our variable become:

...
    vec2 mortSize = 1.0-vec2(MWIDTH*(XBRICKS/YBRICKS), MWIDTH);
...

This will make our mortar lines keep the same padding around the bricks, and makes our procedural texture almost complete! The last thing I would like to add is an offset of the entire coordinate space, shifting both the rows and columns by half of the mortar size in order to "center" the repetition properties of this texture. It is an optional line and it is up to the developer to decide whether to use it!

...
    vec2 mortSize = 1.0-vec2(MWIDTH*(XBRICKS/YBRICKS), MWIDTH);
    pos += mortSize*0.5;
...

That's it for now! You have officially created your first real procedural texture, not just a gradient or a solid color. This shader can now be extended and made far more dynamic. If you are having trouble getting good results please reference or download the Example.

This concludes this section; in the next one we will discuss setting up controls and parameters for real-time manipulation of the texture to debug/test different values.

Continue to Section II

References

1. TEXTURING & MODELING – A Procedural Approach (third edition)

Procedural Investigations in webGL – Introduction

Preface


In modern times the need to generate robust procedural content has become more prevalent than ever. With advancements in CPU/GPU processing power, the ability to create dynamic content on the fly is more of an option than ever. What started out as a means to produce simple representations of natural processes has grown into a multifaceted field, ranging from pseudorandom procedural content to synthesized textures and models constructed from reference data. Whole worlds can be crafted from a single simple seed. Using methods that are often simplified from real-world physics and systems found in nature, a user can try to control the creation process to mold a certain result.

The main complication is the control factor, due to the inherent properties of the "random" or "noise" functions that are used to create the data samples. As the artist/developer it is our goal to understand how we can manipulate these seemingly uncontrollable processes to better suit our needs and produce content that is within the scope of expectations. We can attempt to create control by introducing sets of parameters that manipulate the underlying structure of our functions or filter the results.

Introduction


First off, let's get some things straight. I am in no way a math wizard, or even conventionally trained in programming, so all of the information presented here is based on my interpretations of advanced topics that I probably have no business explaining to someone else. Do not take any of the concepts I discuss as verbatim fact, but use them as a basis, if you have none, to try to obtain a level of understanding of your own. The main point of this article or tutorial (not sure what this would be… a research log?) is to document a layman's interpretation of the works of geniuses like Ken Perlin, Edwin Catmull, Steven Worley….

I recently got my hands on the third edition of the publication TEXTURING & MODELING – A Procedural Approach [1], which is a great resource, though somewhat dated in the languages it uses. I am going to review the concepts presented in this wonderful resource and tailor the script examples to work with webGL. Currently webGL 2.0 supports GLSL ES 3.00 methods and will be the focus of this article; if you have any questions about this please review the webGL specifications. I will also be using the BABYLON.JS library to handle the webGL environment, as opposed to having to worry about all the buffer binding and extra steps that we would otherwise need to take. There is also the assumption that if you are reading this you have a basic understanding of HTML, CSS and Javascript; otherwise this is not the tutorial for you.

Setting up the Environment


Before the creation of anything can happen we will need to set up an easy development environment. To do this we are going to create a basic HTML page, include the babylon library, make a few styling rules, and finally create a scene that will allow us to develop GLSL code to create and test different effects easily. Though we won't be ray-marching anytime soon, the set-up described by the legendary Iñigo Quilez in the presentation "Rendering Worlds with Two Triangles with ray-tracing on the GPU in 4096 bytes" [2] is the same set-up we will go with for outputting our initial tests. Later we will look at deploying the same effects on a 3D object, then start introducing lighting (I am dreading the lighting part). To save time please follow along with http://doc.babylonjs.com/ and follow the directions there to get the basic scene presented running.

Once we have our basic scene going, we are going to reorder and restructure some of the elements, plus drop unnecessary elements like the light object at this point. You can follow along here if you are unfamiliar with BJS; alternatively, if you just want to get started, skip this section and download this page.

Basic Scene

I assume you know how to create a standard web page with a header and body section like as follows:

<!DOCTYPE html>
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html" charset="utf-8"/>
    <title>Introduction - Environment Setup</title>
    <style>
        html, body {
            overflow: hidden;
            width   : 100%;
            height  : 100%;
            margin  : 0;
            padding : 0;
        }
    </style>
</head>
<body>
</body>
</html>

or you can copy and paste this into your IDE. Right away we get rid of overflow, padding and margin, and make the content full width/height, because for most purposes the scene we are working on will take up the whole screen. Then in our head section we need to include the reference to BJS.

<head>
<script src="https://cdn.babylonjs.com/babylon.js"></script>
...
</head>

This should effectively give us all the elements we need to start developing, we just need to create our initial scene in order to get the webGL context initialized and outputting. To do this in the body section of the page we create a canvas element and give it a unique identifier with some simple css rules then pass that canvas element over to BJS for the webGL initialization.

...
<style>
...
#renderCanvas{
width: 100%;
height: 100%;
touch-action: none;
}
</style>
...
<body>
<canvas id="renderCanvas"></canvas>
...

The touch-action rule is future-proofing in case the scene needs mobile touch support (which will most likely never come into application with what we are doing); the other rules make the canvas size inherit from its parent container. When we later initialize the scene, a function will set the canvas size from its innerWidth and innerHeight values as described here [3]. Luckily BJS handles all the resizing as long as we remember to bind the function to fire when the window is resized, which we will cover when we set up our scene function.

Now it's time to get the scene running. We do this inside a script element after the body is created. We also should wrap it in a DOMContentLoaded callback to prevent the creation of the scene from bogging down the page load. Then we create a function that initializes the very minimum elements BJS requires in a scene for it to compile (a camera).

...
<canvas id="renderCanvas"></canvas>
    <script>
        window.addEventListener('DOMContentLoaded', function(){ 
            var canvas = document.getElementById('renderCanvas');
            var engine = new BABYLON.Engine(canvas, true);
            var createScene = function(){
                var scene = new BABYLON.Scene(engine);
                var camera = new BABYLON.FreeCamera('camera1', new BABYLON.Vector3(0, 0,-1), scene);
                camera.setTarget(BABYLON.Vector3.Zero());
                camera.attachControl(canvas, false);

                return scene;
            }

            var scene = createScene();

            engine.runRenderLoop(function(){
                scene.render();
            });

	    window.addEventListener('resize', function(){
                engine.resize();
            });
        });
    </script>
</body>

Then inside the createScene function lets add one last line to set effectively the background color of the scene/canvas.

...
    scene.clearColor = new BABYLON.Color3(1,0,0);
    return scene;
}

If everything is set up correctly you can now load the page and should see an entirely red screen. If you are having trouble, review this and see where the differences are. What we have done here is create the engine and scene objects, start the render loop, and bind the resize callback on a window resize event. From here we have all the basic elements together to set up a test environment.

Getting Output

With the webGL context initialized and our scene running its render loop, it's time to create a method of outputting the data we will be creating. To simplify things at first we will create a single geometric plane, also known as a quad, that will always take up the whole context of our viewport, then create a simple GLSL shader to control its color. There are multiple ways to create the quad, but for learning purposes I think it is prudent to create a custom mesh function that creates and binds the buffers for the object manually instead of using a built-in BJS method for plane/ground creation. The reason we do it this way is that it gives us complete control over the data and an understanding of how BJS creates its geometry with its built-in methods.

First lets create the function and its constructors, so in the script area before out DOM Content Loaded event binding we make something like the following:

...
<script>

var createOuput = function(scene){
};

window.addEventListener('DOMContentLoaded', function(){ 
...
</script>

The most important argument of this function is the scene reference, so that we have access to it within the scope of the function. Alternatively you could keep the scene on the global context, but that creates vulnerabilities and is not advised. The other benefit of passing the scene as an argument is that when we start working with more advanced projects that use multiple scenes, we can easily reuse this function.

Now that we have the function declared we can work on its procedure and return. The method for creating custom geometry in BJS is as follows:

Create a blank Mesh Object

var createOuput = function(scene){
    var mesh = new BABYLON.Mesh('output', scene);

    return mesh;
};

Then in our createScene function we add these two lines:

...
var output = createOuput(scene);
console.log(output);
return scene;
}

If everything is correct when we reload this page now and check the dev console we should see:

BJS – [timestamp]: Babylon.js engine (v3.1.1) launched
[object]

Create/Bind Buffers

Now that there is a blank mesh to work with, we have to create the buffers that tell webGL where the positions of the vertices are, how our indices are organized, and what our UV values are. With those arrays/buffers (plus normals) applied to our blank mesh, we should be able to produce the geometry we will use as our output. Initially we will hard-code the size values, then go back and revise the function to adjust for viewport size.

...
var createOuput = function(scene){
	var mesh = new BABYLON.Mesh('output', scene);
	var vDat = new BABYLON.VertexData();
	vDat.positions = 
	[
	-0.5,  0.5, 0,//0
	 0.5,  0.5, 0,//1
	 0.5, -0.5, 0,//2
	-0.5, -0.5, 0 //3
	];
	vDat.uvs = 
	[
	 0.0,  1.0, //0
	 1.0,  1.0, //1
	 1.0,  0.0, //2
	 0.0,  0.0  //3
		];
	vDat.normals = 
	[
	0.0,  0.0, 1.0,//0
	0.0,  0.0, 1.0,//1
	0.0,  0.0, 1.0,//2
	0.0,  0.0, 1.0 //3
	];
	vDat.indices =
	[
	2,1,0,
	3,2,0		
	];
		
	vDat.applyToMesh(mesh);	

	return mesh;
};

If done correctly, when the page is refreshed there should be a large black section. The next step is to have the size of the mesh created dynamically instead of hard-coded, so it can work with a resize function. The solution behind this is not my own; you can see the discussion that led up to it here. Modifying createOuput to reflect the solution is very simple: we add a few lines to define our width and height values and then multiply the respective position values by them.

...
var c = scene.activeCamera;
var fov = c.fov;
var aspectRatio = scene._engine.getAspectRatio(c);
var d = c.position.length();
var h = 2 * d * Math.tan(fov / 2);
var w = h * aspectRatio;		
vDat.positions = 
[
w*-0.5, h*0.5, 0,//0
w*0.5,  h*0.5, 0,//1
w*0.5,  h*-0.5, 0,//2
w*-0.5, h*-0.5, 0 //3
];

Now when the page is refreshed it should be solid black. This is because our mesh now takes up the entire camera frustum and there is no light to make the mesh show up, hence it's black. A light, for our purposes right now, is of little use; later we will try to implement lighting. Another thing we will ignore for right now is the response to a resize for the output. Later, as we get more of our development environment set up, we will come back to this.

First Shader

Creating a blank screen is all fine and dandy, but not very handy… So now would be the time to set up our shaders, which will be the main program responsible for most of the procedural content methods we will be developing. Unfortunately webGL 2.0 does not handle geometry shaders, only vertex and fragment, limiting the GPU to texture creation or simulations, not models. For any geometric procedural process we will need to rely on the CPU.

The process for creating shaders to work with BABYLON is extremely easy: we simply construct some literal strings and store them in a DOM-accessible object, then have BJS work its magic with the bindings of all the buffers. You can read more about fragment and vertex shaders on the BJS website and through a great article written by BABYLON's original author on Building Shaders with webGL.

Let's take a look at what we are going to need to develop our first procedural texture. Firstly we need some sort of reference unit; for this we will use the UV of our mesh, transposed from a 0 to 1 range to a -1 to 1 range. Using the UV is advantageous when working with 2D content; if we add the third dimension into the process it becomes more relevant to sample the position as the reference point. With this idea in mind the basic shader program becomes similar to this:

...
var vx =
`precision highp float;
//Attributes
attribute vec3 position;
attribute vec2 uv;	
// Uniforms
uniform mat4 worldViewProjection;
//Varyings
varying vec2 vUV;

void main(void) {
    vec4 p = vec4( position, 1. );
    gl_Position = worldViewProjection * p;
    vUV = uv;	
}`;
	
var fx =
`precision highp float;
//Varyings
varying vec2 vUV;
	
void main(void) {
    vec3 color = vec3(1.,1.,1.);
    gl_FragColor = vec4(color, 1.0);
}`;	
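As an aside, the shader above just passes the uv through unchanged; the 0 to 1 → -1 to 1 transposition mentioned earlier is only a scale and shift (in GLSL it would be uv * 2.0 - 1.0). A quick JS check of that math:

var remap = function(u){ return u * 2 - 1; };
console.log(remap(0.0), remap(0.5), remap(1.0)); // -1 0 1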

Then we store these literal strings in the BJS ShadersStore object, which allows the library to construct and bind them through its methods. If we were doing this through raw webGL this would add a bunch of steps, but due to this amazing library most of the busy work is eliminated.

...
BABYLON.Effect.ShadersStore['basicVertexShader'] = vx;
BABYLON.Effect.ShadersStore['basicFragmentShader'] = fx;

Lastly we create a ShaderMaterial and assign the result to the material of our output mesh. For right now this is done inside the createScene function, after the createOutput call.

...
var shader = new BABYLON.ShaderMaterial("shader", scene, {
    vertex: "basic",
    fragment: "basic",
},{
    attributes: ["position", "normal", "uv"],
    uniforms: ["world", "worldView", "worldViewProjection", "view", "projection"]
});	
			
output.material = shader;

If everything is done right, when we refresh the page we should now see a fully white page! WOWZERS so amazing… red, black, then white… we are realllly cooking now -_-… Well, at least these are all the elements we will need to start making some procedural content. If you are having trouble at this point you can always reference here, or just download it if you are lazy.

Refining the Environment


At this point it would be smart to reorder our elements and create a container object that will hold all the important parameters and functions associated with whatever content we are trying to create. This way we can make sure the scope is self-contained and that we could have multiple instances of our environment on the same page without them conflicting with each other. Object prototyping is very useful for this, as we can construct the object and reference it later through whatever variable we assigned the result of the constructor to.

The Container Object

If you have never made a JS object then this might be a little strange; those familiar with prototyping and object scope should have no problem with this part. In order to organize our data and make this a valid development process, we have to create some sort of wrapper object, like so:

SM = function(args, scene){	
	this.scene = scene;
	args = args || {};
	this.uID = 'shader'+Date.now();	
	return this;
}

This has now added to the window scope a new constructor for our container object. To call it we simply write "new SM({}, scene);" wherever we have access to the scene variable. If we assign this to a variable, that variable will hold the instance of the object, with whatever variables were assigned to the "this" scope contained within that instance. With this object constructor in place we can look to extend it by prototyping some functions and variables into its scope. If you are unfamiliar with this please review the information presented here [4].
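For example, assuming a scene variable is already in scope, constructing and inspecting an instance looks like this:

var mySM = new SM({}, scene); // any variable name works
console.log(mySM.uID);        // something like "shader1512345678901"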

The first thing we will add to the prototype is the space for the shaders that our shaderManager (aka shaderMonkey, since I'm Pryme8 ^_^) will reference whenever it needs to rebuild and bind the program/shader.

...
SM.prototype = {
shaders:{
/*----TAB RESET FOR LITERALS-----*/	
vx:
`precision highp float;
//Attributes
attribute vec3 position;
attribute vec2 uv;	
// Uniforms
uniform mat4 worldViewProjection;
//Varyings
varying vec2 vUV;

void main(void) {
    vec4 p = vec4( position, 1. );
    gl_Position = worldViewProjection * p;
    vUV = uv;	
}`,
fx :
`precision highp float;
//Varyings
varying vec2 vUV;
	
void main(void) {
    vec3 color = vec3(1.,1.,1.);
    gl_FragColor = vec4(color, 1.0);
}`
}//End Shaders
}

Now that we have the shaders wrapped under the 'this' scope of the object, we can start migrating some of the elements from when we set up our environment to be contained inside the object as well. The main elements were the construction of the mesh and the storing and binding of the shader.

...
SM.prototype = {
storeShader : function(){
	BABYLON.Effect.ShadersStore[this.uID+'VertexShader'] = this.shaders.vx;
	BABYLON.Effect.ShadersStore[this.uID+'FragmentShader'] = this.shaders.fx;
},
shaders:{
...
}//End Shaders
}

This method simply sets the ShadersStore values that BJS references when it builds. After adding it to the object, it's simple to integrate it into the initialization of the object. We can also take this time to define a response to the user including custom arguments when they construct the object's instance, which will overwrite the default shaders that we hard-coded into the SM object.

SM = function(args, scene){	
	this.scene = scene;
	args = args || {};
	this.shaders.vx = args.vx || this.shaders.vx;
	this.shaders.fx = args.fx || this.shaders.fx;	
	this.uID = 'shader'+Date.now();
	this.shader = null;
	this.storeShader();
	return this;
}

As this object is created it checks the argument object for variables assigned to vx & fx respectively; if an argument is not present, it keeps the default shader. We set it up this way so that as we start making different scenes that use our SM object, we do not have to change any of the object's internal scripting but just use arguments or fire built-in methods to manipulate it.

Now we need to bind the shader to the webGL context so that it is accessible/usable. This process is fairly simple once we add another method to our object.

SM = function(args, scene){	
	...
	this.storeShader();
	this.buildShader();
	return this;
};

SM.prototype = {
buildShader : function(){
    var scene = this.scene;
    var uID = this.uID;
    var shader = new BABYLON.ShaderMaterial("shader", scene, {
        vertex: uID,
        fragment: uID,
    },{
        attributes: ["position", "normal", "uv"],
        uniforms: ["world", "worldView", "worldViewProjection", "view", "projection"]
    });

    if(this.shader){this.shader.dispose()}
    this.shader = shader;

    if(this.output){
        this.output.material = this.shader;
    }
},
storeShader : function(){
...
},
shaders:{
...
}//End Shaders
}

When our object is initialized now, it builds/binds the shader and assigns it to its this.shader variable, which is then accessible under the object's scope. It then checks if the object has an output mesh and, if it does, assigns the material. Each time this method is fired it checks to see if there is a shader already compiled, and if there is, it disposes of it to save on overhead. The last step is to migrate the createOutput function to become a method of our SM object, with simple modifications so it effectively does the same dispose-and-rebuild process as our buildShader method to conserve resources.

SM = function(args, scene){	
	...
	this.storeShader();
	this.buildShader();
	this.buildOutput();
	return this;
};

SM.prototype = {
buildOutput : function(){		
    if(this.output){this.output.dispose()}
    var scene = this.scene;
    var mesh = new BABYLON.Mesh('output', scene);
    var vDat = new BABYLON.VertexData();		
    var c = scene.activeCamera;
    var fov = c.fov;
    var aspectRatio = scene._engine.getAspectRatio(c);
    var d = c.position.length();
    var h = 2 * d * Math.tan(fov / 2);
    var w = h * aspectRatio;		
    vDat.positions = 
    [
    w*-0.5, h*0.5, 0,//0
    w*0.5,  h*0.5, 0,//1
    w*0.5,  h*-0.5, 0,//2
    w*-0.5, h*-0.5, 0 //3
    ];
    vDat.uvs = 
    [
    0.0,  1.0, //0
    1.0,  1.0, //1
    1.0,  0.0, //2
    0.0,  0.0  //3
    ];
    vDat.normals = 
    [
    0.0,  0.0, 1.0,//0
    0.0,  0.0, 1.0,//1
    0.0,  0.0, 1.0,//2
    0.0,  0.0, 1.0 //3
    ];
    vDat.indices =
    [
    2,1,0,
    3,2,0		
    ];
		
    vDat.applyToMesh(mesh);
    this.output = mesh;
    if(this.shader){
        this.output.material = this.shader;
    }		
},
buildShader : function(){
...
storeShader : function(){
...
},
shaders:{
...
}//End Shaders
}

That should effectively be it. If we make a couple modifications to our createScene function now, we can test the results.

...
var sm;
window.addEventListener('DOMContentLoaded', function(){
	...
   var createScene = function(){
        var scene = new BABYLON.Scene(engine);
        var camera = new BABYLON.FreeCamera('camera1', new BABYLON.Vector3(0, 0, -1), scene);
        camera.setTarget(BABYLON.Vector3.Zero());
	scene.clearColor = new BABYLON.Color3(1,0,0);			
			
/*----TAB RESET FOR LITERALS-----*/			
sm = new SM(
{
fx :
`precision highp float;
//Varyings
varying vec2 vUV;
	
void main(void) {
    vec3 color = vec3(0.,0.,1.);
    gl_FragColor = vec4(color, 1.0);
}`				
},scene);			
console.log(sm);
return scene;
}

If everything is 100% you should now see a fully blue screen when you refresh; otherwise please reference here. Now would be a good time to add the resize response as well, which is why we defined our sm var outside of the DOMContentLoaded callback. There are better ways to do this, but for our purposes it will do. Simply calling the buildOutput method of the SM object when the window is resized should handle things nicely.

...
window.addEventListener('resize', function(){
    engine.resize();
    sm.buildOutput();
});

Finally we have gotten to the point where we can start developing more things than a solid color. We will at some point need to create methods for binding samplers and uniforms on the shader, but we can tackle that later. Feel free to play around with this setup and break or add things; try to understand the scope of the object and how everything is associated. In the next section we will examine the principles of sampling space and how we can manipulate it.


References

1. TEXTURING & MODELING – A Procedural Approach (third edition)
2. Rendering Worlds with Two Triangles with raytracing on the GPU in 4096 bytes
3. https://webglfundamentals.org/webgl/lessons/webgl-resizing-the-canvas.html
4. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/prototype

Vector2.js – Vector Library

Just include this js file. Call a new vector with:


new vec2(x,y);

All methods are chainable (see the usage sketch after the method list).

copy(vec) - Copy values from one vec2 to another.
vec => vec2
return => vec

clone() - Make new vec2 from a vec2
return => new vec2

perp() - Get the perpendicular vector.
return => vec2

rotate(angle) - Rotate a vec by an angle in radians.
angle => float
return => vec2

reverse() - Reverse the Vector
return => vec2

normalize() - Normalize the Vector
return => vec2

add(input) - Add a vec2
input => vec2
return => vec2

subtract(input) - Subtract another vec2
input => vec2
return => vec2

scale(x,y) - Scale vec2 by X or X and Y
x=> float
y=> float || null
return => vec2

dot(input) - Dot product between two vectors;
input => vec2
return (this.x * input.x + this.y * input.y)

len2() - Length of Vector^2
return this.dot(this);

len() - Length of Vector
return => return Math.sqrt(this.len2());

project(axis) - Project a vector onto another.
axis => vec2
return => vec2

projectN(axis) - Project onto a vector of unit length.
axis => vec2
return => vec2

reflect(axis) - Reflect the vector across another vector.
axis => vec2
return => vec2

reflectN(axis) - Reflect on an Arbitrary Axis
axis => vec2
return => vec2

getValue(v) - Returns value of float or array,
v => ('x' || 0) || ('y' || 1) || null;
return => Float || Array(2);
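A hypothetical chained call, assuming the methods behave exactly as documented above:

var v = new vec2(3, 4)      // length 5
    .normalize()            // (0.6, 0.8)
    .scale(2)               // (1.2, 1.6)
    .add(new vec2(1, 0));   // (2.2, 1.6)
console.log(v.len());       // length of the final vector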

——————————————————–
Any questions, feel free to email me at Pryme8@gmail.com

CitiEz – Test 1

Citiez

Experiments in Procedural Streets


Preface

L-Systems are an ideal way to produce predictable growth, and have been used in recent years to generate entire cities. I have found that the process of using turtle logic is, just like its name, slow… and does not have very many options for the branches of the system to make intelligent decisions about their surroundings, maintain an object hierarchy that is traversable, or store any important variables. They are basically dumb and can only be given pseudo-intelligence. I started trying to work with the standard format for constructing a library of axioms and found that this was not a very robust way to make anything of value. I'm going to be honest, tons of the stuff I see guys do with L-Systems is crazy, and I have a great amount of respect for it, but I still wanna kinda do things my way, so yeah.

The basic idea is that, just like an L-System, I will construct an “alphabet”. These characters, though, are just a naming convention for embedded functional objects that can be constructed and then reference a rule with the same name, effectively making a dynamic and tailorable constructor class that is deployed at runtime. A basic component of the system (an axiom, or road) holds all the basic data needed to construct our road map; these variables include path information and some other general information. Each axiom contains a repo that references other axioms in the library to spawn when this branch is no longer valid. Each axiom works independently of the others and essentially has restrictions it tries to maintain; if all restrictions are out of limits then the thread terminates and sends a message to the main script to start the IO on another axiom. A sketch of this shape follows below.
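Something like this, with property names and defaults that are mine, not lifted from the actual CitiEz code:

var Axiom = function(name, origin, heading){
    this.name = name;        // naming convention that maps to a rule of the same name
    this.path = [origin];    // points this road has claimed so far
    this.heading = heading;  // primary travel direction: 'N', 'S', 'E', or 'W'
    this.repo = [];          // axioms to spawn once this branch is no longer valid
    this.limits = { populationClamp: 0.65 }; // restrictions the branch tries to maintain
};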

Right away I can see benefits to this system, as I can store street names, locations, elevations, spatial data, etc. Also, in later versions I will be looking to extend the survey area to a larger zone, and have each block be processed, with later optimization functions deciding the route and conditions. I see this being completely valid for the construction of real-time procedural content, as all the calculations can happen on a sub-thread with the IOs divided, and the detail slowly building up. This build-up could also enable options for LOD.

Test – 1

In my first tests I did not include elevation calculations; the roads only scan their local North, South, East, or West points in the survey. The population clamp is set to 0.65, or 255*0.65. The roads will stop if they hit an intersection or cannot travel their primary direction and one initial alternate direction that is established on the first turn. This is a very simple model, but the effect is immediately noticeable.

 





This is about 14 hours of development and conceptual work in, and I am interested to see how it progresses; I will be introducing DAS_NOISE and BJS into the scope here at some point. I am also drafting up the concept of doing away with a by-pxl method and making a vector tracing system that will be much more modular, with the ability to make decisions on whether it should attempt to link up with an intersection, cross a road, or how steep it should make a turn.

Texture Synth

Texture Synth
An experiment in Texture Generation
Author: Andrew V Butt Sr. – Pryme8@gmail.com
http://pryme8.github.io/

Introduction

Texture synthesis is a method in 2d procedural generation that is quickly becoming an interest for several developers. Its capability can easily be extended to 3d. This is an active investigation into how to teach a computer to output generated content from sample input. The general basis is discussed HERE

Abstract

I will attempt to make different methods of generation, both with and without my Das_Noise library, and will later include research on this topic in GLSL. The eventual goal is to create a robust texture synthesis library for javascript and use this product to aid in the deployment of TERIABLE.
The initial test will be loosely based on the documentation and will be my own spin on the approach. Later, with lessons learned, I will attempt to deploy more popular methods.
I am unsure of the outcome of these tests, but I will document both the approach and the results and provide an example.

Test 1

The first test was trying to not deploy a base noise to reference the sample texture against. I stopped working on this test after I started noticing shortcomings in the colors being produced.

Base Sample

A few terms that are used while using a texture synthesis method are:

  • Texton
  • Neighborhood
  • Vector
  • Noise

Constants in our Examples will be:

  • R – Reference Image
  • P – Base Sample Target Point
  • nP – Neighboring Target Point
  • N – New Texture Being Generated
  • Np – New Texture targetPoint
  • np – New Texture Neighboring targetPoint

 

In my first test, I decided the first step was to generate my textons from R, which I did not do a very good job of; but nonetheless it seems to work well enough to let me see the effect of this method. The second step was to impose R on the output canvas in 3 points in the top corner in order to give our generator something to reference. This can be done without outputting the reference, but it was a quick way to get it done.

From there I started sampling with a differently shaped texton that had the same number of samples as the ones generated previously. This new texton was not offset from around the point it was checking, but behind it, in a box of similar shape but offset, so as to back-reference what was already there. Then I found the closest texton from our first set and used its value to output to the canvas (a rough sketch of that matching step follows below).

I could not for the life of me figure out why the colors were so washed out, though this effect was able to duplicate the pattern fairly accurately. I was going to continue the process by cloning the new first area we created into the top and left edges of the canvas, and was going to repeat the sampling process used for making the first area. I did not complete this last step though, because I had already learned what I needed to from this example.

You can look at a live DEMO, but the script is very sloppy and basically it boils down to: I don't know what I was thinking on my original script.

This method is… slow, plus with the bad color transfer I do not think it is a valid method. Moving on…
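Here is that sketch (my own code, not the demo's script): each texton is reduced to a flat array of sampled channel values and matched by summed squared difference.

function closestTexton(target, textons){
    var best = null, bestDist = Infinity;
    for(var i = 0; i < textons.length; i++){
        var d = 0;
        for(var j = 0; j < target.length; j++){
            var diff = target[j] - textons[i].samples[j]; // flat arrays of r,g,b values
            d += diff * diff;
        }
        if(d < bestDist){ bestDist = d; best = textons[i]; }
    }
    return best;
}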

Test 1 – Enter Noise

So after my first night of hacking away at this, while trying to get some sleep, I thought of how to implement using noise to predict what color a cell will need to be.
I figured if I could have an accurate way to describe an underlying noise, and translate that to the neighborhood I was looking for, it could continue the pattern or a likeness of it on the output.
Initial tests on this looked kinda promising, with the first area of the noise rendering out to be pretty close to the input, but after that it's pure chaos. I think it may be with how I am measuring the differences.
What I need to do is some more reading to find out what other guys have done to solve this. The DEMO shows that this method could be possible, but quite a bit of refinement is needed.
The computation speed is also very slow and produces crappy output. I ran several different tests with a range of noises, and tried to output by pxl as well as by neighborhood, the results being:

In the last example I did not want to wait for it to finish, as it was going per pxl and using the texton's average value. The color problem from the first example is not a problem now, so maybe I need to reference both examples and make a combination of the two. Till then I think it's time to do some more reading, and write down some of the ideas from what I learned here to create a procedural dungeon (maybe).
 

To be continued…

DungeoNear?

Experiments in Procedural Level Creation


Introduction

After working on some texture synthesis, a method for creating dungeons and other content kinda just smacked me in the face. I have been itching for about two days now to get a chance to do this. The night I thought it up, the write-up went as follows:

A Zone (Z) is defined as an area of set units that is divided into a set number of Cells (C). For this deployment I will be dividing the Z into a 3 by 3 grid, labeling each cell as follows, starting from the top left: n10, n00, n01, m10, m00, m01, s10, s00, s01. Because I am splitting it into thirds, to keep things simple I will define the size of the zone always to be something divisible by 3; for general purposes I will always use a Z size of 60 by 60, making each cell 20^2 pxls. Because we will be averaging the black and white value of the cells, we want the zones to be as small as possible while still large enough to have defined details; the larger our zones, the larger the calculation overhead.
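In code form, the layout above works out to something like this (just a transcription of the description, not the project's source):

var zSize = 60, zDiv = 3;
var cellSize = zSize / zDiv; // 60 / 3 = 20 pxls square per cell
var labels = [
    'n10', 'n00', 'n01', // top row
    'm10', 'm00', 'm01', // middle row
    's10', 's00', 's01'  // bottom row
];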

Once my Zone method and size are established and I have defined the cells for it, I have to calculate every possible state of the Zone and output that to a human-readable jpg. This is achieved by looping through combination sets of the cells, creating a single array of all states, then using that information to generate a canvas with the correct cells displayed in black and white for their on or off state.

After the human-readable jpg is produced, I reload our now-created Zone map into the program and associate each zone's location information on the map, plus its cell state information, into a referenceable and searchable array or object.
At this point I can start choosing my method for a base noise. Because each Zone will only be 60 by 60, I believe my Worley2D noise from my Das_Noise library will work just fine; if there is a calculation lag on the generation of the zones due to the noise, I will see about moving to a modified SimpleX. Starting from the top left of the visible stage, we calculate the values for the base noise by passing it to our zone object and averaging each cell's black/white ratio due to the noise, rounding to 0 or 1, effectively converting the noise to a Zone similar to the ones I generated earlier. We then loop through the map object and find which zone matches the new noise zone the closest.

The point of first creating a human-readable image, instead of just having the noise be manipulated, is that after we get a look for the base layout that we want, an artist can use the human-readable image as a template to draw a secondary reference image that has the same dimensions as our reference map. I could then load image data into the map object from the secondary reference image and output the fancy tiles instead of just black and white. This process could also be extended to use secondary noise calculations to establish and simulate different biomes and altitudes, changing which tile map is referenced.

This is all theory but it sounds about right so I’m gonna give it a shot.

The Reference Map

Diving right in, I think the smartest thing to do will be to create my first reference map, or the human-readable map I described earlier. The first day I thought of this I tried to make it by hand in Illustrator, and got about 32 combinations in before I realized that was dumb and it was time to make canvas go to work.

First we need to calculate the combinations of the cells and make something that we can use to output a physical reference map. What I mean by combinations is every possible on/off state of the nine cells; since each cell is either 0 or 1, that is 2^9 = 512 total states, like counting in binary. There are lots of ways to do this, but I will try to keep it simple.

This script to make it happen is as follows:

dungeon = function(args){
    args = args || {};
    args.zSize = args.zSize || 60;
    args.zDiv = args.zDiv || 3;
    this.zSize= args.zSize;
    this.zDiv = args.zDiv;
};

dungeon.prototype._createRefMap = function(){
    var cells = [];
    var pCount = this.zDiv * this.zDiv;
    
    function perm(s,c){
        if (c == 0) {
        cells.push(s);
        return;
        }
        perm(s+'0', c-1);
        perm(s+'1', c-1);
    }
    
    perm('',pCount);
    
    // move the last state ('111111111', all cells on) to the front of the array
    var last = cells.splice(cells.length-1,1);
    cells.unshift(last+'');
    console.log(cells);    
};

Just creating a new dungeon and then calling the prototype method now outputs all of the states, for a total of 512; as an interesting side note, this could be looked at as every possible combination of a binary set of 9. Looking at the structure, I already know that the two most common zones I am shooting for will be all states on and all states off, so I think it is best to take the last record (all on) and move it to the front of the array to save on calculation time once we start looping through our state array.
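Hypothetical usage of what we have so far:

var d = new dungeon({});
d._createRefMap(); // logs all 512 (2^9) state strings, with the all-on zone moved to the front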

Zone Object

Now it is time to make a Zone object. This will be the basis for our mapping of the noise; it makes an object that we can put in an array, and compiles the states of the cells as a searchable string. After that we will look at making a readable image.

dungeon.Zone = function(size, div, state){
    this.size = size;
    this.div = div;
    this.searchString = state;
    this.state = state.split('');
    this.cells = [];
    for(var i=0; i < div*div; i++){
        this.cells.push(0);
    }
    for(var i=0; i < state.length; i++){
        // each character of the state string maps directly to that cell's on/off value
        this.cells[i] = parseInt(state[i], 10);
    }
    return this;    
};
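As a quick check of the constructor, a zone with only its top row of cells switched on would look like this:

var z = new dungeon.Zone(60, 3, '111000000');
console.log(z.cells); // [1, 1, 1, 0, 0, 0, 0, 0, 0]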

I then modified my pre-existing script to the following:

...
    var map = [];
    for(var i = 0; i < cells.length; i++){
        map.push(new dungeon.Zone(this.zSize, this.zDiv, cells[i]));    
    }
    this.map = map;    
    console.log(map);

This gives us an array on the main dungeon object that contains a set of Zones, each with a searchable string for referencing later. I now need to create a new function to compile the physical map and set values for where each zone object is on the output map. This step is only necessary so that, at a later time, an artist can create a secondary reference map; if I just wanted black and white pxls to display I could effectively skip this step, but that is not the final product I want.

I also went ahead and allocated the memory for each of the zone objects to hold image data as well, even though I'm just using the map image and not an artistic tile image, due to the fact that 512 tiles is quite a bit of content to come up with just for an example. Using this function I generate my reference map, which I will use both as a way to look up and store tiles and their properties, and to output a canvas with the tiles on it to make a human-readable map.

dungeon.prototype._calculateMap = function(){
    var map = this.map;
    var cvas = document.createElement('canvas');
    var ctx = cvas.getContext('2d');
    
    var X = 0, Y = 0;
    var cellSize = this.zSize/this.zDiv;
    
    cvas.width = 20*this.zSize;
    cvas.height = Math.ceil(map.length/20)*this.zSize;
    
    for(var i = 0; i < map.length; i++){
        var x = 0, y = 0;
        for(var j = 0; j < map[i].cells.length; j++){
            if(map[i].cells[j] == 1){
                ctx.fillStyle = "#FFF";
            }else{
                ctx.fillStyle = "#000";
            }
            ctx.fillRect(x+X, y+Y, cellSize, cellSize);
            x += cellSize;
            if(x > this.zSize-cellSize){
                y += cellSize;
                x = 0;
            }
        };
        
        ctx.strokeStyle = "rgba(255,0,0,0.2)";    
        ctx.strokeRect(X,Y,this.zSize,this.zSize);
        
        var imgData = ctx.getImageData(X,Y,this.zSize,this.zSize);
        
        map[i].imgData = imgData;
        map[i].x = X;
        map[i].y = Y;
                
        X+=this.zSize;
        if(X > cvas.width-this.zSize){
            Y+=this.zSize;
            X=0;
        }
    };
};

Now it’s time to start generating a noise and see if we can kick this thing into gear and output a dungeon-like structure. Later I will research making the ability for you to draw on the base noise and see the overlay tiles update accordingly; this would be cool for later development I think, but is something that is down the road a little bit.

Also CLICK HERE for an Example of the Reference Map

Enter Das_Noise

Ok, so now the next step will be to generate a base noise map to start sampling, outputting our map's imageData in the correct areas, and seeing what kind of output I can get. I'm assuming this should go off without much of a hitch, and with a well set up noise the output will structure itself to resemble a dungeon right off the bat (I hope).

I want to use a good-sized Chebyshev-style Worley noise to start, because I believe this will have a good look to it once overlaid, and will guarantee that most if not all of the rooms connect. If you are not familiar with my Das_Noise library you can check it out here: http://pryme8.github.io/Das_Noise

To test the noise I am going to output it on a 600 by 600 pxl canvas until I get something acceptable. When I go to use it, I will not have to render the noise to any sort of output, but rather just check its values at certain locations, then parse that however is needed to see which cells are active or not in that zone.
Already, looking at this noise, we can visualize what the dungeon will look like if the calculations have been set up correctly. The next step is to identify what each zone on the noise matches up to on our reference map; to see this in action, click the link below to do one zone at a time on our canvas to the left.

Generate Zone

*UPDATE – I went ahead and added a basic tile map to reference, to show how that would work… you can look at the code to see how I did that, but after seeing it deployed I have three options: rework the tilemap to be cleaner and work a little better; make some sort of comparison script to see what the other tiles next to it are, and if there is a flat edge, have capped variations to use; or make everything procedural… I think, given the fact it took me two and a half hours to make 512 tiles, I'm going to go with the last option here at some point.

Ohhh yeah, that works! Ok so I think I will wrap it up on this, but first here is a look at how I ID the zone of the noise.

Here is an example of the same process, with the noise of the same seed, but set to Simple2 and a scale of 100.

dungeon.prototype._idNoise = function(x,y,noise){
    if(typeof noise ==='undefined'){
    noise = this.noise;    
    }
    var cellSize = this.zSize/this.zDiv;
    
    var string = '';
    var self = this;
    var ctx = (document.getElementById('noise-canvas')).getContext('2d');
    ctx.fillStyle = "red";

    var cX = 0;
    var cY = 0;
    
    for(var i=0; i<this.zDiv*this.zDiv; i++){
        var t = 0;

        for(var pY = 0; pY < cellSize; pY++){
            for(var pX = 0; pX < cellSize; pX++){
                t+=noise.getValue({x:(pX+(this.zSize*x)+(cellSize*cX)),
                                   y:(pY+(this.zSize*y)+(cellSize*cY))});
                                    
            }
        }
        
        t/=(cellSize*cellSize);
        if(t < 0.45){
            string += "0";
        }else{
            string += "1";
        }
        cX++;
        if(cX > this.zDiv-1){
            cX = 0;
            cY++;
        }
    }

    for(var i=0; i<this.map.length; i++){
        if(this.map[i].searchString == string){
            return this.map[i];
        }
    };
};

Conclusion…

This was all literally done in one day, intermittently while I cleaned the house… so yeah, I think this is a valid and good approach for what I want to achieve. I will have to experiment with different noise types, styles, and scales, and then come up with a nice tileset for it (I will prolly jack RPG Maker resources for now). I think once this is deployed a little more the possibilities will be extensive.

I will be posting a simple Canvas Game based on this principle at some point!

Resources and References: None… I just made this crap up… if you have any questions, email Pryme8@gmail.com.