# Section IV: Getting Noisy in Here

Finally, the part I have been wanting to get to! One of the power players in the procedural world is a family of methods called noise. Noise is a random (in our case pseudorandom) distribution of values over a range. Normally these values range from -1 to 1, but other ranges are possible. We use these predictably random functions to control our methods. The simplest, yet least useful, is a white noise function, which some of you should be familiar with: picture your old analog TV set to a blank channel, or the scene from the movie Poltergeist.

There has been quite a bit of advancement in the generation of random values, notably the work of Steven Worley and Ken Perlin. We can use a combination of their methods to achieve some really interesting results. The two main types of noise we will be covering are lattice-based and gradient-based methods.

#### Lattice Noise/Value Noise

There is a little bit of confusion in the procedural world over what counts as value-based noise and what counts as gradient noise. Case in point: the article we will be referencing calls the process we are about to review Perlin noise, when it is actually value noise.

Value noise is a series of n-dimensional points at a set frequency, each holding a value; we then interpolate between the points to get our output. There are multiple ways to interpolate the values, and most do it smoothly, but to help you understand the concept let's temporarily use linear interpolation.

Take, for example, a 1D noise with our lattice set at every integer value and a linear interpolation; we get a graph similar to this: If we were to sample any point between the lattice points, we would get a value between the values of the closest points. On this 1D lattice that means the two closest points; on a 2D lattice it would be 4, and on a 3D lattice it would be 8 (for 2D/3D you can sample more, but these are the minimum neighborhoods). So if we were to sample from this 1D noise at the coordinate x=1.5 we would end up with a value of 0.55 (unless my math sucks).
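
To make the linear case concrete, here is a rough JavaScript sketch (the lattice values here are made up for illustration, not taken from the graph):

```javascript
// Hypothetical lattice values stored at integer coordinates 0..3.
const lattice = [0.2, 0.9, -0.4, 0.6];

// Sample between the two nearest lattice points with linear interpolation.
function sampleLinear(x) {
  const i = Math.floor(x); // index of the lattice point to the left
  const f = x - i;         // fractional distance toward the right point
  return lattice[i] * (1 - f) + lattice[i + 1] * f;
}

// At x = 1.5 we land exactly halfway between lattice[1] and lattice[2]:
console.log(sampleLinear(1.5)); // → 0.25
```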

If we use this process and mix together value noises of increasing frequency and decreasing amplitude, we can make some interesting results. Another parameter we can introduce for control is persistence, which also has some confusion around its "official" definition. The term was first used by Mandelbrot while developing fractal systems. The simplest way to describe it is as the weighted effect of each octave's values on the sum of the noise functions.
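
As a quick illustration of persistence (a sketch, not tied to any particular noise): octave i is commonly weighted by persistence^i, so a lower persistence makes the high-frequency octaves fade out of the sum faster:

```javascript
// Amplitude of each octave when octave i is weighted by persistence^i.
function octaveAmplitudes(persistence, octaves) {
  return Array.from({ length: octaves }, (_, i) => Math.pow(persistence, i));
}

// With persistence 0.5, every octave contributes half as much as the last:
console.log(octaveAmplitudes(0.5, 4)); // → [ 1, 0.5, 0.25, 0.125 ]
```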

### Random Function

In order to get our noise functions rolling, we first need to create a random number generation method. Here is a section of pseudo-code presented in Hugo Elias's article:

```
function IntNoise(32-bit integer: x)
x = (x << 13) ^ x;
return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0);
end IntNoise function
```

Right away one should notice that this is very close to the code that we used for the white noise generator above. There are many ways to generate a random number but we will convert this one initially and then test other methods to see which are more effective.
The GLSL version of this code would be:

```
float rValueInt(int x){
x = (x >> 13) ^ x;
int xx = (x * (x * x * 60493 + 19990303) + 1376312589) & 0x7fffffff;
return 1.0 - (float(xx) / 1073741824.0);
}
```

This function requires our input value to be an integer (hence making it a lattice); we then use bit-wise operators as explained in the GLSL specs. I have no clue what is really happening with the bit-wise stuff other than that we are shifting the number around… sorry, I don't know more. The numbers used are taken right from Hugo's example and are prime numbers. You can change these numbers all you want; just make sure you keep them prime in order to prevent noticeable graphic artifacts. From here we just need to decide how we want to interpolate the values between points.
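
If it helps to poke at the hash outside of a shader, here is a JavaScript port of the same function. Math.imul is used so that every multiply stays in 32-bit integer space, as it does implicitly in GLSL; a plain `*` would silently lose the low bits once the intermediate result exceeds 2^53:

```javascript
// JavaScript port of the GLSL rValueInt hash above.
function rValueInt(x) {
  x = (x >> 13) ^ x;
  // x * (x * x * 60493 + 19990303), kept in 32-bit space with Math.imul:
  const inner = Math.imul(Math.imul(x, x), 60493) + 19990303;
  const xx = (Math.imul(x, inner) + 1376312589) & 0x7fffffff;
  return 1.0 - xx / 1073741824.0;
}

// The same input always hashes to the same value, and results stay in [-1, 1]:
console.log(rValueInt(7) === rValueInt(7)); // → true
```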

### It's All Up to Interpolation…

The simplest way to interpolate is linearly, as we did above, which is represented by this equation:

Mock code, followed by the GLSL version:

```
function Linear_Interpolate(a, b, x)
return  a*(1-x) + b*x
end of function
```
```
float linearInterp(float a, float b, float x){
return a*(1.-x) + b*x;
}
```

This is ok if we want sharp elements, but if we want smoother transitions we can use a cosine interpolation.

```
function Cosine_Interpolate(a, b, x)
ft = x * 3.1415927
f = (1 - cos(ft)) * .5
return  a*(1-f) + b*f
end of function
```
```
float cosInterp(float a, float b, float x){
float ft = x*3.14159265;
float f = (1.0 - cos(ft)) * .5;
return a*(1.-f) + b*f;
}
```

There is also cubic interpolation, but we will skip that for now and focus on linear and cosine. The last thing we will want to do, in order to make our noise transition more smoothly, is introduce a (you guessed it) smoothing function. This function is optional and can be expanded to however many dimensions you need. The smoothing helps reduce the appearance of blocky artifacts when rendering out to 2+ dimensions. Here is a snippet of pseudo-code from the same article:

```
//1-dimensional Smooth Noise
function Noise(x)
...
end function

function SmoothNoise_1D(x)
return Noise(x)/2  +  Noise(x-1)/4  +  Noise(x+1)/4
end function
```
```
//2-dimensional Smooth Noise
function Noise(x, y)
...
end function

function SmoothNoise_2D(x, y)

corners = ( Noise(x-1, y-1)+Noise(x+1, y-1)+Noise(x-1, y+1)+Noise(x+1, y+1) ) / 16
sides   = ( Noise(x-1, y)  +Noise(x+1, y)  +Noise(x, y-1)  +Noise(x, y+1) ) /  8
center  =  Noise(x, y) / 4

return corners + sides + center

end function
```
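
The 2D case is just a 3×3 weighted average whose weights (4 corners at 1/16, 4 sides at 1/8, center at 1/4) sum to 1, so smoothing never changes the overall range. A JavaScript sketch, with the per-point noise passed in as a function:

```javascript
// 2D smoothing: a 3x3 weighted average over the lattice neighborhood.
// `noise` is any per-point value function (the integer hash, for example).
function smoothNoise2D(noise, x, y) {
  const corners = (noise(x - 1, y - 1) + noise(x + 1, y - 1) +
                   noise(x - 1, y + 1) + noise(x + 1, y + 1)) / 16;
  const sides = (noise(x - 1, y) + noise(x + 1, y) +
                 noise(x, y - 1) + noise(x, y + 1)) / 8;
  const center = noise(x, y) / 4;
  return corners + sides + center;
}

// The weights sum to 1 (4/16 + 4/8 + 1/4), so a constant field passes
// through unchanged - smoothing averages values, it never rescales them.
console.log(smoothNoise2D(() => 0.5, 3, 7)); // → 0.5
```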

Later in this section we will look at the differences between smoothed and non-smoothed noise. Now we need to start taking all these elements and putting them together. Here is the pseudo-code as presented in the reference article:

```
function Noise1(integer x)
x = (x<<13) ^ x;
return ( 1.0 - ( (x * (x * x * 15731 + 789221) + 1376312589) & 7fffffff) / 1073741824.0);
end function

function SmoothedNoise_1(float x)
return Noise(x)/2  +  Noise(x-1)/4  +  Noise(x+1)/4
end function

function InterpolatedNoise_1(float x)

integer_X    = int(x)
fractional_X = x - integer_X

v1 = SmoothedNoise1(integer_X)
v2 = SmoothedNoise1(integer_X + 1)

return Interpolate(v1 , v2 , fractional_X)

end function

function PerlinNoise_1D(float x)

total = 0
p = persistence
n = Number_Of_Octaves - 1

loop i from 0 to n

frequency = 2^i
amplitude = p^i

total = total + InterpolatedNoise_i(x * frequency) * amplitude

end of i loop

return total

end function
```
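
Before the GLSL refactor, it can help to see the four pseudocode functions composed end to end. Here is a rough JavaScript rendition, a sketch rather than the article's exact code (cosine interpolation stands in for Interpolate, and Math.imul keeps the hash in 32-bit integer space):

```javascript
// Integer hash from the pseudocode above (Hugo Elias's constants).
function noise1(x) {
  x = (x << 13) ^ x;
  const n = (Math.imul(x, Math.imul(Math.imul(x, x), 15731) + 789221) + 1376312589) & 0x7fffffff;
  return 1.0 - n / 1073741824.0;
}

// Neighborhood average: 1/2 the point itself, 1/4 each neighbor.
function smoothedNoise1(x) {
  return noise1(x) / 2 + noise1(x - 1) / 4 + noise1(x + 1) / 4;
}

// Cosine interpolation stands in for Interpolate here.
function interpolate(a, b, x) {
  const f = (1 - Math.cos(x * Math.PI)) * 0.5;
  return a * (1 - f) + b * f;
}

function interpolatedNoise1(x) {
  const ix = Math.floor(x);
  return interpolate(smoothedNoise1(ix), smoothedNoise1(ix + 1), x - ix);
}

// Octave sum: frequency doubles (2^i) while amplitude decays (persistence^i).
function valueNoise1D(x, persistence, octaves) {
  let total = 0;
  for (let i = 0; i < octaves; i++) {
    total += interpolatedNoise1(x * Math.pow(2, i)) * Math.pow(persistence, i);
  }
  return total;
}
```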

I decided to make a few structural changes to this for the GLSL conversion. In the above example they use four functions to make it happen; we are going to do it with three. I think it will also be relevant to add uniforms (or defines, depending on your preference) to control things like octaves, persistence and a smoothness toggle. I will also be using strictly the cosine interpolation; this is a personal choice and any method can be used. So, following the structure of our SM object, we set up the shader arguments as follows:

```
sm = new SM(
{
size : new BABYLON.Vector2(512, 512),
hasTime : false,
//timeFactor : 0.1,
uniforms:{
octaves:{
type : 'float',
value : 4,
min : 0,
max : 124,
step: 1,
hasControl : true
},
persistence:{
type : 'float',
value : 0.5,
min : 0.001,
max : 1.0,
step: 0.001,
hasControl : true
},
smoothed:{
type : 'float',
value : 1.0,
min : 0,
max : 1.0,
step: 1,
hasControl : true
},
zoom:{
type : 'float',
value : 1,
min : 0.001,
step: 0.1,
hasControl : true
},
offset:{
type : 'vec2',
value : new BABYLON.Vector2(0, 0),
step: new BABYLON.Vector2(1, 1),
hasControl : true
},
},
fx :
`precision highp float;
//Varyings
varying vec2 vUV;
varying vec2 tUV;
/*----- UNIFORMS ------*/
uniform float time;
uniform vec2 tSize;
uniform float octaves;
uniform float persistence;
uniform float smoothed;
uniform float zoom;
uniform vec2 offset;
```

This will set up all of our uniforms and their defaults. You could do these as defines, but if you want the ability to manipulate them on the fly they should be uniforms. I also added a uniform (tSize) that we will never manipulate directly; instead we let the size of the canvas/texture set this value when the shader is compiled. With this we need to make some changes to our SM object to accommodate the new uniform.

```
SM = function(args, scene){
...
this.buildGUI();

this.setSize(args.size);

return this;
}

SM.prototype = {
...
setSize : function(size){
var canvas = this.scene._engine._gl.canvas;
size = size || new BABYLON.Vector2(canvas.width, canvas.height);
this._size = size;
var pNode = canvas.parentNode;
pNode.style.width = size.x+'px';
pNode.style.height = size.y+'px';
this.scene._engine.resize();
this.buildOutput();
}
...
```

Now the shader will always know what the size of the texture is. Because we have made this an inherent feature of the SM object, we need to add the tSize uniform to the default fragment shader that is built in, so that if the default shader gets bound it will still validate and compile. From here we need to include our random number function, our interpolation function and the noise function itself. I am going to include a lerp function as well, in case you want to use linear interpolation instead of cosine.

```
//Methods
//1D Random Value from INT;
float rValueInt(int x){
x = (x >> 13) ^ x;
int xx = (x * (x * x * 60493 + 19990303) + 1376312589) & 0x7fffffff;
return 1. - (float(xx) / 1073741824.0);
}
//float Lerp
float linearInterp(float a, float b, float x){
return a*(1.-x) + b*x;
}
//float Cosine_Interp
float cosInterp(float a, float b, float x){
float ft = x*3.14159265;
float f = (1.0 - cos(ft)) * .5;
return a*(1.-f) + b*f;
}
//1d Lattice Noise
float valueNoise1D(float x, float persistence, float octaves, float smoothed){
float t = 0.0;
float p = persistence;
float frequency, amplitude, tt, v1, v2, fx;
int ix;
for(float i=1.0; i<=octaves; i++){
frequency = i*2.0;
amplitude = p*i;

ix = int(x*frequency);
fx = fract(x*frequency);

if(smoothed > 0.0){
v1 = rValueInt(ix)/2.0 + rValueInt(ix-1)/4.0 + rValueInt(ix+1)/4.0;
v2 = rValueInt(ix+1)/2.0 + rValueInt(ix)/4.0 + rValueInt(ix+2)/4.0;
tt = cosInterp(v1, v2, fx);
}else{
tt = cosInterp(rValueInt(ix), rValueInt(ix+1), fx);
}

t+= tt*amplitude;
}
t/=octaves;
return t;
}
```

So now we have a GLSL function to generate some 1D noise! It has four arguments; the last one, smoothed, can be omitted if you please, but I like having it. It is a fairly simple function, and most of our noise functions will have a similar structure. We could also put in a #define to control the interpolation, but for simplicity I am just using the cosine method. From here it is as simple as setting up the main function of our shader program to use this noise. To do this we decide on our sampling space and pass that to the x value of the noise function, along with the other uniforms we have already set up.

```void main(void) {
vec2 tPos = ((vUV*tSize)+offset)/zoom;
float v = (valueNoise1D(tPos.x, persistence, octaves, smoothed)+1.0)/2.0;
vec3 color = vec3(mix(vec3(0.0), vec3(1.0), v));
gl_FragColor = vec4(color, 1.0);
}
```

Super easy, right!? Our sampling space is the 0-1 UV multiplied by the size of the texture, which effectively shifts us to texel space. The choice to use vUV instead of tUV was because, for some reason, the negative values were creating an artifact as seen here: I could try to troubleshoot that, but it's easier to just use the 0-1 UV range and move on.

Next we add an offset, which is also in texel space; you could do it as a percentage of the texture's size, but that is user preference. We then divide the whole thing by a zoom value. That gives us a nice sampling space, which we then pass to our noise function with our other arguments. Because the noise function returns a number between -1 and 1, we shift it to a 0-1 range by adding one and then dividing the sum by two.
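
One sharp edge worth calling out: in GLSL (and most C-family languages) division binds tighter than addition, so writing the remap without parentheses computes something else entirely:

```javascript
const v = -1.0; // noise output at the bottom of its [-1, 1] range

// Without parentheses, division happens first: this is v + 0.5.
console.log(v + 1.0 / 2.0);   // → -0.5

// With parentheses, the whole sum is halved, mapping [-1, 1] onto [0, 1].
console.log((v + 1.0) / 2.0); // → 0
```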

### A New Dimension

One-dimensional noise is cool and has its uses, but we need more room for activities. Before we develop more noises and look at different methods of generation, it is important to understand how to extend the noise to n dimensions. For all practical purposes the calculations stay the same; you just have to make a couple more of them. It would probably be smart to add a support function for smoothing the values of the interpolation now that we are working with larger dimensions. The main modifications to the function will be changing some of the variables from floats and integers to vectors of the same type. The last function to add is a random number generator that takes two dimensions into consideration.

```
//2D Random Value from INT vec2;
float rValueInt(ivec2 p){
int x = p.x, y=p.y;
int n = x+y*57;
n = (n >> 13) ^ n;
int nn = (n * (n * n * 60493 + 19990303) + 1376312589) & 0x7fffffff;
return 1. - (float(nn) / 1073741824.0);
}

float smoothed2dVN(ivec2 pos){
return (( rValueInt(pos+ivec2(-1))+ rValueInt(pos+ivec2(1, -1))+rValueInt(pos+ivec2(-1, 1))+rValueInt(pos+ivec2(1, 1)) ) / 16.) + //corners
(( rValueInt(pos+ivec2(-1, 0)) + rValueInt(pos+ivec2(1, 0)) + rValueInt(pos+ivec2(0, -1)) + rValueInt(pos+ivec2(0,1)) ) / 8.) + //sides
(rValueInt(pos) / 4.);
}

//2d Lattice Noise
float valueNoise(vec2 pos, float persistence, float octaves, float smoothed){
float t = 0.0;
float p = persistence;
float frequency, amplitude, tt, v1, v2, v3, v4;
vec2 fpos;
ivec2 ipos;
for(float i=1.0; i<=octaves; i++){
frequency = i*2.0;
amplitude = p*i;

ipos = ivec2(int(pos.x*frequency), int(pos.y*frequency));
fpos =  vec2(fract(pos.x*frequency), fract(pos.y*frequency));

if(smoothed > 0.0){
ivec2 oPos = ipos;
v1 = smoothed2dVN(oPos);
oPos = ipos+ivec2(1, 0);
v2 = smoothed2dVN(oPos);
oPos = ipos+ivec2(0, 1);
v3 = smoothed2dVN(oPos);
oPos = ipos+ivec2(1, 1);
v4 = smoothed2dVN(oPos);

float i1 = cosInterp(v1, v2, fpos.x);
float i2 = cosInterp(v3, v4, fpos.x);
tt = cosInterp(i1, i2, fpos.y);

}else{
float i1 = cosInterp(rValueInt(ipos), rValueInt(ipos+ivec2(1,0)), fpos.x);
float i2 = cosInterp(rValueInt(ipos+ivec2(0,1)), rValueInt(ipos+ivec2(1,1)), fpos.x);

tt = cosInterp(i1, i2, fpos.y);
}

t+= tt*amplitude;
}
t/=octaves;
return t;
}
```

There we have it. There are definitely some problems with this method that could be fixed if we took some time to refine it: artifacts as the noise crosses from a positive to a negative coordinate range (more apparent the further you zoom in), and noticeable circular patterns the closer to 0,0 we get. In order to fix this quickly and essentially 'ignore' the problem, we just add a large offset to the noise initially, skewing our coordinates far away from the artifacts.

As a challenge, see if you can change the interpolation function to be cubic. Read the section on it here.

You can also see a live example of the 2d Lattice Noise here.

# Section III: Advanced Spaces, Time and Polar

With our new SM object put together, we now have the ability to start assembling a collection of generators and other GLSL functions to create more dynamic content. If we go back to our reference book, starting on page 46 it reviews some interesting methods that we will recreate now.

### Star of My Eye

For practice, a great project is to create a star shape. At first glance one might think it would be tough to generate something like this, but it becomes straightforward once we shift our sampling space into polar coordinates through cos/sin (sinusoidal) calculations. Again, my version will be a variation of a segment of script meant for Renderman. I will show the RSL (Renderman Shader Language) version from the book next to my GLSL version, then review the differences. I would recommend going through line by line and trying to recreate this on your own.

```
surface star(
uniform float Ka = 1;
uniform float Kd = 1;
uniform color starcolor = color (1.0000,0.5161,0.0000);
uniform float npoints = 5;
uniform float sctr = 0.5;
uniform float tctr = 0.5;
){
point Nf = normalize(faceforward(N, I));
color Ct;
float ss, tt, angle, r, a, in_out;
uniform float rmin = 0.07, rmax = 0.2;
uniform float starangle = 2*PI/npoints;
uniform point p0 = rmax*(cos(0),sin(0), 0);
uniform point p1 = rmin*(cos(starangle/2),sin(starangle/2),0);
uniform point d0 = p1 - p0; point d1;
ss = s - sctr; tt=t- tctr;
angle = atan(ss, tt) + PI;
r = sqrt(ss*ss + tt*tt);
a = mod(angle, starangle)/starangle;
if (a >= 0.5) a = 1 - a;
d1 = r*(cos(a), sin(a),0) - p0;
in_out = step(0, zcomp(d0^d1) );
Ct = mix(Cs, starcolor, in_out);
/* diffuse (“matte”) shading model */
Oi = Os;
Ci = Os * Ct * (Ka * ambient() + Kd * diffuse(Nf));
}
```
```
precision highp float;
//Varyings
varying vec2 vUV;
varying vec2 tUV;
//Methods
/*----- UNIFORMS ------*/
#define PI 3.14159265359
uniform vec3 starColor;
uniform float nPoints;
uniform float rmin;
uniform float rmax;
uniform float aaValue;
uniform vec3 bgColor;

void main(void) {
vec2 offsetFix = vec2(0.5);
float ss, tt, angle, r, a;
vec3 color = bgColor;
float starAngle = 2.*PI/nPoints;
vec3 p0 = rmax*vec3(cos(0.),sin(0.), 0.);
vec3 p1 = rmin*vec3(cos(starAngle/2.),sin(starAngle/2.), 0.);
vec3 d0 = p1 - p0;
vec3 d1;

ss = vUV.x - offsetFix.x; tt = (1.0 - vUV.y) - offsetFix.y;
angle = atan(ss, tt) + PI;
r = sqrt(ss*ss + tt*tt);
a = mod(angle, starAngle)/starAngle;

if (a >= 0.5){a = 1.0 - a;}

d1 = r*vec3(cos(a), sin(a), 0.) - p0;

float in_out = smoothstep(0., aaValue, cross(d0 , d1).z);

color = mix(color, starColor, in_out);

gl_FragColor = vec4(color, 1.0);
}
```

Some of the values in the RSL version are irrelevant to us, like Ka, Kd, Nf, Oi, Ci and any other variables associated with a lighting model. Things that are important to us are the number of points the star will have, its size limits, and our colors. Let's go through the GLSL version line by line and understand what is going on.

First we have our precision mode, which we keep at highp since we want our floats as accurate as possible. The varying section is pretty standard; we could remove tUV as it is not used, but we will keep it since you may want to sample in the -1 to 1 range instead of 0 to 1 in some instances. No additional methods need to be defined. The uniforms section includes a definition for PI, because we are going to be working in polar space and using calculations dependent on circular/spherical values. GLSL does not define this value inherently, so it is up to us to make sure we have a value we can reference; the cool part is that we can experiment with funky values and see how they affect our calculations (PI = 4, for example).

The main function of the program starts with us setting a value for the offset fix of the star, which we will use later to move the sample into a scope that 'centers' the star. Then we define a few floats that will be used later; they could be defined at the time of execution, this just makes things more readable. I can't even lie, I do not understand this math in the slightest… I understand some of it, but for the most part I just translated it from RSL to GLSL. Even the explanation in the book is kinda crap. If any math buffs are reading this and want to do a breakdown of wtf is going on with these numbers, send me an email and I will love you long time.

At the very least, here is a snippet of the summary from the book:

> To test whether (r,a) is inside the star, the shader finds the vectors d0 from the
> tip of the star point to the rmin vertex and d1 from the tip of the star point to the
> sample point. Now we use a handy trick from vector algebra. The cross product of
> two vectors is perpendicular to the plane containing the vectors, but there are two
> directions in which it could point. If the plane of the two vectors is the (x, y) plane,
> the cross product will point along the positive z-axis or along the negative z-axis.
> The direction in which it points is determined by whether the first vector is to the left
> or to the right of the second vector. So we can use the direction of the cross product
> to decide which side of the star edge d0 the sample point is on.

Yeah… what that says… It's pretty much a distance function. Anyway, one improvement I included was the ability to anti-alias the edges. This is very simple: we just swap the step calculation for a smoothstep with a decently low tolerance value.
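
The gist of that trick can be checked numerically. For two vectors lying in the (x, y) plane, only the z component of their cross product is non-zero, and its sign says which side of the first vector the second one is on (the vectors below are just illustrations, not the star's actual d0/d1):

```javascript
// z component of the cross product of two vectors in the (x, y) plane.
function crossZ(a, b) {
  return a.x * b.y - a.y * b.x;
}

const d0 = { x: 1, y: 0 }; // stand-in "edge" vector pointing along +x
console.log(crossZ(d0, { x: 0, y: 1 }));  // → 1  (sample to the left of d0)
console.log(crossZ(d0, { x: 0, y: -1 })); // → -1 (sample to the right of d0)
```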

There are other ways to go about this but for now this will do. Mess around with this a little bit see what you can figure out. For a live example you can go here.

### Head in the Clouds & Introducing Time

Another common procedural process is the creation of 2D/3D clouds. There are more solutions for this than I could count, but a very simple implementation is to layer multiple sinusoidal functions at different frequencies. I think now would also be a good time to implement some time shifting in our shader; we will use this time shift to animate the clouds. Once again, let's take a look at the RSL version provided in the book and compare it to my GLSL solution.

```
#define NTERMS 5
surface cloudplane(
color cloudcolor = color (1,1,1);
)
{
color Ct;
point Psh;
float i, amplitude, f;
float x, fx, xfreq, xphase;
float y, fy, yfreq, yphase;
uniform float offset = 0.5;
uniform float xoffset = 13;
uniform float yoffset = 96;
x = xcomp(Psh) + xoffset;
y = ycomp(Psh) + yoffset;
xphase = 0.9; /* arbitrary */
yphase = 0.7; /* arbitrary */
xfreq = 2 * PI * 0.023;
yfreq = 2 * PI * 0.021;
amplitude = 0.3;
f = 0;
for (i = 0; i < NTERMS; i += 1) {
fx = amplitude * (offset + cos(xfreq * (x + xphase)));
fy = amplitude * (offset + cos(yfreq * (y + yphase)));
f += fx * fy;
xphase = PI/2 * 0.9 * cos (yfreq * y);
yphase = PI/2 * 1.1 * cos (xfreq * x);
xfreq *= 1.9+i* 0.1; /* approximately 2 */
yfreq *= 2.2-i* 0.08; /* approximately 2 */
amplitude *= 0.707;
}
f = clamp(f, 0, 1);
Ct = mix(Cs, cloudcolor, f);
Oi = Os;
Ci = Os * Ct;
}
```
```
precision highp float;

uniform float time;
//Varyings
varying vec2 vUV;
varying vec2 tUV;

//Methods

/*----- UNIFORMS ------*/
#define PI 3.14159265359
uniform vec3 cloudColor;
uniform vec3 bgColor;

uniform float zoom;
uniform float octaves;
uniform float amplitude;

uniform vec2 offsets;

void main(void) {
float f = 0.0;
vec2 phase = vec2(0.9*time, 0.7);
vec2 freq = vec2(2.0*PI*0.023, 2.0*PI*0.021);

float offset = 0.5;
vec2 pos = vec2(vUV.x+offsets.x, vUV.y+offsets.y);

float scale = 1.0/zoom;

pos.x = pos.x*scale + offset + time;
pos.y = pos.y*scale + offset - sin(time*0.32);

float amp = amplitude;

for(float i = 0.0; i < octaves; i++){
float fx = amp * (offset + cos(freq.x * (pos.x + phase.x)));
float fy = amp * (offset + cos(freq.y * (pos.y + phase.y)));
f += fx * fy;
phase.x = PI/2.0 * 0.9 * cos(freq.y * pos.y);
phase.y = PI/2.0 * 1.1 * cos(freq.x * pos.x);
amp *= 0.602;
freq.x *= 1.9 + i * .01;
freq.y *= 2.2 - i * 0.08;
}

f = clamp(f, 0., 1.);
vec3 color = mix(bgColor, cloudColor, f);

gl_FragColor = vec4(color, 1.0);
}
```

This is a very specific form of procedural generation that relies on a method called spectral synthesis. The process is rooted in Fourier analysis, which states that functions can be represented as a sum of sinusoidal terms. We sample these terms at different frequencies and phases to generate a result. The main struggle with this method is preventing tiling or noticeable patterns, which ruin the effect. This implementation is quite limited as it relies on a number of "magic numbers" and is not as customizable as more modern solutions using noise algorithms.
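
Stripped of the cloud-specific phase shuffling, one-dimensional spectral synthesis boils down to the following (a sketch; the base frequency and 0.707 decay are borrowed from the RSL example above):

```javascript
// One-dimensional spectral synthesis: sum cosine terms whose frequency
// doubles each octave while the amplitude decays by ~1/sqrt(2).
function spectral(x, terms) {
  let f = 0;
  let freq = 2 * Math.PI * 0.023; // base frequency from the RSL example
  let amp = 0.3;
  for (let i = 0; i < terms; i++) {
    f += amp * (0.5 + Math.cos(freq * x));
    freq *= 2;    // next octave: double the frequency
    amp *= 0.707; // decay the amplitude (roughly halves the power)
  }
  return f;
}

// Deterministic: the same sample point always yields the same value.
console.log(spectral(3.2, 5) === spectral(3.2, 5)); // → true
```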

The major difference with the GLSL version that I have introduced here is the animation aspect. We achieve this first by making some modifications to our SM object to accommodate.

```
SM = function(args, scene){
...

this.hasTime = args.hasTime || false;
this.timeFactor = args.timeFactor || 1;
this._time = 0;

...
};

SM.prototype = {
setTime : function(delta){
this._time += delta*this.timeFactor;
}
},

...

engine.runRenderLoop(function(){
scene.render();
if(sm.hasTime){
var d = scene.getAnimationRatio();
sm.setTime(d);
}
});
...
```

Then we add the arguments to when we call our new SM object.

```
...
sm = new SM(
{
size : new BABYLON.Vector2(512, 512),
hasTime : true,
timeFactor : 0.1,
uniforms:{
...
```

We could simply add a value to the time variable each frame, but in order to sync it between different clients we use the BJS method scene.getAnimationRatio(). This should keep the shaders' time coordinates at the same value if they started at the same time but run at different frame rates. Mess around with this generator and try different things out, just to get more comfortable with what is going on.
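
Here is a minimal sketch of the time bookkeeping described above (the SM fields mirror the earlier snippet; plain delta values stand in for getAnimationRatio):

```javascript
// Each frame, accumulate the engine's animation ratio scaled by timeFactor.
// Faster clients get smaller ratios and slower clients get larger ones, so
// accumulated time stays comparable across different frame rates.
const sm = {
  timeFactor: 0.1,
  _time: 0,
  setTime(delta) {
    this._time += delta * this.timeFactor;
  },
};

// Four frames at ratio 1 (the baseline frame rate):
[1, 1, 1, 1].forEach(d => sm.setTime(d));
console.log(sm._time.toFixed(1)); // → "0.4"
```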

For a live example you can go here.


# Section II: Uniforms and UI

With the ability to create the likeness of a brick wall, we can now start adding some controls that will allow testing of various parameter values in real time. There are a multitude of ways to handle this, the simplest being HTML DOM elements. If you are feeling froggy you could attempt to use the BABYLON.GUI system, which is GPU accelerated. The first step will be to extend our SM object to be able to add controls quickly.

### A Uniform Argument

Right away we go to where the SM object is constructed, go to the argument object, and add a new variable for the uniforms. It is here we will define each uniform's name, type, value, and any constraints that will be used later with the UI.

```
...
sm = new SM(
{
uniforms:{
brickCounts : {
type : 'vec2',
value : new BABYLON.Vector2(6,12),
min : new BABYLON.Vector2(1,1),
step : new BABYLON.Vector2(1,1),
hasControl : true
}
},
fx :...
```

Then we need to give the SM object instructions on what to do with this new argument.

```
SM = function(args, scene){
...
this.uniforms = args.uniforms || {};
...
return this;
}
```

Now that the argument is stored on the object, we need to modify some of the object's methods to accommodate the new uniforms. The function most affected by this change is the buildShader method. We need to make sure that when we bind our shader we include the new uniforms and then set their default values.

```
...
...
var _uniforms = ["world", "worldView", "worldViewProjection", "view", "projection"];
_uniforms =  _uniforms.concat(this.getUniformsArray());

vertex: uID,
fragment: uID,
},{
attributes: ["position", "normal", "uv"],
uniforms: _uniforms
});
...
```

Then to make these changes work, we need to define a method for grabbing the array of uniform names that are assigned. We could simply call Object.keys(this.uniforms) every time we wanted that array of names, but that is a little ugly and redundant.

```
...
SM.prototype = {
getUniformsArray : function(){
var keys = Object.keys(this.uniforms);
return keys;
},
buildOutput :...
```

Before we go too much further, it would be prudent to modify the fragment shader being passed in the arguments to accommodate this new uniform; otherwise, when we try to set the default value the shader will not compile. We also have no need for #define XBRICKS and #define YBRICKS, as the new uniform effectively replaces these variables.

```
...
fx :
`precision highp float;
//Varyings
...
//Methods
...
/*----- UNIFORMS ------*/
uniform vec2 brickCounts;

#define MWIDTH 0.1

void main(void) {
vec3 brickColor = vec3(1.,0.,0.);
vec3 mortColor = vec3(0.55);

vec2 brickSize = vec2(
1.0/brickCounts.x,
1.0/brickCounts.y
);

vec2 pos = vUV/brickSize;

vec2 mortSize = 1.0-vec2(MWIDTH*(brickCounts.x/brickCounts.y), MWIDTH);

pos += mortSize*0.5;

if(fract(pos.y * 0.5) > 0.5){
pos.x += 0.5;
}

pos = fract(pos);

vec2 brickOrMort = step(pos, mortSize);

vec3 color =  mix(mortColor, brickColor, brickOrMort.x * brickOrMort.y);

gl_FragColor = vec4(color, 1.0);
}
...
```

If you are following along and were to refresh the page now, you would most likely see a solid grey page. This is because the shader binds with no problem (there should not be any errors), but with the brick counts set to 0 the math fails. We solve this by doing a little more work on the SM object to have it set the default values of the uniforms after the shader is bound.

```
...
SM.prototype = {
getUniformsArray : ...
},
setUniformDefaults : function(){
var shader = this.shader;
var keys = this.getUniformsArray();
for(var i=0; i<keys.length; i++){
var u = this.uniforms[keys[i]];
var type = u.type;
var v = u.value;
shader[this.type2Method(type)](keys[i], v);
}
},
type2Method : function(type){
var m;
switch(type){
case 'float': m ='setFloat'; break;
case 'vec2': m ='setVector2'; break;
case 'vec3': m ='setVector3'; break;
}
return m;
},
buildOutput :...
},
...
this.setUniformDefaults();
...
}
```

This may look a little intimidating, but it's really not. First we get the key values of the uniforms (the names). Then we iterate through these keys and grab the default value and the type. Once we have the type, we look up the associated method that BJS has for setting that uniform on the shader. In this situation the line "shader[this.type2Method(type)](keys[i], v);" essentially becomes shader.setVector2('brickCounts', new BABYLON.Vector2(#,#)). If everything is correct, when we refresh the page we should see whatever number of bricks we set as the default values in the constructor's arguments. Feel free to change these numbers and refresh the page to verify everything is working. You can look HERE for reference or to download this step.
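
The dispatch that paragraph describes is plain bracket access on the shader object. Here it is in isolation, with a stub standing in for BABYLON.ShaderMaterial (the stub and its recorded calls are purely illustrative):

```javascript
// The uniform type decides which setter method name we call on the shader.
function type2Method(type) {
  switch (type) {
    case 'float': return 'setFloat';
    case 'vec2':  return 'setVector2';
    case 'vec3':  return 'setVector3';
  }
}

// Stub shader standing in for BABYLON.ShaderMaterial, just to show the
// bracket-access dispatch: shader[type2Method(type)](name, value).
const calls = [];
const shader = {
  setFloat:   (n, v) => calls.push(['setFloat', n, v]),
  setVector2: (n, v) => calls.push(['setVector2', n, v]),
};

shader[type2Method('vec2')]('brickCounts', { x: 6, y: 12 });
console.log(calls[0][0]); // → "setVector2"
```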

With everything lined up and working, it's now time to get the UI elements constructed. Eventually you might want to develop your own user interface components, but the process I am about to show you should cover most cases. For simplicity of understanding, I am going to write out some sections of code that repeat with little variation; normally you would want a function to handle these repeated sections, but it will be easier to understand initially if we do it longhand. The creation of the UI can easily be expanded upon in the future, but to get started we create another method on our SM object, then call it after the creation of the output on initialization. Now would also be a good time to define a quick support method to return the current "this.shader".

```
SM = function(args, scene){
...
this.buildOutput();

this.buildGUI();

return this;
}

SM.prototype = {
...
},
buildGUI : function(){
this.ui = {
mainBlock : document.createElement('div'),
inputs : [],
};

var keys = this.getUniformsArray();

},
buildOutput:...
```

The purpose of this method will be to iterate through the SM object’s uniform object keys, create all the appropriate DOM elements, append them and then set a function up to respond to change events. So we set up a new container object for the ui elements, then grab the uniform keys with our getUniformArray method. Once we have our keys to iterate through we proceed to parse the uniforms object data.

```
...
var keys = this.getUniformsArray();
for(var i=0; i<keys.length; i++){
var u = this.uniforms[keys[i]];
if(!u.hasControl){continue;}

var _block = document.createElement('div');

var _title = document.createElement('span');
_title.innerHTML = keys[i]+":";
_block.appendChild(_title);

var _inBlock = document.createElement('span');
var _inputs = [];
var _in;

if(u.type == 'float'){
_in = document.createElement('input');
_in.setAttribute('type', 'number');
_in.setAttribute('id', keys[i]);
_in.value = u.value;
if(u.min !== undefined){
_in.setAttribute('min', u.min);
}
if(u.max !== undefined){
_in.setAttribute('max', u.max);
}
if(u.step !== undefined){
_in.setAttribute('step', u.step);
}
_inputs.push(_in);
}

if(u.type == 'vec2' || u.type == 'vec3'){
_in = document.createElement('input');
_in.setAttribute('type', 'number');
_in.setAttribute('id', keys[i]+":x");
_in.value = u.value.x;
if(u.min){
_in.setAttribute('min', u.min.x);
}
if(u.max){
_in.setAttribute('max', u.max.x);
}
if(u.step){
_in.setAttribute('step', u.step.x);
}
_inputs.push(_in);
_in = document.createElement('input');
_in.setAttribute('type', 'number');
_in.setAttribute('id', keys[i]+":y");
_in.value = u.value.y;
if(u.min){
_in.setAttribute('min', u.min.y);
}
if(u.max){
_in.setAttribute('max', u.max.y);
}
if(u.step){
_in.setAttribute('step', u.step.y);
}
_inputs.push(_in);
}
if(u.type == 'vec3'){
_in = document.createElement('input');
_in.setAttribute('type', 'number');
_in.setAttribute('id', keys[i]+":z");
_in.value = u.value.z;
if(u.min){
_in.setAttribute('min', u.min.z);
}
if(u.max){
_in.setAttribute('max', u.max.z);
}
if(u.step){
_in.setAttribute('step', u.step.z);
}
_inputs.push(_in);
}

for(var j=0; j<_inputs.length; j++){
//apply the CSS classes defined shortly (.ui-in, .ui-in-vec2, .ui-in-vec3)
_inputs[j].className = 'ui-in' + (u.type == 'float' ? '' : ' ui-in-'+u.type);
_inBlock.appendChild(_inputs[j]);
}

_block.appendChild(_inBlock);

var _input = {
block : _block,
inputs : _inputs
};
this.ui.inputs.push(_input);
this.ui.mainBlock.appendChild(_input.block);
}
document.body.appendChild(this.ui.mainBlock);
...
}```

With this added into our method, we can now (hopefully) support the creation of DOM inputs for float, vec2 and vec3 parameters. I have not tested any of it yet and am kind of writing all of this as we go, so bear with me if there are any bugs and you are reading a version that is not finalized/debugged. But as far as I can tell right now this should work. If we were to refresh the page you would not see any changes, unless you look at the source. In order to see the changes we will need to provide some CSS. You can simply copy this next section and modify it however you want.

```<style>
html, body {
overflow: hidden;
width   : 100%;
height  : 100%;
margin  : 0;
-webkit-box-sizing: border-box;
-moz-box-sizing: border-box;
box-sizing: border-box;
}

*, *:before, *:after {
-webkit-box-sizing: inherit;
-moz-box-sizing: inherit;
box-sizing: inherit;
}

#renderCanvas {
width   : 100%;
height  : 100%;
touch-action: none;
}

.ui-block{
display:block;
position:absolute;
left:0;
top:0;
z-index:10001;
background:rgba(150,150,150,0.5);
width:240px;
font-size:16px;
font-family: Arial, Helvetica, sans-serif;
}

.ui-item{
display:block;
position:relative;
}

.ui-in-block{
display:inline-block;
width:60%;
white-space:nowrap;
}

.ui-in{
display:inline-block;
width:100%;
}

.ui-in-vec2{
width:50%;
}

.ui-in-vec3{
width:32.5%;
}

</style>```

Upon a refresh now, we should see our UI elements for the brickCounts uniform on the top left of our page. Then we go back to our buildGUI method in order to script the responses to change events on the UI block.

```...
document.body.appendChild(this.ui.mainBlock);

var self = this;

//BINDINGS//
this.ui.mainBlock.addEventListener('change', function(e){
var target = e.target;
var id = target.getAttribute('id').split(':');
var value = target.value;

if(id.length > 1){
//vec2/vec3 input ids look like "name:x", "name:y", "name:z"
self.uniforms[id[0]].value[id[1]] = parseFloat(value);
}else{
//float inputs use the uniform name directly
self.uniforms[id[0]].value = parseFloat(value);
}
}, false);

}
```

Voilà, it is done… partially. Upon refreshing the page and then changing one of the values in our inputs, we should instantly see the values in our shader being updated (affecting the output). Now to go back and add support for a few more parameters like the colors and the mortar width. If we set up everything correctly we can now just edit our arguments and change the fragment shader slightly.

```...
uniforms:{
brickCounts : {
type : 'vec2',
value : new BABYLON.Vector2(6,12),
min : new BABYLON.Vector2(1,1),
step : new BABYLON.Vector2(1,1),
hasControl : true
},
mortarSize : {
type: 'float',
value : 0.1,
min: 0.0001,
max: 0.9999,
step: 0.0001,
hasControl: true
},
brickColor : {
type: 'vec3',
value : new BABYLON.Vector3(0.8, 0.1, 0.1),
min: new BABYLON.Vector3(0, 0, 0),
max: new BABYLON.Vector3(1, 1, 1),
step: new BABYLON.Vector3(0.001, 0.001, 0.001),
hasControl: true
},
mortColor : {
type: 'vec3',
value : new BABYLON.Vector3(0.35, 0.35, 0.35),
min: new BABYLON.Vector3(0, 0, 0),
max: new BABYLON.Vector3(1, 1, 1),
step: new BABYLON.Vector3(0.001, 0.001, 0.001),
hasControl: true
},
},
fx :
`precision highp float;
//Varyings
varying vec2 vUV;
varying vec2 tUV;

//Methods
float pulse(float a, float b, float v){
return step(a,v) - step(b,v);
}
float pulsate(float a, float b, float v, float x){
return pulse(a,b,mod(v,x)/x);
}
float gamma(float g, float v){
return pow(v, 1./g);
}
float bias(float b, float v){
return pow(v, log(b)/log(0.5));
}
float gain(float g, float v){
if(v < 0.5){
return bias(1.0-g, 2.0*v)/2.0;
}else{
return 1.0 - bias(1.0-g, 2.0 - 2.0*v)/2.0;
}
}

/*----- UNIFORMS ------*/
uniform vec2 brickCounts;
uniform float mortarSize;
uniform vec3 brickColor;
uniform vec3 mortColor;

void main(void) {

vec2 brickSize = vec2(
1.0/brickCounts.x,
1.0/brickCounts.y
);

vec2 pos = vUV/brickSize;

vec2 mortSize = 1.0-vec2(mortarSize*(brickCounts.x/brickCounts.y), mortarSize);

pos += mortSize*0.5;

if(fract(pos.y * 0.5) > 0.5){
pos.x += 0.5;
}

pos = fract(pos);

vec2 brickOrMort = step(pos, mortSize);

vec3 color =  mix(mortColor, brickColor, brickOrMort.x * brickOrMort.y);

gl_FragColor = vec4(color, 1.0);
}`...```

Now that is procedural! Take a little bit of time to mess around with this and experiment with some different values now that you can see the changes instantly! If you are having trouble getting to this point, you can review and/or download the source here.

### Exporting/Saving

All these parameters and options for bricks are cool and all… but now we should start making the changes necessary to make the texture exportable, which will be the easiest way to take this texture we made from mock-up to production. Eventually a good end goal would be to include these procedural processes in your project and have them compile at runtime, which could save tons of space when saving/serving the project to a client if used correctly. But that is much, much later; for now let's focus on adding the ability to set the texture's size and then saving it from the browser. Because the HTML canvas object is, for all intents and purposes, processed by the browser as an image, we can simply right click on the canvas and save it! After a couple quick changes to our SM object and a small change to the DOM structure, we can add one additional argument to set the size of the canvas to a specific unit.

```
//DOM CHANGES
...
<body>
<div id='output-block'>
<canvas id="renderCanvas"></canvas>
</div>
...

//SM OBJECT CHANGES
SM = function(args, scene){
...
this.buildGUI();

if(args.size){this.setSize(args.size);}

return this;
}

SM.prototype = {
setSize : function(size){
var canvas = this.scene._engine._gl.canvas;
var pNode = canvas.parentNode;
pNode.style.width = size.x+'px';
pNode.style.height = size.y+'px';
this.scene._engine.resize();
this.buildOutput();
},
...

sm = new SM(
{
size : new BABYLON.Vector2(512, 512),
uniforms:{...
```

The one thing we need to make sure we do when we change the size of the canvas manually is to fire the engine's resize function to get the GL context into the same dimensions. We then rebuild the output just to be safe. Now we have a useful brick wall generator that we can export textures from for later use. Here is the final source for this section and a live example of the generator we just created.

Continue to Section III

# Section I:Sampling Space and Manipulations

Now that we have a basic development environment set up, it would be prudent to review different methods for sampling and manipulating the coordinate system that dictates the output of our procedural processes. We will basically be reviewing functions built into GLSL that will help us in controlling our sampling space.

What is sampling space? You can think of it as the coordinate system/space that provides the values we feed to our noise/procedural algorithms. This can be effectively anything we want, from a singular value to an n-dimensional location… In most of our cases we will be using the vPosition or the vUV as our coordinate space, though there are other special situations that may dictate you use a different system. You can review this concept starting on page 24 of  where they explain coordinate space with these points:

```• The current space is the one in which shading calculations are normally done.
In most renderers, current space will turn out to be either camera space or
world space, but you shouldn’t depend on this.

• The world space is the coordinate system in which the overall layout of your
scene is defined. It is the starting point for all other spaces.

• The object space is the one in which the surface being shaded was defined. For
instance, if the shader is shading a sphere, the object space of the sphere is the
coordinate system that was in effect when the RiSphere call was made to create
the sphere. Note that an object made up of several surfaces all using the same
shader might have different object spaces for each of the surfaces if there are
geometric transformations between the surfaces.

• The shader space is the coordinate system that existed when the shader was in-
voked (e.g., by an RiSurface call). This is a very useful space because it can be
attached to a user-defined collection of surfaces at an appropriate point in the
hierarchy of the geometric model so that all of the related surfaces share the…```

Which, if you ask me, is overkill on the explanation. It all boils down to what values you choose to reference when feeding our procedural algorithms. If we have a 3D noise, for example, we would most likely use vPosition, which is an xyz value for that pixel's location in the scene locally; gl_FragCoord, by contrast, gives window-relative (screen) coordinates. By making some quick changes to our page and changing the argument we initialize the object's fragment shader with to something like this:

```precision highp float;
//Varyings
varying vec2 vUV;

void main(void) {
vec3 color = vec3(vUV.x, vUV.y, vUV.y);
gl_FragColor = vec4(color, 1.0);
}
```

With everything in its place, when we refresh the page we can now see a gradient that should look like this: What this is showing us is that our UV is set up correctly, as the lower left corner is black where vUV.x & vUV.y == 0; white where they are both 1; red where x is 1 & y is 0; and finally cyan where y is 1 and x is 0. We are directly affecting the color with the UV values: our very first procedural (explicit) texture!
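As a quick sanity check, we can mirror the fragment shader's one line on the CPU and confirm the corner colors (a throwaway sketch, not code for our page):

```javascript
// CPU mirror of: vec3 color = vec3(vUV.x, vUV.y, vUV.y);
function colorAt(u, v){
  return [u, v, v]; // [r, g, b]
}

// the four corners of the quad
console.log(colorAt(0, 0)); // black
console.log(colorAt(1, 0)); // red
console.log(colorAt(0, 1)); // cyan
console.log(colorAt(1, 1)); // white
```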

### Modulate

Now that we have established our coordinate space, how can we manipulate it to do our bidding? There is a collection of methods available to us in GLSL, but let's take a look at the ones  mentions starting on page 27.

### step

Description:
step generates a step function by comparing x to edge.
For element i of the return value, 0.0 is returned if x[i] < edge[i], and 1.0 is returned otherwise.

We can also define a method that uses the step function to generate what is known as a pulse by doing the following:

```
float pulse(float a, float b, float v){
return step(a,v) - step(b,v);
}
```

This makes everything inside the range between a and b come up as 1 and anything outside as 0, giving us the ability to effectively create a rectangle in whatever range we decide.
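To see the pulse behavior numerically, here is a quick JavaScript mirror of the GLSL step and pulse functions (a throwaway sketch for intuition, not code for our page):

```javascript
// step(edge, v) returns 0 below the edge and 1 at or above it,
// so subtracting two steps yields 1 only between a and b
function step(edge, v){ return v < edge ? 0 : 1; }
function pulse(a, b, v){ return step(a, v) - step(b, v); }

console.log(pulse(0.25, 0.75, 0.1)); // 0 (before the pulse)
console.log(pulse(0.25, 0.75, 0.5)); // 1 (inside the pulse)
console.log(pulse(0.25, 0.75, 0.9)); // 0 (after the pulse)
```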

### clamp

Description:
clamp returns the value of x constrained to the range minVal to maxVal. The returned value is computed as min(max(x, minVal), maxVal).

The next method does not have much use unless you are using a coordinate system that has negative values. Normally for sampling coordinates you will want to work in a -1 to 1 range and not a 0 to 1 range, so let's adjust the default vertex shader to have a new varying variable that has the UV transposed to this range.

```...
vx:
`precision highp float;
//Attributes
attribute vec3 position;
attribute vec2 uv;
// Uniforms
uniform mat4 worldViewProjection;
//Varyings
varying vec2 vUV;
varying vec2 tUV;

void main(void) {
vec4 p = vec4( position, 1. );
gl_Position = worldViewProjection * p;
vUV = uv;
tUV = uv*2.0-1.0;
}`,
...```
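To be explicit about what `uv*2.0-1.0` does, the remap sends 0 to -1, 0.5 to 0 and 1 to 1 (quick CPU check):

```javascript
// remap [0,1] -> [-1,1], same math as tUV = uv*2.0-1.0 in the vertex shader
const remap = (uv) => uv * 2 - 1;

console.log(remap(0));   // -1
console.log(remap(0.5)); // 0
console.log(remap(1));   // 1
```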

### abs

Description:
abs returns the absolute value of x.

### smoothstep

Description:
smoothstep performs smooth Hermite interpolation between 0 and 1 when edge0 < x < edge1. This is useful in cases where a threshold function with a smooth transition is desired. smoothstep is equivalent to: genType t; /* Or genDType t; */ t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0); return t * t * (3.0 - 2.0 * t); Results are undefined if edge0 ≥ edge1.
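The equivalent formula from the description is easy to port to JavaScript if you want to poke at the curve outside of a shader (a sketch mirroring the spec text above):

```javascript
function clamp(x, minVal, maxVal){
  return Math.min(Math.max(x, minVal), maxVal);
}
// smooth Hermite interpolation between 0 and 1 for edge0 < x < edge1
function smoothstep(edge0, edge1, x){
  const t = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0);
  return t * t * (3.0 - 2.0 * t);
}

console.log(smoothstep(0, 1, 0.5)); // 0.5, with zero slope at both edges
```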

### mod

Description:
mod returns the value of x modulo y. This is computed as x – y * floor(x/y).

With the mod method and the pulse function that we created, we can now define a "pulsate" function that repeats the pulse at a set interval.
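On the CPU that looks like the following, mirroring the pulsate helper we will use in the brick shader later (the helpers are redefined here so the sketch stands alone):

```javascript
// GLSL-style mod: x - y * floor(x/y) (JS % behaves differently for negatives)
function mod(x, y){ return x - y * Math.floor(x / y); }
function step(edge, v){ return v < edge ? 0 : 1; }
function pulse(a, b, v){ return step(a, v) - step(b, v); }
// repeat the a..b pulse every x units of v
function pulsate(a, b, v, x){ return pulse(a, b, mod(v, x) / x); }

console.log(pulsate(0.25, 0.75, 1.5, 1.0)); // 1: 1.5 wraps to 0.5, inside a..b
console.log(pulsate(0.25, 0.75, 2.1, 1.0)); // 0: 2.1 wraps to ~0.1, outside
```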

### gamma

Description:
The zero and one end points of the interval are mapped to themselves. Other values are shifted upward toward one if gamma is greater than one, and shifted downward toward zero if gamma is between zero and one.

### bias

Description:
Perlin and Hoffert (1989) use a version of the gamma correction function that they call the bias function.

### gain

Description:
Regardless of the value of g, all gain functions return 0.5 when x is 0.5. Above and below 0.5, the gain function consists of two scaled-down bias curves forming an S-shaped curve. Figure 2.20 shows the shape of the gain function for different choices of g.
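Ported to JavaScript for experimentation, these three remapping curves look like this (matching the GLSL helpers used in the brick shader earlier):

```javascript
// gamma: remap with exponent 1/g; g > 1 pushes values toward 1
function gamma(g, v){ return Math.pow(v, 1.0 / g); }
// bias: Perlin & Hoffert's reparameterized gamma, chosen so bias(b, 0.5) == b
function bias(b, v){ return Math.pow(v, Math.log(b) / Math.log(0.5)); }
// gain: two scaled bias curves forming an S-curve; gain(g, 0.5) is always 0.5
function gain(g, v){
  if(v < 0.5){ return bias(1.0 - g, 2.0 * v) / 2.0; }
  return 1.0 - bias(1.0 - g, 2.0 - 2.0 * v) / 2.0;
}

console.log(gamma(2.0, 0.25)); // 0.5
console.log(bias(0.3, 0.5));   // ~0.3
console.log(gain(0.8, 0.5));   // 0.5
```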

There are quite a few more (sin, cos, tan, etc.) but we will cover those later; if you want to go over more now, check out: http://www.shaderific.com/glsl-functions/. The ones presented here should be enough to start making some more dynamic sampling spaces. With all this at our disposal, what is something useful we could make? A pretty standard texture in the procedural world would be a brick or checkerboard pattern, so let's start there.

### Oh Bricks

Shamelessly, this is a reproduction of the bricks presented in  (page 39) with a few changes made. Before creating anything, let's take a look at what identifiable elements we are trying to produce: a brick of course and its size in relation to the whole element, the mortar or padding around it, and then its offset in relation to the other rows. To get started, let's define a few variables (on the fx fragment we are passing to the SM object) for the number of bricks we wish to see. This is different than our reference script, but I feel it is easier to understand, and we can derive all our other size numbers from them. Plus, doing it this way has the added advantage that we can make sure the texture is repeatable on all sides.

```...
#define XBRICKS 2.
#define YBRICKS 4.
#define MWIDTH 0.1

void main(void) {
...```

Super simple, right? We define the counts as floats for simplicity, because who likes working with integers and having to convert them every time you want to do quick maths, heh… Now that we have the basic numbers to base everything off of, let's set up some colors and then get our sampling space into scope.

```...
void main(void) {
vec3 brickColor = vec3(1.,0.,0.);
vec3 mortColor = vec3(0.55);

vec2 brickSize = vec2(
1.0/XBRICKS,
1.0/YBRICKS
);

vec2 pos = vUV/brickSize;

if(fract(pos.y * 0.5) > 0.5){
pos.x += 0.5;
}

pos = fract(pos);

float x = pos.x;
float y = pos.y;
vec3 color = vec3(x, y, y);
gl_FragColor = vec4(color, 1.0);
}
```

Now we have set up our sampling space by first dividing our max coordinate unit by the brick counts to get the brick size. Then we divide the coordinate space we are using by the brick size, which gives us a coordinate space with values ranging from 0 to the number of bricks we accounted for. By checking whether the fractional value of half the vertical position is over 0.5, we are able to identify the alternating rows, which we then offset by half a brick on the x position. Then we take the fractional part of the position, because the only thing we are worried about in this range is the fractional section of the values, not the whole number. The mortar size we will take into account after the bricks are in place, so that we can keep the padding around the bricks constant by using some ratio calculations.
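The coordinate juggling above can be traced on the CPU for a single sample point (a throwaway sketch using the brick counts from our defines):

```javascript
const XBRICKS = 2, YBRICKS = 4;
function fract(v){ return v - Math.floor(v); }

// returns the within-cell position for a given uv sample
function brickPos(u, v){
  let x = u * XBRICKS;                  // same as vUV / brickSize
  let y = v * YBRICKS;
  if(fract(y * 0.5) > 0.5){ x += 0.5; } // offset alternating rows half a brick
  return [fract(x), fract(y)];
}

console.log(brickPos(0.0, 0.3)); // odd row: x shifted by 0.5
console.log(brickPos(0.0, 0.1)); // even row: x unshifted
```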

If you are following along and refresh your page now, you should see an image similar to: Now with this basic grid set up, we can take into consideration the position of our mortar around the bricks and start the process of coloring everything. A quick way to figure this out is to define a vec2 with our mortar size, then do a quick step calculation on our set-up coordinate space to see if it's brick or not. We then mix the brick and mortar colors together, with the mix value set to the result of multiplying the two step components together. The cool thing about the step multiplication is that it will turn the mix value to 0 any time the sampled point falls outside of the brick area.

```...
void main(void) {
vec3 brickColor = vec3(1.,0.,0.);
vec3 mortColor = vec3(0.55);

vec2 brickSize = vec2(
1.0/XBRICKS,
1.0/YBRICKS
);

vec2 pos = vUV/brickSize;
vec2 mortSize = vec2(MWIDTH);

if(fract(pos.y * 0.5) > 0.5){
pos.x += 0.5;
}

pos = fract(pos);

vec2 brickOrMort = step(pos, mortSize);

vec3 color =  mix(mortColor, brickColor, brickOrMort.x * brickOrMort.y);

gl_FragColor = vec4(color, 1.0);
}
```

If we refresh now, we will see it is close but no cigar… why is this? A quick hint would be that the mortar value must be off, and I know this is a crap explanation, but just by looking at it I knew the solution was to invert the value, so our mortSize line becomes:

```...
vec2 mortSize = 1.0-vec2(MWIDTH);
...```

With a page reload we now see something like this: Getting closer! The first thing we notice is that the mortar is thicker in its height than its width. This is super easy to fix by manipulating our mortSize to reflect the same x:y ratio as the bricks, making our variable become:

```...
vec2 mortSize = 1.0-vec2(MWIDTH*(XBRICKS/YBRICKS), MWIDTH);
...```
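We can verify that this ratio keeps the padding even with a quick back-of-the-napkin calculation, using the counts from our current defines:

```javascript
// mortar thickness in actual UV units = (fraction of a cell) * (cell size)
const XBRICKS = 2, YBRICKS = 4, MWIDTH = 0.1;
const mortX = (MWIDTH * (XBRICKS / YBRICKS)) / XBRICKS; // 0.05 of a 0.5-wide cell
const mortY = MWIDTH / YBRICKS;                         // 0.1 of a 0.25-tall cell

console.log(mortX, mortY); // both 0.025, so the padding is uniform
```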

This will make our mortar lines keep the same padding around the bricks and makes our procedural texture almost complete! The one last thing I would like to add is an offset of the entire coordinate space, shifting both the rows and columns by half of the mortar size in order to "center" the repetition properties of this texture. It is an optional line and it is up to the developer to decide whether to use it!

```...
vec2 mortSize = 1.0-vec2(MWIDTH*(XBRICKS/YBRICKS), MWIDTH);
pos += mortSize*0.5;
...```

That's it for now! You have officially created your first real procedural texture, not just a gradient or a solid color. This shader can now be extended upon and made way more dynamic. If you are having trouble getting good results, please reference or download the Example.

This concludes this section, in the next one we will discuss setting up controls and parameters for real time manipulation of the texture to debug/test different values.

Continue to Section II

# Preface

In modern times the need to generate robust procedural content has become more prevalent than ever. With advancements in CPU/GPU processing power, the ability to create dynamic content on the fly is now an option more than ever. What started out as a means to produce simple representations of natural processes has now grown into a multifaceted field, ranging from producing pseudorandom procedural content to synthesized textures and models constructed from reference data. Whole worlds can be crafted from a single simple seed. Using methods that are often simplified from real-world physics and systems from nature, a user is able to try to control the creation process to mold a certain result.

The main complication of this is the control factor, due to the inherent properties of the "random" or "noise" functions that are used to create the data samples. As the artist/developer, it is our goal to understand how we can manipulate these seemingly uncontrollable processes to better suit our needs and produce content that is within the scope of expectations. We can attempt to create control by introducing sets of parameters that manipulate the underlying structure of our functions or filter the results.

# Introduction

First off, let's get some things straight. I am in no way a math wizard, or even conventionally trained in programming, so all of the information presented here is based off of my interpretations of advanced topics that I probably have no business explaining to someone else. Do not take any of the concepts I discuss as verbatim fact, but use them as a basis, if you have none, to try to obtain a level of understanding of your own. The main point of this article or tutorial (not sure what this would be… a research log?) is to document a layman's interpretation of the works of geniuses like Ken Perlin, Edwin Catmull, Steven Worley…

I recently got my hands on the third edition of the publication TEXTURING & MODELING – A Procedural Approach, which is a great resource, though somewhat dated in the languages it uses. I am going to review the concepts presented in this wonderful resource and tailor the script examples to work with WebGL. Currently WebGL 2.0 supports GLSL ES 3.00 and will be the focus of this article; if you have any questions about this, please review the WebGL specifications. I will also be using the BABYLON.JS library to handle the WebGL environment, as opposed to having to worry about all the buffer binding and extra steps we would need to take otherwise. There is also the assumption that if you are reading this you have a basic understanding of HTML, CSS and Javascript; otherwise this is not the tutorial for you.

# Setting up the Environment

Before the creation of anything can happen, we will need to set up an easy development environment. To do this we are going to create a basic HTML page, include the Babylon library, make a few styling rules, then finally create a scene that will allow us to develop GLSL code to create and test different effects easily. Though we won't be ray-marching anytime soon, the set-up described by the legendary Iñigo Quilez in the presentation "Rendering Worlds with Two Triangles with ray-tracing on the GPU in 4096 bytes" will be the same set-up we go with for outputting our initial tests. Later we will look at deploying the same effects on a 3D object and then start introducing lighting (I am dreading the lighting part). To save time please follow along with: http://doc.babylonjs.com/ and follow the directions to get the basic scene presented there running.

Once we have our basic scene going, we are going to reorder and restructure some of the elements, plus drop unnecessary elements like the light object at this point. You can follow along here if you are unfamiliar with BJS; alternatively, if you just want to get started, skip this section and download this page.

### Basic Scene

I assume you know how to create a standard web page with a header and body section like as follows:

```<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html" charset="utf-8"/>
<title>Introduction - Environment Setup</title>
<style>
html, body {
overflow: hidden;
width   : 100%;
height  : 100%;
margin  : 0;
}
</style>
</head>
<body>
</body>
</html>```

or you can copy and paste this into your IDE. Right away we get rid of overflow, padding and margin, and make the content of the page full width/height, because for most purposes the scene we are working on will take up the whole screen. Then in our head section we need to include the reference to BJS.

```<head>
<script src="https://cdn.babylonjs.com/babylon.js"></script>
...```

This should effectively give us all the elements we need to start developing; we just need to create our initial scene in order to get the WebGL context initialized and outputting. To do this, in the body section of the page we create a canvas element, give it a unique identifier with some simple CSS rules, then pass that canvas element over to BJS for the WebGL initialization.

```...
<style>
...
#renderCanvas{
width: 100%;
height: 100%;
touch-action: none;
}
</style>
...
<body>
<canvas id="renderCanvas"></canvas>
...```

The touch-action rule is for future proofing in case the scene needs to have mobile touch support (which will most likely never come into application with what we are doing); the other rules make the size of the canvas inherit from its parent container. When we later initialize the scene, we will fire a function that sets the canvas size from its innerWidth and innerHeight values as described here. Luckily BJS handles all the resizing as long as we remember to bind the function to fire when the window is resized, but we will cover that when we set up our scene function.

Now it's time to get the scene running; we do this inside a script element after the body is created. We also should wrap it in a DOM Content Loaded callback to prevent the creation of the scene from bogging down the page load. Then we create a function that will initialize the very minimum elements BJS requires in a scene in order for it to compile (a camera).

```...
<canvas id="renderCanvas"></canvas>
<script>
document.addEventListener('DOMContentLoaded', function(){
var canvas = document.getElementById('renderCanvas');
var engine = new BABYLON.Engine(canvas, true);
var createScene = function(){
var scene = new BABYLON.Scene(engine);
var camera = new BABYLON.FreeCamera('camera1', new BABYLON.Vector3(0, 0, -1), scene);
camera.setTarget(BABYLON.Vector3.Zero());
camera.attachControl(canvas, false);

return scene;
}

var scene = createScene();

engine.runRenderLoop(function(){
scene.render();
});

window.addEventListener('resize', function(){
engine.resize();
});
});
</script>
</body>```

Then inside the createScene function lets add one last line to set effectively the background color of the scene/canvas.

```...
scene.clearColor = new BABYLON.Color3(1,0,0);
return scene;
}```

If everything is set up correctly, you can now load the page and should see an entirely red screen. If you are having trouble, review this and see where the differences are. What we have done here is create the engine and scene objects, start the render loop and bind the resize callback to the window resize event. From here we have all the basic elements together to set up a test environment.

### Getting Output

With the WebGL context initialized and our scene running its render loop, it's time to create a method of outputting the data we will be creating. To simplify things at first, we will create a single geometric plane, also known as a quad, that will always take up the whole context of our viewport, then create a simple GLSL shader to control its color. There are multiple ways to create the quad, but for learning purposes I think it is prudent to write a custom mesh function that creates and binds the buffers for the object manually, instead of using a built-in BJS method for plane/ground creation. The reason we will do it this way is that it gives us complete control over the data and will give us an understanding of how BJS creates its geometry with its built-in methods.

First let's create the function and its constructor; in the script area, before our DOM Content Loaded event binding, we make something like the following:

```...
<script>

var createOuput = function(scene){
};

...
</script>```

The most important argument for this function is the scene reference, so that we have access to it within the scope of the function. Alternatively, you could have the scene on the global scope, but that creates vulnerabilities and is not advised. The other benefit of passing the scene as an argument is that when we start working with more advanced projects that use multiple scenes, we can easily reuse this function.

Now that we have the function declared we can work on its procedure and return. The method for creating custom geometry in BJS is as follows:

Create a blank Mesh Object

```var createOuput = function(scene){
var mesh = new BABYLON.Mesh('output', scene);

return mesh;
};```

also in our createScene function and add these two lines:

```...
var output = createOuput(scene);
console.log(output);
return scene;
}```

If everything is correct when we reload this page now and check the dev console we should see:

BJS – [timestamp]: Babylon.js engine (v3.1.1) launched
[object]

Create/Bind Buffers

Now that there is a blank mesh to work with, we have to create the buffers that tell WebGL where the positions of the vertices are, how our indices are organized, and what our UV and normal values are. With those arrays/buffers applied to our blank mesh, we should be able to produce the geometry that we will use as our output. Initially we will hard-code the size values, then go back and revise the function to adjust for viewport size.

```...
var createOuput = function(scene){
var mesh = new BABYLON.Mesh('output', scene);
var vDat = new BABYLON.VertexData();
vDat.positions =
[
-0.5,  0.5, 0,//0
0.5,  0.5, 0,//1
0.5, -0.5, 0,//2
-0.5, -0.5, 0 //3
];
vDat.uvs =
[
0.0,  1.0, //0
1.0,  1.0, //1
1.0,  0.0, //2
0.0,  0.0  //3
];
vDat.normals =
[
0.0,  0.0, 1.0,//0
0.0,  0.0, 1.0,//1
0.0,  0.0, 1.0,//2
0.0,  0.0, 1.0 //3
];
vDat.indices =
[
2,1,0,
3,2,0
];

vDat.applyToMesh(mesh);

return mesh;
};```

If done correctly, when the page is refreshed there should be a large black section. The next step will be to have the size of the mesh created dynamically instead of hard-coded, so that it can work with a resize function. The solution behind this is not my own; you can see the discussion that led up to it here. Modifying the createOuput function to reflect the solution is very simple: we add a few lines to define our width and height values and then multiply our position values by the respective results.

```...
var c = scene.activeCamera;
var fov = c.fov;
var aspectRatio = scene._engine.getAspectRatio(c);
var d = c.position.length();
var h = 2 * d * Math.tan(fov / 2);
var w = h * aspectRatio;
vDat.positions =
[
w*-0.5, h*0.5, 0,//0
w*0.5,  h*0.5, 0,//1
w*0.5,  h*-0.5, 0,//2
w*-0.5, h*-0.5, 0 //3
];
```

Now when the page is refreshed it should be solid black; this is because our mesh now takes up the entire camera frustum and there is no light to make the mesh show up, hence it's black. A light is of little use for our purposes right now; later we will try to implement lighting. Another thing we will ignore for right now is the response to a resize for the output. Later, as we get more of our development environment set up, we will come back to this.
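The sizing math itself is easy to check by hand: a camera at distance d with vertical field of view fov sees a plane of height 2 * d * tan(fov / 2), and the width is just that height times the aspect ratio. A quick sketch with assumed sample values:

```javascript
// assumed values: BJS cameras default to fov = 0.8 radians, and our camera
// sits at (0, 0, -1) looking at the origin, so d = 1; the aspect ratio
// depends on the viewport, so 16:9 is picked purely for illustration
const fov = 0.8;
const d = 1;
const aspectRatio = 16 / 9;

const h = 2 * d * Math.tan(fov / 2); // ~0.845 world units tall
const w = h * aspectRatio;           // ~1.503 world units wide

console.log(h, w);
```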

Creating a blank screen is all fine and dandy, but not very handy… So now would be the time to set up our shaders, which will be the main programs responsible for most of the procedural content methods we will be developing. Unfortunately WebGL 2.0 does not support geometry shaders, only vertex and fragment, limiting the GPU to texture creation or simulations, not models. For any geometric procedural process we will need to rely on the CPU.

The process for creating shaders to work with BABYLON is extremely easy: we simply construct some template literal strings and store them in an object the library can access, then have BJS work its magic with the binding of all the buffers. You can read more about fragment and vertex shaders on the BJS website and through a great article written by BABYLON's original author on Building Shaders with WebGL.

Let's take a look at what we are going to need to develop our first procedural texture. Firstly, we need some sort of reference unit; for this we will use the UV of our mesh transposed from a 0 to 1 range to a -1 to 1 range. Using the UV is advantageous when working with 2D content; if we add the third dimension into the process, it becomes more relevant to sample the position as the reference point. With this idea in mind, the basic shader program becomes similar to this:

```...
var vx =
`precision highp float;
//Attributes
attribute vec3 position;
attribute vec2 uv;
// Uniforms
uniform mat4 worldViewProjection;
//Varyings
varying vec2 vUV;

void main(void) {
vec4 p = vec4( position, 1. );
gl_Position = worldViewProjection * p;
vUV = uv;
}`;

var fx =
`precision highp float;
//Varyings
varying vec2 vUV;

void main(void) {
vec3 color = vec3(1.,1.,1.);
gl_FragColor = vec4(color, 1.0);
}`;
```

Then we store these literal strings into the BJS ShadersStore object, which will allow the library to construct and bind them through its methods. If we were doing this through raw webGL this would add a bunch of steps, but thanks to this amazing library most of the busy work is eliminated.

```...
```
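As a hedged sketch of that step (the `BABYLON` stub here exists only so the snippet runs standalone; on the real page the loaded library already provides `BABYLON.Effect.ShadersStore`):

```javascript
// Stub standing in for the loaded library, for standalone illustration only.
var BABYLON = { Effect: { ShadersStore: {} } };

var vx = "/* vertex shader literal from the block above */";
var fx = "/* fragment shader literal from the block above */";

// BJS looks shader sources up under "<name>VertexShader" and
// "<name>FragmentShader", matching the vertex/fragment names used later.
BABYLON.Effect.ShadersStore["basicVertexShader"] = vx;
BABYLON.Effect.ShadersStore["basicFragmentShader"] = fx;
```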

Lastly we use the CustomShader creation function and assign the result to the material of our output mesh. For right now this is done inside the createScene function, after the createOutput call.

```...
vertex: "basic",
fragment: "basic",
},{
attributes: ["position", "normal", "uv"],
uniforms: ["world", "worldView", "worldViewProjection", "view", "projection"]
});

```

If everything is done right, when we refresh the page we should now see a fully white page! WOWZERS so amazing… red, black then white… we are realllly cooking now -_-…. Well at least these are all the elements we will need to start making some procedural content. If you are having trouble at this point you can always reference here or just download it if you are lazy.

# Refining the Environment

At this point it would be smart to reorder our elements and create a container object that will hold all important parameters and functions associated with whatever content we are trying to create. This way we can make sure the scope is self contained and that we could have multiple instances of our environment on the same page without them conflicting with each other. Object prototyping is very useful for this, as we can construct the object and reference it later through whatever variable we assigned the result of the constructor to.

### The Container Object

If you have never made a JS object then this might be a little strange; those familiar with prototyping and object scope should have no problem with this part. In order to organize our data and make this a valid development process we have to create some sort of wrapper object, like so:

```SM = function(args, scene){
this.scene = scene;
args = args || {};
return this;
}
```

This has now added to the window scope a new constructor for our container object. To call it we simply write “new SM({}, scene);” wherever we have access to the scene variable. If we assign this to a variable, that variable now holds the instance of the object, with whatever variables were assigned to the “this” scope contained within that instance. With this object constructor in place we can extend it by prototyping some functions and variables into its scope. If you are unfamiliar with this please review the information presented here.
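For anyone new to the pattern, here is a minimal, generic constructor-plus-prototype example (the names here are illustrative, not part of the SM object):

```javascript
// A tiny constructor: instance state lives on `this`.
var Counter = function(args){
  args = args || {};
  this.count = args.start || 0;
  return this;
};

// Shared behavior lives on the prototype; every instance can call it.
Counter.prototype = {
  tick : function(){
    this.count++;
    return this.count;
  }
};

var c = new Counter({ start: 5 });
```

Calling `c.tick()` returns 6, then 7, and a second `new Counter()` keeps its own independent count, which is exactly why wrapping our environment this way lets multiple instances coexist on one page.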

The first thing we will add into the prototype is the space for the shaders that our shaderManager (aka shaderMonkey, since I’m Pryme8 ^_^) will reference whenever it needs to rebuild and bind the program/shader.

```...
SM.prototype = {
/*----TAB RESET FOR LITERALS-----*/
vx:
`precision highp float;
//Attributes
attribute vec3 position;
attribute vec2 uv;
// Uniforms
uniform mat4 worldViewProjection;
//Varyings
varying vec2 vUV;

void main(void) {
vec4 p = vec4( position, 1. );
gl_Position = worldViewProjection * p;
vUV = uv;
}`,
fx :
`precision highp float;
//Varyings
varying vec2 vUV;

void main(void) {
vec3 color = vec3(1.,1.,1.);
gl_FragColor = vec4(color, 1.0);
}`
}
```

Now that we have the shaders wrapped under the ‘this’ scope of the object we can start migrating some of the elements from our environment setup into the object as well. The main elements were the construction of the mesh and the storing and binding of the shader.

```...
SM.prototype = {
},
...
}
```

This method simply sets the ShadersStore value on the DOM for BJS to reference when it builds. After adding it to the object, it's simple to integrate it into the initialization of the object. We can also take this time to define a response to the user including custom arguments when they construct the object's instance, which will overwrite the default shaders that we hard coded into the SM object.

```SM = function(args, scene){
this.scene = scene;
args = args || {};
// keep the prototype defaults unless custom shaders are passed in
this.vx = args.vx || this.vx;
this.fx = args.fx || this.fx;
return this;
}
```

As this object is created it checks the argument object for variables assigned to vx & fx respectively; if an argument is not present then it keeps the default shader. We set it up this way so that as we start making different scenes that use our SM object we do not have to change any of the object's internal scripting, but just use arguments or fire built-in methods to manipulate it.

Now we need to bind the shader to the webGL context so that it is accessible/usable. This process is fairly simple once we add another method to our object.

```SM = function(args, scene){
...
this.buildShader();
return this;
};

SM.prototype = {
buildShader : function(){
var scene = this.scene;
var uID = this.uID;
// dispose any previously compiled shader to save on overhead
if(this.shader){this.shader.dispose()}
// assumes BABYLON.ShaderMaterial, which pulls sources from the ShadersStore by name
this.shader = new BABYLON.ShaderMaterial("shader", scene, {
vertex: uID,
fragment: uID,
},{
attributes: ["position", "normal", "uv"],
uniforms: ["world", "worldView", "worldViewProjection", "view", "projection"]
});

if(this.output){
this.output.material = this.shader;
}
},
...
}
```

When our object is initialized now, it builds/binds the shader and assigns it to its this.shader variable, which is accessible under the object's scope. It then checks if the object has an output mesh, and if it does, it assigns the shader to it. Each time this method is fired it checks to see if there is a shader already compiled, and if there is, it disposes of it to save on overhead. The last step is to migrate the createOutput function to become a method of our SM object, with simple modifications so it effectively does the same resource-conserving process as our buildShader method.

```SM = function(args, scene){
...
this.buildOutput();
return this;
};

SM.prototype = {
buildOutput : function(){
if(this.output){this.output.dispose()}
var scene = this.scene;
var mesh = new BABYLON.Mesh('output', scene);
var vDat = new BABYLON.VertexData();
var c = scene.activeCamera;
var fov = c.fov;
var aspectRatio = scene._engine.getAspectRatio(c);
var d = c.position.length();
var h = 2 * d * Math.tan(fov / 2);
var w = h * aspectRatio;
vDat.positions =
[
w*-0.5, h*0.5, 0,//0
w*0.5,  h*0.5, 0,//1
w*0.5,  h*-0.5, 0,//2
w*-0.5, h*-0.5, 0 //3
];
vDat.uvs =
[
0.0,  1.0, //0
1.0,  1.0, //1
1.0,  0.0, //2
0.0,  0.0  //3
];
vDat.normals =
[
0.0,  0.0, 1.0,//0
0.0,  0.0, 1.0,//1
0.0,  0.0, 1.0,//2
0.0,  0.0, 1.0 //3
];
vDat.indices =
[
2,1,0,
3,2,0
];

vDat.applyToMesh(mesh);
this.output = mesh;
}
},
...
...
},
...
}
```

That should effectively be it. If we make a couple modifications to our createScene function now, we can test the results.

```...
var sm;
...
var createScene = function(){
var scene = new BABYLON.Scene(engine);
var camera = new BABYLON.FreeCamera('camera1', new BABYLON.Vector3(0, 0, -1), scene);
camera.setTarget(BABYLON.Vector3.Zero());
scene.clearColor = new BABYLON.Color3(1,0,0);

/*----TAB RESET FOR LITERALS-----*/
sm = new SM(
{
fx :
`precision highp float;
//Varyings
varying vec2 vUV;

void main(void) {
vec3 color = vec3(0.,0.,1.);
gl_FragColor = vec4(color, 1.0);
}`
},scene);
console.log(sm);
return scene;
}
```

If everything is 100% you should now see a fully blue screen when you refresh otherwise please reference here. Now would be a good time to add the resize response as well, which is why we defined our sm var outside of the DOM content loaded callback. There are better ways to do this, but for our purposes it will do. Simply calling the buildOutput method for the SM object when a DOM element containing the canvas is resized should handle things nicely.

```...
engine.resize();
sm.buildOutput();
});
```

Finally we have gotten to the point where we can start developing more things than a solid color. We will at some point need to create methods for binding samplers and uniforms on the shader, but we can tackle that later. Feel free to play around with this setup and break or add things; try to understand the scope of the object and how everything is associated. In the next section we will examine the principles of sampling space and how we can manipulate it.


# Citiez

### Experiments in Procedural Streets

#### Preface

L-Systems are an ideal way to produce predictable growth, and have been used in recent years to generate entire cities. I have found that the process of using turtle logic is, just like its namesake, slow… and does not give the branches of the system many options to make intelligent decisions about their surroundings, maintain an object hierarchy that is traversable, or store any important variables. They are basically dumb, and can only be given pseudo-intelligence. I started trying to work with the standard format for constructing a library of axioms and found that it was not a very robust way to make anything of value. I’m going to be honest, tons of the stuff I see guys do with L-Systems is crazy, and I have a great amount of respect for it, but I still wanna kinda do things my way, so yeah.

The basic idea is that, just like an L-System, I will construct an “alphabet”. These characters, though, are just a naming convention for embedded functional objects that can be constructed and then reference a rule with the same name, effectively making a dynamic and tailorable constructor class that is deployed at runtime. A basic component of the system (an axiom, or road) holds all the basic data needed to construct our road map; these variables include path information and some other general information. Each axiom contains a repo that references other axioms in the library to spawn when the branch is no longer valid. Each axiom works independently of the others and essentially has restrictions it tries to maintain; if all restrictions are out of limits then the thread terminates and sends a message to the main script to start the IO on another axiom.

Right away I can see benefits to this system, as I can store street names, locations, elevations, spatial data, etc. In later versions I will be looking to extend the Survey area to a larger zone, and have each block be processed with later optimization functions deciding the route and conditions. I see this being completely valid for the construction of real time procedural content, as all the calculations can happen on a sub thread with the IOs divided and the detail slowly building up. This build up could also enable options for LOD.

#### Test – 1

In my first tests I did not include elevation calculations; the roads only scan their local North, South, East, or West points in the Survey. The population clamp is set to 0.65, or 255*0.65. The roads will stop if they hit an intersection or can not travel their primary direction and one initial alternate direction that is established on the first turn. This is a very simple model, but the effect is immediately noticeable.

##### Settings :

City Name
Origin:XY

N : 0

This is about 14 hours of development and conceptual work in; I am interested to see how it progresses, and will be introducing DAS_NOISE and BJS into the scope here at some point. I am also drafting up the concept of doing away with a per-pixel method and making a vector tracing system that will be much more modular, with the ability to decide whether it should attempt to link up with an intersection, cross a road, or how steep it should make a turn.

## Texture Synth

An experiment in Texture Generation
Author: Andrew V Butt Sr. – Pryme8@gmail.com
http://pryme8.github.io/

## Introduction

Texture Synthesis is a method in 2d procedural generation that is quickly becoming an interest for several developers, and its capability can easily be extended to 3d. This is an active investigation into how to teach a computer to output generated content from sample input. The general basis is discussed HERE

## Abstract

I will attempt different methods of generation, with and without my Das_Noise library, and will later include research on this topic in GLSL. The eventual goal is to create a robust texture synthesis library for javascript and use this product to aid in the deployment of TERIABLE.
The initial test will be loosely based on the documentation and will be my own spin on the approach. Later, with lessons learned, I will attempt to deploy more popular methods.
I am unsure of the outcome of these tests, but I will document both the approach and the results and provide an example.

## Test 1

The first test was trying not to deploy a base noise to reference the sample texture against. I stopped working on this test after I started noticing shortcomings in the colors being produced.

Base Sample

A few terms that are used while using a texture synthesis method are:

• Texton
• Neighborhood
• Vector
• Noise

Constants in our Examples will be:

• R – Reference Image
• P – Base Sample Target Point
• nP – Neighboring Target Point
• N – New Texture Being Generated
• Np – New Texture targetPoint
• np – New Texture Neighboring targetPoint

In my first test, I decided the first step was to generate my textons from R, which I did not do a very good job of. But nonetheless it seemed to work well enough to let me see the effect of this method. The second step was to impose R on the output canvas in 3 points in the top corner in order to give our generator something to reference. This can be done without outputting the reference, but it was a quick way to get it done.
From there I started sampling with a differently shaped texton that had the same number of samples as the ones generated previously. This new texton was not offset from around the point it was checking, but behind it, in a box of similar shape but offset, so as to back-reference what was already there. Then I found the closest texton from our first set and used its value to output to the canvas.
I could not for the life of me figure out why the colors were so washed out, though this effect was able to duplicate the pattern fairly accurately. I was going to continue the process by cloning the first area we created into the top and left edges of the canvas, and was going to repeat the sampling process used for the first area. I did not complete this last step, though, because I had already learned what I needed to from this example.
You can look at a live DEMO, but the script is very sloppy; basically it boils down to I don’t know what I was thinking in my original script.
This method is… slow, and with the bad color transfer I do not think it is a valid method. Moving on…

## Test 1 – Enter Noise

So after my first night of hacking away at this, while trying to get some sleep I thought of how to use noise to predict what color a cell will need to be.
I figured if I could have an accurate way to describe an underlying noise and translate that to the neighborhood I was looking for, it could continue the pattern, or a likeness of it, on the output.
Initial tests on this looked kind of promising, with the first area of the noise rendering out to be pretty close to the input, but after that it’s pure chaos. I think it may be due to how I am measuring the differences.
What I need to do is some more reading to find out what other guys have done to solve this. The DEMO shows that this method could be possible, but quite a bit of refinement is needed.
The computation speed is also very slow and produces poor output. I ran several different tests with a range of noises, and tried to output by pixel as well as by neighborhood. In the last example I did not want to wait for it to finish, as it was going per pixel and using the texton's average value. The color problem from the first example is not a problem now, so maybe I need to reference both examples and make a combination of the two. Till then I think it’s time to do some more reading, and write down some of the ideas from what I learned here to create a procedural dungeon (maybe).

# Experiments in Procedural Level Creation

## Introduction

After working on doing some texture synthesis, a method for creating dungeons and other content kinda just smacked me in the face. I have been itching for about two days now to get a chance to do this. The night I thought it up the write up went as follows:

A Zone (Z) is defined as an area of set units that is divided into a set amount of Cells (C). For this deployment I will be dividing the
Z into a 3 by 3 grid, labeling each cell as follows, starting from the top left: n10, n00, n01, m10, m00, m01, s10, s00, s01. Because I am splitting it into thirds, to keep things simple I will always define the size of the zone as something divisible by 3; for general purposes I will use a Z size of 60 by 60, making each cell 20^2 pxls. Because we will be averaging the black and white values of the cells, we keep the zones as small as possible but still large enough to have defined details. The larger our Zones, the larger the calculation overhead.

Once my Zone method and size is established and I have defined the cells for it, I have to calculate every possible state of the Zone and output that to a human readable jpg. This is achieved by looping through combination sets of the cells, and creating a single array of all states, then use that information to generate a canvas with the correct cells displayed as black and white on or off state.

After the human readable jpg is produced, reload our now created Zone map into the program and associate each zone’s location information on the map and cell state information into a referenceable and searchable array or object.
At this point I can start choosing my method for a base noise. Because each Zone will only be 60 by 60, I believe my Worley2D noise from my Das_Noise library will work just fine; if there is a calculation lag on the generation of the zones due to the noise, I will look at moving to a modified SimpleX. Starting from the top left of the visible stage, we calculate the values for the base noise by passing it to our zone object, averaging the black/white ratio of each cell due to the noise, and rounding to 0 or 1, effectively converting the noise to a Zone similar to the ones I generated earlier. Then we loop through the map object and find the zone that most closely matches the new noise zone.

The point of first creating a human readable image instead of just having the noise be manipulated is that after we get a look for the base layout that we want, an artist can use the human readable image as a template to draw a secondary reference image that has the same dimensions as our reference map. I could then load image data into the map object from the secondary reference image and output the fancy tiles instead of just black and white. This process could also be extended to use secondary noise calculations to establish and simulate different biomes and altitudes, changing what tile map is referenced.

This is all theory but it sounds about right so I’m gonna give it a shot.

## The Reference Map

Diving right in I think the smartest thing to do will be to create my first reference map, or the human readable map I described earlier. The first day I thought of this I tried to make it by hand in illustrator and got about 32 combinations in before I realized that was dumb, and it was time to make canvas go to work.

First we need to enumerate every possible state of the cells and make something that we can use to output a physical reference map. What I mean by states is every on/off combination of the nine cells: each cell is either filled or empty, so a 3 by 3 zone has 2^9 = 512 possible states. There are lots of ways to do this, but I will try to keep it simple; because each state string is generated exactly once, we do not have to worry about duplicates.

This script to make it happen is as follows:

```dungeon = function(args){
args = args || {};
args.zSize = args.zSize || 60;
args.zDiv = args.zDiv || 3;
this.zSize= args.zSize;
this.zDiv = args.zDiv;
};

dungeon.prototype._createRefMap = function(){
var cells = [];
var pCount = this.zDiv * this.zDiv;

function perm(s,c){
if (c == 0) {
cells.push(s);
return;
}
perm(s+'0', c-1);
perm(s+'1', c-1);
}

perm('',pCount);

var last = cells.splice(cells.length-1,1);
cells.unshift(last+'');
console.log(cells);
};
```
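The recursion above is equivalent to counting in binary: every on/off state of n cells is just the n-digit binary form of 0 through 2^n - 1. Here is a standalone version of the same enumeration (a generic sketch, not the dungeon object's code):

```javascript
// Enumerate every on/off state of `cellCount` cells as binary strings.
function enumerateStates(cellCount) {
  var states = [];
  var total = Math.pow(2, cellCount);
  for (var i = 0; i < total; i++) {
    states.push(i.toString(2).padStart(cellCount, '0'));
  }
  return states;
}

// A 3x3 zone has 9 cells, so 2^9 = 512 states, '000000000' .. '111111111'.
var states = enumerateStates(9);
```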

Just creating a new dungeon and then calling the prototype method now outputs all of the states, for a total of 512; as an interesting side note, this is every possible combination of a binary set of 9. Looking at the structure I already know that my two most common zones will be all states on and all states off, so it is best to take the last record (all cells on) and move it to the front of the array to save on calculation time once we start looping through our state array.

## Zone Object

Now it is time to make a Zone Object, this will be the basis for our mapping of the noise, this makes a object that we can put in an array, and compile the states of the cell as a searchable string. After that we will look at making a readable image.

```dungeon.Zone = function(size, div, state){
this.size = size;
this.div = div;
this.searchString = state;
this.state = state.split('');
this.cells = [];
for(var i=0; i < div*div; i++){
this.cells.push(0);
}
// each character of the state string maps straight onto its cell
for(var i=0; i < state.length; i++){
this.cells[i] = parseInt(state[i],10);
}
return this;
};
```
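The state-string-to-cells step can be isolated like this (a hypothetical helper showing the intended mapping, not part of the dungeon object itself):

```javascript
// Turn a zone state string like "101000001" into a cell array [1,0,1,...],
// one numeric on/off flag per character.
function parseState(state) {
  return state.split('').map(function(ch){
    return parseInt(ch, 10);
  });
}
```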

I then modified my pre-existing script to the following:

```...
var map = [];
for(var i = 0; i < cells.length; i++){
map.push(new dungeon.Zone(this.zSize, this.zDiv, cells[i]));
}
this.map = map;
console.log(map);
```

This gives us an array on the main dungeon object that contains a set of Zones, each with a searchable string for referencing later. I now need to create a new function to compile the physical map and set values for where each zone object is on the output map. This step is only necessary so that at a later time an artist can create a secondary reference map; if I just wanted black and white pxls to display I could effectively skip this step, but that is not the final product I want.

I also went ahead and allocated the memory for each of the zone objects to hold image data as well, even though I’m just using the map image and not an artistic tile image, due to the fact that 512 tiles is quite a bit of content to come up with just for an example. Using this function I generate my reference map, which I will use both as a way to look up / store tiles and their properties and as a way to output a canvas with the tiles on it to make a human readable map.

```dungeon.prototype._calculateMap = function(){
var map = this.map;
var cvas = document.createElement('canvas');
var ctx = cvas.getContext('2d');

var X = 0, Y = 0;
var cellSize = this.zSize/this.zDiv;

cvas.width = 20*this.zSize;
cvas.height = Math.ceil(map.length/20)*this.zSize;

for(var i = 0; i < map.length; i++){
var x = 0, y = 0;
for(var j = 0; j < map[i].cells.length; j++){
if(map[i].cells[j] == 1){ ctx.fillStyle = "#FFF"; }else{ ctx.fillStyle = "#000"; }
ctx.fillRect(x+X,y+Y,cellSize,cellSize);
x+=cellSize;
if(x > this.zSize-cellSize){
y+=cellSize;
x=0;
}
};

ctx.strokeStyle = "rgba(255,0,0,0.2)";
ctx.strokeRect(X,Y,this.zSize,this.zSize);

var imgData = ctx.getImageData(X,Y,this.zSize,this.zSize);

map[i].imgData = imgData;
map[i].x = X;
map[i].y = Y;

X+=this.zSize;
if(X > cvas.width-this.zSize){
Y+=this.zSize;
X=0;
}
};
};
```
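The row-wrapping placement in _calculateMap (advance X by one zone, wrap to the next row once the 20-zone-wide sheet is full) can be factored into a small helper for clarity. This is an illustrative sketch, not code the dungeon object uses:

```javascript
// Where does zone `index` land on a sheet that is `perRow` zones wide?
function gridPosition(index, perRow, zSize) {
  return {
    x: (index % perRow) * zSize,           // column offset
    y: Math.floor(index / perRow) * zSize  // row offset
  };
}
```

With 512 zones at 20 per row, the map ends up 26 rows tall, which matches the Math.ceil(map.length/20) height calculation above.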

Now it’s time to start generating a noise and see if we can kick this thing into gear and output a dungeon-like structure. Later I will research making it possible to draw on the base noise and see the overlay tiles update accordingly; this would be cool for later development I think, but is something that is down the road a little bit.

## Enter Das_Noise

Ok, so the next step will be to generate a base noise map to start sampling, and to output our map's imageData in the correct areas and see what kind of output I can get. I’m assuming this should go without much of a hitch, and with a well set up noise it will structure itself to resemble a dungeon right off the bat (I hope).

I want to use a good sized chebyshev style Worley Noise to start because I believe this will have a good look to it once overlaid, and will guarantee that most if not all the rooms connect. If you are not familiar with my Das_Noise library you can check it out here: http://pryme8.github.io/Das_Noise

To test the noise I am going to output it on a 600 by 600 pxl canvas until I get something acceptable. When I go to use it, I will not have to render the noise to any sort of output, but rather just check its values at certain locations, then parse that however is needed to see which cells are active or not in that zone.
Already, looking at this noise, we can visualize what the dungeon will look like if the calculations have been set up correctly. The next step is to identify what each zone on the noise matches up to on our reference map; to see this in action click the link below to do one zone at a time on our canvas to the left.

Generate Zone

*UPDATE – I went ahead and added a basic tile map to reference, to show how that would work… you can look at the code to see how I did that, but after seeing it deployed I have three options: rework the tilemap to be cleaner and work a little better; make some sort of comparison script to see what the other tiles next to it are, and if there is a flat edge, have capped variations to use; or make everything procedural… given the fact it took me two and a half hours to make 512 tiles, I think I'm going to go with the last option at some point.

Ohhh yeah, that works! Ok, so I think I will wrap it up on this, but first here is a look at how I am identifying the zone of the noise.

Here is an example of the same process, with a noise of the same seed, but set to Simple2 and a scale of 100.

```dungeon.prototype._idNoise = function(x,y,noise){
if(typeof noise ==='undefined'){
noise = this.noise;
}
var cellSize = this.zSize/this.zDiv;

var string = '';
var self = this;
var ctx = (document.getElementById('noise-canvas')).getContext('2d');
ctx.fillStyle = "red";

var cX = 0;
var cY = 0;

for(var i=0; i<this.zDiv*this.zDiv; i++){
var t = 0;

for(var pY = 0; pY < cellSize; pY++){
for(var pX = 0; pX < cellSize; pX++){
t+=noise.getValue({x:(pX+(this.zSize*x)+(cellSize*cX)),
y:(pY+(this.zSize*y)+(cellSize*cY))});

}
}

t/=(cellSize*cellSize);
if(t<0.45){ string+=0+""; }else{ string+=1+""; }
cX++;
if(cX>this.zDiv-1){
cX=0;
cY++;
}
}

for(var i=0; i<this.map.length; i++){
if(this.map[i].searchString == string){
return this.map[i];
}
};
};
```
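The core of _idNoise is the average-and-threshold step: sum a cell's noise samples, divide by the sample count, and compare against the 0.45 cutoff. In isolation (a generic sketch using the same cutoff):

```javascript
// Average a cell's noise samples and round to an on/off state.
function cellState(samples, cutoff) {
  var t = samples.reduce(function(a, b){ return a + b; }, 0) / samples.length;
  return t < cutoff ? 0 : 1;
}
```

Doing this for all nine cells and concatenating the results builds the same search string the map zones were keyed with.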

## Conclusion…

This was all literally done in one day, intermittently, while I cleaned the house… so yeah, I think this is a valid and good approach for what I want to achieve. I will have to experiment with different noise types, styles, and scales, and then come up with a nice tileset for it (I will prolly jack RPG Maker resources for now). I think once this is deployed a little more the possibilities will be extensive.

I will be posting a simple Canvas Game based on this principle at some point!

Resources and References : None… I just made this crap up… if you have any questions Pryme8@gmail.com.

# Web Worker Experiments

### An investigation into how web workers operate.

A phrase I’ve been hearing used a lot lately, “Web Workers”.  What is a web worker?  What can I do with one?  Through this tutorial I will have you follow along with me while I go through some steps to understanding what a Web Worker is and how we can use these in actual situations.  Prior to this I have no knowledge whatsoever as to what they are used for and what they are capable of, but hopefully by the end of this I will!

First let’s get our basic template for our page setup, you can download the template HERE, or you can set up whatever basic index page is most comfortable to work in.

What does developer.mozilla.org say about Web Workers?

Web Workers provide a simple means for web content to run scripts in background threads. The worker thread can perform tasks without interfering with the user interface. In addition, they can perform I/O using XMLHttpRequest (although the responseXML and channel attributes are always null). Once created, a worker can send messages to the JavaScript code that created it by posting messages to an event handler specified by that code (and vice versa.) This article provides a detailed introduction to using web workers.

Ok, so this sounds really promising! What I am getting out of this is that Web Workers can simulate multithreading and could also serve as a pseudo response server for more dynamic content.

Also the thought of nesting certain calculation functions inside another script could allow more dynamic animations on a canvas element, or any other intensive calculation.
A few points about scope: a worker runs in its own thread with its own global scope, entirely outside the scope of the window. Inside a worker there is no ‘window’ object at all; the worker’s own global scope is referenced with self, which is why the worker script below can call onmessage and postMessage directly.

Other Key Points:

• A dedicated worker is only accessible from the script that first spawned it, whereas shared workers can be accessed from multiple scripts.
• You can’t directly manipulate the DOM from inside a worker, or use some default methods and properties of the window object.
• Only a subset of the usual functions and classes is available to workers
• Workers can spawn new workers, those workers must be hosted within the same origin as the parent page.

I would recommend reading the mozilla page, as most of this tutorial will be based off the information served there.

### Web Worker detection and setting up our first thread.

Now that we have an idea of what one is, let’s go ahead and jump right to the main.js file and start making some changes.

```window.onload = function() {
if (window.Worker) {
document.body.innerHTML = "We Have Ignition";
}else{
document.body.innerHTML = "Lame sauce... no Web Workers...";
}
}```

If you have ignition, then you are good to go! If not, then something is wrong, because pretty much all modern browsers support these… geez, guy/gal.

From this point we need to actually do something with a web worker so the first step in that will be to go to our js folder and create a new script called “worker1.js” and then edit both the main script and the worker script accordingly.

```window.onload = function() {
if (window.Worker) {
var newWorker = new Worker("./js/worker1.js");

newWorker.postMessage([1,"Two"]);
console.log('Message posted...');

newWorker.onmessage = function(e) {
var result = e.data;
console.log(result);
}

}else{
document.body.innerHTML = "Lame sauce... no Web Workers...";
}
}```
```onmessage = function(e) {
var workerResult = 'Result: ' + (e.data + e.data);
console.log('Posting message back to main script');
postMessage(workerResult);
}```

What is happening here is first we are creating our instance of the worker on our main script. Then we are using the built in method .postMessage, which will be the main basis for communicating with our worker. Then we have the worker listen for a message by defining the onmessage function, do whatever we want with the data, and pass it back! If everything is set up right, when you refresh the page nothing amazing happens, but when you look in the console you will hopefully see the desired output. If you are having trouble or just feel like skipping this step you can download it HERE
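One detail worth calling out about that output: the worker's `(e.data + e.data)` is not numeric addition. `e.data` is the `[1,"Two"]` array we posted, and `+` coerces both arrays to strings, which is easy to verify without a worker at all:

```javascript
// Reproduce the worker's string result in plain JS: under '+', arrays
// coerce to comma-joined strings, so [1,"Two"] + [1,"Two"] is "1,Two1,Two".
var data = [1, "Two"];
var workerResult = 'Result: ' + (data + data);
```

So the console shows 'Result: 1,Two1,Two', proof that the message round-tripped, even if the "math" is nonsense.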

### Implementing and Creating Our First Project!

Now that we have this set up, what is something we could make? Hmmmm, I know, how about Pong! We will get to learn how to use the Web Worker to do all the calculations and just have the main page update the canvas. If we do things right we could perhaps have one worker calculating what is happening and another calculating the output to the canvas. This may not be ideal or even the correct thing to do, but this is open research so I have no shame.

Because we don’t have another person and I don’t feel like going over AI for this tutorial, let’s make a game that we can play with ourselves (ha). So to get things rolling, let’s make these changes to our index.html, main.js and main.css

### Web Workers Step 2

```
@charset "utf-8";
/* Web Workers Tutorial */
html, body {
  min-height: 100%;
  height: 100%;
}

#score {
  position: absolute;
  top: 1em;
  left: 1em;
  font-family: "Lucida Console", Monaco, monospace;
  font-size: 18px;
  font-variant: small-caps;
  opacity: 0.5;
}
```
```
window.onload = function() {
  cvas = document.getElementById('cvas'); //Lets make it Global ^_^
  var context = cvas.getContext('2d');
  var score = document.getElementById('score');
  score.innerHTML = "Score: 0";

  function resize() {
    cvas.setAttribute("width", window.innerWidth);   // width/height attributes are plain numbers, no 'px'
    cvas.setAttribute("height", window.innerHeight);
    drawBall();
    drawPlayer();
  }
  resize();
  window.onresize = resize;

  //Lets just draw our ball and paddle for now.
  function drawBall() {
    var centerX = cvas.width / 2;
    var centerY = cvas.height / 2;
    var radius = cvas.height / 40;
    context.beginPath();
    context.arc(centerX, centerY, radius, 0, 2 * Math.PI, false);
    context.fillStyle = 'blue';
    context.fill();
    context.lineWidth = 1;
    context.strokeStyle = 'blue';
    context.stroke();
  }

  function drawPlayer() {
    var centerX = cvas.width / 2;
    var centerY = cvas.height - 20;
    var width = cvas.width / 10;
    var height = 10;
    context.fillStyle = 'red';
    context.fillRect(centerX - (width * 0.5), centerY, width, height);
  }

  if (window.Worker) {
    var newWorker = new Worker("./js/worker1.js");

    newWorker.postMessage([1, "Two"]);
    console.log('Message posted...');

    newWorker.onmessage = function(e) {
      var result = e.data;
      console.log(result);
    };

  } else {
    document.body.innerHTML = "Lame sauce... no Web Workers...";
  }
};
```

If you’re all set up things should look like this.

Ok, so basically all we did was set up a resize listener and some basic functions to figure out how we are going to draw the objects.

Now that we have some basic things set up, it's time to get to the core structure of our program. You can download step 2 if you feel like skipping to this point or are having trouble.

### The Long Haul… Core Changes

So the first thing on the chopping block: let's get an async loop set up with a throttle on it, so we're not just calculating for no reason between draws. Then move the functions for drawing on the canvas onto the worker and see if it all still works. I'm not sure if this can all happen on the worker, but I have a sneaking suspicion that it will work just fine as long as we make sure we set up the correct scopes.
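
Before we wire anything into the worker, the throttle idea itself can be sketched in isolation. This is a minimal, hypothetical accumulator; `makeStepper` is my own name, not something from the tutorial's code, and the real version will just be the interval loop we set up later.

```javascript
// Minimal fixed-timestep throttle: only run the simulation when a full
// step's worth of time has accumulated, no matter how often we are called.
function makeStepper(hz) {
  var stepMs = 1000 / hz;
  var acc = 0;
  var last = null;
  return function (nowMs) {
    if (last === null) { last = nowMs; } // first call just establishes the clock
    acc += nowMs - last;
    last = nowMs;
    var steps = 0;
    while (acc >= stepMs) { // catch up if we were called late
      acc -= stepMs;
      steps++;
    }
    return steps; // how many simulation ticks to run this call
  };
}

var step = makeStepper(50);   // 50Hz, i.e. one tick every 20ms
console.log(step(0));         // → 0
console.log(step(100));       // → 5 (five 20ms ticks fit in 100ms)
```

The nice property is that drawing can happen as often as the browser likes while the simulation advances at a fixed rate.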

That means we will be editing the worker1.js file.

```
newGame = null;
onmessage = function(e) {
  var type = null;
  if (e.data.length) {
    type = e.data[0]; // first element of the message array is its type
  }
  switch (type) {
    case "init":
      newGame = new pong(e.data[1]);
      console.log('new Game');
      break;
  }
};

pong = function(cvas) {
  this.score = 0;
  this.run = false;
  this._cvas = cvas;
};
```
```
if (window.Worker) {
  var newWorker = new Worker("./js/worker1.js");

  newWorker.postMessage(['init', cvas]);
  newWorker.onmessage = function(e) {
    var result = e.data;
    console.log(result);
  };

} else {
  document.body.innerHTML = "Lame sauce... no Web Workers...";
}
```

If we try to run this we get the error: “DataCloneError: The object could not be cloned.” I believe this is because instead of passing a reference to the object, the browser tries to make a structured clone of it for the worker, which is evidently not permitted with a canvas element (we could pass the imageData as a Buffer or Array, but that is kinda overkill in this situation). So our workarounds will be to try to pass the context of the canvas, or to just have the main script do the canvas manipulations while the worker thread does the calculations. I'm not sure if this is even a correct use for a Web Worker, but we will find out.
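
One practical pattern that follows from this is to never post an entity itself, only a plain serializable snapshot of it. A minimal sketch; the `snapshot` helper and its field names are my own, not part of the tutorial's code:

```javascript
// Structured clone can copy plain data (numbers, strings, arrays, plain
// objects) but not DOM nodes or functions. So we strip an entity down to
// plain data before posting it to the worker.
function snapshot(entity) {
  return {
    id: entity.id,
    pos: entity.pos.slice(),           // copy, so the worker can't see later edits
    velocity: entity.velocity.slice(),
    mass: entity.mass
  };
}

var ball = {
  id: 'b1',
  pos: [0, 0, 0],
  velocity: [0, 0.2, 0],
  mass: 1,
  draw: function (ctx) { /* holds canvas logic, which is not cloneable */ }
};

var msg = snapshot(ball); // safe to hand to postMessage(['add', msg])
console.log(JSON.stringify(msg));
// → {"id":"b1","pos":[0,0,0],"velocity":[0,0.2,0],"mass":1}
```

The `draw` function simply never makes the trip, which is fine: drawing stays on the main thread anyway.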

So I guess we should actually set up the structure for the game on the main thread, so in main.js we add all the constants and containers for the game. As of right now we will do a simple 30Hz interval loop to process what needs to be put onto the canvas. Later we will make this loop more customizable and put in a way to set our FPS. The concept we will be testing is having the physics calculated on the worker, and having it push the updated hits and other properties to the main thread to be processed. Initially we will set the main and worker threads to work at the same frequency, but later we will probably crank the main thread up to 60Hz and see how that affects performance.

```
SCALE = 1;
CENTERX = 0;
CENTERY = 0;

Entity = function(id, pos) { //pos is an array(3) with [x,y,angle];
  this.id = id;
  this.pos = pos;
  this.velocity = [0, 0, 0]; // X,Y,ROTATION
};

Entity.prototype.update = function(state) {
  this.pos = state.pos;
};

Entity.prototype.draw = function(ctx) {
  ctx.fillStyle = 'black';
  ctx.beginPath();
  ctx.arc(this.pos[0] + CENTERX, this.pos[1] + CENTERY, 2, 0, Math.PI * 2, true);
  ctx.closePath();
  ctx.fill();
};

Ball = function(id, pos) {
  Entity.call(this, id, pos);
  this.body = {
    type: 'circle',
  };
};

Ball.prototype = new Entity();
Ball.prototype.constructor = Ball;

Ball.prototype.draw = function(ctx) {
  ctx.fillStyle = 'blue';
  ctx.beginPath();
  ctx.arc(this.pos[0] + CENTERX, this.pos[1] + CENTERY, SCALE, 0, Math.PI * 2, true);
  ctx.closePath();
  ctx.fill();
  Entity.prototype.draw.call(this, ctx);
};

player = function(id, pos) {
  this.body = {
    type: 'box',
    points: [
      [-(10 * SCALE) * 0.5, -SCALE * 0.5], //TL
      [(10 * SCALE) * 0.5, -SCALE * 0.5],  //TR
      [-(10 * SCALE) * 0.5, SCALE * 0.5],  //BL
      [(10 * SCALE) * 0.5, SCALE * 0.5]    //BR
    ]
  };
  Entity.call(this, id, pos);
};
player.prototype = new Entity();
player.prototype.constructor = player;

player.prototype.draw = function(ctx) {
  ctx.fillStyle = 'red';
  ctx.fillRect((this.pos[0] + CENTERX) - ((10 * SCALE) * 0.5),
    (window.innerHeight - 20),
    (10 * SCALE),
    1 * SCALE);
};

pong = {
  ent_stack: [],
  gravity: [0, 0.2],
  _init: null
};

cvas = document.getElementById('cvas'); //Lets make it Global ^_^
var ctx = cvas.getContext('2d');
var score = document.getElementById('score');
score.innerHTML = "Score: 0";
pong.ent_stack.push(new Ball('b1', [0, 0, 0]));
pong.ent_stack.push(new player('player1', [0, window.innerHeight - 20, 0]));

function resize() {
  cvas.setAttribute("width", window.innerWidth);
  cvas.setAttribute("height", window.innerHeight);
  SCALE = cvas.height / 40;
  CENTERX = cvas.width / 2;
  CENTERY = cvas.height / 2;
  reDraw();
}

function reDraw() {
  for (var i = 0; i < pong.ent_stack.length; i++) {
    pong.ent_stack[i].draw(ctx);
  }
}

resize();
window.onresize = resize;

if (window.Worker) {
  console.log("Worker Go!");
  var newWorker = new Worker("./js/worker1.js");

  newWorker.postMessage(['init']);

  /*newWorker.onmessage = function(e) {
    var result = e.data;
    if(result=='update'){

    }
  }*/

  setInterval(function() {
    ctx.clearRect(0, 0, cvas.width, cvas.height);
    reDraw();
  }, 1000 / 30);

} else {
  document.body.innerHTML = "Lame sauce... no Web Workers...";
}
```

Now the breakdown on this is as follows. One, we set up some global variables that determine the size of our entities being drawn on the canvas. We make them global because at any point the user may resize the document, and we want everything to remain proportional. Right now we will exclude the scale from the physics calculations, but later we will have to make sure we apply the scale to the physics as well, so that the velocities, gravity, etc. stay proportionate.

Next we define our basic Entity object and its prototypes. This will be the basic container for whatever other objects we want to render on the screen. We give the Entity object a draw prototype that should give us a center point. If you want to use the center point, make sure that if you're not spawning the entity at the center of the canvas, you translate the context prior to the output; otherwise the dot will not be at the center of the new entity (not important here since we only use it on the ball, which spawns in the center).

Then we define different constructors for the Entity object so that we can call different shapes. This is a very basic prototype/constructor layout and should be fairly straightforward to understand. After we have our constructors ready, we need to define a global container for everything. In this global container, “pong”, we can define some constants as well. Once the DOM has loaded, we create the new entities and push them to a stack on our global container, modify our all-important resize function, and then create a new function to call the draw method on all active entities. You could at this point put an enabled/disabled flag on the entities to toggle them on or off, but we'll save that for later.

If in your interval loop you stick the statement `pong.ent_stack[0].pos[0] += 0.25;`, you will see the ball move, which means we are on the right track.

After some more changes to the main script, we go back to worker1.js; it is here that we start defining the structure for these workers to do physics calculations and post updates back to the main thread. We might as well take advantage of the number of threads available to us, and future-proof a little bit with a navigator property that might not be supported in all browsers, so we have a fallback.

```
// Step 3
engine = null;
onmessage = function(e) {
  var type = null;
  if (e.data.length) {
    type = e.data[0]; // message arrays are [type, ...payload]
  }
  switch (type) {
    case "init":
      engine = new physics(e.data);
      break;
  }
};

physics = function(data) {
  this._core = data[1];   // which core/worker index we are
  this.gravity = data[2]; // gravity vector passed from the main thread
  this.stack = [];
  var parent = this;
  this._init = setInterval(function() { parent._run(); }, 1000 / 30);
};

physics.prototype._run = function() {

};
```

We also need to make some changes to the main.js file to accommodate the thread structure.

```
...
pong = {
  ent_stack: [],
  gravity: [0, 0.2],
  _init: null,
  cores: navigator.hardwareConcurrency || 4,
};
...
if (window.Worker) {

  workers = new Array(pong.cores);
  wID = 0;
  for (c = 0; c < pong.cores; c++) {
    workers[c] = new Worker("./js/worker1.js");
    workers[c].postMessage(['init', c, pong.gravity]);
  }

  /*newWorker.onmessage = function(e) {
    var result = e.data;
    if(result=='update'){
    }
  }*/

  pong._init = setInterval(function() {
    ctx.clearRect(0, 0, cvas.width, cvas.height);
    reDraw();
  }, 1000 / 30);
...
```

When you run the page and look at your console, you should see all the threads sending out information! So now that we have our threads firing and creating new physics instances, it's time to pass some Entities to the not-yet-built physics engine.

### Part 3.2 – Physics B***h!

So how are we gonna handle this? Why don't we just use some other library? Why have you not used jQuery yet? Yeah well… the whole point of this is to expand your and my know-how, and just deploying a library is easy. Plus, how much better are you going to be at your favorite engine if you understand some of its basic components?

Right away, we are only dealing with 2 dimensions, so the natural inclination would be to go with an Array(2) and prototype some functions onto the Array object, but why not tailor things for what we are doing. So instead let's do an Array(3) representing [POSX, POSY, ROTATION]; the same concept can then be used for the velocity vector. We need a few things to happen for the engine to actually work:

1. Check the state of the object; if it is turned off, even while in the stack, ignore calculations on it.
2. Apply gravity to velocities, accounting for an object's mass.
3. Apply any restrictions.
4. Pre-check for hits.
5. If it's going to hit, restrict the object's position to the point of impact and do inertia return calculations.
6. Convert velocities to units of measure of some kind.
7. Send a message back to the main thread.
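
Step 2 and the position update can be sketched as one pure function before we commit them to the worker. This is my own minimal version, assuming the [x, y, rotation] array layout from above and following this tutorial's convention of letting mass scale the pull of gravity (not physically exact, but it matches the calculation code):

```javascript
// One integration tick: apply gravity to the velocity, then the velocity
// to the position. pos and vel are [x, y, rotation] arrays; gravity is [gx, gy].
function integrate(pos, vel, mass, gravity) {
  var newVel = [
    vel[0] + mass * gravity[0],
    vel[1] + mass * gravity[1],
    vel[2]                      // rotation: untouched for now
  ];
  var newPos = [
    pos[0] + newVel[0],
    pos[1] + newVel[1],
    pos[2]
  ];
  return { pos: newPos, vel: newVel };
}

var state = integrate([0, 0, 0], [0, 0, 0], 1, [0, 0.2]);
console.log(state.vel); // → [ 0, 0.2, 0 ]
console.log(state.pos); // → [ 0, 0.2, 0 ]
```

Call it once per tick and the ball accelerates downward, exactly the behavior we want out of the worker loop.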

Now as I was doing this, I got all the way to having the objects pushed to their threads, then I realized there would be no way to actually do a hit test on any object, because simply enough there is no effective way to communicate between threads. Now, there are Shared Workers, but that's a whole other monster we will try to tackle at some other point. So instead of bogging you down with the script that does not work, we will move right on to a single sub-thread model.

```
// Step 3
var engine = null; //Does not need to be global.
onmessage = function(e) {
  var type = null;
  if (e.data.length) {
    type = e.data[0];
  }
  switch (type) {
    case "init":
      engine = new physics(e.data);
      break;
    case "add": // entities pushed over from the main thread
      engine._add(e.data[1]);
      break;
  }
};

physics = function(data) {
  this._core = data[1];
  this.gravity = data[2];
  this.stack = [];
  var parent = this;
  this._init = setInterval(function() { parent._run(); }, 1000 / 30);
};

physics.prototype._run = function() {
  for (var e = 0; e < this.stack.length; e++) {
    if (!this.stack[e].on) { continue; }

    var tempCalc = this._calc(this.stack[e]);

    /*
    var hit = false;
    for(var h = e + 1; h < this.stack.length; h++){
      if(!this.stack[h].on){continue}
      if(hit){return}
      if(this._hitTest(e,h)){

      }else{
        continue
      }
    }
    */
    this._apply(tempCalc, e);
  }
};

physics.prototype._add = function(obj) {
  obj.pos = obj.intPos;
  obj.velocity = obj.intVel;
  this.stack.push(obj);
};

physics.prototype._hitTest = function(a, b) {

  return false;
};

physics.prototype._calc = function(obj) {

  var response = {
    stackID: obj.stackID,
  };
  response.newVel = [
    obj.velocity[0] + (obj.mass * this.gravity[0]), //X
    obj.velocity[1] + (obj.mass * this.gravity[1]), //Y
    obj.velocity[2]                                 // ROTATION
  ];
  response.newPos = [
    obj.pos[0] + response.newVel[0], //X
    obj.pos[1] + response.newVel[1], //Y
    obj.pos[2]                       //ROTATION
  ];

  //console.log(response);
  return response;
};

physics.prototype._apply = function(calc, id) {
  this.stack[id].pos = calc.newPos;
  this.stack[id].velocity = calc.newVel;
  postMessage(['apply', calc]);
};
```

With these changes to the worker script, we have enabled the ability to start calculating basic physics in our scene. To enable it we need to modify the main script. We can get rid of the thread count on the pong object, as we are going with a single sub-thread now, and then we change our script around to use a single worker instead of the four we had set up.

```
...
if (window.Worker) {

  worker = new Worker("./js/worker1.js");
  worker.postMessage(['init', 0, pong.gravity]);

  for (var i = 0; i < pong.ent_stack.length; i++) {
    var obj = pong.ent_stack[i];
    worker.postMessage(['add',
      {
        id: obj.id,
        body: obj.body,
        mass: obj.mass || 1, // entities don't define mass yet; default to 1
        intVel: obj.velocity,
        intPos: obj.pos,
        stackID: i,
        on: true
      }
    ]);
  }

  worker.onmessage = function(e) {
    var result = e.data; // worker posts ['apply', calc]
    if (result[0] == 'apply') {
      var calc = result[1];
      pong.ent_stack[calc.stackID].velocity = calc.newVel;
      pong.ent_stack[calc.stackID].pos = calc.newPos;
    }
  };

  pong._init = setInterval(function() {
    ctx.clearRect(0, 0, cvas.width, cvas.height);
    reDraw();
  }, 1000 / 30);
...
```

If you're following along and have everything set up correctly, when you refresh the page the ball should now drop like gravity is being applied to it. This sets the stage for us to start making some more dynamic effects; the thing we have to consider now is how we are going to handle our collision detection. The simplest model will be a projection and impulse system, where we test whether our object is going to collide, find the position where this happens, and then modify our velocity and position to stop the objects from penetrating. We will keep the model simple, but will include things like restitution and friction.
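
The "bounce" part of that response can be sketched on its own. A minimal version, assuming an axis-aligned surface so the collision normal is simply the Y axis (the function name and coefficient values are mine, not the tutorial's):

```javascript
// Reflect a velocity off a horizontal surface: flip the Y component and
// scale it by the restitution coefficient (1 = perfectly bouncy, 0 = dead
// stop). Friction damps the component parallel to the surface.
function bounce(vel, restitution, friction) {
  return [
    vel[0] * (1 - friction), // tangential: lose a little to friction
    -vel[1] * restitution,   // normal: reverse and damp
    vel[2]
  ];
}

console.log(bounce([2, 5, 0], 0.8, 0.1)); // → [ 1.8, -4, 0 ]
```

A ball falling at 5 units/tick leaves the paddle rising at 4, which is exactly the energy loss a restitution of 0.8 describes.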

### Collisions, Collision, COLLISIONS!

How does one actually do a collision test? Well, in concept it is easy: first you have to establish what kinds of collisions you need to calculate for. Are they simple shapes like rectangles and circles only? Are the axes of the rectangles always the same? What kinds of shapes we need to test will decide our approach. For this model we need to track at least circles and off-axis rectangles. I want to include off-angle rectangles even though the paddle will never change its angle, because there may be a reason to place other objects in the scene that do not sit flat on an axis. Oh man, I just realized how quickly this is becoming a math lesson… my bad. Anyways, to achieve our goals let's first go over some vocabulary.

Also, for these examples I am writing them out for any length of vector, which is useful in normal situations, but with how we are storing our data it might not be the best idea. So I have one of two options: pass more variables that are standard vec2's and numbers instead of our vec3 that carries rotation with it, or redo these functions to only handle 2 units of whatever vector we hand them. The smartest move, I think, is to follow convention: drop the vec3 we were using and keep a separate value for the rotation, so that when we do our vector calculations they are correct.

• ### normalization

Normalizing a vector is scaling it so that its length totals 1. To normalize the vector, we divide each of its components by the length of the vector:
```
function normalize(vec){
  var l = 0;
  for(var i = 0; i < vec.length; i++){
    l += (vec[i] * vec[i]);
  }
  l = Math.sqrt(l);
  for(var i = 0; i < vec.length; i++){
    vec[i] /= l;
  }
  return vec;
}
```
• ### dot product

Think of it as the relative orientation of a and b. With a negative dot product, a and b point away from each other; a positive dot product means they point in roughly the same direction.
```
function dot(a, b){
  var t = 0;
  for(var i = 0; i < a.length; i++){
    t += (a[i] * b[i]);
  }
  return t;
}
```
• ### projection

A formula to find the shortest vector that an object must travel along to get out of the collision, which we can also use to detect whether an object is collided or not.
```
function project(a, b){
  var proj = new Array(a.length);
  var bt = 0;
  for(var i = 0; i < a.length; i++){
    bt += (b[i] * b[i]);
  }
  var d = dot(a, b); // compute once instead of on every loop pass
  for(var i = 0; i < a.length; i++){
    proj[i] = (d / bt) * b[i];
  }

  return proj;
}
```
• ### Perpendicular Product

How to return a vector perpendicular to the input. Every 2D vector has two normals: the right-hand and the left-hand normal. The right-hand normal points to the right of the vector, and the left-hand normal points to the left.
```
function perproduct(vec2){
  var rN = [-vec2[1], vec2[0]]; // right-hand normal (canvas coordinates, y down)
  var lN = [vec2[1], -vec2[0]]; // left-hand normal (unused here, shown for completeness)
  return rN;
}
```

With these basic functions we are going to be able to run a whole array of 2D comparisons, solving our 2D overlap test with a series of one-dimensional tests. Each query tests whether the two shapes overlap along a given axis. If one of the axes we test fails, then we know the objects are not intersecting, and we can stop our test to save on overhead. If we run the tests and find that the objects overlap along all of the possible axes, they are definitely overlapping each other. From this point we need to figure out the projection vector, which we will use to separate the objects.

The last step is to find which axis has the smallest amount of overlap between the two objects. The “push” we are looking for, the projection vector, has the same direction as that axis, with its length equal to the size of the overlap. To accomplish this we need to know the position of the first object and all of its axes, along with the same for our target object. But wait, what about circles, or triangles, or… ok fine, let's do some drawings.

In 2D games, to represent moving objects we make an axis-aligned bounding box, or AABB. An AABB is defined by a position and two vectors xw and yw that represent the half-widths of the object. The concept is the same as testing the radius of a circle, but these are aligned to the world axes and are defined in a specific direction and distance.

Well, that all sounds cool… but how does it work? I think maybe the simplest way will be to make a real quick mock-up canvas for you. After the first example, we will go over more shapes and the methods for testing them. I know this was supposed to be a Web Worker tutorial, but for real though, we need something decently intensive to even have a necessity for them. Collision testing I think meets that bill, so please stay with me or skip ahead.

In the next example, what we will see happen when you click run is that the red box should move to the right, and once it collides with the other box we'll start it all over.
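
The test driving that demo fits in a few lines. A sketch, assuming the AABB form described above, a center position plus half-widths, with `x`, `y`, `xw` and `yw` as my own field names:

```javascript
// Two AABBs overlap iff they overlap on BOTH world axes. On each axis the
// distance between centers must be less than the sum of the half-widths.
function aabbOverlap(a, b) {
  return Math.abs(a.x - b.x) < a.xw + b.xw &&
         Math.abs(a.y - b.y) < a.yw + b.yw;
}

var red  = { x: 0,  y: 0, xw: 5, yw: 5 };
var blue = { x: 12, y: 0, xw: 5, yw: 5 };
console.log(aabbOverlap(red, blue)); // → false (12 apart, 5 + 5 of box)
red.x = 3;                           // red has moved to the right
console.log(aabbOverlap(red, blue)); // → true  (9 apart, inside 10)
```

This is the whole test for the two-box demo; everything after this point is about shapes whose faces are not world-aligned.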

This is a SUPER simplified model, which works only if the two objects' faces are perfectly aligned, and that's kinda weak for what we are looking to do. So what happens when it's a polygonal shape or an off-angle square? Back to the drawing board. This is where the method of separating axes comes into play. If you look at our diagram on the right you will see two polygons. Both of these shapes are convex. A convex shape is any polygon defined by a set of points such that, if you were to draw a straight line from any point on the polygon to another, the line would never travel outside of the shape. Anyways, these two convex polygons are in a non-intersecting state.

The value for the separation is positive, so we know the shapes are not touching; if it were negative the shapes would be overlapping, and if it were 0 the shapes would be just touching. With this kind of hit detection there are three kinds of possible contact: edge to edge, vertex to edge, and vertex to vertex. If we draw a line between the two polygons, pretend the separation line continues to infinity, and draw a line perpendicular to it, that is our separation axis.
Another thing we need to start identifying is the “normal” angle of an edge. This is not to be confused with our normalization method discussed earlier; it is a way to establish the heading of our edge. When we define our shapes, we need to follow a convention that assumes the shape's points are ordered in a clockwise direction, so that when we loop through our edges the output stays consistent no matter the shape. You could process the shapes counter-clockwise, but this would effectively reverse your edge normals.
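
That winding convention can be sketched directly. A minimal version, assuming canvas coordinates (y grows downward) and clockwise point order as seen on screen; `edgeNormals` is my own helper name, not part of the tutorial's code:

```javascript
// Edge normals of a convex polygon, assuming canvas coordinates (y down)
// and points listed clockwise as seen on screen. Under that convention,
// (dy, -dx) is the outward-facing perpendicular of each edge.
function edgeNormals(points) {
  var normals = [];
  for (var i = 0; i < points.length; i++) {
    var a = points[i];
    var b = points[(i + 1) % points.length]; // wrap last edge back to start
    var dx = b[0] - a[0], dy = b[1] - a[1];
    var len = Math.sqrt(dx * dx + dy * dy);
    normals.push([dy / len + 0, -dx / len + 0]); // + 0 turns negative zero into plain 0
  }
  return normals;
}

// Unit square, clockwise on screen: TL, TR, BR, BL.
var square = [[0, 0], [1, 0], [1, 1], [0, 1]];
console.log(edgeNormals(square));
// → [ [ 0, -1 ], [ 1, 0 ], [ 0, 1 ], [ -1, 0 ] ]
```

Reverse the point order and every normal flips sign, which is exactly the counter-clockwise caveat above.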

I think the easiest way to visualize all this is to make some more diagrams… let's see if I can mock up some canvas elements to demonstrate.

If you look at the diagram on the right, you will see a Convex Polygon with its center point represented by the black dot. The red dots are the points of the polygon, the green are the edges and the blue arrows are the normal vector or perpendicular axis to the face. If you drag one of the polygons you will see that the normal for that face changes.

What we need to establish now are the most extreme points on the polygon. We do this by simply looping through the points of the polygon to find the most extreme ones, then projecting the max and min points onto the perpendicular normal lines. If all of the projections overlap, we know we have a contact.

To make this simpler to understand, let's extend the projection axis for each normal out farther. If you were to select any of those lines that extend from the face of the edge, you would look at both the current polygon and the opposing one and figure out which points on those polygons sit at the min and max of that projection line.

In the case of projection line a1, we see that the most extreme points of the polygon that fall on that projection axis are the same as the edge itself; both points that make up that edge effectively have the same value when projected onto that line, so our min value for the projection of polygon A onto axis a1 is equal to the projection value of point 0 or 1 (remember we go clockwise, and when I constructed these I started at the top left). We then look for the max using the same method, and in this instance the max value of polygon A is the projection value of the point between edges a2 and a3, or point 3. We now know where on this axis polygon A is located, so we do the same for polygon B (still looking at the a1 axis). When we follow the same process we find that the min of B is point 1 and the max of B is point 3. If you remember the diagram at the top, this is what we were representing. You can in your mind draw lines from these points perpendicular to the projection axis and easily see whether the two polygons overlap in that instance. If at any point we discover that any of the axes do not overlap, then we do not have to continue our calculations and can interpolate the response.
Now I could continue to beat this concept into the ground and do a bunch of implementation to show you how to calculate a whole gamut of physics stuff, but this is supposed to be a Web Worker tutorial, so let's just get back to doing that.
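
Still, the pieces above, edge normals, projection, and per-axis min/max, combine into a compact separating-axis test. Here is a sketch under the same convex-polygon-as-point-list assumptions; all helper names are mine:

```javascript
// Separating Axis Theorem for convex polygons: project both polygons onto
// the perpendicular of every edge; if any pair of projection intervals has
// a gap, we found a separating axis and the shapes do not intersect.
function projectOntoAxis(points, axis) {
  var min = Infinity, max = -Infinity;
  for (var i = 0; i < points.length; i++) {
    var d = points[i][0] * axis[0] + points[i][1] * axis[1]; // dot product
    if (d < min) min = d;
    if (d > max) max = d;
  }
  return [min, max]; // the polygon's "shadow" on this axis
}

function axesOf(points) {
  var axes = [];
  for (var i = 0; i < points.length; i++) {
    var b = points[(i + 1) % points.length];
    var dx = b[0] - points[i][0], dy = b[1] - points[i][1];
    axes.push([dy, -dx]); // edge perpendicular (length doesn't matter here)
  }
  return axes;
}

function satOverlap(polyA, polyB) {
  var axes = axesOf(polyA).concat(axesOf(polyB));
  for (var i = 0; i < axes.length; i++) {
    var a = projectOntoAxis(polyA, axes[i]);
    var b = projectOntoAxis(polyB, axes[i]);
    if (a[1] < b[0] || b[1] < a[0]) return false; // gap found: separated
  }
  return true; // overlapped on every axis
}

var sq1 = [[0, 0], [2, 0], [2, 2], [0, 2]];
var sq2 = [[3, 0], [5, 0], [5, 2], [3, 2]];
console.log(satOverlap(sq1, sq2)); // → false (gap along the x direction)
console.log(satOverlap(sq1, [[1, 1], [3, 1], [3, 3], [1, 3]])); // → true
```

Finding the smallest-overlap axis for the push-out vector is a small extension: track the minimum overlap while looping instead of returning early.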

### Back to the Game.

Ok, I know I was talking all sorts of good stuff about including curved shapes and blah blah… but this is starting to take up way too much time and I have to get back to some other projects. That means it's time to wrap this up.

We are going to just worry about making the ball bounce around, hit the paddle, and stay on the stage, getting some controls working, and outputting a score. After that I'll leave it up to you to extend these ideas and make something really cool!

First, a little bit of restructuring is necessary to make our object constructors a little less redundant and their arguments a little more flexible. Then we will place the player in the scene and attach some kind of controller to allow the user to have input. Then we will use the same constructor to make some walls, but they will only be there as a level constraint. After we get the walls and the paddle in place, we will script a simple version of the hit testing and see if we can get a “bounce” back off the walls and the paddle. After all that, we will add the ball back into the stack and make sure that the hit detection identifies each shape's body and runs the appropriate tests.

# To be Continued…

##### References:

MethodOfSeparatingAxes.pdf