Stage3D / AGAL from scratch. Part VII – Let There Be Light

Let There Be Light

So, we displayed a lot of cubes and triangles together, and we also created a few controls to play around with the 3D Scene. But the very essence of 3D content is light.

For this tutorial, I’ve included a bunch of classes I am currently working on, not as a 3D engine, but as a small toolbox for future experiments. You will find the following “W.I.P.” classes in the given source code :

* Geometry : Stores a geometry and its indexes. Has shortcuts to declare square faces instead of triangles. Can create and upload the buffers for you. Later, it will be able to generate the face normals (more about that later).

* Cube : A simple class extending Geometry, creating a cube made of 24 points so that the future face normals act as intended.

* ColorMaterial : Right now a simple RGB value object, but it should contain the fragment shader soon.

* ArcBallCamera : Not something new, but completely revamped. It now really moves on a circle and uses the pointAt method to target the origin. This new method makes the drag controller a little bit smarter (dragging to the bottom will only make the object rotate as if the screen were the X axis).

When I announced this tutorial a few weeks ago, I compared face normals (actual normals in 3D language) to normalized vertices.

You will find in the Geometry class a method called “computeNormals” that will give you the former, the ones we want. This method is still a work in progress : right now, the normal can be the opposite of the wanted one if the face is drawn counter-clockwise.

I will explain in another article how you can generate basic normals for your models, but keep in mind that this data should ideally be computed by your 3D designer, because it can “smooth” edges for low-poly meshes.

So anyway, by calling the computeNormals method, we get a small vector perpendicular to each face (each triangle).
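If you are curious, here is a minimal sketch of the idea behind such a method, using the cross product of two edges of the triangle (the helper name is hypothetical; the real implementation lives in the Geometry class) :

import flash.geom.Vector3D;

function faceNormal(a:Vector3D, b:Vector3D, c:Vector3D):Vector3D {
	// two edges of the triangle
	var ab:Vector3D = b.subtract(a);
	var ac:Vector3D = c.subtract(a);
	// the cross product is perpendicular to both edges, hence to the face
	var n:Vector3D = ab.crossProduct(ac);
	n.normalize(); // we only care about the direction
	return n;
}

Note that swapping b and c flips the result, which is exactly the winding issue mentioned above.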

The Lambertian Factor

The first light we will compute is what we call the diffuse light. The amount of light diffused by a surface depends on the angle between that surface and the light. This is called the Lambertian factor, or Lambertian reflectance. Quoting Wikipedia, “The reflection is calculated by taking the dot product of the surface’s normal vector, and a normalized light-direction vector, pointing from the surface to the light source.”

The dot product is an operation we can perform very simply in AGAL using the opcode dp3, which stands for Dot Product 3, 3 being the number of components (here x, y and z).

Just a word about the dot product. The dot product, or scalar product, takes two vectors and returns a single number. The only thing you need to remember is this :

  • If two vectors point in the “same” direction, the dot product will be a positive number.
  • If the vectors are perpendicular to each other, the dot product will be equal to zero.
  • If the vectors are facing each other, the dot product will be a negative number.
Because the dot product also depends on the lengths of the vectors, we will mostly use it with normalized vectors, giving you a result between -1 and 1, which is very handy, especially in light computation.
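As a quick illustration with Flash’s built-in Vector3D class (a minimal sketch you can paste anywhere) :

import flash.geom.Vector3D;

var up:Vector3D = new Vector3D(0, 1, 0);
var same:Vector3D = new Vector3D(0, 1, 0);  // same direction
var side:Vector3D = new Vector3D(1, 0, 0);  // perpendicular
var down:Vector3D = new Vector3D(0, -1, 0); // opposite

trace(up.dotProduct(same)); // 1  : same direction
trace(up.dotProduct(side)); // 0  : perpendicular
trace(up.dotProduct(down)); // -1 : facing each other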

Alright, let’s now code this. First of all, download the following source code.

The LetThereBeLight class is rather simple. On context creation, I simply get an ArcBallCamera instance, a projection matrix, a model matrix (that will be added to the Geometry class later), and a Cube. The Cube instance will receive a ColorMaterial (not really relevant right now) and create the buffers for me :

geometry = new Cube(50);
geometry.setMaterial(new ColorMaterial(0xFF00FF));
geometry.createBuffers(context);

This is simple stuff for you now, so let’s move on to the actual shader.

The Shader Constants

As we saw, the Lambertian factor requires the light direction and the surface normal to be calculated. The surface normals are already stored in the vertexBuffer, so we still need the light direction. But we also need a bunch of other values :

  • The Light Color. Here I chose a plain white.
  • The Ambient Light. The ambient light is the minimum amount of light a surface can receive. It’s a simple technique to simulate the fact that, in the real world, light is reflected so many times that even when an object’s side is not under the light, it’s still visible and doesn’t turn completely black.
  • The Light Direction. In this example, the light will always come from the camera, meaning that we will have the impression of moving the cube under the light rather than moving around it, but feel free to try other values.

All of this data will be stored in shader constants, so here we go :

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0, Vector.<Number>([0,0,0,0]));
// fc0, for clamping negative values to zero

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 1, Vector.<Number>([0.1,0.1,0.1,0]));
// fc1, ambient lighting (10% of full intensity)

var p:Vector3D = camera.position;
p.negate();
p.normalize();

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 2, Vector.<Number>([p.x,p.y,p.z,1]));
// fc2, light direction

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 3, Vector.<Number>([1,1,1,1]));
// fc3, light color

You may have noticed that all those constants, even though they are mostly vectors, directions and positions, are FRAGMENT constants, since we have no use for them in the vertex shader. Looking at the source, you will see that the color of the cube (here a nice pinkish color) is uploaded as a constant too (fc4). We saw that already.

OK, so now that everything is in place, we can have a look at the shader’s AGAL code.

AGAL Time

What we need to do, according to the Lambertian formula :

  1. Calculate the Lambertian factor using a dot product between the normal (v1) and the light direction (fc2).
  2. Negate the result. We do this because the Lambertian formula uses the light direction from the surface to the light source, so you can either negate the light direction vector, or negate the dot product result.
  3. Clamp any result below 0 : if the angle between the light and the surface normal is greater than 90°, the dot product will be negative. This could cause unexpected results when computing the output color, so we just set it to 0 (no light).
  4. Multiply the fragment color by the light amount. For a light amount equal to 0, the surface will be black; for a light amount equal to 1, the surface will have its regular color.
  5. Multiply the resulting color by the light color. Your red cube might look a little more purple if your light is blue.
  6. Add the ambient light. This way, every black surface will become a little brighter.

Here is the corresponding AGAL code :

code = ""+
 "dp3 ft1, fc2, v1 n"+ // dot the transformed normal (v1) with light direction fc2 -> This is the Lamberian Factor
 "neg ft1, ft1 n"+ // Get the "opposite" vector. We could also have uploaded the opposite of the light direction to avoid this step
 "max ft1, ft1, fc0 n"+ // clamp any negative values to 0 // ft1 = lamberian factor
 
 "mul ft2, fc4, ft1 n"+ //multiply fragment color (fc4) by light amount (ft1).
 "mul ft2, ft2, fc3 n"+ //multiply fragment color (ft2) by light color (fc3).
"add oc, ft2, fc1"; //add ambient light and output the color

UPDATE : Thanks to the help of Jean Marc, I discovered the sat opcode that one can use to clamp any value to the range [0,1]. So I should just replace the “max” line with this one :

" sat ft1, ft1 n"+

which allows me to save a constant, so I should also get rid of fc0.

Also, you now know that when values are copied to the varying registers (v0, v1), they are interpolated. That behavior was demonstrated by the color slowly fading between two points in the previous tutorials. Well, as Jean Marc stated, once interpolated, the normals may not be “normalized” anymore, so I should normalize my normals (duh !) in the fragment shader before using them. Thanks Jean Marc !
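Putting both remarks together, here is what the updated fragment shader could look like (a sketch, using the same register layout as above, with fc0 no longer needed; nrm only writes the x, y and z components, hence the destination mask) :

code = ""+
 "nrm ft0.xyz, v1 \n"+  // re-normalize the interpolated normal
 "dp3 ft1, fc2, ft0 \n"+ // Lambertian factor, using the normalized normal
 "neg ft1, ft1 \n"+      // flip it toward the light source
 "sat ft1, ft1 \n"+      // clamp to [0,1] with sat instead of max
 "mul ft2, fc4, ft1 \n"+ // fragment color * light amount
 "mul ft2, ft2, fc3 \n"+ // * light color
 "add oc, ft2, fc1";     // + ambient light, then output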

Compile and run : here it is, your first directional light !

For the posted demo, I added two options that are not in the sources : the first checkbox fixes the light at its current position so you can rotate the cube and see the effect of ambient light, and the second one switches the normals to normalized vertices (see the first two schemes).

[Embedded Flash demo]

As always, have fun with the sources, and tell me what you think ! If you need more explanations or anything, just feel free to ask.

See you !

Watercolor effect

On Friday I found this amazing work from Stamen where they convert, in real time, an OpenStreetMap map into a wonderful watercolor-like drawing. Check it out here !

I really love the result, it’s absolutely gorgeous. Lucky me, they give a few insights into the whole process on their blog : http://content.stamen.com/watercolor_process

So I decided to recreate and benchmark the same effect in Flash. No Stage3D this time, only bitmap manipulation.

First, the demo :

[Image : paint_on header]

The first slider changes the threshold sensitivity.
The second slider changes the Perlin noise alpha.
Use the checkboxes to disable the shadows or to view the generated mask.

Right now the demo is a bit heavy. Almost no optimization was done, and I wonder if some of the computation could be done using Pixel Bender.

You can find every step’s image on the Stamen blog post, so I won’t detail them here, but here is the effect code. Feel free to go back and forth between the two blogs to see the filter in action step by step.

private function filterColor(colorToFilter:uint, textureToApply:Bitmap, sensitivity:int = 90):void {
	var msk:Bitmap = new Bitmap(new BitmapData(_mask.width, _mask.height, true, 0xFF000000));
	// _mask is a generated bitmap we get using a background color, the textfield, and the two vector assets.
	// You can see it by selecting the "show mask" checkbox

	msk.bitmapData.lock();
	msk.bitmapData.threshold(_mask.bitmapData, _mask.getBounds(this), new Point(0,0), "==", colorToFilter, 0xFFFFFFFF, 0xFFFFFFFF, false);
	// the first threshold separates the given color, for instance pink for the text

	msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(4.5, 4.5, 2));
	msk.bitmapData.draw(_noise, null, new ColorTransform(.5, .5, .5, _slAlpha.value), BlendMode.NORMAL, null, true);
	// blur, then apply a "noise". _noise is a simple Perlin noise bitmap generated on app initialisation. We use the same one for every layer

	msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0,0), ">", sensitivity, 0xFFFFFFFF, 0x000000FF);
	// note : the comparison operator was lost when this post was published; ">" matches the behavior described below.
	// This threshold gives us a black and white mask which is a bit deformed by the noise and the blur filter.
	// The higher the sensitivity (which is actually just the color limit of the threshold, from 0 to 255),
	// the more the mask shrinks, leaving some white space between layers

	msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(2, 2, 3));
	msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0,0), "<=", 0x66, 0xFF000000, 0x000000FF);
	// new blur / threshold operation to round the previous mask a little

	msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(1.2, 1.2, 1));
	msk.bitmapData.unlock();
	// small blur to antialias the mask

	if(_useShadow.selected){
		var shadow:BitmapData = msk.bitmapData.clone();
		shadow.applyFilter(shadow, _mask.getBounds(this), new Point(0,0), new BlurFilter(5, 5, 3));
		shadow.copyChannel(msk.bitmapData, _mask.getBounds(this), new Point(0,0), BitmapDataChannel.RED, BitmapDataChannel.ALPHA);
	}
	// the inner shadow is just the same mask blurred again, then cut into by copying the unblurred mask's red channel into the blurred mask's alpha channel

	var bmp:Bitmap = new Bitmap();
	bmp.bitmapData = textureToApply.bitmapData.clone();

	bmp.bitmapData.copyChannel(msk.bitmapData, _mask.getBounds(this), new Point(0,0), BitmapDataChannel.RED, BitmapDataChannel.ALPHA);
	// copy the mask's red channel (could have been green or blue since we are working in greyscale) into the texture's alpha channel

	if(_useShadow.selected) bmp.bitmapData.draw(shadow, null, new ColorTransform(1, 1, 1, .4), BlendMode.MULTIPLY, null, true);
	// eventually draw the shadow bitmap onto the texture

	_container.addChild(bmp);
}

That’s it ! Not as beautiful as the Stamen work, but right now I’m satisfied with the result.

I’m not giving the whole code since it’s embedded into the Agency Framework, so I would have to upload a lot of classes for a single effect, but you can try it by yourself really easily.

Credit goes to Stamen for the idea, and to Stamen again for those wonderful textures I used.

Stage3D / AGAL from scratch. Part VI – Organise your Matrices

Organise your Matrices

In previous articles we used some matrices to modify the rendering of a triangle : rotations, scales, translations. We also learned to use a projection matrix to render the depth effect in the clipspace projection. And we saw that we upload the matrix as a vertex constant, and use it with the “m44” AGAL opcode.

Matrix operations aren’t commutative, meaning that if you scale first, then rotate, it’s not the same thing as if you rotate, then scale. So you will have to organize your matrices in a certain order to get things done smooth and easy. Follow the guide.
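You can check this for yourself with two Matrix3D objects (a quick sketch; transformVector applies the matrix to a point) :

import flash.geom.Matrix3D;
import flash.geom.Vector3D;

// scale then rotate...
var m1:Matrix3D = new Matrix3D();
m1.appendScale(2, 1, 1);
m1.appendRotation(90, Vector3D.Z_AXIS);

// ...is not the same as rotate then scale
var m2:Matrix3D = new Matrix3D();
m2.appendRotation(90, Vector3D.Z_AXIS);
m2.appendScale(2, 1, 1);

var p:Vector3D = new Vector3D(1, 0, 0);
trace(m1.transformVector(p)); // roughly (0, 2, 0)
trace(m2.transformVector(p)); // roughly (0, 1, 0)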

From cameras to matrices

First of all, download the following code example. It’s made out of 3 classes :

  • The article example bootstrap
  • A simple Cube class, that will just create a colored cube vertex and index buffer, and store a matrix for its position.
  • An ArcBallCamera class that you can use and share for your experiments. Very useful to get a quick way of “browsing” your scene around the origin point.

The Cube class

Just a quick word about the Cube class, since you should be able to write it by yourself now : it is not “clean” or “optimised” at all, and I did it only to make the main code more readable.

The Cube class doesn’t even have a “render” function. When you instantiate a Cube, it will create its vertexBuffer and indexBuffer, and upload the simplest data ever. This cube is made out of 8 vertices, which is why the colors are blending at the corners and why you don’t get a plain color per face. The Cube also creates the simple “3 lines” shader you need to have some rendering, and uploads it. That’s it.

The ArcBallCamera class

The ArcBallCamera is a camera that rotates around the origin point. When I first tried to build it, I thought I had to look for geometry formulas, about placing a point onto a 3D sphere or something. Actually, it’s a lot simpler.

Your matrices modify the “world”, not the camera

It sounds stupid to say it, but it is something you have to keep in mind. For instance, if you want your camera to slowly move away from your scene, you will have to increase the world’s z position, because you are actually “pushing” the world away from your clipspace.

Keep that in mind, and remember that matrix operations are not commutative. Both approaches would work, but the second one is really simple to implement, for the same result : to make your arcball camera, just rotate the “world”, then “push” it away. That’s it !

The rest of the class is pretty simple : on each EnterFrame event, the class applies some rotation, then some translation, to a Matrix3D, according to the mouse position and mouseWheel actions.
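The heart of the class boils down to something like this (a sketch, assuming hypothetical angleX / angleY / distance members updated by the mouse; see the ArcBallCamera class in the sources for the real code) :

matrix.identity();
matrix.appendRotation(angleY, Vector3D.Y_AXIS); // rotate the world...
matrix.appendRotation(angleX, Vector3D.X_AXIS);
matrix.appendTranslation(0, 0, distance);       // ...then push it away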

The ModelViewProjection matrix

OK, so we have a matrix that is our camera, one for the projection, and one for the cube. Great, but now what ?

The final matrix used for the rendering is often named the modelViewProjection matrix, for a very simple reason : you have to append your matrices in the following order :

  1. The Model Matrix : your model being the mesh you are currently drawing
  2. The View Matrix : the view being your “camera”, somehow
  3. The Projection Matrix : being the “lens” in some 3D engines, the projection always comes last as far as I know.

Following this order will give you very intelligible results.

Head over to the OrganizeYourMatrices class. Notice that when the context is created, I instantiate a single cube, a camera, and the projection matrix we will use later. Go on to the render function.

Rendering several cubes with only one

To illustrate both that following the previous matrix order gives you the wanted result, and that you can draw the same vertexBuffer several times, I will keep my single cube and render four of them around the origin.

// render second cube
cube.moveTo(1.1, -1.1, 0);
renderCube();

// render third cube
cube.moveTo(-1.1, 1.1, 0);
renderCube();

// render fourth cube
cube.moveTo(1.1, 1.1, 0);
renderCube();

This code isn’t the cleanest I’ve made, but at least it is easy to understand. The only cube we have can be “moved” to 4 different positions and drawn onto the screen using the renderCube method. Go ahead, that is where the magic happens.

/**
 * Render the cube according to its current parameters ( = modelMatrix)
 */
private function renderCube():void {
	modelViewProjection = new Matrix3D();
	modelViewProjection.append(cube.modelMatrix);   // MODEL
	modelViewProjection.append(camera.matrix);      // VIEW...
	modelViewProjection.append(projectionMatrix);   // PROJECTION !

	// program
	context.setProgram(cube.program);

	// vertices
	context.setVertexBufferAt(0, cube.vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // x, y, z
	context.setVertexBufferAt(1, cube.vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // r, g, b

	// constants
	context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, modelViewProjection, true);

	// render
	context.drawTriangles(cube.indexBuffer);
}

Each time I want to draw the cube, I start by recreating a modelViewProjection matrix. I could have instantiated it somewhere else and only reset it using modelViewProjection.identity(), which would have been better, but anyway, the result is the same.

First, append the modelMatrix of the cube. This matrix contains the translation parameters we set using cube.moveTo(x, y, z). Then append the camera’s matrix, and finish with the projection.

The rest of the renderCube method is just classic Stage3D stuff : declaring your current program, and buffers, and drawing triangles.

The reason you can call the drawTriangles function several times (in this case, 4) and still get the complete scene is that the drawTriangles function only renders your mesh into the backbuffer. So the last thing you need to do in your rendering method is to present the backbuffer onto the screen.
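Put together, the whole render pass could look like this (a sketch; the first cube position is an assumption, the three others are those used above) :

private function render(event:Event):void {
	context.clear(1, 1, 1, 1);   // clear the backbuffer

	cube.moveTo(-1.1, -1.1, 0);  // first cube
	renderCube();
	cube.moveTo(1.1, -1.1, 0);   // second cube
	renderCube();
	cube.moveTo(-1.1, 1.1, 0);   // third cube
	renderCube();
	cube.moveTo(1.1, 1.1, 0);    // fourth cube
	renderCube();

	context.present();           // present the backbuffer once, at the end
}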

Now you should get something like this

[Embedded Flash demo]

Append and Prepend

There are some cases where it is difficult to use this order because of implementation details. Fortunately, there is a way to add a transformation at the top of the operations stack : prepend.

Prepend comes in different flavors : prepend a matrix, prependTranslation, prependRotation and so on.

To understand what prepend does, just look at the two following snippets : they both do the exact same thing.

modelViewProjection = new Matrix3D();
modelViewProjection.append(cube.modelMatrix);   // MODEL
modelViewProjection.append(camera.matrix);      // VIEW...
modelViewProjection.append(projectionMatrix);   // PROJECTION !

// is equivalent to :
modelViewProjection = new Matrix3D();
modelViewProjection.append(camera.matrix);      // VIEW...
modelViewProjection.append(projectionMatrix);   // PROJECTION !
modelViewProjection.prepend(cube.modelMatrix);  // PREPEND MODEL

That’s all for today. I hope you enjoyed this, as always, and that it will be useful to you. Don’t hesitate to use, modify or share the ArcBallCamera class, since it’s a very simple snippet of code.

As always, feedback is appreciated !

What I am working on right now

While working on some more advanced “camera” effects, I came up with the desire to create a simple sound spectrum visualizer in Stage3D. Not the idea of the year, but right now I am quite happy with it.

This is still a work in progress, so I hope it will evolve a lot more in the near future, as I’d like to come up with a more finished “product”. Consider this a sneak peek.

Right now, I am using the FrequencyAnalyser from Ben Stucki.
The music is from a talented French composer, Opti, who gave me permission to publish my work using his track. Thank you man !

You can change the colors with the top left button, which uses the Adobe Kuler API to get some palettes, and drag the camera a little bit with the mouse. If you want to try another track, you can press the spacebar to stop the music, then launch some sound in another browser tab, like a YouTube video or Google Music. You may get an error from the Flash Virtual Machine; just ignore it for now and you are good.

I will surely explain and share the code later, but right now the code is really too messy and I just wanted to share something.

Have fun and please give me any feedback you have !

Stage3D / AGAL from scratch. Part IV – Adding some depth

Understanding perspective

I thought this was 3D, so why can’t I use the z coordinate ?

Hopefully, you guys have read my previous article and played with the example class, or with your own. You may have noticed that changing the z coordinate didn’t change anything. Let me explain why.

Your 3D scene is rendered in 2D, in an area called the clipspace. The clipspace is basically your screen, and every point that is behind your screen needs to be projected into the clipspace so it can be drawn.

I said earlier that the x and y coordinates were going from -1 to 1. Well, that’s not exactly true. It’s actually the clipspace coordinates that go from -1 to 1. Imagine that the clipspace had the same width and height as your screen, or your browser window : you would have to compute the coordinates for every screen size, and for every size change ! Having a normalized clipspace is what allows us to forget about screen sizes and resolutions and focus on our scene coordinates.
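For instance, if you ever need to convert a pixel position into clipspace coordinates, the mapping is straightforward (a sketch; xPixel, yPixel, stageWidth and stageHeight are hypothetical names for your input position and viewport size) :

var xClip:Number = (xPixel / stageWidth) * 2 - 1;
var yClip:Number = 1 - (yPixel / stageHeight) * 2; // screen y goes down, clipspace y goes up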

Now, by default and without any other instruction, your graphic card maps your vertices onto the clipspace without any sense of perspective. That is also why, if you have a point out of the clipspace coordinates, like x=2, you can’t see it.

Since we were only moving each vertex coordinate to the output point, here is the equation for any projected point :

// mov op, va0

xP = x
yP = y

In the following scheme, 3D object 1 and 3D object 2 have the same projected point, since the Z coordinate isn’t part of the equation.
<img title="without perspective" src="https://i2.wp.com/artofnorbz.com/norbz viagra pharmacie andorre.net/blog/wp-content/uploads/2012/01/without_perspective1.gif?resize=550%2C550″ alt=”” data-recalc-dims=”1″ />

The perspective divide

To be able to render 3D on a 2D plane (your screen), we need perspective. Perspective is what makes the borders of a road look like they converge when they are actually parallel lines. The idea is actually rather simple : the farther a point is, the closer to the middle it appears. Here is the equation :

xP = K1 * x / z
yP = K2 * y / z

With K1 and K2 being constants depending on things such as the field of view, or the aspect ratio of your clipspace. This is the perspective divide.
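For the record, a common choice for those constants is derived from the field of view and the aspect ratio (a sketch matching the usual perspective projection; fov is assumed to be in radians, aspect = width / height) :

var K2:Number = 1 / Math.tan(fov / 2); // derived from the field of view
var K1:Number = K2 / aspect;           // corrected by the aspect ratio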

You can notice that if you divide by z, then z can’t be equal to 0. We will talk about this later.

When using the perspective divide, here is the result of the projection of 3D object 1 and 3D object 2 from the previous scheme.

Using Matrices

When we coded our first vertex shader, we just copied each vertex coordinate into the Output Point. You now learned that computing the output point actually defines the position of the projection of a vertex into the clipspace.

To be able to translate, rotate, or scale an object, we won’t be modifying all of its vertices. Why ?

  1. It would be really complex to compute the new position of every vertex when rotating by 45° on the Y axis, then scaling it up 2.37 times.
  2. We would need to upload the coordinates into the vertex buffer again, which would defeat the whole purpose of using Stage3D. Remember, if the graphic card can render triangles so fast, it’s because everything is ready in the video RAM.

Instead of uploading new coordinates to the V-Ram, we will compute the output point using a matrix. This matrix will be uploaded as a constant on every frame. Constants are very fast to update in the V-Ram, unlike vertex buffers or textures.

Updating the HelloTriangle example

Now, you can either open your HelloTriangle project, or download the following one. I recommend you take your last project if you already have it, since there are only a few lines to add, but if you prefer to take my sources, you should be looking for the HelloMatrix class.

First thing we need to do is to create a Matrix3D, and upload it as a vertex constant to the graphic card. Go to the render function, on line 194. The HelloTriangle class should already have a Matrix3D class member declared, called m. So just instantiate it, append a translation to it either on x or y, and use the context.setProgramConstantsFromMatrix method to upload it to the GPU. Here is what I have :

// create a matrix3D, apply a translation on it, then set it as a vertex constant
m = new Matrix3D();
m.appendTranslation(Math.sin(getTimer()/500)*.5, 0, 0);
context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, m, true); // 0 means that we will retrieve the matrix under vc0

I chose to create a translation according to a timer so you can see the triangle moving on the x axis.

At this point, if you compile your class, you won’t see any change. This is because we need to instruct our GPU how to use the Matrix, and this will be done in the Vertex Shader.

Updating the Vertex Shader

Obviously, updating the Vertex Shader will happen in the AGAL code. Time for us to learn how to invoke constants.

  • vc : Vertex Constant. Called by their first register (e.g. vc0). Be careful, matrix constants take 4 registers, so if you upload a matrix to vc0, the next constant must be set at vc4.
  • fc : Fragment Constant. Same thing as above, but for fragment shaders.

Locate your Vertex Shader AGAL code, it should be around line 161. What we want to do is compute the position of each vertex using the Matrix3D we stored as a constant, instead of just copying the x and y coordinates to the clipspace.

To perform a 4×4 Matrix operation on a Vertex, you need to use the opcode m44 using this syntax

m44 destination, vertex, matrix

Where destination is the output point, the vertex is stored in Vertex Attribute 0, and the matrix in Vertex Constant 0. Got it ? Here is what you should get :

// VERTEX SHADER
var code:String = "";
code += "m44 op, va0, vc0\n"; // Perform a 4x4 matrix operation on each vertex
code += "mov v0, va1\n";

That’s it ! Now, on every frame, we will create a new Matrix3D, append a translation to it on the x axis between -0.5 and 0.5, upload it to the GPU, then execute the program that will perform an m44 operation on each vertex to reflect the translation we made.

Go on, compile, you should see your triangle moving from left to right.

Back to perspective

Now we know how to use a matrix to transform the final rendering of our triangle. Understanding the math behind perspective is great, but you don’t want to do it every time, do you ? Fortunately, Adobe provides a downloadable class, PerspectiveMatrix3D.

This class will let you create a Matrix3D with some intelligible parameters to render perspective.

Now, you can either continue to update your HelloTriangle class, or take the same package as above and look for the “AddingPerspective” class.

The AddingPerspective class actually draws a square so that the effect of perspective can be noticed more easily. You know how to draw a triangle, and drawing a square is just drawing two triangles. You can have a look in the sources, but we will come back to quads (squares) in the next article, which will deal with indexes. Either way, the following example can be achieved using a triangle or a quad, it doesn’t matter.

The PerspectiveMatrix3D class

Among many things, the PerspectiveMatrix3D class allows you to define the matrix parameters to render perspective, using 4 values :

  1. The FoV, or Field of View. The FoV, in radians, represents how wide your field of view is. We will set it to 45°.
  2. The aspect ratio is the ratio of your backbuffer. We will set it to (width / height).
  3. The zNear is the minimum z coordinate that your eye can see. We will set it to 0.1. A word on that later.
  4. The zFar is the maximum z coordinate that your eye can see. We will set it to 1000.

Go to the render method and instantiate a new PerspectiveMatrix3D object, then apply the previous parameters to it.

var projection:PerspectiveMatrix3D = new PerspectiveMatrix3D();
projection.perspectiveFieldOfViewLH(45*Math.PI/180, 4/3, 0.1, 1000);

About the zNear

You may wonder why we don’t render the z from 0, and start at 0.1 instead. Well, remember that the perspective divide was :

xP = K1 * x / z
yP = K2 * y / z

As we are dividing by z, and because dividing by zero is impossible (I hope you know that), we can’t have the zNear parameter equal to 0 : the equation couldn’t be computed for objects with a z coordinate of 0.

This is actually kind of a problem, since our triangle’s vertices’ z coordinates are set to 0. Hold on, don’t go changing the VertexBuffer : we learned how to move an object, right ? We can simply append a translation on the z axis to push our object a little forward.

What we need to do now is :

  1. Create the PerspectiveMatrix3D as above
  2. Do some rotation on the m matrix so we can actually notice the effect of perspective
  3. Translate our vertices a little bit forward so that they are beyond the zNear value
  4. Multiply the first matrix with the PerspectiveMatrix3D to add perspective to the final render.

What I get is this :

var projection:PerspectiveMatrix3D = new PerspectiveMatrix3D();
projection.perspectiveFieldOfViewLH(45*Math.PI/180, 4/3, 0.1, 1000);

m = new Matrix3D();
m.appendRotation(getTimer()/30, Vector3D.Y_AXIS);
m.appendRotation(getTimer()/10, Vector3D.X_AXIS);
m.appendTranslation(0, 0, 2);
m.append(projection);

Compile, and here it is : a rotating triangle with some sense of perspective ! If you took my class, you should see a rotating square instead of a triangle.

Practice !

As always, a little practice on your own is the best way to learn so here is what you can try

  1. Set the R, G and B values to [0-255] instead of [0-1], upload a [255,255,255,255] vector as a fragment constant, then divide your color values before moving them to the Output Color. You may use the div AGAL opcode (see the sketch below).
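If you get stuck, here is one possible fragment shader for that exercise (a sketch, assuming the [255,255,255,255] vector was uploaded to fc0) :

code = ""+
 "div ft0, v0, fc0 \n"+ // scale the interpolated color from [0-255] down to [0-1]
 "mov oc, ft0";         // then output it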

This article was less code-heavy, and I think I will keep it that way for now, for 2 reasons :

  1. The less I write, the more you code
  2. It was actually way too long to write the Hello Triangle article while describing every single line of code.

Anyway, I will always be giving the classes I use as examples, and those classes will be documented. If you think I should go back to something more verbose, just tell me, feedback is always appreciated.

As always, if you have any questions, feel free to ask !

Stage3D / AGAL from scratch. Part III – Hello Triangle

Here we go

So, you’ve read Part I and Part II, and you want things to get dirty. Alright.

You can start by downloading the source code : HelloTriangle Sources

In this exercise, we will draw our first triangle using only Stage3D. There will be a small part of AGAL, which is actually the minimum required to display anything on screen, but don’t worry, it’s easier to understand than it seems.

In the last articles, I talked a lot about vertices. A vertex is a point in space that will be used, along with 2 other vertices, by our GPU to render our triangle.

A VertexBuffer is a simple list of Numbers, so you have to define a structure that will be the same for each vertex. Every vertex in the same buffer needs to follow the same structure. The interesting thing is that you can pass whatever you want in your VertexBuffer, and you get to decide what the data is used for.

There are, however, at least 2 common structures used for VertexBuffers, but I will describe the first one, which is used in this example (baby steps !).

As you can see, our VertexBuffer will be structured as a simple list of Numbers, where the first three numbers are our vertex coordinates, and the three others our vertex color. Every vertex we may add to the buffer needs to follow the same syntax.

Time for some code

Now if you didn’t already, download the source code, and open the HelloTriangle class in your favorite editor. Then come back to me.

Class constructor is quite simple to understand, so we move on to the __init function.

// wait for Stage3D to provide us a Context3D
stage.stage3Ds[0].addEventListener(Event.CONTEXT3D_CREATE, __onCreate);
stage.stage3Ds[0].requestContext3D();

When working with Stage3D, you are actually not working at all with the Stage3D class. Instead, you are working with a Context3D.

First thing you have to do is to target one of your Stage3Ds (stage.stage3Ds[0]) and request a context from it. You will be able to access the context when receiving the proper event.

At this step, Flash will try to reach your graphic card. If it can, you will get a hardware-accelerated context, driven either by DirectX on the Windows platform, or OpenGL on the Linux and MacOS platforms.

If Flash can’t reach your graphic card for any reason, you will still get a Context3D, but one actually driven by the new CPU rasterizer called SwiftShader.

Once we get a context, we store it in a class member, and then configure the back buffer :

private function __onCreate(event:Event):void {
	// // // CREATE CONTEXT // //
	context = stage.stage3Ds[0].context3D;

	// By enabling the error reporting, you can get some valuable information about errors in your shaders
	// But it also dramatically slows down your program.
	// context.enableErrorChecking = true;

	// Configure the back buffer, in width and height. You can also specify the antialiasing.
	// The backbuffer is the memory space where your final image is rendered.
	context.configureBackBuffer(W, H, 4, true);
}

The back buffer is a specific memory space where all your rendering is copied. You will then be able to display on screen what the back buffer contains.

Setting up buffers and program

Then we create the buffers and the program. Don’t worry too much about the AGAL part right now, I’ll explain it later.

private function __createBuffers():void {
	// // // CREATE BUFFERS // //
	vertexBuffer = context.createVertexBuffer(3, 6);
	indexBuffer = context.createIndexBuffer(3);
}

Creating a buffer is rather simple. The only things you need to know are :

  1. How many vertices it will have : 3
  2. How much information each vertex has : x,y,z,r,g,b = 6

The index buffer needs to know how many indexes it will hold. Here, 3.

The program part is a bit complicated right now, because we need to learn a bit more to understand the AGAL code. Still, we can have a look at it.

private function __createAndCompileProgram() : void {
	// // // CREATE SHADER PROGRAM // //
	// When you call the createProgram method you are actually allocating some V-Ram space
	// for your shader program.
	program = context.createProgram();
 
	// Create an AGALMiniAssembler.
	// The MiniAssembler is an Adobe tool that uses a simple
	// Assembly-like language to write and compile your shader into bytecode
	var assembler:AGALMiniAssembler = new AGALMiniAssembler();
 
	// VERTEX SHADER
	var code:String = "";
	code += "mov op, va0\n"; // Move the Vertex Attribute 0 (va0), which is our Vertex Coordinate, to the Output Point
	code += "mov v0, va1\n"; // Move the Vertex Attribute 1 (va1), which is our Vertex Color, to the variable register v0
	// Variable registers are memory spaces shared between your Vertex Shader and your Fragment Shader

	// Compile our AGAL Code into ByteCode using the MiniAssembler
	vertexShader = assembler.assemble(Context3DProgramType.VERTEX, code);

	// FRAGMENT SHADER
	code = "mov oc, v0\n"; // Move the Variable register 0 (v0), where we copied our Vertex Color, to the output color
 
	// Compile our AGAL Code into Bytecode using the MiniAssembler
	fragmentShader = assembler.assemble(Context3DProgramType.FRAGMENT, code);
}

Creating a program is simple. Then we have the simplest AGAL code we can write. Just skip it for now. Then we use the AGALMiniAssembler tool from Adobe to compile our AGAL program into bytecode at runtime, which gives us two shaders : one VertexShader, and one FragmentShader.

Uploading Data to the graphic card

Now we need to upload the data and the program to the graphic card.

private function __uploadBuffers():void {
	var vertexData:Vector.<Number> = Vector.<Number>([
	-0.3, -0.3, 0, 1, 0, 0, 	// - 1st vertex x,y,z,r,g,b
	0, 0.3, 0, 0, 1, 0, 		// - 2nd vertex x,y,z,r,g,b
	0.3, -0.3, 0, 0, 0, 1		// - 3rd vertex x,y,z,r,g,b
	]);

	vertexBuffer.uploadFromVector(vertexData, 0, 3);
	indexBuffer.uploadFromVector(Vector.<uint>([0, 1, 2]), 0, 3);
}

To upload data to a buffer, the simplest way is to upload a Vector. As you can see, I am creating a Vector of 3 * 6 entries, which corresponds to the data of three vertices. The coordinates go from -1 to 1 for x and y, and we don’t bother with the z part for now. The color components, R, G and B, go from 0 to 1. So the first line matches the bottom left vertex, which will be red, the second line matches the top vertex and will be green, and so on.

As for the index buffer, we simply tell our graphic card that the triangle is drawn in the order 0, 1, 2, which are the indexes of the vertices in the vertex buffer.

Now time to upload the program

private function __uploadProgram():void {
	// UPLOAD TO GPU PROGRAM
	program.upload(vertexShader, fragmentShader); // Upload the combined program to the video Ram
}

Uploading the program is rather simple, so now we may move to the AGAL stuff.

Using the VertexBuffer inside the Shaders

When we created the VertexBuffer, we told the graphic card that each vertex would have a length of 6 components. This is valuable information for the graphic card, because the VertexShader, when it runs, will be executed for each vertex in your buffer.

The GPU will then “cut” the buffer 6 numbers at a time, because we told it that that was the amount of data we needed. Each vertex needs to be copied to fast-access registers before being used by the program. Think of a register as a very small piece of RAM that is extremely fast, and very close to the GPU.

The last step is to tell the GPU how to split each chunk of data, and where to copy it.

For the code part :

private function __splitAndMakeChunkOfDataAvailableToProgram():void {
	// So here, basically, you're telling your GPU that for each vertex (a vertex being x,y,z,r,g,b)
	// you will copy into register "0", from the buffer "vertexBuffer", starting from position "0", the next FLOAT_3 numbers
	context.setVertexBufferAt(0, vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // register "0" now contains x,y,z

	// Here, you will copy into register "1", from "vertexBuffer", starting from index "3", the next FLOAT_3 numbers
	context.setVertexBufferAt(1, vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // register 1 now contains r,g,b
}

Let’s go AGAL

We just instructed the GPU on how to use the VertexBuffer. As a reminder :

  1. Take the first 6 numbers
  2. From 0 to 3, copy the data into register 0
  3. From 3 to 6, copy the data into register 1
  4. Register 0 now contains some coordinates, register 1 now contains some color.

The AGAL code is separated into two shaders : the Vertex Shader, and the Fragment Shader. The vertex attribute registers can only be accessed from the Vertex Shader, but we will learn a trick to make the color available to the Fragment Shader.

Before going any further, let’s have a look at the AGAL syntax :

opcode destination, source1[, source2][, options]

for instance :

mov op, va0

means :

op = va0

Another example :

mul vt0, va0, vc0

means :

vt0 = va0 * vc0

The names of the “variables” are actually names of registers. They are rather simple to remember, but right now, let’s just learn the ones we are using :

  • va0 : Vertex Attribute 0. This is the register where we copy the coordinate of each Vertex (see above). There are 8 registers, from va0 to va7.
  • op : Output Point. This is a special register where the final coordinate must be moved to.
  • v0 : Variable Register 0. The Variable registers are shared between the Vertex Shader and the Fragment Shader. Use them to pass some data to the Fragment Shader. There are 8 Variable Register from v0 to v7.
  • oc : Output Color. This is a special register where the final color must be moved to.

Let’s have a look at the first sample of AGAL code we have, which is the Vertex Shader (lines 159 and 160 of the example class) :

mov op, va0
mov v0, va1

The first line moves the va0 register, where we copied the vertex coordinates, to the output point without any modification. Then, we need to pass some information to the Fragment Shader using a Variable Register. In our case, we only need to pass the color information, which is stored in the va1 register (see the previous chapter for buffer splitting into registers).

Simple, isn’t it ? Let’s have a look at the Fragment Shader.

mov oc, v0

Really simple : move the color we just copied into the Variable Register to the output color, without any modification. That’s all !

Once the program is compiled using the AGALMiniAssembler and uploaded to the GPU, we can make it the GPU's active program :

private function __setActiveProgram():void {
	// Set our program as the current active one
	context.setProgram(program);
}

Head up to the rendering part.

Rendering our triangle on screen

The rendering part is quite simple in our example since we don’t have any interaction, camera, animation or whatever. The render method is called on each frame.

private function render(event:Event):void {
	context.clear(1, 1, 1, 1); // Clear the backbuffer by filling it with the given color
 
	context.drawTriangles(indexBuffer); // Draw the triangle according to the indexBuffer instructions into the backbuffer
	context.present(); // render the backbuffer on screen.
}

On each frame, we start by clearing the scene and filling it with plain white. Then, we draw the triangle according to the indexBuffer instructions. The drawTriangles method actually draws into the back buffer. To render the back buffer content on screen, we simply call the present() method.

Compile, and relax.

Maybe some of you have noticed that this example is simpler than the others you may have read before : no Matrix3D, no m44 opcode. This is intended.

You really need to understand how everything works together before going any further. Once you get it, you will find what follows so much easier.

Practice !

You should try the following to help you understand how everything works together, as a practice :

  1. Render the triangle on a black background
  2. Get rid of the z coordinate since we are not using it
  3. Without modifying the vertexData vector or the AGAL code, use the coordinates as colors and the colors as coordinates. You should get something like this :

If you need any help with either the tutorial or the exercises, feel free to leave a comment on the blog, or drop me a message on my twitter.

Again, as it is the first time I am writing a blog, I’d really like some feedback. Any criticism, or encouragement, is welcome.

Hope you liked it !

Stage3D / AGAL from scratch. Part II – Anatomy of a Stage3D Program

Anatomy of a Stage3D program

Before we start coding our first triangle, let’s have a quick overview of the structure of a Stage3D program. This is also true for a WebGL program, or anything that deals with low-level 3D programming.

When working with a low-level 3D API, you have to understand a bit about how the hardware works. As a reminder, V-Ram is your graphic card’s memory, and a buffer is some allocated space in your V-Ram. To render some triangles, your GPU needs at least some vertices (a vertex being a point in space) and a program that will tell it how to render those points on screen.

One thing we didn’t talk about yet is the indexes. Indexes are instructions on the order in which your vertices should be drawn to make triangles. You can think of indexes as the numbers in those “connect the dots” games we had when we were kids. Great ! We can add one term to our vocabulary.

Indexes : A list of integers that defines in which order the graphic card should render the vertices to make triangles.
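For instance, a square made of 4 vertices can be described as two triangles using 6 indexes (a sketch; we will do exactly this in a later article) :

// vertices 0, 1, 2 and 3 are the four corners of the square
indexBuffer.uploadFromVector(Vector.<uint>([
	0, 1, 2, // first triangle
	0, 2, 3  // second triangle
]), 0, 6);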

Don’t worry too much about indexes now, we will come back to them later. Now, I heard you like flowcharts.

A flowchart of the basic Stage3D program structure

You can identify 3 major phases in this flowchart :

  1. The Allocation Phase : create your buffers in your V-Ram, one for each type of data your GPU will use (vertex, texture, program)
  2. The Upload Phase : send your data, texture, or program into your buffers so your GPU can access them.
  3. The Declaration Phase : tell your GPU which buffers to use as vertex data, texture and program so it can run.

The first two phases, allocation and upload, are the essence of Stage3D’s power. It takes a lot of time to allocate some space and to upload your data, but once your mesh is written in your video RAM, there is no need for the CPU anymore, and that is the reason why your GPU can render so fast, leaving your CPU free for any other process, such as physics calculations.

All three phases can be done once, and don’t need to be executed again until the data or the program changes. We will call this the setup.

The setup can be done in any order : you can start with the program, then your vertices, then your indexes, but you can do the opposite too. Of course, you need to follow the order within each column : you need to create a buffer to be able to upload some data into it ! But at the end of the setup, when you call the drawTriangles() method, all that matters is that everything is in place and ready to run, no matter how you did it.
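As a compact sketch, the whole setup could look like this (the AGAL strings and vertex data are omitted; names like vertexData, indexData, vertexShaderBytes and fragmentShaderBytes are assumptions, not from the article) :

// ALLOCATION : reserve some V-Ram space
var vertexBuffer:VertexBuffer3D = context.createVertexBuffer(3, 6);
var indexBuffer:IndexBuffer3D = context.createIndexBuffer(3);
var program:Program3D = context.createProgram();

// UPLOAD : send the data into the buffers
vertexBuffer.uploadFromVector(vertexData, 0, 3);
indexBuffer.uploadFromVector(indexData, 0, 3);
program.upload(vertexShaderBytes, fragmentShaderBytes);

// DECLARATION : tell the GPU what to use
context.setVertexBufferAt(0, vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3);
context.setProgram(program);

// then, on every frame :
context.clear(1, 1, 1, 1);
context.drawTriangles(indexBuffer);
context.present();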

Now that you know how any low-level 3D program is structured, you’ve had a glimpse of the methods Stage3D brought to Flash : V-Ram allocation, data upload to buffers, etc.

I’d love to write as soon as possible the first sample of code to put in practice all I have written so far but I don’t really have the time to do it right now. I promise in less in a week you will get your first triangle using Stage3D and only Stage3D.