How can I help you?

Hi there,

I don't have a lot of time right now: summer, scuba diving certifications, a little freelancing on the side, and of course, well, work…

So I'd like to know what you would be interested in reading in the next article.

I'd like to go deeper into fragment shaders and learn how to get bump mapping working, for instance. I could port a few examples to WebGL, or write a few articles about how to use Minko to write your own shaders more easily than with AGAL…

Feel free to give me hints in the comments!

And to all of you, I wish you a nice vacation!

Stage3D / AGAL from scratch. Part VIII – Texture, please!

Hi folks,

As I said earlier, I have received some requests on twitter for an article about textures. So here we go.

In this tutorial, I’m going to take the previous article example and modify it so we can see a textured cube instead of a pink one.

By the way, I realized that the previous classes I gave you were incomplete, with some missing imports. Nothing huge actually, just a re-scale function that is part of our framework here at Marcel. Long story short: sorry about that. I cleaned those classes up, so you should be able to compile right away. And remember, if you have any trouble with any of my examples, just send me a tweet or something.

OK, let's go.

Texture coordinates

In the very first rounds of tutorials, we drew triangles with different colors "attached" to each vertex. That was how we understood that every piece of data passed from the vertex shader to the fragment shader is interpolated.

Using a texture is relatively simple. As with everything we want to use in our shaders, we will have to declare it, then upload it to the V-RAM, then allocate it.

The fragment shader, or pixel shader, is run for every pixel that is being drawn, and its purpose is to compute that pixel's color. In the previous example, the base color of every pixel was simply stored in a varying register; all we had to do was, eventually, apply some light to it, then copy the color to the output color register (oc).

Well, the only difference here is that, for every pixel, the fragment shader will not receive its color directly, but will have to sample the texture to get it.

The question is: how can our shader know where, on the texture, it is supposed to "look" to get the pixel color? Well, this is what texture coordinates do.

Texture coordinates are very simple to understand. We call them U and V, and they are the exact same thing as X and Y in ActionScript, except that they are not absolute values but relative values between 0 and 1. This way, you can change your texture size (from a 256*256 to a 512*512, for example) and your texture coordinates will remain the same.

Pretty simple, huh?

So, for a square, you basically have to use the coordinates above. For a triangle pointing upward, you might set your first coordinate to (0.5, 0), so that the first point starts at the center of the top edge.
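To make this concrete, here is a sketch of what the interleaved vertex data of a textured square could look like (the variable names and buffer layout are illustrative, not taken from the tutorial sources):

```actionscript
// x, y, z followed by u, v for each of the 4 corners of a square.
// (0, 0) is the top-left of the texture, (1, 1) the bottom-right.
var vertices:Vector.<Number> = Vector.<Number>([
//   x,  y, z,   u, v
    -1,  1, 0,   0, 0,  // top left
     1,  1, 0,   1, 0,  // top right
     1, -1, 0,   1, 1,  // bottom right
    -1, -1, 0,   0, 1   // bottom left
]);
vertexBuffer = context.createVertexBuffer(4, 5); // 4 vertices, 5 Numbers each
vertexBuffer.uploadFromVector(vertices, 0, 4);
```

The UVs travel with the positions in the same buffer; only the offset passed to setVertexBufferAt changes.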

UVs (from now on, I will call texture coordinates UVs) can be really more complex, for instance if you are trying to map a skin and hair texture onto a face mesh, but you are not supposed to write them by hand: they will be exported by your 3D software, such as 3ds Max.

It's interesting to understand them, though, because you can do a lot with UVs. With UVs you can take just a portion of your texture; there is no need to use it all. Ever heard of sprite sheets? For those of you who have already tried Starling and were amazed by the very fast rendering of thousands of animations, this is how an animated "MovieClip" is done on the GPU: upload a single large texture containing every frame of your animation, draw it on a square, set the 4 UVs as constants, and change them on every frame! This way, on each frame the GPU will pick a different portion of the image to sample!

An example of a sprite sheet. Yep, old games were made that way too!
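As a sketch of the idea (the 4×4 layout, frame index and constant register are assumptions, not Starling's actual code), picking one frame out of a sprite sheet could look like this:

```actionscript
// A 4x4 sprite sheet: each frame covers 1/4 of the texture in U and in V.
var cols:int = 4, rows:int = 4;
var frame:int = 5; // current frame of the animated "MovieClip"
var u0:Number = (frame % cols) / cols;    // left edge of the frame
var v0:Number = int(frame / cols) / rows; // top edge of the frame
// Upload the frame offset and scale as a constant, refreshed every frame:
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0,
    Vector.<Number>([u0, v0, 1 / cols, 1 / rows]));
// In the shader, the quad's UVs (0..1) are scaled and offset into that frame.
```

Updating a constant is cheap, which is why this trick is so much faster than re-uploading textures.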

Texture Mipmapping

As I am having a hard time explaining what mipmapping is, let me quote Wikipedia:

In 3D computer graphics texture filtering, mipmaps (also MIP maps) are pre-calculated, optimized collections of images that accompany a main texture, intended to increase rendering speed and reduce aliasing artifacts.

As we saw, for every pixel being drawn, the shader has to sample the texture to get the pixel color at the corresponding UVs. The first problem is that, with a large texture, this takes some time to process. The second, and more important, issue is that at a large distance, when your object becomes very small, you might have issues with texture sampling, resulting in moiré patterns.

The solution is to upload a series of bitmaps, each one half the size of the previous one. The GPU will then automatically choose which bitmaps to sample from according to the distance. This is a good optimization technique, since each sampling takes less time, but it also takes more V-RAM to store the extra bitmaps. Mipmapping used to come with another artifact in old video games, where the "layers" of the texture wouldn't blend with each other, as you can see here on the left wall.

The good news is that some texture filtering is now applied, so you won't get that ugly rendering. Actually, in the following demo, you will have an option to turn mipmapping on and off, and also an option to add a random color to each mipmap level so you can see the different textures being used.

Let’s start coding

First of all, you can grab the source code. In this archive, you will find the cleaned-up previous example, updated to use a texture. To upload a texture, you need a BitmapData instance whose width and height are powers of two (256, 512, 1024…). A texture doesn't need to be square; it can be a rectangle, but remember that your U and V values are always clamped to [0, 1], even on a rectangle-shaped texture.

I'm not going to explain how to get a BitmapData; either load it, or embed it as I did.

 [Embed(source="texture.png")] // embed metadata; adjust the source path to your own asset
 private var textClass:Class;
 private var text:Bitmap = new textClass();

The tutorial class takes advantage of my "still-in-progress" Geometry class, as the previous tutorial did. A few things have been improved; you may have a look if you want. Texture declaration, upload and allocation are all done in the __createAndUploadTexture() method.

 /**
  * Create and upload the texture
  */
 private function __createAndUploadTexture():void {
     if (texture) texture.dispose(); // drop the previous texture when the demo options change
     texture = context.createTexture(1024, 1024, Context3DTextureFormat.BGRA, false);
     var bmd:BitmapData = text.bitmapData;
     var s:int = bmd.width;
     var miplevel:int = 0;
     while (s > 0) {
         texture.uploadFromBitmapData(getResizedBitmapData(bmd, s, s, true, (miplevel != 0 && _mipmapColor.selected) ? Math.random()*0xFFFFFF : 0), miplevel);
         miplevel++;   // miplevel going up by one...
         s = s / 2;    // ... and size going down, divided by two.
     }
     context.setTextureAt(0, texture);
 }

The code is pretty self-explanatory. First, ask the context to create a texture; it's the same as asking it to create a buffer. The upload part is a bit more complicated, though.

Unlike a vertexBuffer, you can upload "several textures" to a single texture object, each one corresponding to a smaller version of the image, for mipmapping purposes. Every time you upload a smaller texture, the "mipmap level" goes up by one, the default being zero.

For instance, if you don't want to use mipmapping, you can simply upload a single bitmap at level 0:

texture.uploadFromBitmapData(bitmapData512, 0);

But if you want to use mipmapping, you will end up using something like :

texture.uploadFromBitmapData(bitmapData512, 0);
texture.uploadFromBitmapData(bitmapData256, 1);
texture.uploadFromBitmapData(bitmapData128, 2);
texture.uploadFromBitmapData(bitmapData64, 3);
// and so on, down to a 1x1 bitmapdata...
texture.uploadFromBitmapData(bitmapData1, 9);

This is pretty much what my loop does, but instead of creating every texture size myself, I just use ActionScript to generate them for me.

The rest of the method is allocation, very much like setVertexBufferAt or setProgram. When you allocate a texture for the program, you will be able to use it in AGAL as "fsx", so in this case fs0. Pretty much the same thing as fragment constants, vertex attributes and so on…

fs : Fragment Sampler.


The AGAL code given in the class is almost the same as the previous article's AGAL code, so I'm only going to highlight the differences. As always, if you are having trouble with my explanations, feel free to contact me.

Vertex Shader :

code += "mov v0, va1 \n";   // Interpolate the UVs (va1) into varying register v0
code += "mov v1, va2 \n";   // Interpolate the normal (va2) into varying register v1

In the vertex shader, be sure to pass the UVs to the fragment shader. All the rest stays the same.

Fragment Shader :

In the class, the code changes according to the mipmap combo box, so I will flatten the code here:

"tex ft0, v0, fs0 <2d, linear, nomip> \n" // NO mipmap


"tex ft0, v0, fs0 <2d, linear, miplinear> \n" // WITH mipmap

As you can see, the tex opcode takes the following arguments:

tex destination, coordinates, sampler <options>

Sampler options can be found very easily on Google; I will explain them in another article a little later.

As you can see, this is really simple. Once sampled, the pixel's diffuse color is stored in the temporary register ft0. Now you can use it as your base color instead of the constant that was used previously (fc4). If you took your last sources instead of these ones, make sure you also change this line:

"mul ft2, ft0, ft1 \n"+ // multiply fragment color (ft0) by light amount (ft1).

Otherwise you will still be using the plain color uploaded as a constant (replace fc4 with ft0).

Compile and run, and here you go!

[Flash demo]

To notice the mipmap effect, zoom out a lot and rotate the cube a little. You can also check "show mipmap colors" so each mipmap level gets a random color.

I hope this answers the questions you had about textures. A little more will come about sampler options and cube textures. As always, feedback is appreciated, and feel free to contact me if you have any trouble!

See you !

Stage3D / AGAL from scratch. Part VII – Let There Be Light

Let There Be Light

So, we have displayed a lot of cubes and triangles together, and we have also created a few controls to play around with the 3D scene. But the very essence of 3D content is light.

For this tutorial, I've included a bunch of classes I am currently working on, not as a 3D engine, but as a small toolbox for future experiments. You will find these currently "W.I.P." classes in the given source code:

* Geometry : stores a geometry and its indexes. Has shortcuts to declare square faces instead of triangles. Can create and upload the buffers for you. Later, it will be able to generate the face normals (more about that later).

* Cube : a simple class extending Geometry, creating a cube made of 24 points so that the future face normals act as intended.

* ColorMaterial : right now a simple RGB value object, but it should contain the fragment shader soon.

* ArcBallCamera : not something new, but completely revamped. It now really moves on a circle and uses the pointAt method to target the origin. This new approach makes the drag controller a little smarter (dragging to the bottom only makes the object rotate as if the screen were the X axis).

When I announced this tutorial a few weeks ago, I compared face normals (actual normals in 3D parlance) to normalized vertices.

You will find in the Geometry class a method called "computeNormals" that will give you the first ones, the ones we want. This method is still in progress: right now, the normal can be the opposite of the wanted one if the face is drawn counter-clockwise.

I will explain in another article how you can generate basic normals for your models, but keep in mind that this data should ideally be produced by your 3D designer, because it can "smooth" edges on low-poly meshes.

So anyway, by calling the computeNormals method, we get a small vector perpendicular to each face (each triangle).
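The idea behind computeNormals is a cross product of two edges of each triangle; here is a minimal sketch of that idea (the actual Geometry class may differ, the function name is illustrative):

```actionscript
import flash.geom.Vector3D;

// Normal of the triangle (a, b, c): cross product of the edges ab and ac.
// The winding order (clockwise or not) decides which way the normal points,
// which is exactly the "opposite normal" issue mentioned above.
function faceNormal(a:Vector3D, b:Vector3D, c:Vector3D):Vector3D {
    var ab:Vector3D = b.subtract(a);
    var ac:Vector3D = c.subtract(a);
    var n:Vector3D = ab.crossProduct(ac);
    n.normalize(); // unit length, ready for lighting
    return n;
}
```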

The Lambertian Factor

The first light we will compute is what we call the diffuse light. The amount of light diffused by a surface depends on the angle between that surface and the light. This is called the Lambertian factor, or Lambertian reflectance. Quoting Wikipedia, "The reflection is calculated by taking the dot product of the surface's normal vector, and a normalized light-direction vector, pointing from the surface to the light source."

The dot product is an operation we can do very simply in AGAL using the opcode dp3, which stands for Dot Product 3, 3 being the number of components (here x, y and z).

Just a word about the dot product. The dot product, or scalar product, takes two vectors and returns a single number. The only thing you need to remember is this:

  • If the two vectors point in the "same" direction, the dot product is a positive number.
  • If the vectors are perpendicular to each other, the dot product is equal to zero.
  • If the vectors are facing each other, the dot product is a negative number.

Because the dot product also depends on the lengths of the vectors, we will mostly use it with normalized vectors, giving a result between -1 and 1, which is very handy, especially in light computation.
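You can check those three cases directly in ActionScript with flash.geom.Vector3D, which has a dotProduct method:

```actionscript
import flash.geom.Vector3D;

var right:Vector3D = new Vector3D(1, 0, 0);
var up:Vector3D    = new Vector3D(0, 1, 0);
var left:Vector3D  = new Vector3D(-1, 0, 0);

trace(right.dotProduct(right)); // 1  : same direction
trace(right.dotProduct(up));    // 0  : perpendicular
trace(right.dotProduct(left));  // -1 : facing each other
```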

Alright, let's now code this. First of all, download the following source code.

The LetThereBeLight class is rather simple. On context creation, I simply get an ArcBallCamera instance, the projection matrices, a model matrix (that will be moved into the Geometry class later), and a Cube. The Cube instance receives a ColorMaterial (not really relevant right now) and creates the buffers for me:

geometry = new Cube(50);
 geometry.setMaterial(new ColorMaterial(0xFF00FF));

This is simple stuff for you now, so let’s move on to the actual shader.

The Shader Constants

As we saw, the Lambertian factor requires the light direction and the surface normal in order to be calculated. The surface normals are already stored in the vertex buffer, so we still need the light direction. But we also need a bunch of other values:

  • The Light Color. Here I chose plain white.
  • The Ambient Light. The ambient light is the minimum amount of light a surface can receive. It's a simple technique to simulate the fact that, in the real world, light is reflected so many times that even when an object's side is not under the light, it's still visible and doesn't turn completely black.
  • The Light Direction. In this example, the light will always come from the camera, meaning that it will feel more like moving the cube under the light than moving around it, but feel free to try other values.

All those data will be stored in shader constants, so here we go :

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0, Vector.<Number>([0,0,0,0]));
// fc0, for clamping negative values to zero
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 1, Vector.<Number>([0.1,0.1,0.1,0]));
// fc1, ambient lighting (a tenth of full intensity)
var p:Vector3D = camera.position;
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 2, Vector.<Number>([p.x,p.y,p.z,1]));
// fc2, light direction
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 3, Vector.<Number>([1,1,1,1]));
// fc3, light color

You may have noticed that all those constants, even though they are mostly vectors, directions and positions, are FRAGMENT constants, since we have no use for them in the vertex shader. Looking at the source, you will see that the color of the cube (here a nice pinkish color) is uploaded as a constant too. We saw that already.

OK, so now everything is in place; we can have a look at the shader's AGAL code.


What we need to do, according to the Lambertian factor:

  1. Calculate the Lambertian factor using a dot product between the normal (v1) and the light direction (fc2).
  2. Negate the result: we do this because the Lambertian formula uses the light direction pointing from the surface to the light source. So you can either negate the light direction vector, or negate the dot product result.
  3. Clamp any result below 0: if the angle between the light and the surface normal is greater than 90°, the dot product will be negative. This could cause unexpected results when computing the output color, so we just set it to 0 (no light).
  4. Multiply the fragment color by the light amount. For a light amount equal to 0, the surface will be black; for a light amount equal to 1, the surface will have its regular color.
  5. Multiply the resulting color by the light color. Your red cube might look a little more purple if your light is blue.
  6. Add the ambient light. This way, every black surface becomes a little brighter.

Here is the corresponding AGAL code :

code = ""+
 "dp3 ft1, fc2, v1 \n"+  // dot the interpolated normal (v1) with the light direction (fc2) -> this is the Lambertian factor
 "neg ft1, ft1 \n"+      // get the "opposite" value. We could also have uploaded the opposite of the light direction to avoid this step
 "max ft1, ft1, fc0 \n"+ // clamp any negative value to 0 // ft1 = Lambertian factor
 "mul ft2, fc4, ft1 \n"+ // multiply fragment color (fc4) by light amount (ft1).
 "mul ft2, ft2, fc3 \n"+ // multiply fragment color (ft2) by light color (fc3).
 "add oc, ft2, fc1";     // add ambient light and output the color

UPDATE: Thanks to the help of Jean Marc, I discovered the sat opcode, which one can use to clamp any value to the range [0, 1]. So I could just replace the "max" line with this one:

" sat ft1, ft1 \n"+

which allows me to save a constant, so I can also get rid of fc0.

Also, you now know that when values are copied to the varying registers (v0, v1), they are interpolated. That behavior was demonstrated by the color slowly fading between two points in the previous tutorials. Well, as Jean Marc stated, after being interpolated the normals may not be "normalized" anymore, so I should normalize my normals (duh!) in the fragment shader before using them. Thanks, Jean Marc!
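AGAL has an nrm opcode for exactly that; a sketch of the fix (register numbers follow this article's shader, the temporary ft3 is my own choice) would be to normalize v1 into a temporary before the dot product:

```actionscript
code = ""+
 "nrm ft3.xyz, v1 \n"+    // re-normalize the interpolated normal
 "dp3 ft1, fc2, ft3 \n"+  // Lambertian factor, now using the normalized normal
 // ... the rest of the fragment shader is unchanged, ft1 is used as before
```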

Compile and run: here it is, your first directional light!

For the posted demo, I added two options that are not in the sources: the first checkbox fixes the light at its current position so you can rotate the cube and see the effect of the ambient light, and the second one switches the normals to normalized vertices (see the first two schemes).

[Flash demo]

As always, have fun with the sources, and tell me what you think! If you need more explanations or anything, feel free to ask.

See you !

Stage3D / AGAL from scratch. Part VI – Organise your Matrices

Organise your Matrices

In previous articles we used some matrices to modify the rendering of a triangle: rotations, scales, translations. We also learned to use a projection matrix to render the depth effect into the clipspace projection. And we saw that we upload the matrix as a vertex constant and use it with the "m44" AGAL opcode.

Matrix operations aren't commutative, meaning that if you scale first, then rotate, it's not the same thing as rotating first, then scaling. So you will have to organize your matrices in a certain order to get things done smoothly and easily. Follow the guide.

From cameras to matrices

First of all, download the following code example. It's made of 3 classes:

  • The article's example bootstrap
  • A simple Cube class that just creates a colored cube's vertex and index buffers, and stores a matrix for its position.
  • An ArcBallCamera class that you can use and share in your experiments. Very useful as a quick way of "browsing" your scene around the origin point.

The Cube class

Just a quick word about the Cube class, since you should be able to write it by yourself now: it is not "clean" or "optimised" at all; I wrote it only to make the main code more readable.

The Cube class doesn't even have a "render" function. When you instantiate a Cube, it creates its vertexBuffer and indexBuffer and uploads the simplest data ever. This cube is made of 8 vertices, which is why the colors merge in the corners and you don't get a plain color per face. The Cube also creates and uploads the simple "3 lines" shader you need to render anything. That's it.

The ArcBallCamera class

The ArcBallCamera is a camera that rotates around the origin point. When I first tried to build it, I thought I had to look for geometry formulas about placing a point on a 3D sphere or something. Actually, it's a lot simpler.

Your matrices modify the “world”, not the camera

It sounds stupid to say it, but it is something you have to keep in mind. For instance, if you want your camera to slowly move away from your scene, you will have to increase the world's z position, because you are actually "pushing" the world away from your clipspace.

Keep that in mind, and remember that matrix operations are not commutative. To make your arcball camera, the operations are actually very simple: rotate the world, then push it away. That's it!

Both "methods" (placing the camera on a sphere, or transforming the world) would work, but the second one is really simpler to use, for the same result: rotate the "world", then "push" it away.

The rest of the class is pretty simple: on the EnterFrame event, the class applies some rotation, then some translation, to a Matrix3D, according to the mouse position and mouse-wheel actions.
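In pseudo-AS3, the heart of that EnterFrame handler boils down to something like this (property names like mouseXDelta and distance are illustrative, not the actual class members):

```actionscript
import flash.geom.Matrix3D;
import flash.geom.Vector3D;

// "Rotate the world, then push it away": rebuild the view matrix every frame.
matrix.identity();
matrix.appendRotation(mouseYDelta, Vector3D.X_AXIS); // drag up/down
matrix.appendRotation(mouseXDelta, Vector3D.Y_AXIS); // drag left/right
matrix.appendTranslation(0, 0, distance);            // mouse wheel "zoom"
```

Note the order: the rotations come first, the translation last, which is the whole trick of the arcball.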

The ModelViewProjection matrix

OK, so we have a matrix that is our camera, we have one for the projection, and we have one for the cube. Great, but now what?

The final matrix used for rendering is often named the modelViewProjection matrix, for a very simple reason: you have to append your matrices in the following order:

  1. The Model Matrix : your model being the mesh you are currently drawing.
  2. The View Matrix : the view being your "camera", somehow.
  3. The Projection Matrix : being the "lens" in some 3D engines; the projection always comes last, as far as I know.

Following this order will give you very intelligible results.

Head over to the OrganizeYourMatrices class. Notice that when the context is created, I instantiate a single cube, a camera, and the projection matrix we will use later. Go on to the render function.

Rendering several cubes with only one

To illustrate both that following the previous matrix order gives you the wanted result, and that you can draw the same vertexBuffer several times, I will keep my single cube and render four of them around the origin.

// render first cube
cube.moveTo(-1.1, -1.1, 0);
renderCube();
// render second cube
cube.moveTo(1.1, -1.1, 0);
renderCube();
// render third cube
cube.moveTo(-1.1, 1.1, 0);
renderCube();
// render fourth cube
cube.moveTo(1.1, 1.1, 0);
renderCube();

The following code isn't the cleanest I've ever written, but at least it is easy to understand. The only cube we have can be "moved" to 4 different positions and drawn onto the screen using the renderCube method. Go ahead, that is where the magic happens.

        /**
         * Render the cube according to its current parameters ( = modelMatrix)
         */
        private function renderCube():void {
            modelViewProjection = new Matrix3D();
            modelViewProjection.append(cube.modelMatrix);         // MODEL
            modelViewProjection.append(camera.matrix);            // VIEW...
            modelViewProjection.append(projectionMatrix);         // PROJECTION !
            // program
            context.setProgram(cube.program);
            // vertices
            context.setVertexBufferAt(0, cube.vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // x, y, z
            context.setVertexBufferAt(1, cube.vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // r, g, b
            context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, modelViewProjection, true);
            // render
            context.drawTriangles(cube.indexBuffer);
        }

Each time I want to draw the cube, I start by recreating a modelViewProjection matrix. I could have instantiated it somewhere else and only reset it using modelViewProjection.identity(); that would have been better, but anyway, it's the same.

First, append the cube's modelMatrix. This matrix contains the translation parameters we set using cube.moveTo(x, y, z). Then append the camera's matrix, and finish with the projection.

The rest of the renderCube method is just classic Stage3D stuff: declaring your current program and buffers, and drawing the triangles.

The reason you can call the drawTriangles function several times (in this case, 4) and still get the complete scene is that drawTriangles only renders your mesh into the back buffer. So the last thing you need to do in your rendering method is to present the back buffer on screen.
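In other words, the render handler ends with a single call once every cube has been drawn:

```actionscript
// After the last drawTriangles() call, show the finished back buffer on screen:
context.present();
```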

Now you should get something like this:

[Flash demo]

Append and Prepend

There are some cases where it is difficult to use this order because of implementation details. Fortunately, there is a way to add a transformation at the top of the operation stack: prepend.

Prepend comes in different flavors: prepend a matrix, prependTranslation, prependRotation, and so on.

To understand what prepend does, just look at the two following snippets: they both do the exact same thing.

modelViewProjection = new Matrix3D();
modelViewProjection.append(cube.modelMatrix);      // MODEL
modelViewProjection.append(camera.matrix);         // VIEW...
modelViewProjection.append(projectionMatrix);      // PROJECTION !

modelViewProjection = new Matrix3D();
modelViewProjection.append(camera.matrix);         // VIEW...
modelViewProjection.append(projectionMatrix);      // PROJECTION !
modelViewProjection.prepend(cube.modelMatrix);     // PREPEND MODEL

That's all for today. I hope you enjoyed this, as always, and that it will be useful to you. Don't hesitate to use, modify or share the ArcBallCamera class, since it's a very simple snippet of code.

As always, feedback is appreciated !

Stage3D / AGAL from scratch. Part V – Indexes and culling mode

What indexes are used for

In previous articles, I talked about indexes and index buffers. The comparison I made was that indexes are like the numbers in a "connect the dots" game.

To render something on screen, your graphics card needs to know how to draw your triangles. Imagine you have 8 vertices, corresponding to the 8 corners of a cube. Without any further instruction, your graphics card can't guess what you want to draw: maybe you want a closed box, but maybe you don't want any top face, or maybe you just want 2 planes crossing in the middle like an X.

Indexes are instructions about the order in which you want to go through the vertices. You may run over the same vertex many times to draw your triangles. As a matter of fact, your indexBuffer length should always be a multiple of 3, since you can only draw triangles, not polygons.

Just remember that indexes refer to the "index" of a vertex in the buffer, so your index values depend on how you defined your vertices. Here is a sample of how to draw 2 faces of a cube.

Cube Indexes Explanations

Let's call the vertex at index 0 "A", the vertex at index 1 "B", and so on. My vertexBuffer looks like this:
A, B, C, D, E, F, G, H.

In this case, my indexBuffer will contain these values:
0, 1, 2, 1, 2, 3, 0, 4, 5, 0, 5, 1
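In AS3, uploading those indexes takes three lines (the context and indexBuffer variable names are assumed):

```actionscript
var indexes:Vector.<uint> = Vector.<uint>([0,1,2, 1,2,3, 0,4,5, 0,5,1]);
indexBuffer = context.createIndexBuffer(indexes.length);  // 12 indexes = 4 triangles
indexBuffer.uploadFromVector(indexes, 0, indexes.length);
```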

Note that if you swap point C's and point D's indexes in the vertexBuffer:
A, B, D, C, E, F, G, H

I need to adapt my indexBuffer: I want to draw the same thing, but the vertices' indexes in the buffer have changed, so I would have:
0, 1, 3, 1, 3, 2…

Get it? Quite simple, actually…

Indexes and triangle culling

By default, the GPU renders every triangle, whether it is facing the "camera" (actually the clipspace) or not. An optimisation technique is to ask the GPU to render only the triangles facing the camera and to ignore the others. With this, you may get better performance when rotating around an opaque ball, for example.

This is called the triangle culling mode. For non-native speakers (like me…), to cull means to remove. Triangles that can be skipped, given their orientation, will be excluded from the rendering pipeline.

The culling mode can take 4 values, related to which face is culled:

  • Context3DTriangleFace.NONE : this is the default value; triangles are always rendered, no face is ignored.
  • Context3DTriangleFace.BACK : triangles are rendered only if they are "facing" the view, i.e. the triangle's back face is culled.
  • Context3DTriangleFace.FRONT : triangles are rendered only if they are not "facing" the view; the triangle's front face is culled.
  • Context3DTriangleFace.FRONT_AND_BACK : nothing is rendered, since both faces are excluded.

But which one is the front face ?

You may be wondering how one can even tell which triangle face is the front one. Well, it's actually rather simple: it depends on the order of your indexes. If the triangle is drawn clockwise, you are looking at its front face. If the triangle is drawn counter-clockwise, you are facing the back of your triangle.

The first triangle is drawn using indexes:  0, 1, 2
The second triangle is drawn using indexes: 0, 2, 1

To observe it yourself, you can grab the HelloTriangle source code and change the culling mode by calling:

context.setCulling(Context3DTriangleFace.BACK);

This will set the culling mode to the one used in a lot of 3D engines, such as Away3D. Now you can try to invert the indexes: the triangle should not be visible anymore.

Culling mode and double-sided polygons.

In most engines you will use, the culling mode is set to BACK, meaning you will only see the triangles' front faces. Instead of changing the culling mode, you will mostly see a "doubleSided" option, making a polygon viewable from both sides.

Drawing a double-sided triangle is relatively easy: you just need twice as many index instructions, some for the front face, some for the back face. Remember that you can run over the same vertex many times!

To make the previous triangle double-sided, you just have to concatenate both index buffers:
0, 1, 2, 0, 2, 1.
The simplest way is actually to concatenate the reverse of your first indexes (note the slice() call: reverse() works in place, so we reverse a copy):

var indexes:Array = [0, 1, 2];
    indexes = indexes.concat(indexes.slice().reverse());

You may grab the HelloMatrix source code, add a rotation to the triangle, and change the culling mode: you will now only see one side. Make your triangle double-sided, and voilà!

That's it for now. In the next article, we will build a simple arcball class to help you test your projects.

As always, feedback is appreciated :)

Stage3D / AGAL from scratch. Part IV – Adding some depth

Understanding perspective

I thought this was 3D, so why can’t I use the z coordinate ?

Hopefully, you have read my previous article and played with the example class, or with your own. You may have noticed that changing the z coordinate didn't change anything. Let me explain why.

Your 3D scene is rendered in 2D, in an area called the clipspace. The clipspace is basically your screen, and every point that is behind your screen needs to be projected onto the clipspace so it can be drawn.

I said earlier that the x and y coordinates go from -1 to 1. Well, that's not quite true: it's actually the clipspace coordinates that go from -1 to 1. Imagine if the clipspace had the same width and height as your screen, or your browser window: you would have to recompute the coordinates for every screen size, and on every resize! Having a normalized clipspace is what allows us to forget about screen sizes and resolutions and focus on our scene coordinates.

Now, by default and without any other instruction, your graphics card projects your vertices onto the clipspace without any sense of perspective. That is why you can't see a point whose coordinates are out of the clipspace, like x = 2.

Since we were only copying each vertex coordinate to the output point, here is the equation for any projected point:

// mov op, va1

xP = x
yP = y

In the following scheme, 3D object 1 and 3D object 2 have the same projected point, since the z coordinate isn't part of the equation.

The perspective divide

To be able to render 3D on a 2D plane (your screen), we need perspective. Perspective is what makes the borders of a road look like they converge when they are actually parallel lines. The principle is rather simple : the farther a point is, the closer to the middle it appears. Here is the equation :

xP = K1 * x / z
yP = K2 * y / z

With K1 and K2 being constants that depend on things such as the field of view, or the aspect ratio of your clipspace. This is the perspective divide.

You can notice that if you divide by z, then z can’t be equal to 0. We will talk about this later.

When using the perspective divide, here is the result of the projection of 3D object 1 and 3D object 2 from the previous diagram :
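You can check the effect with plain numbers. Here is the perspective divide in JavaScript (an illustration only — K1 and K2 are set to 1 for simplicity) :

```javascript
// Project a 3D point onto the clipspace plane using the perspective
// divide. k1 and k2 would normally depend on the field of view and
// the aspect ratio ; 1 is fine for a demo.
function project(x, y, z, k1, k2) {
  return { xP: k1 * x / z, yP: k2 * y / z };
}

// Two points with the same x and y, but different depths :
var near = project(1, 1, 2, 1, 1);  // xP = 0.5
var far  = project(1, 1, 10, 1, 1); // xP = 0.1
// The farther point ends up closer to the middle of the screen.
```

And you can see why z = 0 is forbidden : the division blows up, which is exactly the zNear issue discussed below.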

Using Matrices

When we coded our first vertex shader, we just copied each vertex coordinate into the Output Point. You now learned that computing the output point actually defines the position of the projection of a vertex into the clipspace.

To translate, rotate, or scale an object, we won’t modify all of its vertices. Why ?

  1. It would be really complex to compute the new position of every vertex when rotating by 45° on the Y axis, then scaling it up 2.37 times.
  2. We would need to upload the coordinates into the vertex buffer again, which would defeat the whole point of using Stage3D. Remember, if the graphic card can render triangles so fast, it’s because everything is ready in the video ram.

Instead of uploading new coordinates to the V-Ram, we will compute the output point using a Matrix. This Matrix will be uploaded as a constant on every frame. Constants are very fast to update in the V-Ram, unlike Vertex Buffers or Textures.

Updating the HelloTriangle example

Now, you can either open your HelloTriangle project, or download the following one. I recommend you take your last project if you already have it, since there are only a few lines to add, but if you prefer to take my sources, you should be looking for the HelloMatrix class.

First thing we need to do is create a Matrix3D and upload it as a Vertex Constant to the graphic card. Go to the render function, on line 194. The HelloTriangle class should already have a Matrix3D class member declared, called m. So just instantiate it, append a translation to it on x or y, and use the context.setProgramConstantsFromMatrix method to upload it to the GPU. Here is what I have :

// create a matrix3D, apply a translation on it, then set it as a vertex constant
m = new Matrix3D();
m.appendTranslation(Math.sin(getTimer()/500)*.5, 0, 0);
context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, m, true); // 0 means that we will retrieve the matrix under vc0

I chose to create a translation based on a timer so you can see the triangle moving on the x axis.

At this point, if you compile your class, you won’t see any change. This is because we need to instruct our GPU how to use the Matrix, and this will be done in the Vertex Shader.

Updating the Vertex Shader

Obviously, updating the Vertex Shader will happen in the AGAL code. Time for us to learn how to invoke constants.

  • vc : Vertex Constant. Called by their first register (ex : vc0). Be careful, matrix constants take 4 registers, so if you upload a Matrix to vc0, the next constant must be set at vc4.
  • fc : Fragment Constant. Same thing as above, but for fragment shaders.

Locate your Vertex Shader AGAL code, it should be around line 161. What we want to do is compute the position of each vertex using the Matrix3D we stored as a constant, instead of just copying the x and y coordinates to clipspace.

To perform a 4×4 Matrix operation on a Vertex, you need to use the opcode m44 using this syntax

m44 destination, vertex, matrix

Where destination is the output point, the vertex is stored in Vertex Attribute 0, and the Matrix in Vertex Constant 0. Got it ? Here is what you should get :

var code:String = "";
code += "m44 op, va0, vc0\n"; // Perform a 4x4 matrix operation on each vertex
code += "mov v0, va1\n"; // Copy the vertex color to the variable register v0

That’s it ! Now, on every frame, we create a new Matrix3D, append a translation to it on the x axis between -0.5 and 0.5, upload it to the GPU, then execute the program that performs an m44 operation on each vertex to reflect the translation we made.

Go on, compile, you should see your triangle moving from left to right.
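If you are curious about what m44 actually computes : it is four dot products, one per output component. Here is the same operation written out in JavaScript (an illustration of the math, not Stage3D code — I use a row-major 16-number array for the demo) :

```javascript
// Multiply a 4x4 matrix (row-major, 16 numbers) with a 4-component
// vertex [x, y, z, w] : one dot product per row, which is what the
// m44 opcode does with the 4 constant registers holding the matrix.
function m44(matrix, v) {
  var out = [];
  for (var row = 0; row < 4; row++) {
    out[row] = matrix[row * 4 + 0] * v[0]
             + matrix[row * 4 + 1] * v[1]
             + matrix[row * 4 + 2] * v[2]
             + matrix[row * 4 + 3] * v[3];
  }
  return out;
}

// A translation matrix moving x by 0.5, applied to vertex (0, 0, 0, 1) :
var translation = [
  1, 0, 0, 0.5,
  0, 1, 0, 0,
  0, 0, 1, 0,
  0, 0, 0, 1
];
var moved = m44(translation, [0, 0, 0, 1]); // [0.5, 0, 0, 1]
```

The w component set to 1 is what lets a multiplication express a translation — that is why vertices travel through the pipeline as 4 components.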

Back to perspective

Now we know how to use a matrix to transform the final rendering of our triangle. Understanding the math behind perspective is great, but you don’t want to do it every time, do you ? Fortunately, Adobe provides a downloadable class, PerspectiveMatrix3D.

This class lets you create a Matrix3D from some intelligible parameters to render perspective.

Now, you can either continue to update your HelloTriangle class, or take the same package as above and look for the “AddingPerspective” class.

The AddingPerspective class actually draws a square, so that the effect of perspective can be noticed more easily. You know how to draw a triangle ; drawing a square is just drawing two triangles. You can have a look at the sources, but we will be back to Quads (squares) in the next article, which will deal with indexes. Either way, the following example can be achieved using a triangle or a quad, it doesn’t matter.

The PerspectiveMatrix3D class

Among many things, the PerspectiveMatrix3D class lets you define a projection matrix to render perspective using 4 parameters :

  1. The FoV, or Field of View. The FoV, in radians, represents how wide your field of view is. We will set it to 45°.
  2. The aspect ratio is the ratio of your backbuffer. We will set it to (width / height).
  3. The zNear is the minimum z coordinate that your eye can see. We will set it to 0.1. A word on that later.
  4. The zFar is the maximum z coordinate that your eye can see. We will set it to 1000.

Go to the render method, instantiate a new PerspectiveMatrix3D object, then apply the previous parameters to it.

var projection:PerspectiveMatrix3D = new PerspectiveMatrix3D();
projection.perspectiveFieldOfViewLH(45*Math.PI/180, 4/3, 0.1, 1000);

About the zNear

You may wonder why we don’t render from z = 0, and start at 0.1 instead. Well, remember the perspective divide was :

xP = K1 * x / z
yP = K2 * y / z

As we are dividing by z, and since dividing by zero is impossible, the zNear parameter can’t be equal to 0 : the equation couldn’t be computed for objects with a z coordinate set to 0.

This is actually a problem, since our triangle’s vertices have their z coordinates set to 0. Hold on, don’t go changing the VertexBuffer : we learned how to move an object, right ? We can simply append a translation on the z axis to push our object a little forward.

What we need to do now is :

  1. Create the PerspectiveMatrix3D as above
  2. Do some rotation on the m Matrix so we can actually notice the effect of perspective
  3. Translate our vertices forward a little so that they are beyond the zNear value
  4. Multiply the first Matrix by the PerspectiveMatrix to add perspective to the final render.

What I get is this :

var projection:PerspectiveMatrix3D = new PerspectiveMatrix3D();
projection.perspectiveFieldOfViewLH(45*Math.PI/180, 4/3, 0.1, 1000);
m = new Matrix3D();
m.appendRotation(getTimer()/30, Vector3D.Y_AXIS);
m.appendRotation(getTimer()/10, Vector3D.X_AXIS);
m.appendTranslation(0, 0, 2);
m.append(projection); // step 4 : multiply our transform by the projection matrix
context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, m, true);

Compile, and here it is : a rotating triangle with some sense of perspective ! If you took my class, you will see a rotating square instead of a triangle.

Practice !

As always, a little practice on your own is the best way to learn so here is what you can try

  1. Set the R, G and B values to [0-255] instead of [0-1], upload a [255,255,255,255] Vector as a Fragment Constant, then divide your color values by it before moving them to the Output Color. You may use the div AGAL opcode.

This article was less into code, and I think I will keep it that way for now, for 2 reasons :

  1. The less I write, the more you code.
  2. It was actually way too long to write the hello triangle article while describing every single line of code.

Anyway, I will always give the classes I use as examples, and those classes will be documented. If you think I should go back to something more verbose, just tell me, feedback is always appreciated.

As always, if you have any questions, feel free to ask !

Stage3D / AGAL from scratch. Part III – Hello Triangle

Here we go

So, you’ve read Part I and Part II, and you want to get your hands dirty. Alright.

You can start by downloading the source code : HelloTriangle Sources

In this exercise, we will draw our first triangle using only Stage3D. There will be a small part of AGAL, which is actually the minimum required to display anything on screen, but don’t worry, it’s easier to understand than it seems.

In the last articles, I talked a lot about vertices. A vertex is a point in space that will be used, along with 2 other vertices, by our GPU to render our triangle.

A VertexBuffer is a simple list of Numbers, so you have to define a structure that will be the same for each vertex : every vertex in the same buffer needs to follow the same structure. The interesting thing is that you can pass whatever you want in your VertexBuffer ; it is up to you to decide what the data is used for.

There are, however, at least 2 common structures used for VertexBuffers, but I will describe the first one, which is used in this example (baby steps !).

As you can see, our VertexBuffer will be structured as a simple list of Numbers, where the first three numbers are our vertex coordinates, and the next three our vertex color. Every vertex we add to the buffer needs to follow the same layout.
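In JavaScript-style pseudocode (the real thing is an ActionScript Vector.&lt;Number&gt;, uploaded later in this article), the layout looks like this — the colorOf helper is mine, just to show the arithmetic :

```javascript
// One flat list of numbers, 6 per vertex : x, y, z, r, g, b.
var vertexData = [
  -0.3, -0.3, 0,   1, 0, 0,  // 1st vertex : position, then color (red)
   0,    0.3, 0,   0, 1, 0,  // 2nd vertex (green)
   0.3, -0.3, 0,   0, 0, 1   // 3rd vertex (blue)
];

var numbersPerVertex = 6;

// Reading back the color of a vertex means skipping whole vertices,
// then skipping the 3 position numbers.
function colorOf(data, vertexIndex) {
  var start = vertexIndex * numbersPerVertex + 3;
  return data.slice(start, start + 3);
}
```

There is no marker between vertices : only the fixed size of the structure tells where one vertex ends and the next one starts, which is why every vertex must follow the same layout.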

Time for some code

Now if you didn’t already, download the source code, and open the HelloTriangle class in your favorite editor. Then come back to me.

The class constructor is quite simple to understand, so we move on to the __init function.

// wait for Stage3D to provide us a Context3D
stage.stage3Ds[0].addEventListener(Event.CONTEXT3D_CREATE, __onCreate);

When working with Stage3D, you are actually not working at all with the Stage3D class. Instead, you are working with a Context3D.

First thing you have to do is to target one of your Stage3D (stage.stage3Ds[0]) and request it for a context. You will be able to access the context when receiving the proper event.

At this step, Flash will try to reach your graphic card. If it can, you will get a hardware accelerated context, driven either by DirectX on Windows, or OpenGL on Linux and MacOS.

If Flash can’t reach your graphic card for any reason, you will still get a context3D but actually driven by the new CPU rasterizer called SwiftShader.

Once we get a context, we store it in a class member, then configure the back buffer :

private function __onCreate(event:Event):void {
	// // // CREATE CONTEXT // //
	context = stage.stage3Ds[0].context3D;
	// By enabling the Error reporting, you can get some valuable information about errors in your shaders
	// But it also dramatically slows down your program.
	// context.enableErrorChecking=true;
	// Configure the back buffer, in width and height. You can also specify the antialiasing
	// The backbuffer is the memory space where your final image is rendered.
	context.configureBackBuffer(W, H, 4, true);

The back buffer is a specific memory space where all your rendering is copied. You will then be able to display on screen what the back buffer contains.

Setting up buffers and program

Then we create the buffers and the program. Don’t worry too much about the AGAL part right now, I’ll explain it later.

private function __createBuffers():void {
	// // // CREATE BUFFERS // //
	vertexBuffer = context.createVertexBuffer(3, 6);
	indexBuffer = context.createIndexBuffer(3);

Creating buffers is rather simple. The only things you need to know are :

  1. How many vertices it will hold : 3
  2. How many numbers each vertex holds : x,y,z,r,g,b = 6

The index buffer needs to know how many indexes it will hold. Here, 3.

The program part is a bit complicated right now, because we need to see more to understand the AGAL code. Still, we can have a look at the code.

private function __createAndCompileProgram() : void {
	// When you call the createProgram method you are actually allocating some V-Ram space
	// for your shader program.
	program = context.createProgram();
	// Create an AGALMiniAssembler.
	// The MiniAssembler is an Adobe tool that uses a simple
	// Assembly-like language to write and compile your shader into bytecode
	var assembler:AGALMiniAssembler = new AGALMiniAssembler();
	var code:String = "";
	code += "mov op, va0\n"; // Move the Vertex Attribute 0 (va0), which is our Vertex Coordinate, to the Output Point
	code += "mov v0, va1\n"; // Move the Vertex Attribute 1 (va1), which is our Vertex Color, to the variable register v0
	 // Variable register are memory space shared between your Vertex Shader and your Fragment Shader
	// Compile our AGAL Code into ByteCode using the MiniAssembler
	vertexShader = assembler.assemble(Context3DProgramType.VERTEX, code);
	code = "mov oc, v0\n"; // Move the Variable register 0 (v0) where we copied our Vertex Color, to the output color
	// Compile our AGAL Code into Bytecode using the MiniAssembler
	fragmentShader = assembler.assemble(Context3DProgramType.FRAGMENT, code);

Creating a program is simple. Then we have the simplest AGAL code we can have. Just skip it for now. Then we use the AGALMiniAssembler tool from Adobe to compile our AGAL program into bytecode at runtime, which gives us two shaders : one VertexShader, and one FragmentShader.

Uploading Data to the graphic card

Now we need to upload the data and the program to the graphic card.

private function __uploadBuffers():void {
	var vertexData:Vector.<Number> = Vector.<Number>([
	-0.3, -0.3, 0, 1, 0, 0, 	// - 1st vertex x,y,z,r,g,b
	0, 0.3, 0, 0, 1, 0, 		// - 2nd vertex x,y,z,r,g,b
	0.3, -0.3, 0, 0, 0, 1		// - 3rd vertex x,y,z,r,g,b
	]);
	vertexBuffer.uploadFromVector(vertexData, 0, 3);
	indexBuffer.uploadFromVector(Vector.<uint>([0, 1, 2]), 0, 3);

To upload data to a buffer, the simplest way is to upload a Vector. As you can see, I am creating a Vector of 3 * 6 entries, which corresponds to the data of three vertices. The coordinates go from -1 to 1 for x and y, and we don’t bother with the z part for now. The color components, R, G and B, go from 0 to 1. So the first line matches the bottom left vertex, which will be red, the second line matches the top vertex and will be green, and so on.

For the index buffer, we simply tell our graphic card that the triangle is drawn in the order 0, 1, 2, which are the indexes of the vertices in the vertex buffer.

Now time to upload the program

private function __uploadProgram():void {
	program.upload(vertexShader, fragmentShader); // Upload the combined program to the video Ram

Uploading the program is rather simple too, so now we can move on to the AGAL stuff.

Using the VertexBuffer inside the Shaders

When we created the VertexBuffer, we told the graphic card that each vertex has a length of 6 components. This is valuable information for the graphic card, because the VertexShader will be executed once for every vertex in your buffer.

The GPU will then “cut” the buffer 6 numbers at a time, because we told it that’s the amount of data each vertex holds. Each vertex needs to be copied into fast access registers before being used by the program. Think of a register as a very small piece of RAM that is extremely fast, and very close to the GPU.

The last step is to tell the GPU how to split each chunk of data, and where to copy it.

For the code part :

private function __splitAndMakeChunkOfDataAvailableToProgram():void {
	// So here, basically, you're telling your GPU that for each vertex (a vertex being x,y,z,r,g,b)
	// you will copy into register "0", from the buffer "vertexBuffer", starting from position "0", the next FLOAT_3 numbers
	context.setVertexBufferAt(0, vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // register "0" now contains x,y,z
	// Here, you will copy into register "1", from "vertexBuffer", starting from index "3", the next FLOAT_3 numbers
	context.setVertexBufferAt(1, vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // register 1 now contains r,g,b

Let’s go AGAL

We just instructed the GPU on how to use the VertexBuffer. As a reminder :

  1. Take the first 6 numbers
  2. From 0 to 3, copy the data into register 0
  3. From 3 to 6, copy the data into register 1
  4. Register 0 now contains coordinates, register 1 now contains a color.
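The same split can be sketched in JavaScript (plain arrays standing in for the V-Ram and the registers — only the offset/stride arithmetic is the point here, and readAttribute is a made-up name) :

```javascript
// Emulate what setVertexBufferAt(register, buffer, offset, FLOAT_3)
// tells the GPU to do for one vertex : copy `size` floats, starting
// at `offset` inside that vertex's `stride`-number chunk.
function readAttribute(buffer, vertexIndex, offset, size, stride) {
  var start = vertexIndex * stride + offset;
  return buffer.slice(start, start + size);
}

var buffer = [-0.3, -0.3, 0, 1, 0, 0]; // one vertex : x,y,z,r,g,b

var va0 = readAttribute(buffer, 0, 0, 3, 6); // [-0.3, -0.3, 0]
var va1 = readAttribute(buffer, 0, 3, 3, 6); // [1, 0, 0]
```

Once you see setVertexBufferAt as (register, buffer, offset, size), the two calls in the code above read naturally.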

The AGAL code is separated into two shaders : the Vertex Shader, and the Fragment Shader. The Vertex Attribute registers can only be accessed from the Vertex Shader, but we will learn a trick to make the color available to the Fragment Shader.

Before going any further, let’s have a look at the AGAL syntax :

opcode destination, source1[, source2][, options]

for instance :

mov op, va0

means :

op = va0

Another example :

mul vt0, va0, vc0

means :

vt0 = va0 * vc0

The names of the “variables” are actually names of registers. They are rather simple to remember, but for now, let’s just learn the ones we are using :

  • va0 : Vertex Attribute 0. This is the register where we copied the coordinates of each vertex (see above). There are 8 such registers, from va0 to va7.
  • op : Output Point. This is a special register where the final coordinate must be moved to.
  • v0 : Variable Register 0. The Variable registers are shared between the Vertex Shader and the Fragment Shader. Use them to pass some data to the Fragment Shader. There are 8 Variable Register from v0 to v7.
  • oc : Output Color. This is a special register where the final color must be moved to.

Let’s have a look at our first sample of AGAL code, which is the Vertex Shader (lines 159 and 160 of the example class) :

mov op, va0
mov v0, va1

The first line moves the va0 register, where we copied the vertex coordinates, to the output point without any modification. Then, we need to pass some information to the Fragment Shader using a Variable Register. In our case, we only need to pass the color, which is stored in the va1 register (see the previous chapter on buffer splitting into registers).

Simple, isn’t it ? Let’s have a look at the Fragment Shader.

mov oc, v0

Really simple : move the color we just copied into the Variable Register to the output color, without any modification. That’s all !

Once the program is compiled using the AGALMiniAssembler and uploaded to the GPU, we can make it the active program for the GPU :

private function __setActiveProgram():void {
	// Set our program as the current active one
	context.setProgram(program);
Head up to the rendering part.

Rendering our triangle on screen

The rendering part is quite simple in our example since we don’t have any interaction, camera, animation or whatever. The render method is called on each frame.

private function render(event:Event):void {
	context.clear(1, 1, 1, 1); // Clear the backbuffer by filling it with the given color
	context.drawTriangles(indexBuffer); // Draw the triangle according to the indexBuffer instructions into the backbuffer
	context.present(); // render the backbuffer on screen.

On each frame, we start by clearing the scene and filling it with plain white. Then, we draw the triangle according to the indexBuffer instructions. The drawTriangles method actually draws into the back buffer. To render the back buffer content on screen, we simply call the present() method.

Compile, and relax.

Maybe some of you have noticed that this example is simpler than the others you may have read before : no Matrix3D, no m44 opcode. This is intended.

You really need to understand how everything works together before going any further. Once you get it, you will find the rest so much easier.

Practice !

You should try the following to help you understand how everything works together, as a practice :

  1. Render the triangle on a black background
  2. Get rid of the z coordinate, since we are not using it
  3. Without modifying the vertexData vector or the AGAL code, use the coordinates as colors and the colors as coordinates. You should get something like this :

If you need any help with either the tutorial or the exercises, feel free to leave a comment on the blog, or drop me a message on my twitter.

Again, as this is the first time I am writing a blog, I’d really like some feedback. Any criticism, or encouragement, is welcome.

Hope you liked it !

Stage3D / AGAL from scratch. Part II – Anatomy of a Stage3D Program

Anatomy of a Stage3D program

Before we start coding our first triangle, let’s have a quick look at the structure of a Stage3D program. This is also true for a WebGL program, or anything that deals with low level 3D programming.

When working with a low level 3D API, you have to understand a bit about how the hardware works. As a reminder, the V-Ram is your graphic card’s memory, and a Buffer is some allocated space in your V-Ram. To render some triangles, your GPU needs at least some vertices (a vertex being a point in space) and a program that will tell it how to render those points on screen.

One thing we didn’t talk about yet is the Indexes. Indexes are instructions on the order in which your vertices should be drawn to make triangles. You can think of indexes as the numbers in those “connect the dots” games we had as kids. Great, we can add one term to our vocabulary !

Indexes : A list of integers that defines in which order the graphic card should render the vertices to make triangles.
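To see why indexes are worth the trouble, here is a quad (two triangles) sketched in JavaScript : 4 vertices stored instead of 6, because the shared corners are reused (illustration only, the values are mine) :

```javascript
// Four corners of a quad ; each vertex is only stored once.
var vertices = [
  { x: -1, y: -1 }, // 0 : bottom left
  { x:  1, y: -1 }, // 1 : bottom right
  { x:  1, y:  1 }, // 2 : top right
  { x: -1, y:  1 }  // 3 : top left
];

// Two triangles, described "connect the dots" style.
// Vertices 0 and 2 are shared by both triangles.
var indexes = [0, 1, 2,  0, 2, 3];

var trianglesDrawn = indexes.length / 3; // 2
```

Without indexes, the shared vertices 0 and 2 would have to be duplicated in the vertex buffer.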

Don’t worry too much about indexes now, we will come back to them later. Now, I heard you like flowcharts.

A flowchart of the basic Stage3D program structure

You can identify 3 major phases in this flowchart :

  1. The Allocation Phase : create your buffers in your V-Ram, one for each type of data your GPU will use (vertex, texture, program).
  2. The Upload Phase : send your data, texture, or program into your buffers so your GPU can access it.
  3. The Declaration Phase : tell your GPU which buffers to use as Vertex Data, Texture and Program so it can run.

The first two phases, Allocation and Upload, are the essence of Stage3D’s power. It takes a lot of time to allocate space and upload your data, but once your mesh is written in your video ram, there is no need for the CPU anymore, and that is the reason why your GPU can render so fast, leaving your CPU free for any other process such as physics calculations.

All three phases can be done once, and don’t need to be executed again as long as the data or the program doesn’t change. We will call this the setup.

The setup can be done in any order : you can start with the Program, then your Vertices, then your Indexes, or do the opposite. Of course, you need to follow the order within each column : you need to create a buffer before you can upload data into it ! But at the end of the setup, when you call the drawTriangles() method, all that matters is that everything is in place and ready to run, no matter how you did it.

Now that you know how any low level 3D program is structured, you have had a peek at the methods Stage3D brings to Flash : V-Ram allocation, data upload to buffers, etc.

I’d love to write the first sample of code as soon as possible, to put into practice all I have written so far, but I don’t really have the time to do it right now. I promise that in less than a week you will get your first triangle using Stage3D and only Stage3D.

Stage3D / AGAL from scratch. Part I – The Basics


I started working with AGAL at Marcel with a coworker of mine a couple weeks ago. We’ve all seen what amazing stuff Stage3D allows to do, and maybe you went for the Away3D examples and compiled them.

While working with Away3D for my first experiments, I felt disappointed because I wasn’t able to go beyond cubes, spheres and other primitives, because of an obvious lack of 3D modeling skills.

Then came Starling and ND2D, and we started to realize that Stage3D was also a wonderful tool to go for hundreds of sprites, particles, and animations.

But you have to face it : if you really want to understand how those things work, either to improve them or to use those tools in the best way you can, you need to go deeper (krkrkr) and start using the low level API that Adobe gave us with Stage3D.

Like a lot of people, I started with a simple Hello Triangle example. But then, nothing. So we had to dig on our own.

In the next 3 or 4 posts, I will explain the principles and basics of Stage3D, and try to go a little beyond the 3-color triangle exercise.

Let’s start coding for Flash Player 11

First of all, we need to be able to compile something for Flash Player 11. I am using FDT, so you may have to adapt my explanations for some other IDE like Flash Develop or Flash Builder.

A lot of people already covered the subject so simply follow the example according to your IDE.

Make sure you can compile something with a dependency to Context3D before going any further.

Stage3D : What it is, and what it’s not

I feel that a lot of Flash developers have been disappointed when looking at Stage3D. Adobe told us they were bringing true 3D to the platform, and now, all you get is this AGAL stuff, some numbers to write, no camera, no Scene3D, not even a z property on some Mesh object. Well.

Stage3D is only doing one thing, but it does it well. Stage3D only allows you to upload some data, textures, and a program to run to your graphics card, or GPU. And that’s all.

What Stage3D is not is a 3D engine. A 3D engine lets you work with meshes, materials, lights, cameras and tangible objects just like that. A lot of 3D engines have been built on top of Stage3D, like Away3D 4.x, Alternativa3D, Minko, Epic Games’ Unreal Engine or the Unity 3D engine.

OK, you got it, but now you want to learn some actual things ? Go on then.

Basic 3D vocabulary

Like in every domain, there are some specific words you may want to know before going any further. I will keep it to the minimum, and explain later what is not absolutely fundamental.

GPU : Graphical Processing Unit. In other words, your graphic card, or more precisely, your graphic card’s processor.
V-Ram : Video Ram. Your graphic card’s memory.
Buffer : Allocated space inside your V-Ram.
Vertex : A point in a 3D space.
Fragment : Let’s just say it’s the color of a pixel, for now.
Shader : A part of a program. Shaders come in two types : Vertex Shaders and Fragment Shaders. Fragment Shaders are also called Pixel Shaders.
Program : The combination of a Vertex Shader and a Fragment Shader.

That’s all for now !

In the next part, we will code our first triangle only using Stage3D, and try to understand the structure of a Stage3D program.