Stage3D / AGAL from scratch. Part VIII – Texture, please !

Hi folks,

As I said earlier, I have received some requests on twitter for an article about textures. So here we go.

In this tutorial, I’m going to take the previous article example and modify it so we can see a textured cube instead of a pink one.

By the way, I realized that the previous classes I gave you were incomplete, with some missing imports. Nothing huge actually, just a re-scale function that is part of our framework here at Marcel. Long story short, sorry about that; I cleaned those up so you should be able to compile right away. And remember, if you have any trouble with any of my examples, just send me a tweet or something.

OK, let's go.

Texture coordinates

In the very first rounds of tutorials, we drew triangles with different colors "attached" to each vertex. That was how we learned that every piece of data passed from the vertex shader to the fragment shader is interpolated.

Using a texture is relatively simple. As with everything we want to use in our shader, we will have to declare it, upload it to the V-Ram, then allocate it.

The fragment shader, or pixel shader, runs for every pixel that is being drawn, and its purpose is to compute that pixel's color. In the previous example, the base color of every pixel was simply stored in a varying register; all we had to do was eventually apply some light to it, then copy the color to the Output Color register (oc).

Well, the only difference here is that, for every pixel, the fragment shader will not receive its color directly, but will have to sample the texture to get it.

The question is: how can our shader know where, on the texture, it is supposed to "look" to get the pixel color? Well, this is what texture coordinates are for.

Texture coordinates are very simple to understand. We call them U and V, and they are the exact same thing as X and Y in ActionScript, except they are not absolute values but relative values between 0 and 1. This way, you can change your texture size (from 256*256 to 512*512 for example) and your texture coordinates will remain the same.

Pretty simple, huh?

So, for a square, you basically have to use the coordinates shown above. For a triangle pointing upward, you might set your first coordinate to (0.5, 0), so that the first point sits at the middle of the top edge.
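To make this concrete, here is a minimal sketch (not taken from the article's sources) of what an interleaved vertex buffer with UVs could look like for a quad, assuming x, y, z, u, v per vertex and a buffer created with context.createVertexBuffer(4, 5):

// Hypothetical interleaved vertex data: x, y, z, u, v for each corner of a quad
var vertices:Vector.<Number> = Vector.<Number>([
    -1,  1, 0,   0, 0,  // top-left     -> samples the texture at (0, 0)
     1,  1, 0,   1, 0,  // top-right    -> (1, 0)
     1, -1, 0,   1, 1,  // bottom-right -> (1, 1)
    -1, -1, 0,   0, 1   // bottom-left  -> (0, 1)
]);
vertexBuffer.uploadFromVector(vertices, 0, 4);

// xyz goes to va0, uv goes to va1
context.setVertexBufferAt(0, vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3);
context.setVertexBufferAt(1, vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_2);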

UVs (from now on, I will call texture coordinates UVs) can get far more complex, for instance when you are mapping a skin and hair texture onto a face mesh, but you are not supposed to write them by hand: they will be exported by your 3D software, like 3DS Max.

It's interesting to understand them though, because you can do a lot with UVs. With UVs you can take only a portion of your texture; there is no need to use it all. Ever heard of sprite sheets? For those of you who have already tried Starling and were amazed by the very fast rendering of thousands of animations, this is how an animated "MovieClip" on the GPU is done: upload a single large texture containing every frame of your animation, draw it on a square, store the UVs in constants, and change them on every frame! This way, each frame the GPU will pick a different portion of the image to sample.
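As a hedged sketch of the idea (the sheet layout and variable names here are mine, not part of the article's sources), assuming a 4x4 sprite sheet and a quad whose UVs already cover one frame (0 to 0.25), you could upload the current frame's offset as a vertex constant every frame:

// Hypothetical 4x4 sprite sheet: each frame covers 0.25 x 0.25 in UV space
var cols:int = 4, rows:int = 4;
var frame:int = currentFrame % (cols * rows);
var u:Number = (frame % cols) / cols;      // left edge of the current frame
var v:Number = int(frame / cols) / rows;   // top edge of the current frame

// Upload the offset as a vertex constant (here vc4); in the vertex shader,
// "add v0, va1, vc4" (instead of "mov v0, va1") shifts the quad's UVs onto that frame.
context.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 4,
    Vector.<Number>([u, v, 0, 0]));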

An example of a sprite sheet. Yep, old games were made that way too!

Texture Mipmapping.

As I am having a hard time explaining what mipmapping is, let me quote Wikipedia:

In 3D computer graphics texture filtering, mipmaps (also MIP maps) are pre-calculated, optimized collections of images that accompany a main texture, intended to increase rendering speed and reduce aliasing artifacts.

As we saw, for every pixel being drawn, the shader has to sample the texture to get the pixel color at the corresponding UVs. The first problem is that, with a large texture, this takes some time to process. The second, and more important, issue is that at a large distance, when your object becomes very small, you might have issues with texture sampling, resulting in moiré patterns.

The solution is to upload a series of bitmaps, each one half the size of the previous one. The GPU will then automatically choose which bitmap to sample according to the distance. This is a good optimization technique since each sampling takes less time, but it also takes more V-Ram to store the extra bitmaps. Mipmapping used to come with another artifact in old video games, where the "layers" of the texture wouldn't blend with each other, as you can see here on the left wall.

The good news is that texture filtering is now applied, so you won't get that ugly rendering. Actually, in the following demo, you will have an option to turn mipmapping on and off, and also an option to add a random color to each mipmap level so you can notice the different textures being used.

Let’s start coding

First of all, you can grab the source code. In this archive, you will find the cleaned-up previous example, updated to use a texture. To upload a texture, you need a BitmapData instance whose width and height are powers of two (256, 512, 1024…). A texture doesn't need to be square, it can be a rectangle, but remember that your U and V values are always clamped to [0, 1], even on a rectangle-shaped texture.
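If your source image is not a power of two, a small helper can stretch it before the upload. This is only a sketch (the helper name is mine, not the getResizedBitmapData used in the article's sources):

// Scale a BitmapData up to the next power-of-two dimensions (hypothetical helper)
private function toPowerOfTwo(source:BitmapData):BitmapData {
    var w:int = 1, h:int = 1;
    while (w < source.width)  w *= 2;
    while (h < source.height) h *= 2;
    var m:Matrix = new Matrix();
    m.scale(w / source.width, h / source.height);
    var result:BitmapData = new BitmapData(w, h, source.transparent, 0);
    result.draw(source, m, null, null, null, true); // smoothed draw
    return result;
}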

I'm not going to explain how to get a BitmapData; either load it or embed it as I did.

[Embed(source="../../../../../assets/texture.jpg")]
 private var textClass:Class;
 private var text:Bitmap = new textClass();

The tutorial class takes advantage of my "still-in-progress" Geometry class, as the previous tutorial did. A few things have been improved; you may have a look if you want. Texture declaration, upload and allocation are all done in the __createAndUploadTexture() method.

/**
 * Create and upload the texture
 */
 private function __createAndUploadTexture():void {
     if (texture) {
         texture.dispose();
     }
    texture = context.createTexture(1024, 1024, Context3DTextureFormat.BGRA, false);
     // MIPMAP GENERATION
     var bmd:BitmapData = text.bitmapData;
     var s:int = bmd.width;
     var miplevel:int = 0;
     while (s > 0) {
         texture.uploadFromBitmapData(getResizedBitmapData(bmd, s, s, true, (miplevel != 0 && _mipmapColor.selected) ? Math.random()*0xFFFFFF:0), miplevel);
         miplevel++; //miplevel going up by one
         s = s * .5; //... and size going down, divided by two.
     }
    context.setTextureAt(0, texture);
 }

The code is pretty self-explanatory. First, ask the context to create a texture; it's the same as asking it to create a buffer. The upload part is a bit more complicated though.

Unlike a vertexBuffer, you can upload "several textures" into a single texture, each one a smaller version of the previous one, for mipmapping purposes. Every time you upload a smaller texture, the "mipmap level" goes up by one, the default being zero.

For instance, if you don’t want to use mipmapping, you can use :

texture.uploadFromBitmapData(bitmapData);

But if you want to use mipmapping, you will end up using something like :

texture.uploadFromBitmapData(bitmapData512, 0);
texture.uploadFromBitmapData(bitmapData256, 1);
texture.uploadFromBitmapData(bitmapData128, 2);
texture.uploadFromBitmapData(bitmapData64, 3);
// and so on, down to a 1x1 bitmapdata...
texture.uploadFromBitmapData(bitmapData1, 9);

This is pretty much what my loop does, but instead of creating every texture size by myself, I just use ActionScript to generate them for me.

The rest of the method is allocation, very much like setVertexBufferAt or setProgram. When you allocate a texture for the program, you will be able to use it in AGAL through "fsx", so in this case fs0. Pretty much the same thing as fragment constants, vertex attributes and so on…

fs : Fragment Sampler.

AGAL Time

The AGAL code given in the class is almost the same as the previous article's AGAL code, so I'm only going to highlight the differences. As always, if you are having trouble with my explanations, feel free to contact me.

Vertex Shader :

code += "mov v0, va1n";   // Interpolate the UVs (va0) into variable register v1
 code += "mov v1, va2n";   // Interpolate the normal (va1) into variable register v1

In the Vertex Shader, be sure to pass the UVs to the fragment shader. All the rest is the same thing.

Fragment Shader :

In the class, the code changes according to the mipmap combobox, so I will flatten the code here :

"text ft0 v0, fs0 <2d,linear, nomip>n" // NO mipmap

or

"text ft0 v0, fs0 <2d,linear, miplinear>n" // WITH mipmap

As you can see, the tex opcode takes the following arguments:

tex destination, coordinates, sampler <options>

Sampler options can be found very easily on Google; I will explain them in another article a little later.

As you can see, this is really simple. Once sampled, the pixel's diffuse color is stored in the temporary register ft0. Now you can use it as your base color instead of the constant that was used previously (fc4). If you kept your own sources from last time instead of taking these ones, make sure you also change this line:

"mul ft2, ft0, ft1 n"+ //multiply fragment color (ft0) by light amount (ft1).

Or you will still be using the plain color uploaded as a constant (replace fc4 with ft0).
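Put together, the mipmapped version of the fragment shader would look roughly like this (a sketch only, reusing the register layout from the previous article: fc1 ambient, fc2 light direction, fc3 light color):

var fragmentCode:String = "" +
    "tex ft0, v0, fs0 <2d, linear, miplinear> \n" + // sample the texture at the interpolated UVs
    "dp3 ft1, fc2, v1 \n" +                         // dot the normal with the light direction
    "neg ft1, ft1 \n" +
    "sat ft1, ft1 \n" +                             // clamp the light amount to [0, 1]
    "mul ft2, ft0, ft1 \n" +                        // sampled color * light amount
    "mul ft2, ft2, fc3 \n" +                        // * light color
    "add oc, ft2, fc1";                             // + ambient light, written to the output color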

Compile and Run, and here you go !


To notice the mipmap effect, zoom out a lot and rotate the cube a little. You can also check "show mipmap colors" so each mipmap level gets a random color.

I hope that answers the questions you had about textures. A little more will come about sampler options and cube textures. As always, feedback is appreciated, and feel free to contact me if you run into trouble!

See you !

Stage3D / AGAL from scratch. Part VII – Let There Be Light

Let There Be Light

So, we displayed a lot of cubes and triangles together, and we also created a few controls to play around with the 3D Scene. But the very essence of 3D content is light.

For this tutorial, I've included a bunch of classes I am currently working on, not as a 3D engine, but as a small toolbox for future experiments. You will find these currently "W.I.P." classes in the given source code:

* Geometry : Stores a geometry and its indices. Has shortcuts to declare square faces instead of triangles. Can create and upload the buffers for you. Later, it will be able to generate the face normals (more about that below).

* Cube : A simple class extending Geometry, creating a cube made of 24 points so that the future face normals act as intended.

* ColorMaterial : Right now a simple RGB value object, but it should contain the fragment shader soon.

* ArcBallCamera : Not something new, but completely revamped. It now really moves along a circle and uses the pointAt method to target the origin. This new method makes the drag controller a little smarter (dragging to the bottom only makes the object rotate as if the screen were the X axis).

When I announced this tutorial a few weeks ago, I compared face normals (actual normals in 3D parlance) to normalized vertices.

You will find in the Geometry class a method called "computeNormals" that will give you the former, the ones we want. This method is still in progress: right now, the normal can end up pointing the opposite way if the face is drawn counter-clockwise.

I will explain in another article how you can generate basic normals for your models, but keep in mind that this data should ideally be computed by your 3D designer, because it can “smooth” edges for low-poly meshes.

So anyway, by calling the computeNormals method, we get a small vector perpendicular to each face (each triangle).

The Lambertian Factor

The first light we will compute is what we call the diffuse light. The amount of light diffused by a surface depends on the angle between that surface and the light. This is called the Lambertian factor, or Lambertian reflectance. Quoting Wikipedia: "The reflection is calculated by taking the dot product of the surface's normal vector, and a normalized light-direction vector, pointing from the surface to the light source."

The dot product is an operation we can perform very simply in AGAL using the opcode dp3, which stands for Dot Product 3, 3 being the number of components (here x, y and z).

Just a word about the dot product. The dot product, or scalar product, takes two vectors and returns a single number. The only thing you need to remember is this:

  • If the two vectors point in the "same" direction, the dot product will be a positive number.
  • If the vectors are perpendicular to each other, the dot product will be equal to zero.
  • If the vectors point in opposite directions, the dot product will be a negative number.

Because the dot product also depends on the lengths of the vectors, we will mostly use it with normalized vectors, giving a result between -1 and 1, which is very handy, especially in light computation.
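Just to make those three cases concrete, here is a tiny ActionScript illustration using Vector3D (the values are illustrative):

var normal:Vector3D = new Vector3D(0, 1, 0);        // a surface facing straight up

var toLight:Vector3D = new Vector3D(0.5, 1, 0);     // a light placed somewhere above
toLight.normalize();

trace(normal.dotProduct(toLight));                  // ~0.89 : surface well lit
trace(normal.dotProduct(new Vector3D(1, 0, 0)));    // 0     : light grazing the surface
trace(normal.dotProduct(new Vector3D(0, -1, 0)));   // -1    : light coming from behind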

All right, let's now code this. First of all, download the following source code.

The LetThereBeLight class is rather simple. On context creation, I simply create an ArcBallCamera, a projection matrix, a model matrix (which will be added to the Geometry class later), and a Cube. The Cube instance will receive a ColorMaterial (not really relevant right now) and create the buffers for me:

geometry = new Cube(50);
 geometry.setMaterial(new ColorMaterial(0xFF00FF));
 geometry.createBuffers(context);

This is simple stuff for you now, so let’s move on to the actual shader.

The Shader Constants

As we saw, computing the Lambertian factor requires the light direction and the surface normal. The surface normals are already stored in the vertexBuffer, so we still need the light direction. But we also need a bunch of other values:

  • The Light Color. Here I chose a plain white.
  • The Ambient Light. The ambient light is the minimum amount of light a surface can receive. It's a simple technique to simulate the fact that, in the real world, light is reflected so many times that even when one side of an object is not under the light, it's still visible and doesn't turn completely black.
  • The Light Direction. In this example, the light will always come from the camera, meaning we will get the impression of moving the cube under the light rather than moving around it, but feel free to try other values.

All those data will be stored in shader constants, so here we go :

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0, Vector.<Number>([0,0,0,0]));
// fc0, used to clamp negative values to zero

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 1, Vector.<Number>([0.1,0.1,0.1,0]));
// fc1, ambient lighting

var p:Vector3D = camera.position;
p.negate();
p.normalize();

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 2, Vector.<Number>([p.x,p.y,p.z,1]));
// fc2, light direction

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 3, Vector.<Number>([1,1,1,1]));
// fc3, light color

You may have noticed that all those constants, even though they are mostly vectors, directions and positions, are FRAGMENT constants, since we have no use for them in the vertex shader. Looking at the source, you will see that the color of the cube (here a nice pinkish color) is also uploaded as a constant (fc4). We saw that already.

OK, now everything is in place, so we can have a look at the shader's AGAL code.

AGAL Time

Here is what we need to do to apply the Lambertian factor:

  1. Calculate the Lambertian factor using a dot product between the normal (v1) and the light direction (fc2).
  2. Negate the result: we do this because the Lambertian formula uses the light direction from the surface to the light source. You can either negate the light direction vector or negate the dot product result.
  3. Clamp any result below 0: if the angle between the light and the surface normal is higher than 90°, the dot product will be negative. This could cause unexpected results when computing the output color, so we just set it to 0 (no light).
  4. Multiply the fragment color by the light amount. For a light amount equal to 0, the surface will be black; for a light amount equal to 1, the surface will have its regular color.
  5. Multiply the resulting color by the light color. Your red cube might look a little more purple if your light is blue.
  6. Add the ambient light. This way, every black surface will become a little brighter.

Here is the corresponding AGAL code :

code = ""+
 "dp3 ft1, fc2, v1 n"+ // dot the transformed normal (v1) with light direction fc2 -&gt; This is the Lamberian Factor
 "neg ft1, ft1 n"+ // Get the "opposite" vector. We could also have uploaded the opposite of the light direction to avoid this step
 "max ft1, ft1, fc0 n"+ // clamp any negative values to 0 // ft1 = lamberian factor
 
 "mul ft2, fc4, ft1 n"+ //multiply fragment color (fc4) by light amount (ft1).
 "mul ft2, ft2, fc3 n"+ //multiply fragment color (ft2) by light color (fc3).
"add oc, ft2, fc1"; //add ambient light and output the color

UPDATE: Thanks to Jean Marc, I discovered the sat opcode, which one can use to clamp any value to the range [0, 1]. So I should just replace the "max" line with this one:

" sat ft1, ft1 n"+

which allows me to save a constant, so I can also get rid of fc0.

Also, you now know that values copied to the varying registers (v0, v1) are interpolated. That behavior was demonstrated by the color slowly fading between two points in the previous tutorials. Well, as Jean Marc pointed out, once interpolated, the normals may no longer be normalized, so I should normalize my normals (duh!) in the fragment shader before using them. Thanks Jean Marc!
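With both remarks applied, the fragment shader would look roughly like this (just a sketch, not code from the downloadable sources):

code = ""+
 "nrm ft3.xyz, v1 \n"+   // re-normalize the interpolated normal
 "dp3 ft1, fc2, ft3 \n"+ // Lambertian factor
 "neg ft1, ft1 \n"+
 "sat ft1, ft1 \n"+      // clamp to [0, 1], fc0 is not needed anymore

 "mul ft2, fc4, ft1 \n"+ // fragment color * light amount
 "mul ft2, ft2, fc3 \n"+ // * light color
 "add oc, ft2, fc1";     // + ambient light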

Compile and run : here it is, your first directional light !

For the posted demo, I added two options that are not in the sources: the first checkbox fixes the light at its current position so you can rotate the cube and see the effect of ambient light, and the second one switches the normals to normalized vertices (see the first two schemes).


As always, have fun with the sources, and tell me what you think! If you need more explanations or anything, just feel free to ask.

See you !

Watercolor effect

On Friday I found this amazing work from Stamen, where they convert an OpenStreetMap map in real time into a wonderful watercolor-like drawing. Check it out here!

I really love the result, it's absolutely gorgeous. Lucky for me, they shed some light on the whole process on their blog: http://content.stamen.com/watercolor_process

So I decided to recreate and benchmark the same effect in Flash. No Stage3D this time, only bitmap manipulation.

First, the demo :

The first slider changes the threshold sensitivity.
The second slider changes the Perlin noise alpha.
Use the checkboxes to disable the shadows or to view the mask that is used.

Right now the demo is a bit heavy. Almost no optimization was done, and I wonder if some of the computation could be done using Pixel Bender.

You can find an image of every step on the Stamen blog post, so I won't detail them here, but here is the effect code. Feel free to go back and forth between the two blogs to see the filter in action step by step.

     private function filterColor(colorToFilter:uint, textureToApply:Bitmap, sensitivity:int = 90):void {
     var msk:Bitmap = new Bitmap(new BitmapData(_mask.width, _mask.height, true, 0xFF000000));
      // _mask is a generated bitmap we get using a background color, the textfield, and the two vector assets.
      // You can see it by selecting the "show mask" checkbox
 
     msk.bitmapData.lock();
     msk.bitmapData.threshold(_mask.bitmapData, _mask.getBounds(this), new Point(0,0), "==", colorToFilter, 0xFFFFFFFF, 0xFFFFFFFF, false);
      // the first threshold isolates the given color, for instance pink for the text
 
     msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(4.5, 4.5, 2));
     msk.bitmapData.draw(_noise, null, new ColorTransform(.5, .5, .5, _slAlpha.value), BlendMode.NORMAL, null, true);
      // Blur, then apply a "noise". _noise is a simple Perlin noise bitmap generated on app initialisation. We use the same one for every layer
 
     msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0,0), "<=", sensitivity, 0xFF000000, 0x000000FF);
     if(_useShadow.selected) msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0,0), ">", sensitivity, 0xFFFFFFFF, 0x000000FF);
      // these thresholds give us a black and white mask which is slightly deformed by the noise and the blur filter.
      // The higher the sensitivity (which is actually just the color limit of the threshold, from 0 to 255), the more the mask shrinks, leaving some white space between layers.
 
     msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(2, 2, 3));
     msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0,0), "<=", 0x66, 0xFF000000, 0x000000FF);
      // New blur / threshold pass to round off the previous mask a little
 
     msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(1.2, 1.2, 1));
     msk.bitmapData.unlock();
      // small blur to anti-alias the mask
 
     if(_useShadow.selected){
          var shadow:BitmapData = msk.bitmapData.clone();
          shadow.applyFilter(shadow, _mask.getBounds(this), new Point(0,0), new BlurFilter(5, 5, 3));
          shadow.copyChannel(msk.bitmapData, _mask.getBounds(this), new Point(0,0), BitmapDataChannel.RED, BitmapDataChannel.ALPHA);
     }
      // the inner shadow is just the same mask blurred again, then cut into by copying the unblurred mask's red channel into the blurred mask's alpha channel.
 
     var bmp:Bitmap = new Bitmap();
     bmp.bitmapData = textureToApply.bitmapData.clone();
 
     bmp.bitmapData.copyChannel(msk.bitmapData, _mask.getBounds(this), new Point(0,0), BitmapDataChannel.RED, BitmapDataChannel.ALPHA);
     // Copy the mask red channel (could have been green or blue since we are working in greyscale) into texture alpha channel.
 
     if(_useShadow.selected) bmp.bitmapData.draw(shadow, null, new ColorTransform(1, 1, 1, .4), BlendMode.MULTIPLY, null, true);
      // Finally, draw the shadow bitmap onto the texture.
 
     _container.addChild(bmp);
}

That’s it ! Not as beautiful as the Stamen work, but right now I’m satisfied with the result.

I’m not giving the whole code since it’s embedded into the Agency Framework, so I would have to upload a lot of classes for a single effect, but you can try it by yourself really easily.

Credit goes to Stamen for the idea, and to Stamen again for those wonderful textures I used.

Brace yourself, shaders are coming…

… Or at least I hope so !

Since I'm working on Stage3D, I want to understand how lighting effects work. To be able to work on my own shaders, I need a few things:

A mesh more complex than just a cube

For this I am actually working on a very basic OBJ parser. Well, at the start I thought it would be complete, but I can't get the splitting of geometries into sub-geometries according to materials done… Anyway, I've read a lot on the subject, even took a little inspirational peek into the Away3D code, and I have a simple mesh with no textures. Enough for now.

True Normals

When you work with light, you need normals to compute it. A normalized vector is just a Vector3D whose length (computed from its 3 components x, y and z) is 1.

Every Vector3D can be normalized using Vector3D.normalize(), or directly in AGAL using the nrm opcode. And a vertex coordinate is nothing else than a Vector3D (a coordinate represents the offset from the origin point).
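A quick illustration, just to fix the idea:

var v:Vector3D = new Vector3D(3, 4, 0);
trace(v.length);                    // 5
v.normalize();
trace(v.x, v.y, v.z, v.length);     // 0.6 0.8 0 1

// The AGAL equivalent, assuming the un-normalized vector sits in v1 :
// "nrm ft0.xyz, v1"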

So why do I need “true” normals ?

Well, normals are used to compute diffuse light: the normal gives the angle between the light and the surface. Now take a simple cube and normalize its vertices; here is what you get:

This can be interesting: since all the normals are interpolated when passed to the fragment shader, the light won't stop at the cube's edges, making your cube glow like a sphere. This is actually what one can do to get a very smooth light on a low-poly sphere.

But in the case of the cube, you want normals that look like this:

OK, it’s poorly drawn, but you get the idea.

Generating normals seems complicated, but actually, it’s rather simple.
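As a sketch of what a computeNormals-style method does for a single triangle (p0, p1, p2 being its three vertices as Vector3D, names of my own choosing):

// Face normal of one triangle, assuming counter-clockwise winding
var edge1:Vector3D = p1.subtract(p0);
var edge2:Vector3D = p2.subtract(p0);
var normal:Vector3D = edge1.crossProduct(edge2);
normal.normalize();
// If the triangle is wound the other way, the normal points in the opposite
// direction, which is exactly the issue mentioned above.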

So this is where I am, and this will be covered in the next tutorial article.
See you later guys !

Stage3D / AGAL from scratch. Part VI – Organise your Matrices

Organise your Matrices

In previous articles we used some matrices to modify the rendering of a triangle: rotations, scales, translations. We also learned to use a projection matrix to render the depth effect into the clipspace projection. And we saw that we would upload the matrix as a vertex constant and use it with the "m44" AGAL opcode.

Matrix operations aren't commutative, meaning that scaling first, then rotating, is not the same thing as rotating then scaling. So you will have to organize your matrices in a certain order to get things done smoothly and easily. Follow the guide.

From cameras to matrices

First of all, download the following code example. It’s made out of 3 classes :

  • The article example bootstrap
  • A simple Cube class, that will just create a colored cube vertex and index buffer, and store a matrix for its position.
  • An ArcballCamera class that you can use and share for your experiments. Very useful as a quick way of "browsing" your scene around the origin point.

The Cube class

Just a quick word about the Cube class, since you should be able to do it by yourself now : It is not “clean” and “optimised” at all, and I did it only to make the main code more readable.

The Cube class doesn't even have a "render" function. When you instantiate a Cube, it will create its vertexBuffer and indexBuffer, and upload the simplest data ever. This cube is made out of 8 vertices, which is why the colors blend at the corners and you don't get a plain color per face. The Cube also creates the simple "3 lines" shader you need to have some rendering, and uploads it. That's it.

The ArcBallCamera class

The ArcBallCamera is a camera that rotates around the origin point. When I first tried to build it, I thought I had to look for geometry formulas about placing a point onto a 3D sphere or something. Actually, it's a lot simpler.

Your matrices modify the “world”, not the camera

It sounds stupid to say, but it is something you have to keep in mind. For instance, if you want your camera to slowly move away from your scene, you will have to increase its z position, because you are actually "pushing" the world away from your clipspace.

Keep that in mind, and remember that matrix operations are not commutative. To make your arcball camera, the operations are actually very simple: rotate the world, then push it away. That's it!

Both "methods" should work, but the second one is really simple to use, for the same result: rotate the "world", then "push" it away.

The rest of the class is pretty simple: on each EnterFrame event, the class applies some rotation then some translation to a Matrix3D according to the mouse position and mouseWheel actions.
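The core of the idea could be sketched like this (variable names are illustrative, not the exact ones from the class):

// Rebuild the view matrix every frame: rotate the "world", then push it away
matrix.identity();
matrix.appendRotation(dragY, Vector3D.X_AXIS);   // vertical drag
matrix.appendRotation(dragX, Vector3D.Y_AXIS);   // horizontal drag
matrix.appendTranslation(0, 0, distance);        // mouse wheel controls the distance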

The ModelViewProjection matrix

OK, so we have a matrix that is our camera, one for the projection, and one for the cube. Great, but now what?

The final matrix used for the rendering is often named the modelViewProjection matrix, for a very simple reason: you have to append your matrices in the following order:

  1. The Model Matrix : your model being the mesh you are currently drawing
  2. The View Matrix : the view being your “camera” somehow
  3. The Projection Matrix : being the "lens" in some 3D engines, the projection always comes last as far as I know.

Following this order will give you very intelligible results.

Head over to the OrganizeYourMatrices class. Notice that when the context is created, I instantiate a single cube, a camera, and the projection matrix we will use later. Go on to the render function.

Rendering several cubes with only one

To illustrate both that following the previous matrix order gives you the wanted result and that you can draw the same vertexBuffer several times, I will keep my single cube and render four of them around the origin.

// render second cube
 cube.moveTo(1.1, -1.1, 0);
renderCube();
 
 // render third cube
cube.moveTo(-1.1, 1.1, 0);
 renderCube();
 
 // render fourth cube
 cube.moveTo(1.1, 1.1, 0);
renderCube();

This code isn't the cleanest I've written, but at least it is easy to understand. The only cube we have can be "moved" to 4 different positions and drawn onto the screen using the renderCube method. Go ahead, that is where the magic happens.

        /**
         * Render the cube according to its current parameters ( = modelMatrix)
         */
        private function renderCube():void {
            modelViewProjection = new Matrix3D();
            modelViewProjection.append(cube.modelMatrix);         // MODEL
            modelViewProjection.append(camera.matrix);            // VIEW...    
            modelViewProjection.append(projectionMatrix);        // PROJECTION !
 
            // program
            context.setProgram(cube.program);
 
            // vertices
            context.setVertexBufferAt(0, cube.vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // x, y, z
            context.setVertexBufferAt(1, cube.vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // r, g, b
 
            //constants
            context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, modelViewProjection, true);
 
            // render
            context.drawTriangles(cube.indexBuffer);
        }

Each time I want to draw the cube, I start by recreating a modelViewProjection matrix. I could have instantiated it somewhere else and only reset it using modelViewProjection.identity(), which would have been better, but anyway, the result is the same.

First, append the modelMatrix of the cube. This matrix contains the translation we set using cube.moveTo(x, y, z). Then append the camera's matrix, and finish with the projection.

The rest of the renderCube method is just classic Stage3D stuff: setting your current program and buffers, then drawing triangles.

The reason you can call the drawTriangles function several times (in this case, 4) and still get the complete scene is that drawTriangles only renders your mesh into the backbuffer. So the last thing you need to do in your rendering method is to present the backbuffer onto the screen.
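The overall shape of the render method is therefore something like this (a sketch; the first cube position is my own guess):

private function render(e:Event):void {
    context.clear(0.1, 0.1, 0.1);                 // clear the backbuffer first

    cube.moveTo(-1.1, -1.1, 0);  renderCube();    // first cube
    cube.moveTo( 1.1, -1.1, 0);  renderCube();    // second cube
    cube.moveTo(-1.1,  1.1, 0);  renderCube();    // third cube
    cube.moveTo( 1.1,  1.1, 0);  renderCube();    // fourth cube

    context.present();   // only now does the backbuffer appear on screen
}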

Now you should get something like this


Append and Prepend

There are some cases where it is difficult to use this order because of implementation details. Fortunately, there is a way to add a transformation at the top of the operations stack: prepend.

Prepend comes in different flavors : prepend a matrix, prependTranslation, prependRotation and so on.

To understand what prepend does, just look at the two following snippets: they both do the exact same thing.

// Appending the model first...
modelViewProjection = new Matrix3D();
modelViewProjection.append(cube.modelMatrix);         // MODEL
modelViewProjection.append(camera.matrix);            // VIEW...
modelViewProjection.append(projectionMatrix);         // PROJECTION !

// ...is equivalent to prepending it last.
modelViewProjection = new Matrix3D();
modelViewProjection.append(camera.matrix);            // VIEW...
modelViewProjection.append(projectionMatrix);         // PROJECTION !
modelViewProjection.prepend(cube.modelMatrix);        // PREPEND MODEL

That's all for today. I hope you enjoyed this, as always, and that it will be useful for you. Don't hesitate to use, modify or share the ArcBallCamera class, since it's a very simple snippet of code.

As always, feedback is appreciated !

My latest work – Stage3D used in a real project

Hi,

Aside from my last article, I haven't published anything in a while. This is mostly because I was busy working with a coworker of mine on the latest project we released here at Marcel, and I have to say that I am proud of the result!

First of all, the website: it's a digital experience made for Cartier to promote their beautiful movie Odyssee. You may visit the dedicated website at:

www.odyssee.cartier.com

Why am I telling you this

First of all because I'm quite proud, and it was a wonderful experience. But also because, as a challenge, we proposed to use Stage3D for the website. Excluding Flash Player 10 users wasn't an option, so we worked on a fallback system. It's really simple: you can compile for Flash Player 11 and still be executed by Flash Player 10 as long as you don't call specific features. The first thing the website does is this:

try {
    stage.stage3Ds;
    User.getInstance().isStage3DAvailable = true;
} catch(e:Error) {
    User.getInstance().isStage3DAvailable = false;
}

And it’s working !

Stage3D is used for the parallax system on the "experience" part of the website. The fallback is made with simple bitmaps and copyPixels instructions.

Because we wanted to be able to modify everything, and because we wanted something light, we didn't use Starling, and my coworker did an amazing job creating a small Stage3D framework, a kind of StarlingNano.

Push the limits

I am very proud of this website because we succeeded in pushing the limits: we used a lot of Pixel Bender shaders and we used Stage3D. What I am trying to say is that the Flash community shouldn't stay in its comfort zone anymore, and should start pushing itself to use Stage3D technology even if it means coding a fallback system.

Using Stage3D wasn't that hard, and even if it wasn't strictly necessary in the end, it still brought some things to the project:

  • The parallax is smoother. We can push a lot of layers, even animated ones, without being afraid of performance issues.
  • We were able to add some particles on some chapters.
  • Most important: we learned something, and we challenged ourselves.

Anyway, flashers of the world, it's time to bring your knowledge into your real projects.

You can do it !

Stage3D / AGAL from scratch. Part V – Indexes and culling mode

What indexes are used for

In previous articles, I talked about indexes and indexBuffers. The comparison I made was that indexes were like the numbers in a "connect the dots" game.

To render something on screen, your graphics card needs to know how to draw your triangles. Imagine you have 8 vertices, corresponding to the 8 corners of a cube. Without any further instruction, your graphics card can't guess what you want to draw: maybe you want a closed box, but maybe you don't want any top face, or maybe you just want 2 planes crossing in the middle like an X.

Indexes are instructions about the order in which you want to go through the vertices. You may reference the same vertex many times while drawing your triangles. As a matter of fact, your indexBuffer length should always be a multiple of 3, since you can only draw triangles and not polygons.

Just remember that indexes refer to the "index" of a vertex in the buffer. So your index values depend on how you defined your vertices. Here is a sample of how to draw 2 faces of a cube.

Cube Indexes Explanations

Let’s call vertex at index 0 A, vertex at index 1 B etc. My vertexBuffer looks like this :
A, B, C, D, E, F, G, H.

In this case, my indexBuffer will contain these values:
0, 1, 2, 1, 2, 3, 0, 4, 5, 0, 5, 1

Note that if you swap the indices of point C and point D in the vertexBuffer:
A, B, D, C, E, F, G, H

I need to adapt my indexBuffer: I want to draw the same thing, but the vertex indexes in the buffer have changed, so I would have:
0, 1, 3, 1, 2, 3…

Get it? Quite simple actually…
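For the record, creating and uploading that index buffer is just a couple of lines (a sketch, assuming the context is ready):

var indices:Vector.<uint> = Vector.<uint>([
    0, 1, 2,   1, 2, 3,   // first face (two triangles)
    0, 4, 5,   0, 5, 1    // second face
]);

indexBuffer = context.createIndexBuffer(indices.length);
indexBuffer.uploadFromVector(indices, 0, indices.length);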

Indexes and triangle culling

By default, the GPU renders every triangle, whether it is facing the "camera" (actually the clipspace) or not. An optimisation technique is to ask the GPU to render only the triangles facing the camera and ignore the others. Using this, you may get better performance when rotating around a non-transparent ball, for example.

This is called the triangle culling mode. For non-native speakers (like me…), to cull means to remove. Triangles that can be skipped because of their orientation will be excluded from the rendering pipeline.

Culling mode can take 4 values related to which face is culled :

  • Context3DTriangleFace.NONE : This is the default value; triangles are always rendered, no face is ignored.
  • Context3DTriangleFace.BACK : Triangles are rendered only if they are "facing" the view, i.e. the triangle's back face is culled.
  • Context3DTriangleFace.FRONT : Triangles are rendered only if they are not "facing" the view; the triangle's front face is culled.
  • Context3DTriangleFace.FRONT_AND_BACK : Nothing is rendered, since both faces are being excluded.

But which one is the front face ?

You may be wondering how one can even guess which triangle face is the front one. Well, it's actually rather simple: it depends on the order you used in the indexes. If the triangle is drawn clockwise, then you are looking at the front face. If the triangle is drawn counter-clockwise, then you are facing the back of your triangle.

First triangle is drawn using indexes : 0, 1, 2
Second triangle is drawn using indexes : 0, 2, 1

To observe it yourself, you can grab the HelloTriangle source code and change the culling mode by calling :

context.setCulling(Context3DTriangleFace.BACK);

This will set the culling mode to the one used in a lot of 3D engines such as Away3D. Now you may try to invert the indexes: the triangle should not be visible anymore.

Culling mode and double-sided polygons.

In most engines you will use, the culling mode will be set to BACK, meaning you will only see the triangles' front faces. Instead of changing the culling mode, you will mostly see a "doubleSided" option, making a polygon viewable from both sides.

Drawing a double-sided triangle is relatively easy: you just need twice as many index instructions, some for the front face, some for the back face. Remember that you can reference the same vertex many times!

To make the previous triangle double-sided, you just have to concatenate both index lists:
0, 1, 2, 0, 2, 1.
The simplest way is actually to concatenate the reverse of your first indexes:

var indexes:Array = [0, 1, 2];
if(_doubleSided){
    // reverse() works in place, so reverse a copy to keep the original winding first
    indexes = indexes.concat(indexes.slice().reverse());
}

You may grab the HelloMatrix source code, add a rotation to the triangle, and change the culling mode: you will now only see one side. Make your triangle double-sided, and voilà!

That's it for now. In the next article, we will build a simple arcball camera class to help you test your projects.

As always, feedback is appreciated :)

Revamping the Rainbow Spectrum

The Rainbow Spectrum was a case study and wasn't optimised at all. With AIR 3.2 coming out, it was the perfect time to improve things and make it able to run on iOS and Android phones.

The previous version of the rainbow spectrum was using 6 ribbons made of 64,000 vertices each. Why so many? Because it allowed the spectrum to run for around 40 minutes.

I had to reduce the number of points severely, destroying old values and creating new ones as the music goes along.

I used the ability to upload only part of a vertex buffer to split my rainbow geometry into 4 sub-geometries. This is how the geometry is uploaded now:

vertexBuffer.uploadFromVector(_geoms[0], 0, _geoms[0].length/3);
vertexBuffer.uploadFromVector(_geoms[1], _geoms[0].length/3, _geoms[0].length/3);
vertexBuffer.uploadFromVector(_geoms[2], _geoms[0].length/3*2, _geoms[0].length/3);
vertexBuffer.uploadFromVector(_geoms[3], _geoms[0].length/3*3, _geoms[0].length/3);

Using this, I am able to :

  • Have a truly infinite render
  • Run the shader 160 times less
  • Improve the rendering from 19 FPS up to 60 FPS on my Transformer
  • Even better, improve the rendering from ~1 FPS up to 25 FPS when using SwiftShader (software rasterizer)

Here is a demo of the rainbow running on my Asus Transformer tab.

I’ll explain the code a little more later.
I am preparing a simple demo APK for Android users. If you want one, please send me an email.

Enjoy

What I am working on right now

While working on some more advanced "camera" effects, I got the urge to create a simple sound spectrum visualizer in Stage3D. Not the idea of the year, but right now I am quite happy with it.

This is still a work in progress, so it will evolve a lot more in the near future I hope, as I'd like to end up with a more finished "product"; consider this a sneak peek.

Right now, I am using the FrequencyAnalyser from Ben Stucki
Music is from a talented French composer, Opti, who gave me the permission to publish my work using his track. Thank you man !

You can change the colors with the top-left button, which uses the Adobe Kuler API to get some palettes, and drag the camera a little with the mouse. If you want to try other music, you can press the spacebar to stop the current track, then launch some sound in another browser tab, like a YouTube video or Google Music. You may get an error from the Flash virtual machine; just ignore it for now and you are good.

I will certainly explain and share the code later, but right now the code is really too messy, and I just wanted to share something.

Have fun and please give me any feedback you have !

Stage3D / AGAL from scratch. Part IV – Adding some depth

Understanding perspective

I thought this was 3D, so why can’t I use the z coordinate ?

Hopefully, you guys have read my previous article and played with the example class, or with your own. You may have noticed that changing the z coordinate didn't change anything. Let me explain why.

Your 3D scene is rendered in 2D, in some area called the clipspace. The clipspace is basically your screen, and every point that is behind your screen needs to be projected into the clipspace so it can be drawn.

I said earlier that x and y coordinates go from -1 to 1. Well, it's not exactly true. It's actually the clipspace coordinates that go from -1 to 1. Imagine that the clipspace had the same width and height as your screen, or your browser window: you would have to recompute the coordinates for every screen size, and for every size change! Having a normalized clipspace is what allows us to forget about screen sizes and resolutions and focus on our scene coordinates.

Now, by default and without any other instruction, your graphics card projects your vertices onto your clipspace without any sense of perspective. That is why, if you have a point outside the clipspace coordinates, like x=2, you can't see it.

Since we were only copying each vertex coordinate to the output point, here is the equation for any projected point:

// mov op, va0

xP = x
yP = y

In the following scheme, 3D object 1 and 3D object 2 have the same projected point, since the Z coordinate isn’t part of the equation.

The perspective divide

To be able to render 3D on a 2D plane (your screen), we need perspective. Perspective is what makes the borders of a road look like they are converging when they are actually parallel lines. The idea is actually rather simple: the farther a point is, the closer to the middle it appears. Here is the equation:

xP = K1 * x / z
yP = K2 * y / z

With K1 and K2 being constants depending on things such as the field of view or the aspect ratio of your clipspace. This is the perspective divide.

You can notice that if you divide by z, then z can’t be equal to 0. We will talk about this later.
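A quick numeric check, with K1 = K2 = 1: the same x and y, placed twice as far away, end up twice as close to the center.

var near:Vector3D = new Vector3D(1, 1, 2);
var far:Vector3D  = new Vector3D(1, 1, 4);

trace(near.x / near.z, near.y / near.z);  // 0.5 0.5
trace(far.x / far.z,   far.y / far.z);    // 0.25 0.25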

When using the perspective divide, here is the result of the projection of 3D object 1 and 3D object 2 from previous scheme

Using Matrices

When we coded our first vertex shader, we just copied each vertex coordinate into the output point. You have now learned that computing the output point actually defines the position of the vertex's projection in the clipspace.

To translate, rotate, or scale an object, we won't be modifying all of its vertices. Why?

  1. It would be really complex to compute the new position of every vertex when rotating by 45° on the Y axis, then scaling it up 2.37 times.
  2. We would need to upload the coordinates into the vertex buffer again, which would defeat the whole point of using Stage3D. Remember, if the graphics card can render triangles so fast, it's because everything is already sitting in the video RAM.

Instead of uploading new coordinates to the V-Ram, we will compute the output point using a matrix. This matrix will be uploaded as a constant on every frame. Constants are very fast to update in the V-Ram, unlike vertex buffers or textures.

Updating the HelloTriangle example

Now, you can either open your HelloTriangle project or download the following one. I recommend taking your own project if you already have it, since there are only a few lines to add, but if you prefer to take my sources, you should be looking for the HelloMatrix class.

The first thing we need to do is create a Matrix3D and upload it as a vertex constant to the graphics card. Go to the render function, on line 194. The HelloTriangle class should already have a Matrix3D class member declared, called m. So just instantiate it, append a translation to it either on x or y, and use the context.setProgramConstantsFromMatrix method to upload it to the GPU. Here is what I have:

// create a matrix3D, apply a translation on it, then set it as a vertex constant
m = new Matrix3D();
m.appendTranslation(Math.sin(getTimer()/500)*.5, 0, 0);
context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, m, true); // 0 means that we will retrieve the matrix under vc0

I chose to create a translation based on the timer, so you can see the triangle moving on the x axis.

At this point, if you compile your class, you won’t see any change. This is because we need to instruct our GPU how to use the Matrix, and this will be done in the Vertex Shader.

Updating the Vertex Shader

Obviously, updating the Vertex Shader will happen in the AGAL code. Time for us to learn how to invoke constants.

  • vc : Vertex Constant. Referenced by their first register (e.g. vc0). Be careful, a matrix constant takes 4 registers, so if you upload a matrix to vc0, the next constant must be set at vc4 (see the sketch after this list).
  • fc : Fragment Constant. Same thing as above, but for fragment shaders.
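A small sketch of what that looks like on the ActionScript side (the second constant is purely hypothetical):

// The 4x4 matrix fills vc0 through vc3...
context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, m, true);
// ...so the next vertex constant has to start at vc4
context.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 4,
    Vector.<Number>([1, 0, 0, 1])); // available as vc4 in AGAL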

Locate your vertex shader AGAL code; it should be around line 161. What we want to do is compute the position of each vertex using the Matrix3D we stored as a constant, instead of just copying the x and y coordinates to the clipspace.

To perform a 4x4 matrix operation on a vertex, you need to use the m44 opcode with this syntax:

m44 destination, vertex, matrix

Where destination is the output point, the vertex is stored in vertex attribute 0, and the matrix in vertex constant 0. Got it? Here is what you should get:

// VERTEX SHADER
var code:String = "";
code += "m44 op, va0, vc0n"; / Perform a 4x4 matrix operation on each vertices
code += "mov v0, va1n";

That's it! Now, on every frame, we will create a new Matrix3D, append a translation to it on the x axis between -0.5 and 0.5, upload it to the GPU, then execute the program, which will perform an m44 operation on each vertex to reflect the translation we made.

Go on, compile, you should see your triangle moving from left to right.

Back to perspective

Now we know how to use a matrix to transform the final rendering of our triangle. Understanding the math behind perspective is great, but you don't want to do it every time, do you? Fortunately, Adobe provides a downloadable class, PerspectiveMatrix3D.

This class will let you create a Matrix3D with some intelligible parameters to render perspective.

Now, you can either continue to update your HelloTriangle class, or take the same package as above and look for the “AddingPerspective” class.

The AddingPerspective class actually draws a square so that the effect of perspective can be noticed more easily. You know how to draw a triangle; drawing a square is just drawing two triangles. You can have a look at the sources, but we will come back to quads (squares) in the next article, which deals with indexes. Either way, the following example can be achieved with a triangle or a quad, it doesn't matter.

The PerspectiveMatrix3D class

Among many things, the PerspectiveMatrix3D class allows you to build a perspective matrix using 4 parameters:

  1. The FoV or Field of View. The FoV, in radians, represents how wide your field of view is. We will set it to 45°.
  2. The aspect ratio is the ratio of your backbuffer. We will set it to (width / height).
  3. The zNear is the minimum z coordinate that your eye can see. We will set it to 0.1. A word on that later.
  4. The zFar is the maximum z coordinate that your eye can see. We will set it to 1000.

Go to the render method and instantiate a new PerspectiveMatrix3D object, then apply the previous parameters to it.

var projection:PerspectiveMatrix3D = new PerspectiveMatrix3D();
projection.perspectiveFieldOfViewLH(45*Math.PI/180, 4/3, 0.1, 1000);

About the zNear

You may wonder why we don't render z starting from 0, but from 0.1 instead. Well, remember the perspective divide was:

xP = K1 * x / z
yP = K2 * y / z

As we are dividing by z, and because (I hope you know this) dividing by zero is impossible, we can't have the zNear parameter equal to 0: the equation couldn't be computed for objects with a z coordinate of 0.

This is actually a bit of a problem, since our triangle's vertex z coordinates are set to 0. Hold on, don't go changing the vertexBuffer; we learned how to move an object, right? We can simply append a translation on the z axis to push the object a little forward.

What we need to do now is :

  1. Create the PerspectiveMatrix3D as above.
  2. Do some rotation on the m matrix so we can actually notice the effect of perspective.
  3. Translate our vertices forward a little so that they are beyond the zNear value.
  4. Multiply the first Matrix with the PerspectiveMatrix to add perspective to the final render.

What I get is this :

var projection:PerspectiveMatrix3D = new PerspectiveMatrix3D();
projection.perspectiveFieldOfViewLH(45*Math.PI/180, 4/3, 0.1, 1000);
 
m = new Matrix3D();
m.appendRotation(getTimer()/30, Vector3D.Y_AXIS);
m.appendRotation(getTimer()/10, Vector3D.X_AXIS);
m.appendTranslation(0, 0, 2);
m.append(projection);

Compile, and here it is! A rotating triangle with some sense of perspective! If you took my class, you will see a rotating square instead of a triangle.

Practice !

As always, a little practice on your own is the best way to learn, so here is what you can try:

  1. Set the R, G and B values to [0-255] instead of [0-1], upload a [255, 255, 255, 255] vector as a fragment constant, then divide your color values by it before moving the result to the Output Color. You may use the div AGAL opcode.

This article had less code, and I think I will keep it that way for now, for 2 reasons:

  1. The less I write the more you code
  2. It actually took way too long to write the hello triangle article while describing every single line of code.

Anyway, I will always provide the classes I use as examples, and those classes will be documented. If you think I should go back to something more verbose, just tell me; feedback is always appreciated.

As always, if you have any questions, feel free to ask !