Brace yourself, shaders are coming…

… Or at least I hope so!

Since I’m working on Stage3D, I want to understand how lighting effects work. To be able to write my own shaders, I need a few things:

A mesh more complex than just a cube

For this I am actually working on a very basic OBJ parser. At first I thought it would be complete, but I can’t get the splitting of geometries into sub-geometries according to materials done… Anyway, I’ve read a lot on the subject, even took a little inspirational peek into the Away3D code, and I now have a simple mesh with no textures. Enough for now.

True Normals

When you work with light, you need normals to compute it. A normal is just a Vector3D with a length of 1, meaning that for its 3 components (x, y and z), √(x² + y² + z²) = 1.

Every Vector3D can be converted into a normal using Vector3D.normalize(), or directly in AGAL using the nrm opcode. And a vertex coordinate is nothing else than a Vector3D (a coordinate represents the offset from the origin point).
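For instance (a little sketch of mine, not code from the article's sources):

```actionscript
import flash.geom.Vector3D;

var v:Vector3D = new Vector3D(3, 0, 4); // length is 5
v.normalize();                          // now (0.6, 0, 0.8), length is 1
trace(v.length);                        // 1

// The AGAL equivalent, assuming the vector sits in temporary ft1:
// "nrm ft0.xyz, ft1"   (nrm only writes the x, y and z components)
```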

So why do I need “true” normals?

Well, normals are used to compute diffuse light: the normal gives the angle between the light and the surface. Now take a simple cube and normalize its vertices; here is what you get:

This can be interesting: since every normal is interpolated when passed to the fragment shader, the light won’t stop at the cube edges, making your cube glow like a sphere. This is actually what one can do to get a very smooth light on a low-poly sphere.

But in the case of the cube, you want normals that look like this:

OK, it’s poorly drawn, but you get the idea.

Generating normals seems complicated, but it’s actually rather simple.
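For a flat face, the usual recipe (my own sketch, not the article’s code) is the cross product of two edges of the triangle:

```actionscript
import flash.geom.Vector3D;

// The face normal of triangle (a, b, c) is the normalized cross
// product of two of its edges.
function faceNormal(a:Vector3D, b:Vector3D, c:Vector3D):Vector3D {
    var edge1:Vector3D = b.subtract(a);
    var edge2:Vector3D = c.subtract(a);
    var n:Vector3D = edge1.crossProduct(edge2);
    n.normalize();
    return n; // points out of the face, following the winding order of a, b, c
}
```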

So this is where I am, and this will be covered in the next tutorial article.
See you later guys !

Stage3D / AGAL from scratch. Part VI – Organise your Matrices

Organise your Matrices

In previous articles we used some matrices to modify the rendering of a triangle: rotations, scales, translations. We also learned to use a projection matrix to render the depth effect in the clipspace projection. And we saw that we would upload the matrix as a vertex constant and use it with the “m44” AGAL opcode.

Matrix operations aren’t commutative: if you scale first, then rotate, it’s not the same thing as rotating and then scaling. So you will have to organize your matrices in a certain order to get things done smoothly and easily. Follow the guide.
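A quick illustration (a sketch of mine using Flash’s Matrix3D, not from the article’s sources): the same two transformations, appended in a different order, move the same point to two different places.

```actionscript
import flash.geom.Matrix3D;
import flash.geom.Vector3D;

var scaleThenRotate:Matrix3D = new Matrix3D();
scaleThenRotate.appendScale(2, 1, 1);                // scale first...
scaleThenRotate.appendRotation(90, Vector3D.Z_AXIS); // ...then rotate

var rotateThenScale:Matrix3D = new Matrix3D();
rotateThenScale.appendRotation(90, Vector3D.Z_AXIS); // rotate first...
rotateThenScale.appendScale(2, 1, 1);                // ...then scale

// The same point ends up in two different places:
var p:Vector3D = new Vector3D(1, 0, 0);
trace(scaleThenRotate.transformVector(p)); // the rotation turns the scaled point
trace(rotateThenScale.transformVector(p)); // the scale no longer affects it
```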

From cameras to matrices

First of all, download the following code example. It’s made of 3 classes:

  • The article example bootstrap
  • A simple Cube class, that will just create a colored cube vertex and index buffer, and store a matrix for its position.
  • An ArcballCamera class that you can use and share for your experiments. Very useful as a quick way of “browsing” your scene around the origin point.

The Cube class

Just a quick word about the Cube class, since you should be able to write it yourself by now: it is not “clean” or “optimised” at all, and I wrote it only to make the main code more readable.

The Cube class doesn’t even have a “render” function. When you instantiate a Cube, it creates its vertexBuffer and indexBuffer and uploads the simplest data ever. This cube is made of 8 vertices, which is why the colors merge at the corners and you don’t get a plain color per face. The Cube also creates the simple “3 lines” shader you need to get some rendering, and uploads it. That’s it.
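As a reminder, the “3 lines” shader is the minimal program from the earlier parts of this series. A sketch of what the Cube uploads (assuming Adobe’s AGALMiniAssembler utility and an existing Context3D named context):

```actionscript
// Vertex shader: project the position, pass the color along.
var vertexShader:String =
    "m44 op, va0, vc0 \n" + // output position = vertex position * mvp matrix
    "mov v0, va1";          // copy the vertex color into varying register v0
// Fragment shader: output the interpolated color.
var fragmentShader:String =
    "mov oc, v0";

var assembler:AGALMiniAssembler = new AGALMiniAssembler();
var program:Program3D = context.createProgram();
program.upload(
    assembler.assemble(Context3DProgramType.VERTEX, vertexShader),
    assembler.assemble(Context3DProgramType.FRAGMENT, fragmentShader)
);
```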

The ArcBallCamera class

The ArcBallCamera is a camera that rotates around the origin point. When I first tried to build it, I thought I had to look for geometry formulas about placing a point on a 3D sphere or something. Actually, it’s a lot simpler.

Your matrices modify the “world”, not the camera

It sounds stupid to say it, but it is something you have to keep in mind. For instance, if you want your camera to slowly move away from your scene, you will have to increase the world’s z position, because you are actually “pushing” the world away from your clipspace.

Keep that in mind, and remember that matrix operations are not commutative. To make your arcball camera, the operations are actually very simple: rotate the world, then push it away. That’s it!

Both “methods” should work, but the second one is actually much simpler for the same result: rotate the “world”, then “push” it away.

The rest of the class is pretty simple: on each EnterFrame event, the class applies some rotation and then some translation to a Matrix3D, according to the mouse position and mouse wheel actions.
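Stripped down to its core, the update could look like this (a sketch; the member names matrix and distance are mine, not necessarily those of the class):

```actionscript
// Rebuild the camera matrix every frame: rotate the world from the mouse
// position, then push it away along z by the current mouse-wheel distance.
private function onEnterFrame(event:Event):void {
    matrix.identity();
    matrix.appendRotation(stage.mouseY - stage.stageHeight * 0.5, Vector3D.X_AXIS);
    matrix.appendRotation(stage.mouseX - stage.stageWidth * 0.5, Vector3D.Y_AXIS);
    matrix.appendTranslation(0, 0, distance); // positive z "pushes" the world away
}
```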

The ModelViewProjection matrix

OK, so we have one matrix for our camera, one for the projection, and one for the cube. Great, but now what?

The final matrix used for the rendering is often named the modelViewProjection matrix, for a very simple reason: you have to append your matrices in the following order:

  1. The Model matrix: your model being the mesh you are currently drawing
  2. The View matrix: the view being your “camera”, somehow
  3. The Projection matrix: being the “lens” in some 3D engines, the projection always comes last as far as I know.

Following this order will give you very intelligible results.

Head over to the OrganizeYourMatrices class. Notice that when the context is created, I instantiate a single cube, a camera, and the projection matrix we will use later. Go on to the render function.

Rendering several cubes with only one

To illustrate both that following the previous matrix order gives you the wanted result, and that you can draw the same vertexBuffer several times, I will keep my single cube and render four of them around the origin.

// render second cube
cube.moveTo(1.1, -1.1, 0);
renderCube();

// render third cube
cube.moveTo(-1.1, 1.1, 0);
renderCube();

// render fourth cube
cube.moveTo(1.1, 1.1, 0);
renderCube();

The code above isn’t the cleanest I’ve written, but at least it is easy to understand. The only cube we have can be “moved” to 4 different positions and drawn onto the screen using the renderCube method. Go ahead, that is where the magic happens.

        /**
         * Render the cube according to its current parameters ( = modelMatrix)
         */
        private function renderCube():void {
            modelViewProjection = new Matrix3D();
            modelViewProjection.append(cube.modelMatrix);         // MODEL
            modelViewProjection.append(camera.matrix);            // VIEW...    
            modelViewProjection.append(projectionMatrix);        // PROJECTION !
 
            // program
            context.setProgram(cube.program);
 
            // vertices
            context.setVertexBufferAt(0, cube.vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // x, y, z
            context.setVertexBufferAt(1, cube.vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // r, g, b
 
            //constants
            context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, modelViewProjection, true);
 
            // render
            context.drawTriangles(cube.indexBuffer);
        }

Each time I want to draw the cube, I start by recreating a modelViewProjection matrix. I could have instantiated it somewhere else and only reset it using modelViewProjection.identity(); that would have been better, but anyway, the result is the same.

First, append the modelMatrix of the cube. This matrix contains the translation parameters we set using cube.moveTo(x, y, z). Then append the camera’s matrix, and finish with the projection.

The rest of the renderCube method is just classic Stage3D stuff: declaring your current program and buffers, and drawing the triangles.

The reason you can call the drawTriangles function several times (in this case, 4) and still get the complete scene is that drawTriangles only renders your mesh into the backbuffer. So the last thing you need to do in your render method is to present the backbuffer onto the screen.
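Put together, the render handler could look like this (a sketch with assumed names; clear() wipes the backbuffer and present() displays it):

```actionscript
// Draw the four cubes into the backbuffer, then present it once.
private function render(event:Event):void {
    context.clear(0, 0, 0);

    cube.moveTo(-1.1, -1.1, 0); // first cube
    renderCube();
    cube.moveTo(1.1, -1.1, 0);  // second cube
    renderCube();
    cube.moveTo(-1.1, 1.1, 0);  // third cube
    renderCube();
    cube.moveTo(1.1, 1.1, 0);   // fourth cube
    renderCube();

    context.present();          // flip the backbuffer onto the screen
}
```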

Now you should get something like this:


Append and Prepend

There are some cases where it is difficult to use this order because of implementation details. Fortunately, there is a way to add a transformation at the top of the operation stack: prepend.

Prepend comes in different flavors : prepend a matrix, prependTranslation, prependRotation and so on.

To understand what prepend does, just look at the two following snippets: they both do exactly the same thing.

modelViewProjection = new Matrix3D();
modelViewProjection.append(cube.modelMatrix);    // MODEL
modelViewProjection.append(camera.matrix);       // VIEW...
modelViewProjection.append(projectionMatrix);    // PROJECTION !

modelViewProjection = new Matrix3D();
modelViewProjection.append(camera.matrix);       // VIEW...
modelViewProjection.append(projectionMatrix);    // PROJECTION !
modelViewProjection.prepend(cube.modelMatrix);   // PREPEND MODEL

That’s all for today. I hope you enjoyed this, as always, and that it will be useful for you. Don’t hesitate to use, modify or share the ArcBallCamera class, since it’s a very simple snippet of code.

As always, feedback is appreciated !

My latest work – Stage3D used in a real project

Hi,

Besides my last article, I haven’t published anything in a while. This is mostly because I was busy working with a coworker of mine on the latest project we released here at Marcel, and I have to say that I am proud of the result!

First of all, the website: it’s a digital experience made for Cartier to showcase their beautiful movie Odyssee. You may visit the dedicated website at:

www.odyssee.cartier.com

Why am I telling you this

First of all because I’m quite proud, and it was a wonderful experience. But also because, as a challenge, we proposed to build the website with Stage3D. Excluding Flash Player 10 users was out of the question, so we actually worked on a fallback system. It’s really simple: you can compile for Flash Player 11 and still be executed by Flash Player 10, as long as you don’t call specific features. The first thing the website does is this:

try {
    stage.stage3Ds;
    User.getInstance().isStage3DAvailable = true;
} catch(e:Error) {
    User.getInstance().isStage3DAvailable = false;
}

And it’s working !

Stage3D is used for the parallax system in the “experience” part of the website. The fallback system is made with simple bitmaps and copyPixels instructions.

Because we wanted to be able to modify everything, and because we wanted something light, we didn’t use Starling; my coworker did an amazing job creating a small Stage3D framework, like a kind of StarlingNano.

Push the limits

I am very proud of this website because we succeeded in pushing the limits: we used a lot of Pixel Bender shaders and we used Stage3D. What I am trying to say is that the Flash community shouldn’t stay in its comfort zone anymore, and should start pushing itself with the Stage3D technology, even if it means coding a fallback system.

Using Stage3D wasn’t that hard, and even if it wasn’t strictly necessary in the end, it still brought some features to the project:

  • The parallax is smoother. We can push a lot of layers, even animated ones, without being afraid of performance issues.
  • We were able to add some particles in some chapters.
  • Most important : We learned something, and we challenged ourselves.

Anyway, flashers of the world, it’s time to push your knowledge into your real projects.

You can do it !

Stage3D / AGAL from scratch. Part V – Indexes and culling mode

What indexes are used for

In previous articles, I talked about indexes and index buffers. The comparison I made was that indexes are like the numbers in a “connect the dots” game.

To render something on screen, your graphics card needs to know how to draw your triangles. Imagine you have 8 vertices, corresponding to the 8 corners of a cube. Without any further instruction, your graphics card can’t guess what you want to draw: maybe you want a closed box, but maybe you don’t want any top face, or maybe you just want 2 planes crossing in the middle like an X.

Indexes are instructions about the order in which you want to process vertices. You may run over the same vertex many times to draw all your triangles. As a matter of fact, your indexBuffer length should always be a multiple of 3, since you can only draw triangles, not polygons.

Just remember that indexes refer to the “index” of a vertex in the buffer. So your index values depend on how you defined your vertices. Here is an example of how to draw 2 faces of a cube:

Cube Indexes Explanations

Let’s call the vertex at index 0 A, the vertex at index 1 B, etc. My vertexBuffer looks like this:
A, B, C, D, E, F, G, H.

In this case, my indexBuffer will contain these values:
0, 1, 2, 1, 2, 3, 0, 4, 5, 0, 5, 1

Note that if you swap the indexes of points C and D in the vertexBuffer:
A, B, D, C, E, F, G, H

I need to adapt my indexBuffer, because I want to draw the same thing but the positions of the vertices in the buffer have changed, so I would have:
0, 1, 3, 1, 2, 3…

Get it? Quite simple, actually…
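In Stage3D terms, uploading those 12 indexes is straightforward (a sketch, assuming an existing Context3D named context):

```actionscript
var indexes:Vector.<uint> = Vector.<uint>([0, 1, 2, 1, 2, 3, 0, 4, 5, 0, 5, 1]);
var indexBuffer:IndexBuffer3D = context.createIndexBuffer(indexes.length);
indexBuffer.uploadFromVector(indexes, 0, indexes.length); // 12 indexes = 4 triangles
```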

Indexes and triangle culling

By default, the GPU renders every triangle, whether it is facing the “camera” (actually the clipspace) or not. An optimisation technique is to ask the GPU to render only the triangles facing the camera and to ignore the others. With this, you may get better performance when rotating around an opaque ball, for example.

This is called the triangle culling mode. For non-native speakers (like me…), to cull means to remove. Triangles that can be skipped because of their orientation are excluded from the rendering pipeline.

The culling mode can take 4 values, related to which face is culled:

  • Context3DTriangleFace.NONE : This is the default value, triangles are always rendered; no face is ignored.
  • Context3DTriangleFace.BACK : Triangles are rendered only if they are “facing” the view, i.e. the triangle’s back face is culled.
  • Context3DTriangleFace.FRONT : Triangles are rendered only if they are not “facing” the view; the triangle’s front face is culled.
  • Context3DTriangleFace.FRONT_AND_BACK : Nothing is rendered, since both faces are being excluded.

But which one is the front face?

You may be wondering how one can even guess which triangle face is the front one. Well, it’s actually rather simple: it depends on the order of your indexes. If the triangle is drawn clockwise, you are looking at its front face. If the triangle is drawn counter-clockwise, you are facing the back of your triangle.

The first triangle is drawn using indexes:  0, 1, 2
The second triangle is drawn using indexes: 0, 2, 1

To observe it yourself, you can grab the HelloTriangle source code and change the culling mode by calling :

context.setCulling(Context3DTriangleFace.BACK);

This will set the culling mode to the one used in a lot of 3D engines, such as Away3D. Now you may try to invert the indexes: the triangle should not be visible anymore.

Culling mode and double-sided polygons.

In most engines you will use, the culling mode is set to BACK, meaning you will only see the triangles’ front faces. Instead of changing the culling mode, you will mostly see a “doubleSided” option, making a polygon viewable from both sides.

Drawing a double-sided triangle is relatively easy: you just need twice as many index instructions, some for the front face, some for the back face. Remember that you can run over the same vertex many times!

To make the previous triangle double-sided, you just have to concatenate both index buffers:
0, 1, 2, 0, 2, 1.
The simplest way is actually to concatenate the reverse of your first indexes:

var indexes:Array = [0, 1, 2];
if (_doubleSided) {
    // reverse() works in place, so reverse a copy before concatenating
    indexes = indexes.concat(indexes.slice().reverse());
}

You may grab the HelloMatrix source code, add a rotation to the triangle, and change the culling mode: you will now see only one side. Make your triangle double-sided, and voilà!

That’s it for now. In the next article, we will build a simple arcball class to help you test your projects.

As always, feedback is appreciated :)

Revamping the Rainbow Spectrum

The Rainbow Spectrum was a study case and wasn’t optimised at all. With AIR 3.2 coming out, it was the perfect time to improve things and make it able to run on iOS and Android phones.

The previous version of the Rainbow Spectrum was using 6 ribbons made of 64,000 vertices each. Why so many? Because it allowed the spectrum to run for around 40 minutes.

I had to severely reduce the number of points, destroying old values and creating new ones as the music goes along.

I used the ability to upload only a part of a vertex buffer to split my rainbow geometry into 4 sub-geometries. This is how the geometry is uploaded now:

vertexBuffer.uploadFromVector(_geoms[0], 0, _geoms[0].length/3);
vertexBuffer.uploadFromVector(_geoms[1], _geoms[0].length/3, _geoms[0].length/3);
vertexBuffer.uploadFromVector(_geoms[2], _geoms[0].length/3*2, _geoms[0].length/3);
vertexBuffer.uploadFromVector(_geoms[3], _geoms[0].length/3*3, _geoms[0].length/3);

Using this, I am able to :

  • Have a truly infinite render
  • Run the shader 160 times less
  • Improve the rendering from 19 FPS up to 60 FPS on my Transformer
  • Even better, improve the rendering from ~1 FPS up to 25 FPS when using SwiftShader (software rasterizer)

Here is a demo of the rainbow running on my Asus Transformer tablet.

I’ll explain the code a little more later.
I am preparing a simple demo APK for Android users. If you want one, please send me an email.

Enjoy