Exploring away from 3D

Hi folks,

I have never been a “3D” developer, nor a “shader” developer. Actually, I’m just that random Flash coder who loves to learn those unknown and cool-looking technologies.

If I haven’t written any articles in a long time, it doesn’t mean that I haven’t been trying some cool new stuff. I have worked a lot with JavaScript lately, because Flash is no longer used as often as it was before, so I had to stay up to date. It took me a long time: I wrote a framework, shipped a couple of projects using it, and now I feel like I can start exploring canvas and WebGL, so this blog may have some new content soon.

I also had the chance to work on some digital exhibitions. No browser, no Internet, no screen. It was very refreshing. We worked on a setup for “la FNAC”, a French store selling music, books, DVDs and digital goods. With the success of Fifty Shades of Grey, la FNAC dedicated a whole week to erotic novels, with some conferences to promote writers. Here at Marcel, we built a standalone exhibition that was installed in one of their stores. Here is the video we presented at the Cannes festival (in English), which explains the whole idea behind it better than I could. For the non-French speakers here, the sentence could be translated as “With that caress of the hand, so intoxicating, sweet caress of pure desire, he cherished the curves of the belly, went down…” or something like that…

FNAC E-ROTIC TOUCH from Marcel on Vimeo.

Here is another video I shot during the exhibition, so you can hear the sentence better.

I’m not allowed to talk about how we dealt with the sound stretching, but I can give you a quick overview of the whole setup. A PC was hidden in a small room close to the wall, controlling six webcams hidden in the floor, in a kind of ramp running along the 7-meter-long wall. The PC grabbed every webcam stream and cropped the sides to get a picture of the wall with no gaps or duplicates. Every few seconds or so, that giant picture was saved as a “reference” picture if no movement was detected. Then, by taking the difference between the reference picture and the current webcam streams, you can get the position of everything that was “added” to the scene, i.e. the hands of people interacting with the wall. Those differences were tracked using a computer vision library to get a “multi-touch” feature.
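For the curious, here is a minimal ActionScript sketch of that reference-frame differencing idea. It is not our production code (and the real setup didn’t run in Flash): it assumes a hypothetical video object showing one webcam stream and a fixed 640*480 size.

import flash.display.BitmapData;
import flash.display.BlendMode;
import flash.geom.Point;

// "video" is assumed to be a flash.media.Video already attached to a webcam.
var reference:BitmapData = new BitmapData(640, 480, false, 0x000000);
var current:BitmapData = new BitmapData(640, 480, false, 0x000000);
var diff:BitmapData = new BitmapData(640, 480, false, 0x000000);

function saveReference():void {
    // Called every few seconds, when no movement has been detected for a while.
    reference.draw(video);
}

function detectHands():void {
    current.draw(video);
    // Start from the reference, then subtract the current frame: unchanged pixels go to black.
    diff.copyPixels(reference, diff.rect, new Point());
    diff.draw(current, null, null, BlendMode.DIFFERENCE);
    // Keep only pixels that changed enough (blue channel above 0x30): those are the "added" objects, i.e. the hands.
    diff.threshold(diff, diff.rect, new Point(), ">", 0x30, 0xFFFFFFFF, 0x000000FF, false);
    // The remaining white blobs are then handed to a blob tracker to get the "multi touch" positions.
}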

Then, the position of the first user’s hand was used to “seek and play”, back and forth, through the recorded sentence matching the one displayed on the wall. The resulting feeling was the ability to control the rhythm of the sentence, slowing down on some teasing words. One thing some users did that we didn’t think about was to “read” another sentence by jumping from word to word, remixing the whole meaning of the phrase.

Well, anyway, I hope you enjoy it as much as I enjoyed working on this project! Feel free to leave your comments.

As for the future, I hope I will be able to work on WebGL soon. I also need to take some time to write about the rainbow spectrum, as some people have asked me for it.

Stay tuned!

Everything went better than expected

Hi there

If you can read this post, then you are on the new blog server. It seems that everything is working, but you might have trouble for a few days while the DNS changes propagate.

As I was saying, some comments have been lost in the process, but nothing too important.

Long story short, if you read this post, then you can comment and pingback again :)

How can I help you ?

Hi there,

I don’t have a lot of time right now: summer, scuba diving certifications, a little freelancing on the side, and of course, well, work…

So I’d like to know what you would be interested in reading for the next article.

I’d like to go deeper into fragment shaders, learning how to get bump mapping working for instance; I could port a few examples to WebGL; I could write a few articles about how to use Minko to write your own shaders more easily than with AGAL…

Feel free to give me hints in the comments!

And I wish you all some nice vacations!

Stage3D / AGAL from scratch. Part VIII – Texture, please !

Hi folks,

As I said earlier, I have received some requests on Twitter for an article about textures. So here we go.

In this tutorial, I’m going to take the example from the previous article and modify it so we see a textured cube instead of a pink one.

By the way, I realized that the previous classes I gave you were incomplete, with some missing imports. Nothing huge actually, just a re-scale function that is part of our framework here at Marcel. Long story short, sorry about that; I cleaned those up so you should be able to compile right away. And remember, if you have any trouble with any of my examples, just send me a tweet or something.

OK, let’s go.

Texture coordinates

In the very first rounds of tutorials, we drew triangles with a different color “attached” to each vertex. That is how we learned that every value passed from the vertex shader to the fragment shader is interpolated.

Using a texture is relatively simple. Like everything we want to use in our shaders, we have to declare it, upload it to the VRAM, then allocate it.

The fragment shader, or pixel shader, is run for every pixel being drawn, and its purpose is to compute that pixel’s color. In the previous example, the base color of every pixel was simply stored in a varying register; all we had to do was, eventually, apply some light to it, then copy the color to the Output Color register (oc).

Well, the only difference here is that, for every pixel, the fragment shader will not receive its color directly, but will have to sample the texture to get it.

The question is: how does our shader know where on the texture it is supposed to “look” to get the pixel color? Well, this is what texture coordinates are for.

Texture coordinates are very simple to understand. We call them U and V, and they are the exact same thing as X and Y in ActionScript, except they are not absolute values but relative values between 0 and 1. This way, you can change your texture size (from 256*256 to 512*512 for example) and your texture coordinates will remain the same.

Pretty simple, huh?

So, for a square, you basically use the coordinates above. For a triangle pointing upward, you might set your first coordinate to (0.5, 0), so the first point sits at the center of the top edge.
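To make that concrete, here is a minimal sketch of the vertex data for a textured quad, with UVs packed right after the positions. The register choices and the 5-floats-per-vertex layout are just assumptions for the example, not the exact layout of the tutorial sources.

// Hypothetical layout: x, y, z, u, v (5 floats per vertex).
var vertices:Vector.<Number> = Vector.<Number>([
//    x,    y,   z,   u, v
    -1.0,  1.0, 0.0,  0, 0,  // top left     -> samples the top-left corner of the texture
     1.0,  1.0, 0.0,  1, 0,  // top right
     1.0, -1.0, 0.0,  1, 1,  // bottom right
    -1.0, -1.0, 0.0,  0, 1   // bottom left
]);

vertexBuffer = context.createVertexBuffer(4, 5);   // 4 vertices, 5 floats each
vertexBuffer.uploadFromVector(vertices, 0, 4);

context.setVertexBufferAt(0, vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // positions in va0
context.setVertexBufferAt(1, vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_2); // UVs in va1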

UVs (from now on, I will call texture coordinates UVs) can get much more complex, for instance if you are trying to map a skin-and-hair texture onto a face mesh, but you are not supposed to write them by hand; they will be exported by your 3D software, such as 3ds Max.

It is still interesting to understand them, because you can do a lot with UVs. With UVs you can take only a portion of your texture; there is no need to use it all. Ever heard of sprite sheets? For those of you who have already tried Starling and were amazed by the very fast rendering of thousands of animations, this is how an animated “MovieClip” on the GPU is done: upload a single large texture containing every frame of your animation, draw it on a square, then put the 4 UVs into constants, and change them on every frame! This way, each frame the GPU will pick a different portion of the image to sample (see the sketch below).

An example of a sprite sheet. Yep, old games were made that way too!
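To give you an idea of the trick, here is a hypothetical sketch of the constant update for a 4*4 sprite sheet. It is not Starling’s actual code, just one possible way to do it: the frame offset and scale are packed into a single vertex constant (vc4 here), and the quad’s UVs are remapped in the vertex shader.

private var currentFrame:int = 0;

private function updateFrameUVs():void {
    var cols:int = 4;                                      // 4x4 grid, 16 frames
    var frameU:Number = (currentFrame % cols) * 0.25;      // left edge of the current frame
    var frameV:Number = int(currentFrame / cols) * 0.25;   // top edge of the current frame
    // Upload the offset (x, y) and the scale (z, w) of the frame as a single constant.
    context.setProgramConstantsFromVector(Context3DProgramType.VERTEX, 4,
        Vector.<Number>([frameU, frameV, 0.25, 0.25]));
    currentFrame = (currentFrame + 1) % 16;
}

// And in the vertex shader, remap the quad's (0..1) UVs into the frame's sub-rectangle:
// "mov vt0, va1 \n" +                // copy the full-quad UVs
// "mul vt0.xy, vt0.xy, vc4.zw \n" +  // scale them down to one frame
// "add vt0.xy, vt0.xy, vc4.xy \n" +  // offset them to the current frame
// "mov v0, vt0 \n"                   // pass them to the fragment shader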

Texture Mipmapping

As I am having a hard time explaining what mipmapping is, let me quote Wikipedia:

In 3D computer graphics, mipmaps (also MIP maps) are pre-calculated, optimized collections of images that accompany a main texture, intended to increase rendering speed and reduce aliasing artifacts.

As we saw, for every pixel being drawn, the shader has to sample the texture to get the pixel color at the corresponding UVs. The first problem is that, with a large texture, this takes some time. The second, more important issue is that at a large distance, when your object becomes very small, you can run into texture-sampling issues resulting in moiré patterns.

The solution is to upload a series of bitmaps, each one half the size of the previous one. The GPU will then automatically choose which bitmap to sample according to the distance. This is a good optimization technique since each sampling takes less time, but it also takes more VRAM to store the extra bitmaps. Mipmaps used to come with another artifact in old video games, where the texture “layers” wouldn’t blend with each other, as you can see here on the left wall.

The good news is that texture filtering is now applied between levels, so you won’t get that ugly rendering. In the following demo, you will have an option to turn mipmapping on and off, and also an option to add a random color to each mipmap level so you can see the different textures being used.

Let’s start coding

First of all, you can grab the source code. In this archive, you will find the cleaned previous example, updated to use a texture. To upload a texture, you need a BitmapData instance whose width and height are powers of two (256, 512, 1024…). A texture doesn’t need to be square; it can be a rectangle, but remember that your U and V values always stay within [0, 1], even on a rectangular texture.

I’m not going to explain how to get a BitmapData; either load it or embed it as I did.

[Embed(source="../../../../../assets/texture.jpg")]
private var textClass:Class;
private var text:Bitmap = new textClass();

The tutorial class takes advantage of my “still-in-progress” Geometry class, as the previous tutorial did. A few things have been improved; you may have a look if you want. Texture declaration, upload and allocation are all done in the __createAndUploadTexture() method.

/**
 * Create and upload the texture
 */
private function __createAndUploadTexture():void {
    if (texture) {
        texture.dispose();
    }
    texture = context.createTexture(1024, 1024, Context3DTextureFormat.BGRA, false);
    // MIPMAP GENERATION
    var bmd:BitmapData = text.bitmapData;
    var s:int = bmd.width;
    var miplevel:int = 0;
    while (s > 0) {
        texture.uploadFromBitmapData(getResizedBitmapData(bmd, s, s, true, (miplevel != 0 && _mipmapColor.selected) ? Math.random() * 0xFFFFFF : 0), miplevel);
        miplevel++;    // miplevel going up by one...
        s = s * .5;    // ...and size going down, divided by two.
    }
    context.setTextureAt(0, texture);
}

The code is pretty self-explanatory. First ask the context to create a texture; it’s the same as asking it to create a buffer. The upload part is a bit more complicated though.

Unlike a vertex buffer, you can upload “several bitmaps” to a single texture, each one a smaller version of the previous one, for mipmapping purposes. Every time you upload a smaller bitmap, the “mipmap level” goes up by one, the default being zero.

For instance, if you don’t want to use mipmapping, you can use:

texture.uploadFromBitmapData(bitmapData);

But if you want to use mipmapping, you will end up using something like:

texture.uploadFromBitmapData(bitmapData512, 0);
texture.uploadFromBitmapData(bitmapData256, 1);
texture.uploadFromBitmapData(bitmapData128, 2);
texture.uploadFromBitmapData(bitmapData64, 3);
// and so on, down to a 1x1 bitmapdata...
texture.uploadFromBitmapData(bitmapData1, 9);

This is pretty much what my loop does, but instead of creating every texture size myself, I just use ActionScript to generate them for me.

The rest of the method is allocation, very much like setVertexBufferAt or setProgram. Once you have assigned a texture to a sampler slot with setTextureAt, you can use it in AGAL as “fsx” (x being the sampler index), so in this case fs0. Pretty much the same as fragment constants, vertex attributes and so on…

fs: Fragment Sampler.

AGAL Time

The AGAL code given in the class is almost the same as the previous article’s AGAL code, so I’m only going to highlight the differences. As always, if you are having trouble with my explanations, feel free to contact me.

Vertex Shader:

code += "mov v0, va1n";   // Interpolate the UVs (va0) into variable register v1
 code += "mov v1, va2n";   // Interpolate the normal (va1) into variable register v1

In the vertex shader, just be sure to pass the UVs to the fragment shader. Everything else stays the same.

Fragment Shader:

In the class, the code changes according to the mipmap combobox, so I will flatten the code here:

"text ft0 v0, fs0 <2d,linear, nomip>n" // NO mipmap

or

"text ft0 v0, fs0 <2d,linear, miplinear>n" // WITH mipmap

As you can see, the tex opcode takes the following arguments:

tex destination, coordinates, texture <options>

Sampler options can easily be found on Google; I will explain them in another article a little later.

As you can see, this is really simple. Once sampled, the pixel’s diffuse color is stored in the temporary register ft0. You can now use it as your base color instead of the constant that was used before (fc4). If you took the previous sources instead of these ones, make sure you also change this line:

"mul ft2, ft0, ft1 n"+ //multiply fragment color (ft0) by light amount (ft1).

Otherwise you will still be using the plain color uploaded as a constant (change fc4 to ft0).

Compile and run, and here you go!

[Embedded Flash demo]

To notice the mipmap effect, zoom out a lot and rotate the cube a little. You can also check “show mipmap colors” so each mipmap level gets a random color.

I hope this answers the questions you had about textures. More will come about sampler options and cube textures. As always, feedback is appreciated, and feel free to contact me if you run into trouble!

See you!

Not in the mood…

Hi there.

I’m not really in the mood for writing another article right now. It’s mostly work-related, but also a little bit Diablo-related.

Anyway, I’ve received some tweets about the lack of a tutorial on textures. I really thought this would be an easy task for you guys, and I was waiting to crack the bump-mapping shader before starting to talk about textures, but I realize there is no need to wait, and that the bump-mapping tutorial will be a lot easier to write (and to read!) if you already have some basic knowledge of textures.

So the next article will talk about textures, and maybe sprite sheets, just to give you a hint about how to make your very own mini “Starling”.

 

See you soon!

Stage3D / AGAL from scratch. Part VII – Let There Be Light

Let There Be Light

So, we have displayed a lot of cubes and triangles together, and we have also created a few controls to play around with the 3D scene. But the very essence of 3D content is light.

For this tutorial, I’ve included a bunch of classes I am currently working on, not as a 3D engine, but as a small toolbox for future experiments. You will find these currently “W.I.P.” classes in the given source code:

* Geometry: stores a geometry and its indices. Has shortcuts to declare square faces instead of triangles. Can create and upload the buffers for you. Later, it will be able to generate the face normals (more about that below).

* Cube: a simple class extending Geometry, creating a cube made of 24 points so that the future face normals behave as intended.

* ColorMaterial: right now a simple RGB value object, but it should contain the fragment shader soon.

* ArcBallCamera: not something new, but completely revamped. It now really moves along a circle and uses the pointAt method to target the origin. This new method makes the drag controller a little smarter (dragging toward the bottom will only make the object rotate as if the screen edge were the X axis).

When I announced this tutorial a few weeks ago, I compared face normals (actual normals in 3D parlance) to normalized vertices.

You will find in the Geometry class a method called “computeNormals” that will give you the former, the ones we want. This method is still in progress: right now, the normal can be the opposite of the wanted one if the face is drawn counter-clockwise.

I will explain in another article how you can generate basic normals for your models, but keep in mind that this data should ideally be computed by your 3D designer, because they can “smooth” edges on low-poly meshes.

So anyway, by calling the computeNormals method, we get a small vector perpendicular to each face (each triangle).
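For reference, the usual way to get such a face normal is the cross product of two edges of the triangle. Here is a minimal sketch of the idea (not the exact computeNormals implementation from the sources):

import flash.geom.Vector3D;

// The face normal of triangle (a, b, c) is the normalized cross product of two of its edges.
// The winding order (clockwise or counter-clockwise) decides which way it points.
function faceNormal(a:Vector3D, b:Vector3D, c:Vector3D):Vector3D {
    var edge1:Vector3D = b.subtract(a);
    var edge2:Vector3D = c.subtract(a);
    var normal:Vector3D = edge1.crossProduct(edge2);
    normal.normalize();   // bring it back to a length of 1
    return normal;
}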

The Lambertian Factor

The first light we will compute is what we call diffuse light. The amount of light diffused by a surface depends on the angle between that surface and the light. This is called the Lambertian factor, or Lambertian reflectance. Quoting Wikipedia: “The reflection is calculated by taking the dot product of the surface’s normal vector, and a normalized light-direction vector, pointing from the surface to the light source.”

The dot product is an operation we can perform very simply in AGAL using the opcode dp3, which stands for Dot Product 3, 3 being the number of components (here x, y and z).

Just a word about the dot product. The dot product, or scalar product, takes two vectors and returns a single number. The only things you need to remember are these:

  • If two vectors point in roughly the same direction, the dot product is a positive number.
  • If the vectors are perpendicular to each other, the dot product is equal to zero.
  • If the vectors point in opposite directions (facing each other), the dot product is a negative number.

Because the dot product also depends on the length of the vectors, we will mostly use it with normalized vectors, giving a result between -1 and 1, which is very handy, especially in lighting computations.
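As a quick sanity check (my own example, not from the sources), this is what it gives with Flash’s Vector3D:

import flash.geom.Vector3D;

var normal:Vector3D = new Vector3D(0, 0, 1);        // a surface facing +Z

trace(normal.dotProduct(new Vector3D(0, 0, 1)));    // 1  : same direction
trace(normal.dotProduct(new Vector3D(1, 0, 0)));    // 0  : perpendicular
trace(normal.dotProduct(new Vector3D(0, 0, -1)));   // -1 : opposite direction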

Alright, let’s code this now. First of all, download the following source code.

The LetThereBeLight class is rather simple. On context creation, I simply create an ArcBallCamera, a projection matrix, a model matrix (which will be moved into the Geometry class later), and a Cube. The Cube instance receives a ColorMaterial (not really relevant right now) and creates the buffers for me:

geometry = new Cube(50);
geometry.setMaterial(new ColorMaterial(0xFF00FF));
geometry.createBuffers(context);

This is simple stuff for you now, so let’s move on to the actual shader.

The Shader Constants

As we saw, calculating the Lambertian factor requires the light direction and the surface normal. The surface normals are already stored in the vertex buffer, so we still need the light direction. But we also need a bunch of other values:

  • The Light Color. Here I chose a plain white.
  • The Ambient Light. The ambient light is the minimum amount of light a surface can receive. It’s a simple technique to simulate the fact that, in the real world, light is reflected so many times that even when an object’s side is not facing the light, it is still visible and doesn’t turn completely black.
  • The Light Direction. In this example, the light will always come from the camera, meaning we will rather get the impression of moving the cube under the light than of moving around it, but feel free to try other values.

All this data will be stored in shader constants, so here we go:

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0, Vector.<Number>([0,0,0,0]));
// fc0, for clamping negative values to zero

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 1, Vector.<Number>([0.1,0.1,0.1,0]));
// fc1, ambient lighting (10% of full intensity)

var p:Vector3D = camera.position;
p.negate();
p.normalize();

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 2, Vector.<Number>([p.x,p.y,p.z,1]));
// fc2, light direction

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 3, Vector.<Number>([1,1,1,1]));
// fc3, light color

You may have noticed that all those constants, even though they are mostly vectors, directions and positions, are FRAGMENT constants, since we have no use for them in the vertex shader. Looking at the source, you will see that the color of the cube (here a nice pinkish color) is also uploaded as a constant. We saw that already.

OK, now that everything is in place, we can have a look at the shader’s AGAL code.

AGAL Time

What we need to do to compute the Lambertian factor:

  1. Calculate the Lambertian factor using a dot product between the normal (v1) and the light direction (fc2).
  2. Negate the result: we do this because the Lambertian formula uses the light direction going from the surface to the light source. So you can either negate the light direction vector, or negate the dot product result.
  3. Clamp any result below 0: if the angle between the light and the surface normal is greater than 90°, the dot product will be negative. This could cause unexpected results when computing the output color, so we just set it to 0 (no light).
  4. Multiply the fragment color by the light amount. For a light amount equal to 0 the surface will be black; for a light amount equal to 1 the surface will have its regular color.
  5. Multiply the resulting color by the light color. Your red cube might look a little more purple if your light is blue.
  6. Add the ambient light. This way, every black surface becomes a little brighter.

Here is the corresponding AGAL code:

code = ""+
 "dp3 ft1, fc2, v1 n"+ // dot the transformed normal (v1) with light direction fc2 -&gt; This is the Lamberian Factor
 "neg ft1, ft1 n"+ // Get the "opposite" vector. We could also have uploaded the opposite of the light direction to avoid this step
 "max ft1, ft1, fc0 n"+ // clamp any negative values to 0 // ft1 = lamberian factor
 
 "mul ft2, fc4, ft1 n"+ //multiply fragment color (fc4) by light amount (ft1).
 "mul ft2, ft2, fc3 n"+ //multiply fragment color (ft2) by light color (fc3).
"add oc, ft2, fc1"; //add ambient light and output the color

UPDATE: Thanks to Jean Marc, I discovered the sat opcode, which one can use to clamp any value to the range [0, 1]. So I could just replace the “max” line with this one:

" sat ft1, ft1 n"+

which allows me to save a constant, since I can then also get rid of fc0.

Also, you now know that when values are copied to the varying registers (v0, v1), they get interpolated. That behavior was demonstrated by the color slowly fading between two points in the previous tutorials. Well, as Jean Marc pointed out, once interpolated, the normals may not be “normalized” anymore, so I should normalize my normals (duh!) in the fragment shader before using them. Thanks Jean Marc!
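Putting both of Jean Marc’s remarks together, the fragment shader could start like this. This is a sketch of the suggested fix, not the code from the downloadable sources:

code = "" +
 "nrm ft0.xyz, v1 \n" +    // re-normalize the interpolated normal (nrm writes xyz only)
 "dp3 ft1, fc2, ft0 \n" +  // Lambertian factor: dot the normal with the light direction (fc2)
 "neg ft1, ft1 \n" +       // flip it, since fc2 points from the light toward the surface
 "sat ft1, ft1 \n" +       // clamp to [0, 1]; no fc0 constant needed anymore
 "mul ft2, fc4, ft1 \n" +  // fragment color (fc4) times light amount
 "mul ft2, ft2, fc3 \n" +  // times light color (fc3)
 "add oc, ft2, fc1";       // add the ambient light (fc1) and output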

Compile and run: here it is, your first directional light!

For the posted demo, I added two options that are not in the sources: the first checkbox fixes the light at its current position so you can rotate the cube and see the effect of the ambient light, and the second one switches the normals to normalized vertices (see the first two diagrams).

[Embedded Flash demo]

As always, have fun with the sources, and tell me what you think! If you need more explanations or anything, just feel free to ask.

See you!

Articles Frequency

Hi everyone!

I have received a lot of really nice comments about my articles lately, and also some requests. I’m glad about it, really: I’m glad I helped a few of you, and I’m glad you find the articles efficient, simple and easy. I think my articles are easy to read only because I share my knowledge with you guys as I am discovering it, meaning I am not that far ahead of you :)

I wanted to remind you of this not because I don’t want you to ask for help (actually I’m glad some of you sent me sources to look at or challenged me with questions), but because from now on I may post articles less frequently, as the easy part is behind us and I need more time to discover the other mysteries behind Stage3D myself.

So in other words, please stay tuned, and please keep talking with me via mail or Twitter; it’s always interesting. But expect articles (about Stage3D at least) to come less frequently.

Speaking of Stage3D, I am currently writing something about directional light. It should be ready by tonight or tomorrow.

See you, folks!

Watercolor effect

On Friday I found this amazing work from Stamen, where they convert an OpenStreetMap map in real time into a wonderful watercolor-like drawing. Check it out here!

I really love the result; it’s absolutely gorgeous. Lucky for me, they give a few insights into the whole process on their blog: http://content.stamen.com/watercolor_process

So I decided to recreate and benchmark the same effect in Flash. No Stage3D this time, only bitmap manipulation.

First, the demo :

The first slider changes the threshold sensitivity.
The second slider changes the Perlin noise alpha.
Use the checkboxes to disable the shadows or to view the mask being used.

Right now the demo is a bit heavy. Almost no optimization was done, and I wonder if some of the computation could be done using Pixel Bender.

You can find an image of every step on the Stamen blog post, so I won’t detail them here, but here is the effect code. Feel free to go back and forth between the two blogs to see the filter in action step by step.

private function filterColor(colorToFilter:uint, textureToApply:Bitmap, sensitivity:int = 90):void {
    var msk:Bitmap = new Bitmap(new BitmapData(_mask.width, _mask.height, true, 0xFF000000));
    // _mask is a generated bitmap we get using a background color, the textfield, and the two vector assets.
    // You can see it by selecting the "show mask" checkbox.

    msk.bitmapData.lock();
    msk.bitmapData.threshold(_mask.bitmapData, _mask.getBounds(this), new Point(0,0), "==", colorToFilter, 0xFFFFFFFF, 0xFFFFFFFF, false);
    // The first threshold isolates the given color, for instance pink for the text.

    msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(4.5, 4.5, 2));
    msk.bitmapData.draw(_noise, null, new ColorTransform(.5, .5, .5, _slAlpha.value), BlendMode.NORMAL, null, true);
    // Blur, then apply a "noise". _noise is a simple Perlin noise bitmap generated on app initialization; we use the same one for every layer.

    msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0,0), "<=", sensitivity, 0xFF000000, 0x000000FF);
    if (_useShadow.selected) msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0,0), ">", sensitivity, 0xFFFFFFFF, 0x000000FF);
    // These thresholds give us a black and white mask which is slightly deformed by the noise and the blur filter.
    // The higher the sensitivity (which is actually just the color limit of the threshold, from 0 to 255), the more the mask shrinks, leaving some white space between layers.

    msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(2, 2, 3));
    msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0,0), "<=", 0x66, 0xFF000000, 0x000000FF);
    // Another blur/threshold pass to round the previous mask a little.

    msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0,0), new BlurFilter(1.2, 1.2, 1));
    msk.bitmapData.unlock();
    // Small blur to anti-alias the mask.

    if (_useShadow.selected) {
        var shadow:BitmapData = msk.bitmapData.clone();
        shadow.applyFilter(shadow, _mask.getBounds(this), new Point(0,0), new BlurFilter(5, 5, 3));
        shadow.copyChannel(msk.bitmapData, _mask.getBounds(this), new Point(0,0), BitmapDataChannel.RED, BitmapDataChannel.ALPHA);
    }
    // The inner shadow is just the same mask blurred again, then cut into by copying the unblurred mask's red channel into the blurred mask's alpha channel.

    var bmp:Bitmap = new Bitmap();
    bmp.bitmapData = textureToApply.bitmapData.clone();

    bmp.bitmapData.copyChannel(msk.bitmapData, _mask.getBounds(this), new Point(0,0), BitmapDataChannel.RED, BitmapDataChannel.ALPHA);
    // Copy the mask's red channel (could have been green or blue since we are working in greyscale) into the texture's alpha channel.

    if (_useShadow.selected) bmp.bitmapData.draw(shadow, null, new ColorTransform(1, 1, 1, .4), BlendMode.MULTIPLY, null, true);
    // Finally, draw the shadow bitmap onto the texture.

    _container.addChild(bmp);
}

That’s it! Not as beautiful as Stamen’s work, but for now I’m satisfied with the result.

I’m not giving out the whole code since it’s embedded into the Agency Framework, so I would have to upload a lot of classes for a single effect, but you can try it yourself really easily.
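If you want to rebuild it yourself, the only missing pieces are the _noise and _mask bitmaps. Here is a minimal sketch of how _noise might be generated (my own guess at reasonable parameters, not the original code):

import flash.display.BitmapData;

// A greyscale Perlin noise bitmap, generated once at startup and reused for every layer.
var _noise:BitmapData = new BitmapData(800, 600, false, 0x000000);
_noise.perlinNoise(
    100, 100,                                   // base X / base Y frequency
    4,                                          // octaves
    Math.floor(Math.random() * int.MAX_VALUE),  // random seed
    false,                                      // no stitching
    true,                                       // fractal noise (smoother result)
    7,                                          // channel options: red | green | blue
    true);                                      // greyscale

As for _mask, it is simply a bitmap drawn from the background color, the TextField and the two vector shapes, each filled with a distinct flat color, so that filterColor() can isolate them one by one with the "==" threshold.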

Credit goes to Stamen for the idea, and to Stamen again for the wonderful textures I used.

Brace yourself, shaders are coming…

… Or at least I hope so !

Since I’m working on Stage3D, I want to understand how lighting effects work. To be able to work on my own shaders, I need a few things:

A mesh more complex than just a cube

For this I am actually working on a very basic OBJ parser. Well, at the start I thought it would be complete, but I can’t get the splitting of geometries into sub-geometries according to materials done… Anyway, I’ve read a lot on the subject, even took a little inspirational peek into the Away3D code, and I have a simple mesh with no textures. Enough for now.

True Normals

When you work with light, you need normals to compute it. A normal is just a Vector3D whose length is 1 (the square root of the sum of the squares of its three components, x, y and z, equals 1).

Every Vector3D can be converted into a normal using Vector3D.normalize(), or directly in AGAL using the nrm opcode. And a vertex, a coordinate, is nothing more than a Vector3D (a coordinate represents the offset from the origin point).

So why do I need “true” normals?

Well, normals are used to compute diffuse light: the normal is used to compute the angle between the light and the surface. Now take a simple cube and normalize its vertices; here is what you get:

This can be interesting: since every normal is interpolated when passed to the fragment shader, the light won’t stop at the cube’s edges, making your cube glow like a sphere. This is actually what one can do to get a very smooth light on a low-poly sphere.

But in the case of the cube, you want normals that look like this:

OK, it’s poorly drawn, but you get the idea.

Generating normals seems complicated, but actually, it’s rather simple.

So this is where I am, and this will be covered in the next tutorial article.
See you later, guys!

Stage3D / AGAL from scratch. Part VI – Organise your Matrices

Organise your Matrices

In previous articles we used matrices to modify the rendering of a triangle: rotations, scales, translations. We also learned to use a projection matrix to render the depth effect into the clip space. And we saw that we upload the matrix as a vertex constant and use it with the “m44” AGAL opcode.

Matrix operations aren’t commutative, meaning that scaling first and then rotating is not the same thing as rotating and then scaling. So you will have to organize your matrices in a certain order to get things done smoothly and easily. Follow the guide.

From cameras to matrices

First of all, download the following code example. It’s made of 3 classes:

  • The article example bootstrap.
  • A simple Cube class that just creates a colored cube’s vertex and index buffers, and stores a matrix for its position.
  • An ArcBallCamera class that you can use and share for your experiments. Very useful as a quick way of “browsing” your scene around the origin point.

The Cube class

Just a quick word about the Cube class, since you should be able to write it by yourself now: it is not “clean” or “optimized” at all; I wrote it only to make the main code more readable.

The Cube class doesn’t even have a “render” function. When you instantiate a Cube, it creates its vertexBuffer and indexBuffer and uploads the simplest data ever. This cube is made of 8 vertices, which is why the colors blend at the corners and you don’t get one plain color per face. The Cube also creates and uploads the simple “3 lines” shader you need to get something rendered. That’s it.

The ArcBallCamera class

The ArcBallCamera is a camera that rotates around the origin point. When I first tried to build it, I thought I had to look up geometry formulas about placing a point on a 3D sphere or something. Actually, it’s a lot simpler.

Your matrices modify the “world”, not the camera

It sounds stupid to say, but it is something you have to keep in mind. For instance, if you want your camera to slowly move away from your scene, you actually have to increase the scene’s z translation, because you are really “pushing” the world away from your clip space.

Keep that in mind, and remember that matrix operations are not commutative. To make your arcball camera, the operations are actually very simple: rotate the world, then push it away. That’s it!

Both “methods” should work (moving the camera around a sphere, or transforming the world), but it’s actually much simpler to use the second one, for the same result: rotate the “world”, then “push” it away.

The rest of the class is pretty simple: on each EnterFrame event, the class applies some rotation and then some translation to a Matrix3D, according to the mouse position and mouseWheel actions.
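As a rough idea, that EnterFrame handler could look like the sketch below (the principle only, not the exact class from the archive; _rotX, _rotY and _distance are hypothetical fields driven by the mouse):

// Assumes flash.events.Event, flash.geom.Matrix3D and flash.geom.Vector3D are imported.
private function __onEnterFrame(e:Event):void {
    matrix.identity();
    // 1. Rotate the "world" around the origin, according to the accumulated mouse drag.
    matrix.appendRotation(_rotY, Vector3D.Y_AXIS);
    matrix.appendRotation(_rotX, Vector3D.X_AXIS);
    // 2. Then push it away from the clip space, so we end up looking at the origin from a distance.
    matrix.appendTranslation(0, 0, _distance);
}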

The ModelViewProjection matrix

OK, so we have a matrix for our camera, one for the projection, and one for the cube. Great, but now what?

The final matrix used for rendering is often named the modelViewProjection matrix, for a very simple reason: you have to append your matrices in the following order:

  1. The Model Matrix: your model being the mesh you are currently drawing.
  2. The View Matrix: the view being your “camera”, in a way.
  3. The Projection Matrix: called the “lens” in some 3D engines, the projection always comes last as far as I know.

Following this order will give you very intelligible results.

Head over to the OrganizeYourMatrices class. Notice that when the context is created, I instantiate a single cube, a camera, and the projection matrix we will use later. Go on to the render function.

Rendering several cubes with only one

To illustrate both that following the previous matrix order gives you the wanted result and that you can draw the same vertexBuffer several times, I will keep my single cube and render four of them around the origin.

// render second cube
cube.moveTo(1.1, -1.1, 0);
renderCube();

// render third cube
cube.moveTo(-1.1, 1.1, 0);
renderCube();

// render fourth cube
cube.moveTo(1.1, 1.1, 0);
renderCube();

The following code isn’t the cleanest I’ve written, but at least it is easy to understand. The only cube we have can be “moved” to 4 different positions and drawn onto the screen using the renderCube method. Go ahead, that is where the magic happens.

        /**
         * Render the cube according to its current parameters (= modelMatrix)
         */
        private function renderCube():void {
            modelViewProjection = new Matrix3D();
            modelViewProjection.append(cube.modelMatrix);         // MODEL
            modelViewProjection.append(camera.matrix);            // VIEW...    
            modelViewProjection.append(projectionMatrix);        // PROJECTION !
 
            // program
            context.setProgram(cube.program);
 
            // vertices
            context.setVertexBufferAt(0, cube.vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // x, y, z
            context.setVertexBufferAt(1, cube.vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // r, g, b
 
            //constants
            context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, modelViewProjection, true);
 
            // render
            context.drawTriangles(cube.indexBuffer);
        }

Each time I want to draw the cube, I start by recreating a modelViewProjection matrix. I could have instantiated it somewhere else and only reset it using modelViewProjection.identity(), which would have been better, but anyway, the result is the same.

First, append the cube’s modelMatrix. This matrix contains the translation we set using cube.moveTo(x, y, z). Then append the camera’s matrix, and finish with the projection.

The rest of the renderCube method is just classic Stage3D stuff: setting the current program and buffers, and drawing the triangles.

The reason you can call the drawTriangles function several times (in this case, 4) and still get the complete scene is that drawTriangles only renders your mesh into the back buffer. So the last thing you need to do in your rendering method is to present the back buffer on the screen.
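In other words, a full frame of the render method looks roughly like this (a sketch; the position of the first cube is my assumption, since the snippet above starts at the second one):

private function render(e:Event):void {
    context.clear(0, 0, 0, 1);        // wipe the back buffer

    cube.moveTo(-1.1, -1.1, 0);       // first cube (assumed position)
    renderCube();
    cube.moveTo(1.1, -1.1, 0);        // second cube
    renderCube();
    cube.moveTo(-1.1, 1.1, 0);        // third cube
    renderCube();
    cube.moveTo(1.1, 1.1, 0);         // fourth cube
    renderCube();

    context.present();                // only now does the scene appear on screen
}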

Now you should get something like this:

[Embedded Flash demo]

Append and Prepend

There are some cases where it is difficult to use this order because of implementation details. Fortunately, there is a way to add a transformation at the top of the operation stack: prepend.

Prepend comes in different flavors: prepend a matrix, prependTranslation, prependRotation and so on.

To understand what prepend does, just look at the two following snippets: they both do the exact same thing.

modelViewProjection = new Matrix3D();
modelViewProjection.append(cube.modelMatrix);      // MODEL
modelViewProjection.append(camera.matrix);         // VIEW...
modelViewProjection.append(projectionMatrix);      // PROJECTION !

modelViewProjection = new Matrix3D();
modelViewProjection.append(camera.matrix);         // VIEW...
modelViewProjection.append(projectionMatrix);      // PROJECTION !
modelViewProjection.prepend(cube.modelMatrix);     // PREPEND MODEL

That’s all for today. I hope you enjoyed this, as always, and that it will be useful to you. Don’t hesitate to use, modify or share the ArcBallCamera class, since it’s a very simple snippet of code.

As always, feedback is appreciated!