Everything went better than expected

Hi there

If you can read this post, then you are on the new blog server. It seems that everything is working, but you might have some trouble for a few days while the DNS changes propagate.

As I was saying, some comments have been lost in the process, but nothing too important.

Long story short, if you read this post, then you can comment and pingback again 🙂

How can I help you?

Hi there,

I don’t have a lot of time right now: summer, scuba diving certifications, a little extra freelancing job, and of course, work…

So I’d like to know what you would be interested in reading in the next article.

I’d like to go deeper into fragment shaders and learn how to get bump mapping working, for instance. I could port a few examples to WebGL, or write a few articles about how to use Minko to write your own shaders more easily than with AGAL…

Feel free to give me hints in the comments!

And to all of you, I wish you a nice vacation!

Not in the mood…

Hi there.

I’m not really in the mood for writing another article right now. It’s mostly work-related, but also a little bit Diablo-related.

Anyway, I’ve received some tweets about the lack of a tutorial about textures. I really thought this was an easy task for you guys, and I was waiting until I had cracked the bump mapping shader to start talking about textures, but I realize there is no need to wait for it: the bump mapping tutorial will be a lot easier to write (and to read!) if you already have some basic knowledge about textures.

So the next article will talk about textures, and maybe sprite sheets, just to give you a hint about how to make your very own “Starling” mini.


See you soon!

Stage3D / AGAL from scratch. Part VII – Let There Be Light

Let There Be Light

So, we displayed a lot of cubes and triangles together, and we also created a few controls to play around with the 3D Scene. But the very essence of 3D content is light.

For this tutorial, I’ve included a bunch of classes I am currently working on, not as a 3D engine, but as a small toolbox for future experiments. You will find these work-in-progress classes in the given source code:

* Geometry: stores a geometry and its indices. Has shortcuts to declare square faces instead of triangles. Can create and upload the buffers for you. Later, it will be able to generate the face normals (more about that below).

* Cube: a simple class extending Geometry, creating a cube made of 24 points so that the future face normals act as intended.

* ColorMaterial: right now a simple RGB value object, but it should contain the fragment shader soon.

* ArcBallCamera: not something new, but completely revamped. It now really moves on a circle and uses the pointAt method to target the origin. This new method makes the drag controller a little bit smarter (dragging to the bottom will only make the object rotate as if the screen were the X axis).

When I announced this tutorial a few weeks ago, I compared face normals (actual normals in 3D parlance) to normalized vertices.

You will find in the Geometry class a method called “computeNormals” that will give you the former, the ones we want. This method is still a work in progress: right now, the computed normal can be the opposite of the wanted one if the face is drawn counter-clockwise.

I will explain in another article how you can generate basic normals for your models, but keep in mind that this data should ideally be computed by your 3D designer, because it can “smooth” edges for low-poly meshes.

So anyway, by calling the computeNormals method, we get a small vector perpendicular to each face (each triangle).

The Lambertian Factor

The first light we will compute is what we call the diffuse light. The amount of light diffused by a surface depends on the angle between that surface and the light. This is called the Lambertian factor, or Lambertian reflectance. Quoting Wikipedia: “The reflection is calculated by taking the dot product of the surface’s normal vector, and a normalized light-direction vector, pointing from the surface to the light source.”

The dot product is an operation we can perform very simply in AGAL using the opcode dp3, which stands for Dot Product 3, 3 being the number of components (here x, y and z).

Just a word about the dot product. The dot product, or scalar product, takes two vectors and returns a single number. The only thing you need to remember is this:

  • If the two vectors point in roughly the same direction, the dot product is a positive number.
  • If the vectors are perpendicular to each other, the dot product is equal to zero.
  • If the vectors point in opposite directions, the dot product is a negative number.

Because the dot product also depends on the lengths of the vectors, we will mostly use it with normalized vectors, giving a result between -1 and 1, which is very handy, especially in light computation.
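The tutorials here are ActionScript/AGAL, but the math is language-agnostic, so here are the three cases sketched in plain JavaScript (the helper name is mine, not part of any API used in this series):

```javascript
// Dot product of two 3D vectors (x, y, z as plain object properties).
function dot3(a, b) {
  return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Roughly the same direction -> positive
const sameDir = dot3({ x: 1, y: 0, z: 0 }, { x: 0.8, y: 0.2, z: 0 });
// Perpendicular -> zero
const perpendicular = dot3({ x: 1, y: 0, z: 0 }, { x: 0, y: 1, z: 0 });
// Opposite directions -> negative
const opposite = dot3({ x: 1, y: 0, z: 0 }, { x: -1, y: 0, z: 0 });
```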

All right, let’s now code this. First of all, download the following source code.

The LetThereBeLight class is rather simple. On context creation, I simply create an ArcBallCamera, a projection matrix, a model matrix (that will be moved into the Geometry class later), and a Cube. The Cube instance receives a ColorMaterial (not really relevant right now) and creates the buffers for me:

geometry = new Cube(50);
geometry.setMaterial(new ColorMaterial(0xFF00FF));

This is simple stuff for you now, so let’s move on to the actual shader.

The Shader Constants

As we saw, the Lambertian factor requires, in order to be calculated, the light direction and the surface normal. The surface normals are already stored in the vertex buffer, so we still need the light direction. But we also need a bunch of other values:

  • The light color. Here I chose plain white.
  • The ambient light. The ambient light is the minimum amount of light a surface can receive. It’s a simple technique to simulate the fact that, in the real world, light is reflected so many times that even when an object’s side is not under the light, it’s still visible and doesn’t turn completely black.
  • The light direction. In this example, the light will always come from the camera, meaning that we will have the impression of moving the cube under the light rather than moving around it, but feel free to try other values.

All this data will be stored in shader constants, so here we go:

context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 0, Vector.<Number>([0, 0, 0, 0]));
// fc0, for clamping negative values to zero
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 1, Vector.<Number>([0.1, 0.1, 0.1, 0]));
// fc1, ambient lighting (10% of full intensity)
var p:Vector3D = camera.position;
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 2, Vector.<Number>([p.x, p.y, p.z, 1]));
// fc2, light direction
context.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 3, Vector.<Number>([1, 1, 1, 1]));
// fc3, light color

You may have noticed that all those constants, even though they are mostly vectors (directions, positions), are FRAGMENT constants, since we have no use for them in the vertex shader. Looking at the source, you will also see that the color of the cube (here a nice pinkish color) is uploaded as a constant. We saw that already.

OK, so now everything is in place and we can have a look at the AGAL shader code.


What we need to do to apply the Lambertian factor:

  1. Calculate the Lambertian factor using a dot product between the normal (v1) and the light direction (fc2).
  2. Negate the result: we do this because the Lambertian formula uses the light direction from the surface to the light source. You can either negate the light direction vector, or negate the dot product result.
  3. Clamp any result below 0: if the angle between the light and the surface normal is greater than 90°, the dot product will be negative. This could cause unexpected results when computing the output color, so we just set it to 0 (no light).
  4. Multiply the fragment color by the light amount. For a light amount equal to 0, the surface will be black; for a light amount equal to 1, the surface will have its regular color.
  5. Multiply the resulting color by the light color. Your red cube might look a little more purple if your light is blue.
  6. Add the ambient light. This way, every black surface becomes a little brighter.

Here is the corresponding AGAL code:

code = "" +
    "dp3 ft1, fc2, v1 \n" +  // dot the transformed normal (v1) with the light direction (fc2) -> this is the Lambertian factor
    "neg ft1, ft1 \n" +      // get the "opposite" vector. We could also have uploaded the opposite of the light direction to avoid this step
    "max ft1, ft1, fc0 \n" + // clamp any negative value to 0 // ft1 = Lambertian factor
    "mul ft2, fc4, ft1 \n" + // multiply the fragment color (fc4) by the light amount (ft1)
    "mul ft2, ft2, fc3 \n" + // multiply the result (ft2) by the light color (fc3)
    "add oc, ft2, fc1";      // add the ambient light and output the color

UPDATE: thanks to the help of Jean Marc, I discovered the sat opcode, which clamps any value to the range [0,1]. So I can just replace the “max” line with this one:

"sat ft1, ft1 \n" +

which allows me to save a constant, so I can also get rid of fc0.

Also, you now know that values copied to the varying registers (v0, v1) are interpolated. That behavior was demonstrated by the color slowly fading between two points in the previous tutorials. Well, as Jean Marc pointed out, once interpolated, the normals may no longer be normalized, so I should normalize my normals (duh!) in the fragment shader before using them. Thanks Jean Marc!
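Putting the whole fragment computation together (including the re-normalization fix), here is the same math sketched in plain JavaScript. Every name here is mine; this is an illustration of the formula, not AGAL or Stage3D code:

```javascript
function dot3(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

function normalize3(v) {
  const len = Math.sqrt(dot3(v, v));
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// Clamp to [0, 1], like the AGAL "sat" opcode.
function saturate(x) { return Math.min(1, Math.max(0, x)); }

function shadeFragment(normal, lightDir, surfaceColor, lightColor, ambient) {
  const n = normalize3(normal);   // interpolated varyings may not be unit length
  const l = normalize3(lightDir);
  // The uploaded light direction points from the light to the surface,
  // so negate the dot product before clamping ("neg" then "sat").
  const lambert = saturate(-dot3(n, l));
  return {
    r: saturate(surfaceColor.r * lambert * lightColor.r + ambient.r),
    g: saturate(surfaceColor.g * lambert * lightColor.g + ambient.g),
    b: saturate(surfaceColor.b * lambert * lightColor.b + ambient.b)
  };
}

// A surface facing the light head-on gets its full color plus ambient:
const c = shadeFragment(
  { x: 0, y: 0, z: 1 },              // normal
  { x: 0, y: 0, z: -1 },             // light shining along -z
  { r: 1, g: 0, b: 1 },              // pinkish surface
  { r: 1, g: 1, b: 1 },              // white light
  { r: 0.1, g: 0.1, b: 0.1 }         // ambient
);
```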

Compile and run: here it is, your first directional light!

For the posted demo, I added two options that are not in the sources: the first checkbox fixes the light at its current position so you can rotate the cube and see the effect of the ambient light, and the second one switches the normals to normalized vertices (see the first two diagrams).


As always, have fun with the sources, and tell me what you think! If you need more explanations or anything, feel free to ask.

See you!

Articles Frequency

Hi everyone!

I have received a lot of really nice comments about my articles lately, and also some requests. I’m glad about it, really: I’m glad I helped a few of you, and I’m glad you find the articles efficient, simple and easy. I think my articles are easy to read only because I share my knowledge with you guys as I am discovering it, meaning I am not far ahead of you 🙂

I wanted to remind you of this not because I don’t want you to ask for help (actually, I’m glad some of you sent me sources to look at or challenged me with questions), but because from now on I may post articles less frequently: the easy part is behind us, and I need more time to discover the remaining mysteries of Stage3D myself.

So in other words, please stay tuned, and please keep talking with me via mail or Twitter (it’s always interesting), but expect articles (about Stage3D at least) to come less frequently.

About Stage3D, I am currently writing something about directional light. It should be ready by tonight or by tomorrow.

See you folks!

Watercolor effect

On Friday I found this amazing work from Stamen, where they convert an OpenStreetMap map in real time into a wonderful watercolor-like drawing. Check it out here!

I really love the result; it’s absolutely gorgeous. Lucky me, they give a few insights into the whole process on their blog: http://content.stamen.com/watercolor_process

So I decided to recreate and benchmark the same effect in Flash. No Stage3D this time, only bitmap manipulation.

First, the demo:

[Image: paint_on header]

The first slider changes the threshold sensitivity.
The second slider changes the Perlin noise alpha.
Use the checkboxes to disable the shadows or to view the mask being used.

Right now the demo is a bit heavy. Almost no optimization was done, and I wonder if some of the computation could be done using Pixel Bender.

You can find an image of every step on the Stamen blog post, so I won’t detail them here, but here is the effect code. Feel free to go back and forth between the two blogs to see the filter in action step by step.

private function filterColor(colorToFilter:uint, textureToApply:Bitmap, sensitivity:int = 90):void {
    var msk:Bitmap = new Bitmap(new BitmapData(_mask.width, _mask.height, true, 0xFF000000));
    // _mask is a generated bitmap we get using a background color, the textfield, and the two vector assets.
    // You can see it by selecting the "show mask" checkbox.

    msk.bitmapData.threshold(_mask.bitmapData, _mask.getBounds(this), new Point(0, 0), "==", colorToFilter, 0xFFFFFFFF, 0xFFFFFFFF, false);
    // The first threshold isolates the given color, for instance pink for the text.

    msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0, 0), new BlurFilter(4.5, 4.5, 2));
    msk.bitmapData.draw(_noise, null, new ColorTransform(.5, .5, .5, _slAlpha.value), BlendMode.NORMAL, null, true);
    // Blur, then apply a "noise". _noise is a simple Perlin noise bitmap generated on app initialisation. We use the same one for every layer.

    msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0, 0), "<", sensitivity, 0xFFFFFFFF, 0x000000FF);
    // This threshold gives us a black and white mask which is slightly deformed by the noise and the blur filter.
    // The higher the sensitivity (which is actually just the color limit of the threshold, from 0 to 255),
    // the more the mask shrinks, leaving some white space between layers.

    msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0, 0), new BlurFilter(2, 2, 3));
    msk.bitmapData.threshold(msk.bitmapData, _mask.getBounds(this), new Point(0, 0), "<=", 0x66, 0xFF000000, 0x000000FF);
    // A new blur / threshold pass to round the previous mask a little.

    msk.bitmapData.applyFilter(msk.bitmapData, _mask.getBounds(this), new Point(0, 0), new BlurFilter(1.2, 1.2, 1));
    // A small blur to antialias the mask.

    var shadow:BitmapData = msk.bitmapData.clone();
    shadow.applyFilter(shadow, _mask.getBounds(this), new Point(0, 0), new BlurFilter(5, 5, 3));
    shadow.copyChannel(msk.bitmapData, _mask.getBounds(this), new Point(0, 0), BitmapDataChannel.RED, BitmapDataChannel.ALPHA);
    // The inner shadow is just the same mask blurred again, then cut into by copying
    // the unblurred mask's red channel into the blurred mask's alpha channel.

    var bmp:Bitmap = new Bitmap();
    bmp.bitmapData = textureToApply.bitmapData.clone();

    bmp.bitmapData.copyChannel(msk.bitmapData, _mask.getBounds(this), new Point(0, 0), BitmapDataChannel.RED, BitmapDataChannel.ALPHA);
    // Copy the mask's red channel (it could have been green or blue, since we are working in greyscale)
    // into the texture's alpha channel.

    if (_useShadow.selected) bmp.bitmapData.draw(shadow, null, new ColorTransform(1, 1, 1, .4), BlendMode.MULTIPLY, null, true);
    // Finally, draw the shadow bitmap onto the texture.
}


That’s it! Not as beautiful as the Stamen work, but right now I’m satisfied with the result.
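The threshold call, which does most of the work here, is conceptually just a per-pixel comparison. Here is a toy greyscale version in plain JavaScript (my names, nothing from the BitmapData API):

```javascript
// For each greyscale pixel value, keep it white when the comparison holds,
// black otherwise -- roughly what BitmapData.threshold does on one channel.
function thresholdMask(pixels, limit) {
  return pixels.map(function (v) { return v < limit ? 0xFF : 0x00; });
}

const mask = thresholdMask([10, 120, 200, 90], 100);
// -> [0xFF, 0x00, 0x00, 0xFF]
```

Raising the limit turns more pixels white, which is why the sensitivity slider makes the mask grow or shrink.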

I’m not giving you the whole code, since it’s embedded into the Agency Framework and I would have to upload a lot of classes for a single effect, but you can try it by yourself really easily.

Credit goes to Stamen for the idea, and to Stamen again for the wonderful textures I used.

Brace yourself, shaders are coming…

… Or at least I hope so!

Since I’m working on Stage3D, I want to understand how lighting effects work. To be able to work on my own shaders, I need a few things:

A mesh more complex than just a cube

For this I am actually working on a very basic OBJ parser. Well, at the start I thought it would be complete, but I can’t get it to split the geometries into sub-geometries according to materials… Anyway, I’ve read a lot on the subject, even took a little inspirational peek into the Away3D code, and I now have a simple mesh with no textures. Enough for now.

True Normals

When you work with light, you need some normals to compute your lighting. A normal is just a Vector3D whose length (computed from its three components x, y and z) is equal to 1.

Every Vector3D can be converted into a normal using Vector3D.normalize(), or directly in AGAL using the nrm opcode. And a vertex, a coordinate, is nothing else than a Vector3D (a coordinate represents the offset from the origin point).
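As a reminder, normalizing just divides each component by the vector’s length; a quick sketch in plain JavaScript (helper names are mine):

```javascript
function length3(v) {
  return Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

function normalize3(v) {
  const len = length3(v);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// A (3, 0, 4) vector has length 5, so its normalized form is (0.6, 0, 0.8):
const n = normalize3({ x: 3, y: 0, z: 4 });
```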

So why do I need “true” normals?

Well, normals are used to compute the diffuse light: the normal is used to compute the angle between the light and the surface. Now take a simple cube and normalize its vertices; here is what you get:

This can be interesting: since every normal is interpolated when passed to the fragment shader, the light won’t stop at the cube’s edges, making your cube glow like a sphere. This is actually what one can do to get a very smooth light on a low-poly sphere.

But in the case of the cube, you want normals that look like this:

OK, it’s poorly drawn, but you get the idea.

Generating normals seems complicated, but actually, it’s rather simple.
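For a single triangle, “rather simple” means: take two edges of the face and compute their cross product, then normalize it. A sketch in plain JavaScript (names are mine; note that the winding order decides which way the normal points, which is exactly the computeNormals caveat mentioned above):

```javascript
function sub3(a, b) { return { x: a.x - b.x, y: a.y - b.y, z: a.z - b.z }; }

function cross3(a, b) {
  return {
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x
  };
}

function normalize3(v) {
  const len = Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

// The face normal is the normalized cross product of two edges.
function faceNormal(p0, p1, p2) {
  return normalize3(cross3(sub3(p1, p0), sub3(p2, p0)));
}

// A triangle lying flat in the XY plane points straight up (+z):
const n = faceNormal({ x: 0, y: 0, z: 0 }, { x: 1, y: 0, z: 0 }, { x: 0, y: 1, z: 0 });
```

Swap p1 and p2 (the opposite winding) and you get the opposite normal, (0, 0, -1).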

So this is where I am, and this will be covered in the next tutorial article.
See you later guys!

Stage3D / AGAL from scratch. Part VI – Organise your Matrices

Organise your Matrices

In previous articles we used some matrices to modify the rendering of a triangle: rotations, scales, translations. We also learned to use a projection matrix to render the depth effect into the clipspace, and we saw that we could upload a matrix as a vertex constant and use it with the “m44” AGAL opcode.

Matrix operations aren’t commutative: scaling first and then rotating is not the same thing as rotating and then scaling. So you will have to organize your matrices in a certain order to get things done smoothly and easily. Follow the guide.
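You can convince yourself with two small 2x2 matrices in plain JavaScript (my helpers; the same holds for the 4x4 Matrix3D used by Stage3D):

```javascript
// Multiply two 2x2 matrices (row-major arrays of rows).
function mul2(a, b) {
  return [
    [a[0][0] * b[0][0] + a[0][1] * b[1][0], a[0][0] * b[0][1] + a[0][1] * b[1][1]],
    [a[1][0] * b[0][0] + a[1][1] * b[1][0], a[1][0] * b[0][1] + a[1][1] * b[1][1]]
  ];
}

const rotate90 = [[0, -1], [1, 0]];  // 90 degree rotation
const scaleX2  = [[2, 0],  [0, 1]];  // scale x by 2

// For column vectors, the matrix applied first goes on the right:
const rotateThenScale = mul2(scaleX2, rotate90);
const scaleThenRotate = mul2(rotate90, scaleX2);
// The two products differ, so the order of operations matters.
```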

From cameras to matrices

First of all, download the following code example. It’s made out of 3 classes:

  • The article example bootstrap.
  • A simple Cube class that just creates a colored cube’s vertex and index buffers, and stores a matrix for its position.
  • An ArcBallCamera class that you can use and share in your experiments. Very useful as a quick way of “browsing” your scene around the origin point.

The Cube class

Just a quick word about the Cube class, since you should be able to write it by yourself now: it is not “clean” or “optimised” at all, and I wrote it only to make the main code more readable.

The Cube class doesn’t even have a “render” function. When you instantiate a Cube, it creates its vertexBuffer and indexBuffer and uploads the simplest data ever. This cube is made out of 8 vertices, which is why the colors blend at the corners and you don’t get a plain color per face. The Cube also creates the simple “3 lines” shader you need to have some rendering, and uploads it. That’s it.

The ArcBallCamera class

The ArcBallCamera is a camera that rotates around the origin point. When I first tried to build it, I thought I had to look for geometry formulas about placing a point onto a 3D sphere or something. Actually, it’s a lot simpler.

Your matrices modify the “world”, not the camera

It sounds stupid to say it, but it is something you have to keep in mind. For instance, if you want your camera to slowly move away from your scene, you have to increase its z position, because you are actually “pushing” the world away from your clipspace.

Keep that in mind, and remember that matrix operations are not commutative. Both approaches (placing the camera on a sphere, or transforming the world) should work, but to make your arcball camera, the second one is really simpler to use, for the same result: rotate the “world”, then “push” it away. That’s it!

The rest of the class is pretty simple: on each EnterFrame event, the class applies some rotation and then some translation to a Matrix3D, according to the mouse position and mouse wheel actions.

The ModelViewProjection matrix

OK, so we have a matrix that acts as our camera, one for the projection, and one for the cube. Great, but now what?

The final matrix used for the rendering is often named the modelViewProjection matrix, for a very simple reason: you have to append your matrices in the following order:

  1. The model matrix: your model being the mesh you are currently drawing.
  2. The view matrix: the view being your “camera”, somehow.
  3. The projection matrix: called the “lens” in some 3D engines, the projection always comes last as far as I know.

Following this order will give you very intelligible results.

Head over to the OrganizeYourMatrices class. Notice that when the context is created, I instantiate a single cube, a camera, and the projection matrix we will use later. Then go on to the render function.

Rendering several cubes with only one

To illustrate both that following the previous matrix order gives you the wanted result and that you can draw the same vertexBuffer several times, I will keep my single cube and render four of them around the origin.

// render first cube
cube.moveTo(-1.1, -1.1, 0);
renderCube();
// render second cube
cube.moveTo(1.1, -1.1, 0);
renderCube();
// render third cube
cube.moveTo(-1.1, 1.1, 0);
renderCube();
// render fourth cube
cube.moveTo(1.1, 1.1, 0);
renderCube();

The previous code isn’t the cleanest I’ve made, but at least it is easy to understand. The only cube we have can be “moved” to 4 different positions and drawn onto the screen using the renderCube method. Go ahead, that is where the magic happens.

/**
 * Render the cube according to its current parameters ( = modelMatrix)
 */
private function renderCube():void {
    modelViewProjection = new Matrix3D();
    modelViewProjection.append(cube.modelMatrix);    // MODEL
    modelViewProjection.append(camera.matrix);       // VIEW...
    modelViewProjection.append(projectionMatrix);    // PROJECTION!
    // program
    // vertices
    context.setVertexBufferAt(0, cube.vertexBuffer, 0, Context3DVertexBufferFormat.FLOAT_3); // x, y, z
    context.setVertexBufferAt(1, cube.vertexBuffer, 3, Context3DVertexBufferFormat.FLOAT_3); // r, g, b
    context.setProgramConstantsFromMatrix(Context3DProgramType.VERTEX, 0, modelViewProjection, true);
    // render
    context.drawTriangles(cube.indexBuffer);
}
Each time I want to draw the cube, I first start by recreating a modelViewProjection matrix. I could have instantiated it somewhere else and only reset it using modelViewProjection.identity(), which would have been better, but anyway, the result is the same.

First, append the modelMatrix of the cube. This matrix contains the translation parameters we set using cube.moveTo(x, y, z). Then append the camera’s matrix, and finish with the projection.

The rest of the renderCube method is just classic Stage3D stuff: declaring your current program and buffers, and drawing the triangles.

The reason you can call the drawTriangles function several times (in this case, 4) and still get the complete scene is that drawTriangles only renders your mesh into the backbuffer. So the last thing you need to do in your rendering method is to present the backbuffer on the screen.
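The overall flow of the render function can be sketched like this (plain JavaScript pseudocode; context, cube, and renderCube stand in for the AS3 objects and are not the real Stage3D API):

```javascript
function renderFrame(context, cube, positions, renderCube) {
  context.clear();
  for (const p of positions) {
    cube.moveTo(p.x, p.y, p.z);
    renderCube();      // each draw only writes into the backbuffer...
  }
  context.present();   // ...so present it to the screen once, after all draws
}
```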

Now you should get something like this:


Append and Prepend

There are some cases where it is difficult to use this order because of implementation details. Fortunately, there is a way to add a transformation at the top of the operations stack: prepend.

Prepend comes in different flavors : prepend a matrix, prependTranslation, prependRotation and so on.

To understand what prepend does, just look at the two following snippets: they both do the exact same thing.

modelViewProjection = new Matrix3D();
modelViewProjection.append(cube.modelMatrix);     // MODEL
modelViewProjection.append(camera.matrix);        // VIEW...
modelViewProjection.append(projectionMatrix);     // PROJECTION!

modelViewProjection = new Matrix3D();
modelViewProjection.append(camera.matrix);        // VIEW...
modelViewProjection.append(projectionMatrix);     // PROJECTION!
modelViewProjection.prepend(cube.modelMatrix);    // PREPEND MODEL

That’s all for today. I hope you enjoyed this, as always, and that it will be useful to you. Don’t hesitate to use, modify or share the ArcBallCamera class, since it’s a very simple snippet of code.

As always, feedback is appreciated!