Wednesday 24 February 2010

OBJ Vertex Renderer

Just a really simple and quick experiment to see if I could load a Wavefront .obj file into a Flash movie and render the vertex co-ordinates in real time, in three dimensions. Just click the image below to launch the experiment. The object contains about 30,000 vertices, and these are simply transformed from 3D to 2D space and displayed using bitmap data. This is nowhere near as complicated as my prior attempt at a 3D engine, but I decided to make performance more of a priority this time. It runs at full frame rate quite happily on my computer. Let me know about your experiences.
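
For anyone curious about the mechanics, here is roughly the shape of the thing. This isn't my actual source, just a minimal sketch: the class name, file name and constants below are made up, and it assumes the model sits near the origin at a sensible scale.

    package {
        import flash.display.Bitmap;
        import flash.display.BitmapData;
        import flash.display.Sprite;
        import flash.events.Event;
        import flash.net.URLLoader;
        import flash.net.URLRequest;

        public class ObjVertexView extends Sprite {
            private const FOCAL_LENGTH:Number = 500;     // illustrative value
            private const CAMERA_DISTANCE:Number = 300;  // pushes the model away from the camera
            private var verts:Vector.<Number> = new Vector.<Number>(); // packed x, y, z
            private var canvas:BitmapData;
            private var angle:Number = 0;

            public function ObjVertexView() {
                canvas = new BitmapData(550, 400, false, 0x000000);
                addChild(new Bitmap(canvas));
                var loader:URLLoader = new URLLoader();
                loader.addEventListener(Event.COMPLETE, onLoaded);
                loader.load(new URLRequest("model.obj")); // hypothetical file name
            }

            private function onLoaded(e:Event):void {
                // keep only the "v x y z" vertex lines of the .obj file
                var lines:Array = String(URLLoader(e.target).data).split("\n");
                for each (var line:String in lines) {
                    var parts:Array = line.split(/\s+/);
                    if (parts[0] == "v") {
                        verts.push(Number(parts[1]), Number(parts[2]), Number(parts[3]));
                    }
                }
                addEventListener(Event.ENTER_FRAME, render);
            }

            private function render(e:Event):void {
                angle += 0.02;
                var cosA:Number = Math.cos(angle), sinA:Number = Math.sin(angle);
                canvas.lock();
                canvas.fillRect(canvas.rect, 0x000000);
                for (var i:int = 0; i < verts.length; i += 3) {
                    // rotate about the y axis, then perspective-project 3D -> 2D
                    var x:Number = verts[i] * cosA - verts[i + 2] * sinA;
                    var z:Number = verts[i] * sinA + verts[i + 2] * cosA;
                    var scale:Number = FOCAL_LENGTH / (FOCAL_LENGTH + z + CAMERA_DISTANCE);
                    canvas.setPixel(275 + x * scale, 200 - verts[i + 1] * scale, 0xFFFFFF);
                }
                canvas.unlock();
            }
        }
    }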

The next step is an edge renderer (wireframe), then a face renderer. We'll see how well the performance holds up. Obviously the ideal case is a high-quality raytrace in real time with shadows, reflections and more, but let's not kid ourselves.

Tuesday 23 February 2010

Image Transform Tool

I've just been playing around with this little class made by Ruben Swieringa.

It allows any image to be transformed by providing any four corner points. In the demo I used an animation of mine (Strange Attractors).

Click on the image to begin, then try dragging the corners around. I'll see if I can come up with any other experiments using this neat class. Any bitmap data can be used, so videos, images and animations can all be dragged around in three dimensions...
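
If you're wondering how this kind of four-corner distortion works underneath, here's a rough sketch (not Ruben's code): from Flash Player 10 you can map a bitmap onto an arbitrary quadrilateral with Graphics.drawTriangles. The function name below is my own.

    import flash.display.BitmapData;
    import flash.display.Sprite;
    import flash.geom.Point;

    // Not Ruben Swieringa's class, just the basic idea: texture a quad
    // defined by four draggable corner points using two triangles.
    function drawCornerPin(target:Sprite, texture:BitmapData,
                           tl:Point, tr:Point, bl:Point, br:Point):void {
        var vertices:Vector.<Number> = Vector.<Number>([tl.x, tl.y, tr.x, tr.y,
                                                        bl.x, bl.y, br.x, br.y]);
        // uv coordinates: which corner of the texture each vertex maps to
        var uvData:Vector.<Number> = Vector.<Number>([0, 0, 1, 0, 0, 1, 1, 1]);
        var indices:Vector.<int> = Vector.<int>([0, 1, 2, 1, 3, 2]);
        target.graphics.clear();
        target.graphics.beginBitmapFill(texture, null, false, true); // smoothing on
        target.graphics.drawTriangles(vertices, indices, uvData);
        target.graphics.endFill();
    }

Two flat triangles produce a visible fold along the diagonal for extreme corner positions, which is why a proper class subdivides the quad (or supplies perspective-correct uvt values) rather than stopping here.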

Thursday 18 February 2010

Face Recognition Algorithms

Over the last few days I have been looking at various algorithms and methods used in face recognition. The first method I looked at uses a colour space known as TSL (closely related to HSL - hue, saturation, lightness). This colour space was developed with the intention of being far closer to the way the human brain perceives colour than RGB and other computationally convenient systems. Taking a webcam stream and converting every pixel into TSL colour space with [R,G,B] -> [T,S,L], I found that although in certain lighting conditions the system can differentiate between skin colour and other surfaces, the method is far from ideal. For example, the cream walls in my house can often be identified as skin colour, which clearly shows an issue. It is also clear from the images that areas of my face that are in shade are not recognised as skin. These issues are best addressed using methods that do not depend on colour.
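
For reference, the per-pixel test boils down to something like the sketch below. The TSL conversion is a standard published formulation; the skin thresholds shown are illustrative guesses rather than the exact values I used.

    // Rough sketch of an RGB -> TSL skin test. Thresholds are illustrative only.
    function isSkin(rgb:uint):Boolean {
        var r:Number = (rgb >> 16) & 0xFF;
        var g:Number = (rgb >> 8) & 0xFF;
        var b:Number = rgb & 0xFF;
        var sum:Number = r + g + b;
        if (sum == 0) return false;

        // normalised chromaticities, shifted so that pure grey sits at the origin
        var rp:Number = r / sum - 1 / 3;
        var gp:Number = g / sum - 1 / 3;

        var S:Number = Math.sqrt(9 / 5 * (rp * rp + gp * gp));
        var T:Number;
        if (gp > 0)      T = Math.atan(rp / gp) / (2 * Math.PI) + 0.25;
        else if (gp < 0) T = Math.atan(rp / gp) / (2 * Math.PI) + 0.75;
        else             T = 0;
        // L would be the usual luma: 0.299 * r + 0.587 * g + 0.114 * b

        return T > 0.3 && T < 0.45 && S > 0.05 && S < 0.35; // placeholder thresholds
    }

Run over every pixel of a webcam frame (via BitmapData.getPixel), this is exactly the kind of test that lights up cream-coloured walls, hence the caveats above.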

In my research I found two viable methods for face recognition: eigenfaces and fisherfaces.

Eigenfaces seemed to be the more commonly used method, so I decided to follow that route first (albeit roughly - as I'm sure you all know by now, I like to do things my own way when I program). In order to recognise a face in an image, the computer has to be trained to know what a face looks like. The first step involved writing a class to import .pgm files from the CMU face database. Here is a sample of what the faces looked like when imported. All of the images are in the frontal pose, with similar lighting conditions and varying facial expressions.
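
The importer itself is nothing exotic. The sketch below (not the exact class) handles the simple binary "P5" flavour of PGM, which is enough for these files, and ignores the '#' comment lines a fuller reader would have to cope with.

    import flash.utils.ByteArray;

    // Minimal binary PGM (P5) reader: returns { width, height, pixels } with
    // greyscale values 0-255. Assumes no comment lines in the header.
    function readPGM(bytes:ByteArray):Object {
        bytes.position = 0;
        var tokens:Array = [];
        var token:String = "";
        // header is four whitespace-separated tokens: "P5", width, height, maxval
        while (tokens.length < 4) {
            var c:String = String.fromCharCode(bytes.readUnsignedByte());
            if (c == " " || c == "\n" || c == "\r" || c == "\t") {
                if (token.length > 0) { tokens.push(token); token = ""; }
            } else {
                token += c;
            }
        }
        var width:int = int(tokens[1]);
        var height:int = int(tokens[2]);
        var pixels:Vector.<uint> = new Vector.<uint>(width * height, true);
        for (var i:int = 0; i < width * height; i++) {
            pixels[i] = bytes.readUnsignedByte(); // raw greyscale bytes follow the header
        }
        return { width: width, height: height, pixels: pixels };
    }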

The next three steps involved creating an average face, then taking the resulting image and calculating its vertical and horizontal gradients. The average face simply sums the pixel values at each location across every face and divides by the number of faces. To calculate a gradient, the difference between two adjacent pixels is taken in either the vertical or the horizontal direction. Computers find it easy to pick out vertical and horizontal lines (I have already written some basic shape detection software which uses these kinds of algorithms), so I thought it might be a good idea to use these as comparisons with candidate faces. I planned on using a kind of probability test, with a threshold on the likelihood that any part of the image is a face, comparing it against the mean face, the horizontal gradient face, and the vertical gradient face.
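
In code the steps are short. The sketch below works on the greyscale pixel vectors produced by the PGM reader; the function names are my own shorthand, not the real class.

    // Average face: per-pixel mean over every training image.
    function meanFace(faces:Vector.<Vector.<uint>>, w:int, h:int):Vector.<Number> {
        var mean:Vector.<Number> = new Vector.<Number>(w * h, true);
        for (var i:int = 0; i < w * h; i++) {
            var sum:Number = 0;
            for each (var face:Vector.<uint> in faces) sum += face[i];
            mean[i] = sum / faces.length;
        }
        return mean;
    }

    // Horizontal gradient: difference between horizontally adjacent pixels.
    // (The vertical version differences pixels one row apart instead.)
    function horizontalGradient(img:Vector.<Number>, w:int, h:int):Vector.<Number> {
        var grad:Vector.<Number> = new Vector.<Number>(w * h, true);
        for (var y:int = 0; y < h; y++) {
            grad[y * w] = 0; // no left-hand neighbour in the first column
            for (var x:int = 1; x < w; x++) {
                grad[y * w + x] = img[y * w + x] - img[y * w + x - 1];
            }
        }
        return grad;
    }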

The three faces found for this database are shown below. Clearly one could find the mean face of all people wearing glasses, or of all men, or of all women, and this would affect its final appearance. It could therefore, in theory, be simple to build gender testing into webcam applications (assuming a complete lack of androgyny, which there clearly is not...), but a probabilistic approach could still be taken.

The mean face looks incredibly symmetric and smooth. This is perfection, folks, and it's kind of frightening! The idea behind using these for face recognition is relatively simple. Scan across an image, taking the difference between an overlaid mean face and the region of the image being scanned. If the difference is below a threshold, the images are similar, which means it is likely that there is a face where you are checking. To make sure it really is a face, take the horizontal and vertical gradients of that region and compare them with the gradients of the mean face. If those are also similar to within a certain threshold, it is very likely you have found a face!
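
Expressed as code, the scanning step looks something like this sketch. The step size and threshold are placeholders, and in practice you would run it three times - once against the mean face and once against each gradient face - only accepting windows that pass all three tests.

    import flash.geom.Point;

    // Slide the mean-face template over a greyscale image and flag windows whose
    // mean absolute difference falls below a threshold.
    function findFaceCandidates(image:Vector.<Number>, imgW:int, imgH:int,
                                template:Vector.<Number>, tmpW:int, tmpH:int,
                                threshold:Number):Vector.<Point> {
        var hits:Vector.<Point> = new Vector.<Point>();
        for (var oy:int = 0; oy <= imgH - tmpH; oy += 4) {
            for (var ox:int = 0; ox <= imgW - tmpW; ox += 4) {
                var diff:Number = 0;
                for (var y:int = 0; y < tmpH; y++) {
                    for (var x:int = 0; x < tmpW; x++) {
                        diff += Math.abs(image[(oy + y) * imgW + ox + x]
                                         - template[y * tmpW + x]);
                    }
                }
                if (diff / (tmpW * tmpH) < threshold) hits.push(new Point(ox, oy));
            }
        }
        return hits;
    }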

I'll come back to you when I have some working flash files and source code!

Tuesday 9 February 2010

Three Dimensional Projection

I came across a very interesting video recently. A design agency from the Netherlands called NuFormer Media have mapped various famous buildings in three dimensions. Using the three dimensional data, images can be projected onto the buildings with remarkable accuracy. The results are amazing and quite hypnotising; I especially like the crumbling building at about 36 seconds in. Just watch the video below!

Thursday 4 February 2010

Followers: 20,000 particles

I've been playing around with vectors and decided to see how many particles I could get to follow my mouse using bitmap data. With 300,000 particles I clocked around 2fps, so I added a few more effects (colours, trigonometry for the following, etc.) and found the simulation runs at maximum frame rate up until around 20,000 particles. Not bad at all, and some of the effects look really pretty! Here are some screenshots, or just click here to run the app. You can use the slider at the bottom to change the number of particles to anything between 20 and 20,000.
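
The core of it is nothing fancy; a sketch of the idea is below. The constants, colour and class name are illustrative, and the real version adds the slider and the extra effects.

    package {
        import flash.display.Bitmap;
        import flash.display.BitmapData;
        import flash.display.Sprite;
        import flash.events.Event;

        public class Followers extends Sprite {
            private var canvas:BitmapData;
            private var px:Vector.<Number> = new Vector.<Number>();
            private var py:Vector.<Number> = new Vector.<Number>();
            private var vx:Vector.<Number> = new Vector.<Number>();
            private var vy:Vector.<Number> = new Vector.<Number>();

            public function Followers(count:int = 20000) {
                canvas = new BitmapData(550, 400, false, 0x000000);
                addChild(new Bitmap(canvas));
                for (var i:int = 0; i < count; i++) {
                    px.push(Math.random() * 550); py.push(Math.random() * 400);
                    vx.push(0); vy.push(0);
                }
                addEventListener(Event.ENTER_FRAME, step);
            }

            private function step(e:Event):void {
                canvas.lock();
                canvas.fillRect(canvas.rect, 0x000000);
                for (var i:int = 0; i < px.length; i++) {
                    // accelerate each particle towards the mouse using the angle to it
                    var angle:Number = Math.atan2(mouseY - py[i], mouseX - px[i]);
                    vx[i] = (vx[i] + Math.cos(angle) * 0.5) * 0.95; // 0.95 acts as drag
                    vy[i] = (vy[i] + Math.sin(angle) * 0.5) * 0.95;
                    px[i] += vx[i]; py[i] += vy[i];
                    canvas.setPixel(px[i], py[i], 0x66CCFF);
                }
                canvas.unlock();
            }
        }
    }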

The third picture above uses a blur filter on the bitmap data; some of the visuals created with this version of the program are really beautiful. Check it out by clicking the image!
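
The trail effect in that version comes from not clearing the canvas to black each frame; instead you blur and fade what's already there, roughly like this (values illustrative, canvas as in the sketch above):

    import flash.filters.BlurFilter;
    import flash.geom.ColorTransform;
    import flash.geom.Point;

    // Instead of canvas.fillRect(...) each frame, soften and darken the previous frame
    // so the freshly plotted particles leave glowing trails behind them.
    canvas.applyFilter(canvas, canvas.rect, new Point(), new BlurFilter(2, 2));
    canvas.colorTransform(canvas.rect, new ColorTransform(0.9, 0.9, 0.9)); // fade towards black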

Chrome Effect Ray Tracer

Applying a chrome filter to a finished ray trace render creates an amazing effect. Take a look for yourself. I'm in the process of developing online software where you'll be able to create simple 3D scenes and raytrace them with a number of effects, including anaglyph 3D, chrome and a few more! This is my screensaver at the moment:

Tuesday 2 February 2010

Road Generation: First Attempt

I decided to take a look at using Voronoi diagrams to generate road systems. Here are a few very early screenshots! Looks promising.