Wednesday, 19 January 2011

Multi-Color-Space Thresholding

A few months ago I was commissioned to create some background subtraction technology in Flash. The project fell through at the time, as the final product wasn't as stable as I'd hoped it might be. Since the project fell through, I thought I'd share some of my findings to see whether the collective Flash community has any ideas or suggestions on how this kind of technology might be implemented.

The idea of background subtraction is to separate an image into two parts: the foreground and a "static" background. Here is an example. Given the two images on the left, how does one extract the image on the right? You might think this is fairly easy: just look at the difference in color between the two, you might say. Unfortunately it isn't as simple as that.



Firstly, illumination and varying camera brightness cause a problem: not only does the foreground constantly change color, but so does the background. Most web-cams automatically adjust to a white point. You'll notice this if you hold a white sheet of paper up to the web-cam: the background will dim. This means that if you measure the difference in RGB color for each pixel, the whole screen is likely to be completely different from one frame to the next, which is not ideal if you want to separate out smaller regions!
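For reference, the naive per-pixel approach described above might look like this (this is my own sketch, not the original Flash code; `diffMask` and the threshold are illustrative names operating on raw RGBA arrays, as canvas `ImageData` exposes them):

```javascript
// Naive background subtraction: mark a pixel as foreground when the
// RGB distance between the current frame and the stored background
// exceeds a fixed threshold. A global brightness shift (such as the
// web-cam re-adjusting its white point) pushes every pixel over the
// threshold at once, which is exactly the failure described above.
function diffMask(frame, background, width, height, threshold) {
  var mask = new Uint8Array(width * height);
  for (var i = 0; i < width * height; i++) {
    var dr = frame[i * 4]     - background[i * 4];
    var dg = frame[i * 4 + 1] - background[i * 4 + 1];
    var db = frame[i * 4 + 2] - background[i * 4 + 2];
    mask[i] = (dr * dr + dg * dg + db * db > threshold * threshold) ? 1 : 0;
  }
  return mask;
}
```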

The method I chose was to combine multiple color spaces into a new color space that is well suited to this kind of background subtraction scenario. Here is the program I used to determine the optimum subtraction for each color space. Just click on the image to launch it:



Initially a background is captured: the user steps out of the frame and a still bitmap image is taken. Then, for each color space, a comparison is made. If the difference between given pixels is greater than a threshold for any color component (or a combination of the three), the pixel is deemed to be in the foreground. The three color spaces used in this case were YUV, normalized RGB and normalized YUV. The latter I haven't seen in use before, but it gives reasonable results.
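The conversions involved can be sketched like this (the YUV coefficients below are the common BT.601 ones, and the per-space thresholds are illustrative; the exact constants in the original Flash code may differ):

```javascript
// Convert an RGB pixel into the three color spaces used above.
function toYUV(r, g, b) {
  var y = 0.299 * r + 0.587 * g + 0.114 * b;
  return [y, 0.492 * (b - y), 0.877 * (r - y)];   // [Y, U, V]
}
function toNormalizedRGB(r, g, b) {
  var sum = (r + g + b) || 1;          // avoid divide-by-zero on black
  return [r / sum, g / sum, b / sum];  // chromaticity only, brightness removed
}
// "Normalized YUV": normalize first, then convert, so the components
// become largely independent of overall illumination.
function toNormalizedYUV(r, g, b) {
  var n = toNormalizedRGB(r, g, b);
  return toYUV(n[0], n[1], n[2]);
}
// A pixel is deemed foreground if ANY component in ANY space differs
// from the background by more than that space's threshold.
function isForeground(fg, bg, thresholds) {
  var spaces = [toYUV, toNormalizedRGB, toNormalizedYUV];
  for (var s = 0; s < spaces.length; s++) {
    var a = spaces[s](fg[0], fg[1], fg[2]);
    var c = spaces[s](bg[0], bg[1], bg[2]);
    for (var k = 0; k < 3; k++) {
      if (Math.abs(a[k] - c[k]) > thresholds[s][k]) return true;
    }
  }
  return false;
}
```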

Once the image has been thresholded, an alpha mask is applied to the current web-cam frame, and this image is used as the foreground. This allows some very impressive things to be done: the background can be replaced by an image or another video (Apple makes use of this in Photo Booth), but more excitingly we now have two image planes, which means 3D!
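In canvas terms (the original was Flash, so this is only an analogous sketch), applying the mask means writing it into the frame's alpha channel, so that masked-out pixels become transparent when the frame is drawn over the replacement background:

```javascript
// Copy a foreground mask (one byte per pixel) into the alpha channel
// of an RGBA frame: masked-out pixels become fully transparent, so
// compositing the frame over a replacement background (or placing the
// two planes at different depths) "cuts out" the person.
function applyAlphaMask(frame, mask) {
  for (var i = 0; i < mask.length; i++) {
    frame[i * 4 + 3] = mask[i] ? 255 : 0;
  }
  return frame;
}
```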

Here is an example of background replacement:


In the image below you'll see 3D in action (excuse the poor lighting conditions). Just click to launch it, but remember: it probably won't work fantastically.


The 3D angle can be controlled by head position (motion tracking), allowing you to look around the person in front of the camera, which is quite an interesting effect.

The main issues with the project were that lighting conditions are constantly changing and can vary hugely from web-cam to web-cam. There are other, more computationally intensive approaches which Flash may just about be able to handle, but unfortunately my brain cannot. Enjoy the experiment, and let me know how it works with your web-cam!

HTML5 Mandelbrot Explorer

A simple Mandelbrot explorer for HTML5, in JavaScript. I plan on optimizing and extending the functionality of the example when I have a bit less uni work to keep up with.
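The core of any such explorer is the escape-time iteration; a minimal version might look like this (function and parameter names are mine, not from the linked example):

```javascript
// Escape-time iteration for the Mandelbrot set: iterate z -> z^2 + c
// and return how many steps it takes |z| to exceed 2, or maxIter if
// it never escapes (i.e. the point is taken to be in the set). The
// count is what gets mapped to a pixel color.
function mandelbrot(cRe, cIm, maxIter) {
  var zRe = 0, zIm = 0;
  for (var n = 0; n < maxIter; n++) {
    var zRe2 = zRe * zRe, zIm2 = zIm * zIm;
    if (zRe2 + zIm2 > 4) return n;   // escaped: |z| > 2
    zIm = 2 * zRe * zIm + cIm;       // imaginary part of z^2 + c
    zRe = zRe2 - zIm2 + cRe;         // real part of z^2 + c
  }
  return maxIter;                    // assumed inside the set
}
```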

Tuesday, 18 January 2011

First Canvas Experiment: BitmapData

Lots of people have been talking about HTML5 and canvas for a while now. As an ActionScript purist I had decided not to venture into the world of incompatibility quite yet, but having seen some of the cool Chrome experiments again recently, I decided to have a little go at writing some canvas toys.

My first experiment is a simple one, just some randomly updating pixels on a screen.

Here is the .js code:
function setPixel(bitmapData, x, y, r, g, b, a) {
    // Each pixel occupies four consecutive bytes: R, G, B, A.
    var index = (x + y * bitmapData.width) * 4;
    bitmapData.data[index]     = r;
    bitmapData.data[index + 1] = g;
    bitmapData.data[index + 2] = b;
    bitmapData.data[index + 3] = a;
}

var element = document.getElementById("bitmap");
var context = element.getContext("2d");
var width = element.width;
var height = element.height;

// Redraw roughly 33 times a second.
setInterval(update, 30);

function update() {
    var bitmap = context.createImageData(width, height);
    for (var i = 0; i < 10000; i++) {
        // Pick a random pixel and give it a random opaque color.
        var x = Math.floor(Math.random() * width);
        var y = Math.floor(Math.random() * height);
        var r = Math.floor(Math.random() * 256);
        var g = Math.floor(Math.random() * 256);
        var b = Math.floor(Math.random() * 256);
        setPixel(bitmap, x, y, r, g, b, 0xff);
    }
    context.putImageData(bitmap, 0, 0);
}

Saturday, 15 January 2011

Transparency and Reflection

I haven't posted anything new on my blog in quite a while. Most of my time has been spent working on a fully functional ActionScript plugin for Gedit, which will (with any luck) improve the development experience for users of Linux.

Anyway, I thought I'd post some eye candy: just a simple transition between two ASTrace renders. I plan on working on a Pixel Bender filter to carry out these effects in real time, a kind of one-track raytracer which optimizes for a single scene with a central sphere. This should be possible with two displacement filters, one for reflection and one for refraction. For now, here is a prototype without full 3D rotation (click to launch, then move the mouse left or right to change reflectivity):
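The displacement-filter idea can be sketched like this (the plan above is for Pixel Bender, so this is just the same technique in JavaScript, matching the other examples on this blog; all names here are mine):

```javascript
// Displacement mapping: for each output pixel, read an offset from a
// displacement map and sample the source image at that offset. Two
// such passes, with maps precomputed from the sphere's surface
// normals (one for reflection, one for refraction), approximate a
// raytraced sphere in real time without tracing any rays per frame.
function displace(src, dispX, dispY, width, height, scale) {
  var out = new Uint8ClampedArray(src.length);
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var i = y * width + x;
      // Offsets are stored as 0..255, centred on 128 (no displacement).
      var sx = Math.min(width - 1, Math.max(0, x + Math.round((dispX[i] - 128) * scale)));
      var sy = Math.min(height - 1, Math.max(0, y + Math.round((dispY[i] - 128) * scale)));
      var j = (sy * width + sx) * 4;
      out[i * 4]     = src[j];
      out[i * 4 + 1] = src[j + 1];
      out[i * 4 + 2] = src[j + 2];
      out[i * 4 + 3] = src[j + 3];
    }
  }
  return out;
}
```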