title: FaceDOM

FaceDOM uses Kyle McDonald's ofxFaceTracker to stream the points of a person's face to a web page, where DOM elements are moved into the shape of the face.
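The browser side of that idea could be sketched roughly like this. Everything here is illustrative, not the actual FaceDOM code: I'm assuming the tracker points arrive normalized to the 0..1 range, and the element IDs (`face-el-0`, etc.) are made up for the example.

```javascript
// Hypothetical sketch: map a normalized face point (x, y in 0..1, as it
// might arrive from a Node.js bridge over a WebSocket) to pixel
// coordinates for an absolutely positioned DOM element.
function pointToPixels(point, viewport) {
  return {
    left: Math.round(point.x * viewport.width),
    top: Math.round(point.y * viewport.height),
  };
}

// In the browser, each incoming frame of points could then be applied
// to the page, something like:
//
// facePoints.forEach(function (p, i) {
//   var el = document.getElementById("face-el-" + i);
//   var pos = pointToPixels(p, { width: innerWidth, height: innerHeight });
//   el.style.position = "absolute";
//   el.style.left = pos.left + "px";
//   el.style.top = pos.top + "px";
// });
```

The DOM part is commented out because it only runs in a browser; the coordinate mapping itself is plain JavaScript.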

This past week in Kyle's Appropriating New Technologies class, we were asked to do something with face detection. Ever since seeing Greg Borenstein send Kinect skeleton data to a web browser last fall, I've wanted to experiment with Node.js and with sending the browser things that don't traditionally belong there.

This project is also a response to Kyle's People Staring At Computers. We stare at our computers a lot, yet they don't respond to our stares. They react to mouse clicks and keystrokes, but reading an article on the web is still largely non-interactive.

FaceDOM is currently a proof of concept. I plan to add more features, finer detail, and responses to facial gestures such as an open mouth or closed eyes. Another important addition will be dynamically parsing the page; right now, the elements chosen to form my face are hard-coded with the appropriate CSS classes.

The source code is available on GitHub here.