By using the openFrameworks library, we can use the XBOX Kinect to collect data and export it to our cube. Documentation and articles on installing oF and the Kinect add-on can be found at openframeworks.cc.
Since the Kinect can track the skeletal form of a person, why not use that data and project the user inside the cube? Aside from its RGB camera, the Kinect measures depth using an infrared laser projector paired with an infrared camera, which tells it how far away everything in front of it is. This lets us give the projected skeleton volume, and thus more character for the audience to enjoy. By simply adding a second Kinect, we should be able to merge the two depth images so that a full 3D object can be projected inside our LED cube. Point cloud data can be collected from the Kinect, and if we program openFrameworks correctly, we should be able to do something like the following video, but in real life...and with better music.
I have already installed and dabbled with openFrameworks a bit. I would like to start playing around with a Kinect and my computer, but until I obtain one I'll continue researching and coding how the collected data can be exported through serial.