ZVID Abstract for
Hypergraphics
Andy Quitmeyer
Project zvid.py
Concept and Goals
______The goal of my individual project is to build a 3-d
"video cube". The first phase of the project will be to learn how
to program basic geometric shapes. Next I will learn to apply textures and
images to these shapes. Following that, I will have to uncover how to apply a
texture of canned (pre-recorded) video to these shapes. Hopefully this knowledge will lead
to the ability to pipe live video onto the sides of
the cube, with the option of each side responding to a different camera. From
there I would experiment with the number and placement of cameras, with
various types of shapes, and possibly with morphing (homotoping)
from one shape to another while simultaneously displaying the live video.
Process
______To begin my research I first learned how to
perform basic UNIX commands, and the bare bones of programming in Python. In
this environment I learned how to create and manipulate basic 3-d primitive
objects. I could apply transformations to the different fields of the primitives,
from color to shape to size and position. From here I immediately thought
that the creation of my video cube was only one command away, but at that point
I did not know that there was a difference between the Python I had been
programming in and the szg programming I would need, nor that
texture mapping consists of more than just typing in a command
telling the computer which texture to use.
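The kinds of transformations described above can be sketched as plain vertex math, independent of any framework (szg or OpenGL would handle the actual drawing; the function names here are my own illustrations, not szg calls):

```python
# Unit cube centered at the origin: 8 corners at (+/-1, +/-1, +/-1).
CUBE = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def scale(verts, s):
    """Uniformly scale every vertex by s (changes the cube's size)."""
    return [(s * x, s * y, s * z) for x, y, z in verts]

def translate(verts, dx, dy, dz):
    """Shift every vertex by (dx, dy, dz) (changes the cube's position)."""
    return [(x + dx, y + dy, z + dz) for x, y, z in verts]

# A half-size cube lifted two units up the y-axis:
small_high_cube = translate(scale(CUBE, 0.5), 0, 2, 0)
```

In a real szg program these new vertex positions would be fed back into the primitive's fields, alongside its color and other attributes.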
______I have been researching how to apply texture maps to
shapes, and discovered that there are numerous mathematical methods of
wrapping a flat image onto a surface; most of the programming seemed
quite out of my league. I did find solace in the fact that this warped
mapping really only applies to wrapping an image around a curved three-dimensional
shape, and since each image will cover only one flat side of a cube I should be
okay. As I ran into more problems with
my project, I learned that if I gave up for now on the idea of six separate
windows and found an easy way to map at least a single image onto the cube, I
would be able to work out more complicated versions of the project at a later
time. Having settled on that plan, I began to study the szg
documentation and previously created code such as cosmos.py to construct szg
objects with textures.
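The reassuring point above — that a flat image on a flat face needs no warped mapping — amounts to a simple planar projection. A minimal sketch, assuming a cube spanning -1..1 on each axis (the function name is my own):

```python
def face_uv(x, y):
    """Texture coordinates for a point on the front face (z = +1) of a
    cube spanning [-1, 1] on each axis: a straight linear map, with no
    warping. (0, 0) is one corner of the image, (1, 1) the opposite."""
    return ((x + 1) / 2.0, (y + 1) / 2.0)

# The four corners of the face land exactly on the image's four corners:
corners = [(-1, -1), (1, -1), (1, 1), (-1, 1)]
uvs = [face_uv(x, y) for x, y in corners]
# -> [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

Those (u, v) pairs are exactly what the texture-mapping call needs for each vertex of the face, which is why a single flat image per side stays so simple.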
______Once I managed to get a texture in place on my cube, I
was able to use skills I had previously acquired in video production.
Using a program such as Final Cut Pro, Adobe After Effects, or Adobe Premiere Pro,
I took raw digital video clips and exported them as a numbered sequence of
.jpg images. This is as simple as loading your video into the program
and then choosing File>Export>Image Sequence or the like, depending on
your precise program (I used After Effects). Each of these images could then
be loaded individually onto my cube, in a loop I created which loads
one image, then the image numerically after it, until it reaches the end
of the pictures, at which point it loops back to the beginning. In
this way I was able to put "canned video" onto the sides of my cube.
______The only barrier between my current project and the live video cube I was hoping for
lies in finding a way to quickly capture and load live images onto my cube.
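The canned-video frame loop described above — load one numbered image, then the next, wrapping back to the first after the last — can be sketched like this (the filename pattern is an assumption; substitute whatever pattern your export program produces):

```python
import itertools

def frame_names(count, pattern="frame%04d.jpg"):
    """Endlessly yield numbered frame filenames in order, looping back
    to the first frame after the last one (canned-video playback)."""
    for i in itertools.cycle(range(count)):
        yield pattern % i

# Playing back a 3-frame sequence: after frame 2 it wraps to frame 0.
player = frame_names(3)
first_five = [next(player) for _ in range(5)]
# -> ['frame0000.jpg', 'frame0001.jpg', 'frame0002.jpg',
#     'frame0000.jpg', 'frame0001.jpg']
```

In the actual program, each yielded filename would be handed to the texture loader once per display frame.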
So far I have not been able to do this, but my main problems have been not in
programming but in the logistics of finding compatible camcorders,
software, and computers. The idea is to use a program which is
normally used for time-lapse photography, and set it to capture a picture
every 1/30th of a second (corresponding to a standard television
signal). This program would also give each picture a standard name with a
sequential number attached. My program could of course then read in these
pictures and load them in the proper sequence, with minimal delay between
reality and display.
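If the time-lapse capture program does deposit sequentially numbered stills into a directory, the display side only needs to keep grabbing the newest one. A sketch under that assumption (the directory layout and naming scheme are hypothetical):

```python
import os

def newest_frame(directory, ext=".jpg"):
    """Return the path of the highest-numbered capture in `directory`,
    or None if nothing has been written yet. Relies on the capture
    program using zero-padded names so lexical order matches numeric
    order."""
    frames = sorted(f for f in os.listdir(directory) if f.endswith(ext))
    return os.path.join(directory, frames[-1]) if frames else None
```

The display loop would call this each time it needs a fresh texture, silently skipping frames whenever capture outpaces display, which keeps the lag between reality and display minimal.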