What is OSCeleton?
“As the title says, it’s just a small program that takes Kinect skeleton data from the OpenNI framework and spits out the coordinates of the skeleton’s joints via OSC messages. These can then be used in your language / framework of choice.”
It works straight out of the box with Animata and many other applications, but the OSC formatting is not compatible with Quartz Composer. So I made a little Max/MSP patch that converts the messages into a format QC understands. I will try to write a tutorial on how to get the whole thing running pretty soon, but here is a quick rundown.
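For the curious: the conversion itself is simple. As far as I can tell, Quartz Composer’s OSC receiver chokes on the string argument in the joint messages, so one fix is to fold the joint name into the OSC address and pass only the numbers along. Here’s the idea as a rough Python sketch (the exact OSCeleton argument list of joint name, user ID, x, y, z is an assumption here, so double-check against the version you are running):

```python
def to_qc(address, args):
    """Fold a string-tagged OSCeleton joint message into a numbers-only
    message that Quartz Composer's OSC receiver can digest.
    Assumed input format: /joint <name> <user_id> <x> <y> <z>."""
    if address == "/joint":
        name, user_id, x, y, z = args
        # Move the joint name into the address, keep only the coordinates.
        return "/joint/%s" % name, [float(x), float(y), float(z)]
    # Pass anything else through untouched.
    return address, list(args)

# Example: a head-joint message for user 1 becomes /joint/head with 3 floats.
print(to_qc("/joint", ["head", 1, 0.5, 0.4, 1.8]))
```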
Last week, I took part in the SenseStage workshop at the Hexagram BlackBox in Montreal (http://sensestage.hexagram.ca/workshop/introduction/). The workshop was designed to bring together people from different disciplines (dance, theatre, sound, video, light) and have them collaborate using interactive technologies.
During the workshop, there were tons of sensors – light, floor pressure, accelerometers, humidity etc. – all connected to little microcontrollers, which in turn were wirelessly connected to a central computer that gathered all the data and sent it on as OSC to any client connected to the network.
Basically, we had 5 days to complete an interactive performance sequence using the data gathered by the sensor nodes. This is what our group came up with.
We call it Treasure Islands, and it’s a slightly twisted interactive performance/game where a girl finds herself in a weird world, floating on a donut in the middle of the ocean with a mermaid talking in her head. She has to travel to all of the different islands around her and collect sounds from them in order to open a portal into this strange dream world for all her friends. Sounds like a good concept, doesn’t it? Check out the video and you’ll see that it actually makes sense.
There was a lot of sensor data available, but we ended up using just the pressure sensors on the floor and camera tracking. With a bit more time we could have evolved the world to be more responsive to the real world, but I’m pretty happy with the results we were able to achieve in such a short time. Our group worked really well together, which is not always the case in such collaborative projects.
Sarah Albu – narrative, graphics, performance
Matt Waddell – sound, programming
Me – animation, programming
And I guess I need to include some more technical details for all the people who check my site for that kind of stuff (I know you’re out there).
We used camera tracking with tbeta to track Sarah, and used that data to move the donut and make the environment respond to her movements. All of the real-time animation was done in Animata, which really is a perfect tool for something like this, because it lets me animate things really fast without compromising on quality. Max was used as the middleman, converting the TUIO messages and the OSC from the sensor network into the kind of messages Animata needs to hear.
We sewed some IR LEDs on the hat to help with tracking in a dark space.
Each island is an instrument that you can play. Stepping on a certain area would trigger loops, add effects to your voice etc. Matt could explain the sound part better than me, but the video should make it pretty clear. It doesn’t reproduce the effect of the quadraphonic sound system we used, though. Some visual cues in the animation were also triggered by her movements on the sensors.
That’s pretty much it. If you have any questions, leave a comment and I’ll try to get back to you as soon as possible.
My workflow in creating this animation was pretty unorthodox. Almost all of the character animation was recorded in real time with a custom setup involving Max/MSP and Animata. I created a patch in Max to control the animation in Animata with the sound of the interviews. I also had some sliders and buttons to trigger things like blinking and arm movements. I used After Effects for compositing and some additional animation.
The second installment of my Mixed Up series has now seen the light of day. Let me introduce you to the Mixmaster 1200.
The Mixmaster 1200 is a wireless scratching device for the turntablist who prefers to deliver his/her scratches like a 5-star chef. As you can see, the Mixmaster does not have any beaters attached to it. That’s because it has small laser-powered plasma emitter beaters that heat up the airwaves around the device itself, producing its unique-sounding aural explosions.
Have you ever wondered what a banana mixed with a strawberry sounds like? Or how about kiwi-watermelon puree? Watch this video and you will find out.
I found this old blender at a flea market and noticed that the names of the different blending modes are very similar to the terminology used in DJing. So I decided to turn this kitchen appliance into a DJ mixer.
The audio tracks are triggered by inserting different fruits into the blender. The buttons on the front panel control the mixing modes and you also have two different types of transformer switches for cutting the sound in and out.
If you haven’t heard of Animata yet, you should head over to http://animata.kibu.hu/index.html and educate yourself. Download the software and go through the tutorials. I also recommend reading through the mailing list; it has tons of useful information.
Controlling Animata with a mouse and doing real-time animations is pretty cool by itself, but Animata really shows its true potential when you control it with OSC. Then you can start doing something like this:
There is a Processing example available from the Animata site that controls Animata with sound input.
HOW DOES IT WORK?
Unfortunately, the Kitchen Budapest guys are busy improving the software, and there isn’t really any good documentation available on the OSC messages needed to control Animata. I’ll try to go through all of the available messages and give you some examples in Pure Data and Max/MSP.
I assume that you know something about OSC, Pure Data and Max/MSP, because I don’t want to write a huge post explaining everything from the beginning. I’m also assuming that you have spent some time learning the basics of Animata.
One more important thing: I’m using revision 35 of Animata, compiled from the svn repository. NOTE! YOU WILL NEED TO COMPILE ANIMATA FROM THE SOURCE CODE TO MAKE THE /LAYERPOS MESSAGES WORK. IT IS NOT AVAILABLE IN THE BINARY VERSION ON THE ANIMATA WEBSITE. All the other messages I’m showing here do work with Animata 003, which is available from the site. OK, let’s start.
All incoming messages to Animata must be sent to port 7110. The “name” in the message refers to the name of the joint, bone or layer.
Moving a joint, x and y are float values:
/joint name x y
Controlling the length of a bone, value is a float between 0 and 1:
/anibone name value
Switching a layer on and off, on_off is 0 or 1:
/layervis name on_off
Setting the transparency of a layer, value is a float between 0 and 1:
/layeralpha name value
The next two messages require the svn version:
Moving a layer in absolute mode, x and y are the position coordinates as float values:
/layerpos name x y
Moving a layer in relative mode, x and y are the number of pixels you want the layer to move from its current position:
/layerdeltapos name x y
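The Pd and Max examples cover the patching side, but any environment that speaks OSC works. As a reference, here’s a rough Python sketch that builds these messages by hand and fires a couple of them at Animata over UDP (no OSC library needed; the joint and layer names are just examples, so substitute the names from your own scene):

```python
import socket
import struct

def osc_string(s):
    """OSC strings are null-terminated and zero-padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address, *args):
    """Build a raw OSC message; handles only string and float arguments,
    which is all the Animata messages above need."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, str):
            tags += "s"
            payload += osc_string(a)
        else:
            tags += "f"
            payload += struct.pack(">f", float(a))
    return osc_string(address) + osc_string(tags) + payload

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
animata = ("127.0.0.1", 7110)  # Animata listens on port 7110
sock.sendto(osc_message("/joint", "head", 320.0, 240.0), animata)  # move a joint
sock.sendto(osc_message("/layeralpha", "arm", 0.5), animata)       # 50% transparent
```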
PURE DATA TO ANIMATA
I’m not really comfortable with Pure Data, but I was able to get all of the messages working except /layervis. I believe this is because Animata is very picky, looking for real boolean values, while Pure Data sends integers when sending 0 or 1. UPDATE: This was just fixed by the Kitchen Budapest guys. The /layervis message works now. I have updated the code, so please download the .zip again. You need to compile Animata from the svn again for this to work.
There is a little problem: Animata needs float values in the messages, and Pure Data doesn’t have a separate number box for floats, so you have to make sure the number you are sending is never a whole number. I did this by multiplying the values by 0.999. If someone knows a better way, let me know.
MAX/MSP 5 TO ANIMATA
It’s pretty much the same deal with Max/MSP: the /layervis message doesn’t work here either. UPDATE: This was fixed in the svn version (revision 36 and later). My Max patch has been updated, so please download it again.
I didn’t add /layerdeltapos to the example patches, because it’s really easy to lose your layers somewhere outside the window.
SENDING OSC FROM ANIMATA
There is also an option to send OSC messages from Animata; for this you need the svn version. It works by simply ticking the small OSC box on the Skeleton tab. The messages are sent through port 7111, and the format is: /joint name x y
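If you want to catch those joint messages outside of Max or Pd, the format is simple enough to parse by hand. A minimal Python sketch, assuming plain string/float/int arguments and no OSC bundles:

```python
import socket
import struct

def read_osc_string(data, i):
    """Read a null-terminated, 4-byte-padded OSC string starting at offset i."""
    end = data.index(b"\x00", i)
    s = data[i:end].decode("ascii")
    i = end + 1
    return s, i + ((4 - i % 4) % 4)  # skip the padding

def parse_osc(packet):
    """Parse one OSC message (no bundles) into (address, [args])."""
    address, i = read_osc_string(packet, 0)
    tags, i = read_osc_string(packet, i)
    args = []
    for t in tags[1:]:  # skip the leading ","
        if t == "s":
            s, i = read_osc_string(packet, i)
            args.append(s)
        elif t == "f":
            args.append(struct.unpack(">f", packet[i:i + 4])[0])
            i += 4
        elif t == "i":
            args.append(struct.unpack(">i", packet[i:i + 4])[0])
            i += 4
    return address, args

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 7111))  # Animata sends its joint messages here
# address, (name, x, y) = parse_osc(sock.recvfrom(1024)[0])
```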
I’ve made a plugin for Quartz Composer that makes it really easy to control Animata from Quartz Composer. Check it out over here.
HOW ABOUT OPENFRAMEWORKS, PROCESSING ETC.?
Basically, any software or programming environment that is able to send OSC messages should be able to communicate with Animata.
Not many people know this, but Concordia University in Montréal also has a toon department deep inside the maze that is known as the EV building. The university officials would prefer to keep this knowledge a secret, since the brutal self-torture that goes on inside the faculty would shock many people. In the same way that the average Joe or Jane does not want to know where the meat inside his/her burger comes from, no one really wants to know the shocking truth behind your Saturday-morning dose of laughter.
When watching cartoons, people rarely think about the amount of time and dedication the cartoon characters spend on perfecting their sketches and routines. Unfortunately, consumers love to see toons getting hurt. There is just something special about dropping heavy anvils on the heads of unsuspecting cartoon characters that appeals to the majority of viewers.
Like in all fields of entertainment, the competition in the cartoon business is also very harsh. You are only as good as your last fall from a huge cliff. That’s why all the aspiring cartoon students at tooniversities across the world practice new and inventive ways of getting themselves hurt.
A group of activists from PETT (People for the Ethical Treatment of Toons) have been able to sneak a spy camera inside the Tooniversity facilities at Concordia University. Because of their brave action, all the dirty secrets inside the Tooniversity will be exposed. Please go to http://tooniversity.originalhamsters.com to find more information and sign a petition to stop this madness.
I’m really interested in stereoscopy, which you might have guessed if you’ve ever seen me running around with my View-Master camera. In my opinion, View-Master is still a superior method for viewing stereoscopic images, but it only does still images. That’s why I wanted to see if I could improve the design and make an interactive View-Master for animations.
This little hybrid between Mickey Mouse and Steve Mann enables you to control and view stereoscopic animations that are animated in real-time.
It’s an old View-Master viewer modified to have ChromaDepth lenses, some custom buttons, an accelerometer, a Bluetooth radio and an Arduino to control it all. I thought about hiding the electronics with bigger ears, but decided not to, because I like the ghetto-cyborg look he’s got going on there.
So how does it work? You look through the viewer at the screen, where you will see some 3-layer Månsteri action in all of its stereoscopic glory. The great thing about ChromaDepth stereoscopy is that it works with basic colors; you don’t need two video channels to achieve a 3D effect. On a dark background, everything that is blue will appear to be in the background and everything that is red will appear to be in the foreground. Colors in the spectrum between blue and red will appear to be somewhere in the middle. If you didn’t understand my explanation, look it up on the interwebs.
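If you like code better than words, the depth-to-color idea can be sketched in a few lines of Python (a rough illustration of the principle, not a calibrated ChromaDepth palette):

```python
import colorsys

def chromadepth_color(depth):
    """Map depth in [0, 1] (0 = foreground, 1 = background) to an RGB triple:
    red for near, blue for far, sweeping through the spectrum in between."""
    depth = min(max(depth, 0.0), 1.0)
    # Sweep the hue from red (0 degrees) to blue (240 degrees).
    hue = depth * (240.0 / 360.0)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))
```

So a character on the middle layer would get painted in greens and yellows, while the background stays blue and the foreground red.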
The accelerometer detects your motion and moves the character on the middle layer, giving the illusion that the character is trying to mimic your movements. You can control the content of the layers with the three buttons on the side of the viewer: button three controls the background, button two the middle layer and button one the foreground. Check out the video and you’ll understand what I mean. If you have ChromaDepth glasses, put them on to see the 3D effect.
The Arduino sends the sensor data and the button states wirelessly via Bluetooth to my computer. The information is parsed in Max/MSP, which in turn sends the data as OSC packets to Animata (my favourite software at the moment). Animata then animates everything in real time and handles the hiding and revealing of the different layers.
If you are interested, I have uploaded the Arduino and Max 5 source code and the Animata scene. It’s all very specific to my setup, but someone might find it useful. Download the source.