Posts Tagged ‘interactive’

Rupriikki Media Museum

Sunday, October 14th, 2012

Rupriikki Media Museum opened their new exhibition Jokapäiväinen mediamme (Our Daily Media) on October 4, 2012. For the past year, I've been working on three interactive installations for the exhibition. The design approach was media archaeological, or interface archaeological: we used old technologies and interfaces – such as the telegraph, analog photography, and rotary dial telephones – as interfaces for delivering digital content and user experiences to the visitors.

I would like to thank the staff of Rupriikki for inviting me to work on this project. The whole process was a collaboration with the museum. I would especially like to thank researchers Niklas Nylund and Outi Penninkangas, and exhibition designer Elina Rantasaari – and of course the construction and technical crew who built the exhibition.

Pimiö [The Darkroom]

The first installation I would like to talk about is the Darkroom. Film photography and darkrooms are far from obsolete, but the process of developing your photographs in a darkroom may be quite unfamiliar to a generation that has only used digital cameras. This installation uses the gestures and artefacts of a real darkroom as the way to interact with it.

The visitor places a blank sheet of photographic paper into the developing tray, and a picture appears on the paper. The picture then turns into a slideshow of other photographs on the same theme. There are currently five different papers, and each displays different photographs from the Photo Archives of Tampere Museums.

The interaction is a simplified version of all the steps required in a real darkroom, but it is enough to trigger memories for anyone who has ever worked in one – or perhaps to inspire someone to take up analog photography. It also does what it is meant to do: serve as an interface for browsing historical photographs.

Technical stuff: each paper has an RFID tag embedded in it, and an RFID reader under the developing tray recognizes each paper and displays the matching content. The reader is attached to an Arduino Uno board, and Quartz Composer displays the projected images.
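For those curious about how the paper recognition could work, here is a minimal sketch of the tag-reading side, assuming an ID-12-style 125 kHz serial RFID reader – the actual reader and the tag-to-photo mapping in the installation may well differ:

```cpp
// Hypothetical tag-reading sketch for an Arduino Uno, assuming an
// ID-12-style serial RFID reader. Tag IDs are forwarded over USB serial,
// where the host (running Quartz Composer) maps them to photo sets.
#include <SoftwareSerial.h>

SoftwareSerial rfid(2, 3); // reader TX -> pin 2; pin 3 unused

void setup() {
  Serial.begin(9600); // to the host computer
  rfid.begin(9600);   // from the RFID reader
}

void loop() {
  // ID-12 frames start with STX (0x02), followed by 10 ASCII hex
  // characters of tag ID (checksum and trailing bytes are ignored here).
  if (rfid.available() && rfid.read() == 0x02) {
    char tag[11];
    for (int i = 0; i < 10; i++) {
      while (!rfid.available()) {} // wait for the next byte
      tag[i] = rfid.read();
    }
    tag[10] = '\0';
    Serial.println(tag); // host picks the matching photo series
  }
}
```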

Sähkötyschat [Telegraph Chat]

There are two telegraph keys in different locations in the exhibition. Visitors can send messages from one location to the other in Morse code. The display shows the message you are writing as well as the incoming telegram sent by someone on the other side of the room.

I started working on this in January 2012. A couple of weeks later, I saw the Tworse Key project by Martin Kaltenbrunner. (Other telegraph + Arduino projects exist too.) Since he had already done the work of converting Morse code to text on the Arduino and released the source under a Creative Commons license, I decided not to reinvent the wheel and based my code on his. In accordance with the license (CC BY-SA), I will release the source as soon as I clean it up a bit.

Technical stuff: Arduino + Quartz Composer. One Mac Mini controlled both of the locations.
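For the curious, here is a minimal sketch of the decoding idea (my own illustration, not the Tworse Key source, which is worth reading in full): time each key press, classify it as a dot or a dash, and look the finished sequence up in a Morse table once the key has been quiet for a while. The pin and the timing thresholds are placeholders:

```cpp
// Hypothetical Morse decoder sketch. Debouncing and word gaps are
// left out for brevity; thresholds would need tuning for a real key.
const int KEY_PIN = 2;               // telegraph key to ground, using the internal pullup
const unsigned long DASH_MS = 150;   // press longer than this counts as a dash
const unsigned long GAP_MS  = 400;   // silence longer than this ends the letter

const char* CODES[26] = {".-", "-...", "-.-.", "-..", ".", "..-.", "--.",
                         "....", "..", ".---", "-.-", ".-..", "--", "-.",
                         "---", ".--.", "--.-", ".-.", "...", "-", "..-",
                         "...-", ".--", "-..-", "-.--", "--.."};

char decode(const String& seq) {
  for (int i = 0; i < 26; i++)
    if (seq == CODES[i]) return 'A' + i;
  return '?'; // unknown sequence
}

String current;
unsigned long pressedAt = 0, releasedAt = 0;
bool down = false;

void setup() {
  pinMode(KEY_PIN, INPUT_PULLUP);
  Serial.begin(9600); // decoded letters go to the host for display
}

void loop() {
  bool pressed = digitalRead(KEY_PIN) == LOW;
  unsigned long now = millis();
  if (pressed && !down) { down = true; pressedAt = now; }
  if (!pressed && down) {            // key released: record dot or dash
    down = false;
    releasedAt = now;
    current += (now - pressedAt > DASH_MS) ? '-' : '.';
  }
  if (!pressed && current.length() && now - releasedAt > GAP_MS) {
    Serial.print(decode(current));   // letter finished, send it onward
    current = "";
  }
}
```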

Haloo? [Hello?]

The third installation is a simple telephone that lets the visitor call various numbers from the past and present, such as Neiti Aika (Miss Time, the speaking clock service) or Juho Holmstén-Heiniö, an inventor from Tampere.

Treasure Islands

Friday, May 29th, 2009

Last week, I took part in the SenseStage workshop at the Hexagram BlackBox in Montreal (http://sensestage.hexagram.ca/workshop/introduction/). The workshop was designed to bring people from different disciplines (dance, theatre, sound, video, light) together to collaborate with interactive technologies.

During the workshop, there were tons of sensors – light, floor pressure, accelerometers, humidity, etc. – all connected to little microcontrollers, which in turn were wirelessly connected to a central computer that gathered all the data and sent it onward as OSC to any client connected to the network.
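To give an idea of what the clients saw, here is a rough sketch of an OSC listener in C++ using the oscpack library. The address pattern, argument layout, and port are invented for illustration; the workshop network had its own scheme:

```cpp
// Hypothetical OSC client using oscpack: prints floor-pressure readings
// broadcast by the central computer. Address and port are assumptions.
#include <cstring>
#include <iostream>
#include "osc/OscReceivedElements.h"
#include "osc/OscPacketListener.h"
#include "ip/UdpSocket.h"

class SensorListener : public osc::OscPacketListener {
protected:
  void ProcessMessage(const osc::ReceivedMessage& m,
                      const IpEndpointName&) override {
    try {
      if (std::strcmp(m.AddressPattern(), "/sensor/pressure") == 0) {
        osc::ReceivedMessage::const_iterator arg = m.ArgumentsBegin();
        int node = (arg++)->AsInt32();    // which sensor node sent this
        float value = (arg++)->AsFloat(); // normalized pressure reading
        std::cout << "node " << node << ": " << value << "\n";
      }
    } catch (osc::Exception& e) {
      std::cerr << "bad packet: " << e.what() << "\n";
    }
  }
};

int main() {
  SensorListener listener;
  UdpListeningReceiveSocket socket(
      IpEndpointName(IpEndpointName::ANY_ADDRESS, 7000), &listener);
  socket.RunUntilSigInt(); // listen until Ctrl-C
  return 0;
}
```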

Basically, we had 5 days to complete an interactive performance sequence using the data gathered by the sensor nodes. This is what our group came up with.

We call it Treasure Islands, and it's a somewhat twisted interactive performance/game in which a girl finds herself in a weird world, floating on a donut in the middle of the ocean with a mermaid talking in her head. She has to travel to the different islands around her and collect sounds from them in order to open a portal into this strange dream world for all her friends. Sounds like a good concept, doesn't it? Check out the video and you'll see that it actually makes sense.

There was a lot of sensor data available, but we ended up using just the floor pressure sensors and camera tracking. With a bit more time, we could have made the virtual world more responsive to the real one, but I'm pretty happy with what we achieved in such a short time. Our group worked really well together, which is not always the case in collaborative projects like this.

Credits:

Sarah Albu – narrative, graphics, performance
Matt Waddell – sound, programming
Me – animation, programming

And I guess I need to include some more technical details for all the people who check my site for that kind of stuff (I know you’re out there).

We used camera tracking with tbeta to follow Sarah, and used that data to move the donut and make the environment respond to her movements. All of the real-time animation was done in Animata, which really is the perfect tool for something like this, because it lets me animate things very fast without compromising on quality. Max was the middleman, converting the TUIO messages and the OSC from the sensor network into the kind of messages Animata understands.

[Photo: the sense hat. We sewed some IR LEDs onto the hat to help with tracking in a dark space.]
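To give a sense of what the Max patch was doing, here is a rough C++ equivalent using oscpack (to stay consistent with the sketch above): it listens for TUIO cursor updates from tbeta and resends them as joint updates for Animata. The joint name, coordinate scaling, and port numbers are assumptions – check your own setup:

```cpp
// Hypothetical TUIO-to-Animata bridge with oscpack, standing in for
// the Max patch. Joint name, scaling, and ports are assumptions.
#include <cstring>
#include "osc/OscReceivedElements.h"
#include "osc/OscPacketListener.h"
#include "osc/OscOutboundPacketStream.h"
#include "ip/UdpSocket.h"

class TuioToAnimata : public osc::OscPacketListener {
public:
  // Animata's OSC port is configurable; 7110 is used here as a guess.
  TuioToAnimata() : animata_(IpEndpointName("127.0.0.1", 7110)) {}

protected:
  void ProcessMessage(const osc::ReceivedMessage& m,
                      const IpEndpointName&) override {
    try {
      if (std::strcmp(m.AddressPattern(), "/tuio/2Dcur") != 0) return;
      osc::ReceivedMessage::const_iterator arg = m.ArgumentsBegin();
      if (std::strcmp((arg++)->AsString(), "set") != 0) return;
      (arg++)->AsInt32();           // session id, unused here
      float x = (arg++)->AsFloat(); // TUIO coordinates are normalized 0..1
      float y = (arg++)->AsFloat();

      // Resend as an Animata joint update, scaled to the scene size.
      char buf[256];
      osc::OutboundPacketStream p(buf, sizeof(buf));
      p << osc::BeginMessage("/joint") << "performer" // joint name in the scene
        << x * 640.0f << y * 480.0f << osc::EndMessage;
      animata_.Send(p.Data(), p.Size());
    } catch (osc::Exception&) { /* ignore malformed packets */ }
  }

private:
  UdpTransmitSocket animata_;
};

int main() {
  TuioToAnimata bridge;
  UdpListeningReceiveSocket tuio(
      IpEndpointName(IpEndpointName::ANY_ADDRESS, 3333), &bridge); // TUIO default port
  tuio.RunUntilSigInt();
  return 0;
}
```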

Each island is an instrument you can play. Stepping on a certain area triggers loops, adds effects to your voice, and so on. Matt could explain the sound design better than I can, but the video should make it pretty clear – though it doesn't reproduce the quadraphonic sound system we used. Some visual cues in the animation were also triggered by her movements on the sensors.

That’s pretty much it. If you have any questions, leave a comment and I’ll try to get back to you as soon as possible.