Improv Remix was my PhD thesis project, where I created an interface for improv performers to record and play back video of the stage, all from the stage itself. The nature of ad-hoc impulsive recontextualization of the stage was inspired by the longform improv structure The Harold described by Del Close and Charna Halpern in their book Truth in Comedy.
Improv Remix was developed through an iterative process with performers over three years, and was exhibited in various theatres throughout Toronto.
Orienting a performer relative to a projection so they can act with it is an interesting problem.
In normal conversation people face each other directly. However, theatre performers intentionally “cheat out”, meaning they turn their torsos slightly towards the audience.
The simplest setup is to have a projection behind the performer. However, performers struggle to face the projection and cheat out to the audience at the same time. This necessitates a projection in front of the performer.
Turns out you can do this with a scrim (porous fabric screen) and careful lighting.
To have a projected image next to a live performer in a theatre environment, you either need a very powerful projector ($$$) or careful lighting. You must light only the area of the stage where the performer will be, and ensure that no light gets on the screen.
Such tight lighting yields nice consequences! The performer has a great deal of control of whether they are lit or not.
We use a Microsoft Kinect (v2) for gestural interaction. Since the Kinect cannot see through the scrim, we need to place it behind the performers. This yields an interesting design constraint: small gestures in front of the body are impossible to detect.
Determining when a gesture was intended for the system, rather than just part of regular performance, was a very hard problem. We found performers were loath to have certain gestures banned, and were more likely to make them if they were told not to.
We solved this problem by having different zones: closest to the scrim is interaction, where you interact with the system; a step back is performance, where all the acting takes place; and a further step back out of the light is off-stage, which we used as a global cancel command.
It is very easy to transition between zones by a single step forward or backwards. This was quicker and less cumbersome than having a special delimiter gesture, which can semantically conflict with a character a performer is trying to take on.
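The zone mapping above can be sketched as a simple function of the performer's tracked distance from the scrim. This is an illustrative sketch, not the original implementation; the threshold distances are assumptions.

```python
# Assumed thresholds (metres from the scrim) for illustration only.
INTERACTION_MAX = 1.0   # closer than this: interaction zone
PERFORMANCE_MAX = 3.0   # beyond this, the performer has stepped out of the light

def zone_for(distance_m: float) -> str:
    """Map a performer's distance from the scrim to a named zone."""
    if distance_m < INTERACTION_MAX:
        return "interaction"
    if distance_m < PERFORMANCE_MAX:
        return "performance"
    return "off-stage"
```

A single step forward or back changes `distance_m` enough to cross a threshold, which is what makes the transition quick.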
We use a minimally-intrusive zone visualization projected on the scrim above performers to indicate which zone they are in.
Red indicates the performance zone, whereas green is the interaction zone. (Note: you’ll see a menu appear when the performer is in the interaction zone – this is described in the next section).
To prevent zone bouncing (quick, unintended switching), we apply smoothing and hysteresis to the boundary between zones. Notice the boundary between green and red jumps back as the performer steps forward, and forward as the performer steps back.
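The smoothing-plus-hysteresis idea can be sketched as follows: exponential smoothing suppresses frame-to-frame tracking noise, and after each crossing the effective boundary shifts away from the performer so small jitters cannot flip the zone back. The margin and smoothing factor below are assumed values, not the project's actual tuning.

```python
class HysteresisBoundary:
    """One zone boundary with exponential smoothing and hysteresis (sketch)."""

    def __init__(self, boundary_m: float, margin_m: float = 0.15, alpha: float = 0.3):
        self.boundary = boundary_m   # nominal boundary distance from the scrim
        self.margin = margin_m       # how far the boundary jumps after a crossing
        self.alpha = alpha           # exponential-smoothing factor (1.0 = no smoothing)
        self.smoothed = None
        self.in_front = False        # is the performer on the scrim side of the boundary?

    def update(self, raw_distance_m: float) -> bool:
        # Smooth the raw tracked distance to suppress sensor noise.
        if self.smoothed is None:
            self.smoothed = raw_distance_m
        else:
            self.smoothed += self.alpha * (raw_distance_m - self.smoothed)
        # The effective threshold moves away from the performer's current side,
        # so the boundary "jumps back" as they step forward, and vice versa.
        threshold = self.boundary + (self.margin if self.in_front else -self.margin)
        self.in_front = self.smoothed < threshold
        return self.in_front
```

With a boundary at 1.0 m and a 0.15 m margin, a performer must come closer than 0.85 m to enter the front zone, then step back past 1.15 m to leave it.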
We call the menu that appears around the performer in the interaction zone the Vitruvian Menu (shout-out to Da Vinci). We show the performer’s silhouette in blue to aid their interaction.
At the start of a show, each of these buttons is an empty slot to be filled with a performer's scene.
To activate the buttons, a performer hovers over them for 300 ms, which prevents most accidental activations without feeling sluggish.
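Dwell activation of this kind is typically a small timer per button. A minimal sketch, assuming the 300 ms threshold above (the class and its interface are illustrative, not the original code):

```python
import time

DWELL_SECONDS = 0.3  # the 300 ms dwell threshold

class HoverButton:
    """Fires once when the performer has hovered continuously for the dwell time."""

    def __init__(self):
        self.hover_started = None

    def update(self, is_hovering: bool, now: float = None) -> bool:
        """Call every frame; returns True on the frame the dwell completes."""
        if now is None:
            now = time.monotonic()
        if not is_hovering:
            self.hover_started = None   # any gap in hovering resets the timer
            return False
        if self.hover_started is None:
            self.hover_started = now
            return False
        if now - self.hover_started >= DWELL_SECONDS:
            self.hover_started = None   # fire once, then require a fresh hover
            return True
        return False
```

Resetting the timer whenever hovering breaks is what filters out accidental brush-pasts.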
If a scene has been recorded in one of the slots, it appears green in the Vitruvian Menu as a play button, which behaves like a toggleable checkbox for whether the performer wants to queue that scene and act against it.
Whatever command is queued by the Vitruvian Menu, it only executes once the performer steps back from the interaction to the performance zone. We found this to be much more graceful than commands that executed immediately, and performers felt more in control of the stage.
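The deferred-execution pattern can be sketched as a command queue that flushes only on the interaction-to-performance transition. This is an assumed structure for illustration; the class and method names are hypothetical.

```python
class QueuedCommands:
    """Queue commands in the interaction zone; run them on stepping back (sketch)."""

    def __init__(self):
        self.queued = []
        self.last_zone = None

    def queue(self, command):
        self.queued.append(command)   # e.g. "start recording", "play scene 3"

    def on_zone_change(self, zone: str):
        # Execute only on the interaction -> performance transition,
        # so nothing happens while the performer is still choosing.
        if self.last_zone == "interaction" and zone == "performance":
            for command in self.queued:
                command()
            self.queued.clear()
        self.last_zone = zone
```

Deferring execution to the step-back means the performer commits with a whole-body action rather than a twitch of the hand.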
The performer has queued up a recording, and when they step back, a countdown to the recording starts. While recording, red circles in the upper corners of the scrim inform the performer that they are “on air”. As off-stage is our universal cancel action, the performer simply steps off to stop recording.
The performer has queued up his previous scene for playback. The first frame of the scene fades in underneath a decreasing progress bar to show the live performer where the recorded performer will appear on stage.
A large motivation for video usage in this work is how callbacks and recontextualization are themes in comedy and other forms of art. We should be able to take a performance, disassemble it, and reassemble it into something new. If we want to be serious, this can be for critical examination – but also, it's just fun to mash things up.
Here, we see a performer choosing to play two previous scenes, while also choosing to record his own performance.
Later, another performer can bring that scene back by itself, recontextualizing it in his own creative way.
We are also exploring interface designs to scratch through video using parts of your body. This is an early prototype where we had a dancer explore the sensation of dancing and controlling a video of her own motion.
In this case, the yellow line is a play marker in the video, from the beginning (top), to the end (bottom). When the silhouette of the live performer intersects with the control region, the play marker seeks to the centroid of the silhouette in the region.
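The seek behaviour described above amounts to a linear mapping from the silhouette's centroid within the control region to a frame index. A minimal sketch under assumed geometry (the function name and region parameters are illustrative):

```python
from typing import List, Optional

def seek_frame(silhouette_ys: List[float], region_top: float,
               region_bottom: float, total_frames: int) -> Optional[int]:
    """Map the vertical centroid of silhouette pixels inside the control
    region to a frame index: top of region = first frame, bottom = last."""
    inside = [y for y in silhouette_ys if region_top <= y <= region_bottom]
    if not inside:
        return None   # silhouette does not intersect the control region
    centroid_y = sum(inside) / len(inside)
    t = (centroid_y - region_top) / (region_bottom - region_top)
    return min(int(t * total_frames), total_frames - 1)
```

Because the mapping is to the centroid rather than a single tracked point, a dancer can scrub with an arm, a leg, or her whole body.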
Look for this to be included in the final prototype!
Dustin Freeman (Programming, Interaction Design)
Montgomery C. Martin (Projection Design, Technical Direction)
Special Thanks To
Paul Stoesser (technical director, CDTPS)
The Toronto Improv Theatre Community
Bruce Barton, Daniel Wigdor, Derek Reilly, Ravin Balakrishnan
Tom McGee, Michael Reinhart, Richard Windeyer
Ricardo Jota, Haijun Xia
Members of DGP
John Hancock and Ingrid Varga
Luella Massey Studio Theatre
Centre for Drama, Theatre, and Performance Studies (CDTPS)