(Christian Marclay + Guitar Hero)/2

As previously posted, I had the pleasure of doing some shows with my friend Yan Jun 颜峻 from Beijing recently. As I was digging up some links to introduce him to Boston area friends, I found this clip of a performance we did, together with Beijing-based Bruce Gremo, at the 2008 Shanghai eArts Festival (shot by Amsterdam-based artist Emile Zile, whom I met after the concert; read his account here). We performed at the gracious invitation of Defne Ayas and Davide Quadrio of Arthub Asia, who curated the Final Cut section of the eArts Festival, which took place in and around Xujiahui Park in Shanghai and also featured performances by B6, Aaajiao, Feng Mengbo 冯梦波, Dead J, Alizia Borsari, and Elliott Sharp, among others (Elliott Sharp is featured in the second half of this clip).

Here we’re performing a video score by Christian Marclay entitled Screen Play, which consists of a bunch of black and white footage from old movies, mostly evocative of sound in some way (telephones, slamming doors, ocean waves, dancers, phonograph records, etc.), overlaid with simple, abstract shapes in bright colors. The piece is about half an hour long. There is no clear indication of how the score should be interpreted; rather, it serves as an inspiration, a framework for improvisation.

As I watch this clip now, my first reaction is, “Wow, it worked!” It’s become something of an established practice to do these kinds of live, improvised accompaniments to new video or old films, but in my observation, there’s one problem inherent in the format: synchronization. No matter how skilled the performer, it takes a certain amount of time to register changes in the video and interpret them as sound. So in preparing for this performance, I specifically set myself the task of finding a solution, and reviewing our work two and a half years later, I’m pretty pleased with the results.

Synchronization requires anticipation. This was one of my primary lessons when I studied conducting back at St. Olaf. In 4/4 time, if you want the orchestra to come in loud on the one, you need to make a big gesture on beat four of the previous measure; you need to stay a beat ahead. In traditional music notation, sound is represented on a grid in which the x axis is time and the y axis is pitch, so it’s easy to peek ahead on the timeline. Or in waveform representations, x is time and y is amplitude. But a video, unlike a graphic representation of sound on a page, is a time-based medium, and x and y can’t help you; time is time! There’s no way to look ahead and prepare for what’s coming next.


Christian, Bruce, and Yan Jun

To address this issue, I took a tip from some of my favorite videogames, Frequency, Amplitude, Guitar Hero, and Rock Band, developed by Harmonix Music Systems (just up the road here in Cambridge, MA, in fact; I just gave a talk there last month). In these games, as I imagine anyone reading this is already well aware, notes are represented by little colored dots coming towards you on the screen, and when they get to a certain point towards the bottom of the screen, you hit a button on your controller to sound the note. Notes are coming at you on a timeline, so it’s easy to look ahead and prepare for new notes to come, just like in traditional sheet music. This is a true video score.

To approximate this kind of prescience in Christian Marclay’s piece, I wrote a Jitter patch (the first time I used Jitter for a project, in fact) that plays back the movie in 4 separate windows, each window offset by one second. So I was able to see every event in the film coming from three seconds away and count down to activation: 3-2-1-play!
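For anyone who wants to try the idea without Max/MSP, here’s a minimal sketch of the same look-ahead trick in Python with OpenCV (my patch was built in Jitter; the file name and window layout here are placeholders of my own invention):

```python
# Play one movie in four windows, each offset by one second, so the
# "now" window trails three seconds of visible look-ahead.
import collections
import cv2

PATH = "screen_play.mov"   # placeholder file name
OFFSET_SECONDS = 1         # delay between adjacent windows

cap = cv2.VideoCapture(PATH)
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
offset = int(round(fps * OFFSET_SECONDS))

# Ring buffer holding three seconds of frames plus the current one.
frames = collections.deque(maxlen=3 * offset + 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
    if len(frames) == frames.maxlen:
        # The newest frame is three seconds in the future relative
        # to the window I actually perform from.
        cv2.imshow("now: 3-2-1-play!", frames[0])
        cv2.imshow("+1 second", frames[offset])
        cv2.imshow("+2 seconds", frames[2 * offset])
        cv2.imshow("+3 seconds", frames[-1])
    if cv2.waitKey(int(1000 / fps)) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```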

My Screen Play Jitter Patch

The window in my Jitter patch that displays the largest image (the actual current time) also doubled as my control surface for generating sound. To play along with the video, I was literally drawing on it with my mouse. The timbres I was playing employed very simple synthesis techniques, lots of bright cross modulation, and a bit of granular synthesis. The idea was that my buzzy tones would correspond to the bright, abstract graphics in the score, whereas Bruce (performing on his amazing homemade digital flute controller, the Cilia) would evoke the representational black and white clips, and Yan Jun (working with lo-fi electronics and voice) was more of a moody glue, bridging the two worlds.
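The cross modulation part, at least, is simple enough to sketch in a few lines of numpy; the frequencies and modulation depth below are arbitrary illustrations, not the values from my actual patch:

```python
# Two sine oscillators, each modulating the other's frequency:
# a minimal cross-modulation feedback loop, which gets buzzy fast.
import numpy as np

SR = 44100                # sample rate
DUR = 2.0                 # seconds of audio to render
f1, f2 = 220.0, 331.0     # base frequencies (arbitrary)
depth = 400.0             # modulation depth in Hz (arbitrary)

n = int(SR * DUR)
out = np.zeros(n)
phase1 = phase2 = 0.0
s1 = s2 = 0.0
for i in range(n):
    # Each oscillator's instantaneous frequency is pushed around
    # by the other oscillator's most recent output sample.
    phase1 += 2 * np.pi * (f1 + depth * s2) / SR
    phase2 += 2 * np.pi * (f2 + depth * s1) / SR
    s1, s2 = np.sin(phase1), np.sin(phase2)
    out[i] = 0.5 * (s1 + s2)
# `out` now holds two seconds of cross-modulated audio, ready to
# write to a file or send to the soundcard.
```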

I’m a big fan of Christian Marclay. His solo show at the Seattle Art Museum in 2004 is one of the best exhibitions I’ve ever seen, a fecund amalgamation of wit, inventiveness, and historical awareness. He represents the full range of what a sound artist can be. He makes sound, of course, in performances, recordings, and installations. But he also makes silent pieces about sound, or about the ephemera surrounding contemporary sound production, and video pieces that suggest the contours of sound in another medium.

This playfulness is evident in Screen Play in the choice of images, their clever editing, and their relationship to the abstract graphics. He’s clearly toying with different ideas of sonic representation in the way these graphics are deployed, at times stretching five lines across the screen to evoke a music staff, at times drawing a waveform as on an oscilloscope, at times merging into the underlying scene (as when a bright yellow ball becomes attached to a man’s spoon as he’s slurping soup).

Jitter Patch in Action

I realize that for Christian Marclay, this synchronization issue is probably not a problem at all. Screen Play was conceived for the kind of freely improvising musician exemplified by downtown New Yorkers like Elliott Sharp. For a certain type of resourceful performer, the video is a way to nudge the music along, to create an overall contour and form that may not have otherwise emerged, and which provides the potential for greater large scale contrast and recurrence than an unmediated free improv jam. It’s kind of like a time-based version of a graphic score, such as Earle Brown’s December 1952, Cornelius Cardew’s Treatise, or Christian Wolff’s Edges.

However, there are a few sudden transitions in Screen Play, in particular a sequence that alternates a slamming door with a full screen of saturated color, that strike me as contrary to this ethos. That bit, and a few others like it, seem to call out for big contrasts and tight synchronization, and I think at these moments one could legitimately criticize the score for setting up an expectation that the performer cannot fulfill. But I’m happy to report that, by applying a simple technique from videogame design, we nonetheless nailed it.

Using my Jitter patch to perform this score actually felt a lot like playing a videogame. It gets at what I consider to be the heart of gaming: to develop a skill and apply it to new challenges. This aspect of gaming is very much like performing music; from a certain point of view, any musical performance can be considered a game. I’d estimate that this modified approach to performing Screen Play lies somewhere near the midpoint between downtown New York free improvisation and Guitar Hero, and I think there’s a lot more interesting work to be done along this continuum.

On the One!

Thanks to Defne Ayas and Mathieu Borysevicz, I think, for the pics. And thanks to Arthub Asia for the invitation to participate!

It’s for this experimental film that nobody knows about and which I’m still figuring out what’s going to go in my experimental film.

Hey, look at this thing I made:

Study for Insomnia from Ben Houge on Vimeo.

This is what I spent the last 4 days doing, instead of everything else I should have been doing. I’m calling it “Study for Insomnia.”

It began as a demo for artist Cindy Ng 吴少英, whose video work is presented in the same room as my 路口 sound installation at Art+Shanghai (you have until November 1 to see the show, so hurry!). She was asking how my piece was constructed (audio “phrases” strung together and overlapped in real-time by a computer, providing constant variation), and I suggested that it would be possible to use the same technique to deploy her video work. I made a quick demo to show her last weekend before we collaborated on one of her “Ink Walk” performances, and she was intrigued to the point that we’re now planning a full-on collaboration. I’ve continued to flesh out my little demo over the last few days into something I’m pretty happy with, and it ought to serve as a pretty good template for whatever we end up doing together.
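The basic technique is easy to sketch. Here’s a toy version in Python (the function name, durations, and crossfade scheme are all invented for illustration; the real piece runs several layers like this at once in Max/MSP):

```python
# Tile a timeline with overlapping "phrases": random chunks of a
# source recording, faded in and out so they blend continuously.
import random
import numpy as np

SR = 44100  # sample rate

def phrase_layer(source, total_seconds, min_len=2.0, max_len=6.0):
    """Fill a timeline with overlapping, crossfaded phrases drawn
    at random from `source` (a float numpy array of samples)."""
    out = np.zeros(int(SR * total_seconds))
    t = 0
    while t < len(out):
        length = int(SR * random.uniform(min_len, max_len))
        start = random.randrange(max(1, len(source) - length))
        phrase = np.copy(source[start:start + length])
        fade_len = len(phrase) // 2
        if fade_len:
            fade = np.linspace(0.0, 1.0, fade_len)
            phrase[:fade_len] *= fade           # fade in
            phrase[-fade_len:] *= fade[::-1]    # fade out
        end = min(t + len(phrase), len(out))
        out[t:end] += phrase[:end - t]
        t += length // 2  # advance half a phrase so phrases overlap
    return out
```

Sum a few asynchronous calls to a function like this and you get the kind of constant, non-repeating variation I’m describing.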

As in most of my electronic pieces, there’s no pure synthesis going on; everything is manipulated from source recordings, in this case, video and audio recordings of rumpled bed sheets (yes, the inspiration came while lying awake a few nights ago). The unpredictable patterns that result are similar to other kinds of natural phenomena from which I draw inspiration (e.g., rain, fireworks, traffic): chaotic on a small scale, but consistent on a large scale. Like my other installation work, the piece can continue indefinitely, with no beginning or end; here, for convenience, I’ve recorded only a brief excerpt.

I guess you could call this my first video piece. I’ve been using Jitter, the video-processing add-on for Max/MSP, for over a year, although this is the first time I’m presenting the results publicly. The first time I used it was in the software I developed for my performance (together with Yan Jun 颜峻 and Bruce Gremo) of Christian Marclay’s Screen Play at the Shanghai eArts Festival in October 2008, but it was used only to prepare cues for myself, not projected for the audience to see. (In fact, in the very first version of this little demo, I used a snippet from Screen Play as video source, a shot of rolling waves.) I’ve also used Jitter in some consulting I did for a videogame company last winter, analyzing audio signals to generate game levels.

I’m not sure yet if this will ultimately serve as the background texture of a bigger piece later on, or if it’s fine just as it is, as a super ambient standalone piece. I could see it working as a backdrop for a live performance. Or it could work in a gallery setting (ideally with 4 channels of sound). For what it’s worth, in terms of scale and mood, the 4-day process of putting this program together felt very much like writing a pop song.

What’s happening in the program is pretty simple. I’m picking little bits of my original bed sheet video and slowing them down, rotating them, and fading in and out from black. There are four asynchronous layers of this activity happening at once. The speeds are different, and there are slight hue offsets for each phrase. I adjust the contrast, hue, and saturation, and I add some feedback. For each video phrase, I pick an audio phrase from my original wave file (recorded separately from the video), and fade in and out, in time with the video. I’m also analyzing the average brightness of each video stream and using that to control the pitch of a bank of filters that the audio runs through before hitting the speakers. For each phrase I pick a different pitch multiplier (over roughly 4 octaves of a just intonation minor scale, not that it matters). Hue, feedback, filter Q, cutoff frequency, and probably a few other things are slowly modulated by random offsets to keep things interesting. I found there are lots of settings that produce different outcomes, all acceptable, so I implemented the old John Cage dictum: “Say we have a problem and 100 solutions; instead of choosing just one of them, we use them all.”
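To give a flavor of the brightness-to-pitch mapping, here’s roughly how the lookup might go; the scale ratios below are one common just intonation tuning of the natural minor scale, and the base pitch and range are placeholders rather than my patch’s actual values:

```python
# Map a video frame's average brightness to a filter frequency drawn
# from a just intonation minor scale spread over several octaves.
import numpy as np

MINOR_JI = [1/1, 9/8, 6/5, 4/3, 3/2, 8/5, 9/5]  # natural minor ratios
BASE_HZ = 110.0                                  # arbitrary base pitch

def brightness_to_filter_freq(frame, octaves=4):
    """`frame` is a grayscale numpy array with values 0-255."""
    level = frame.mean() / 255.0                 # 0.0 black .. 1.0 white
    steps = int(level * (len(MINOR_JI) * octaves - 1))
    octave, degree = divmod(steps, len(MINOR_JI))
    return BASE_HZ * (2 ** octave) * MINOR_JI[degree]
```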

I ran into a tricky little trigonometry problem while working on this piece. I had to figure out the zoom ratio while rotating the images so that I wouldn’t go off the image and add weird black edges to the composite. I pondered long and hard, and I think I was at least looking in the right direction, but I eventually had to enlist the help of my pal Micah Sittig, who teaches physics over at SMIC Private School, and to whose class I gave a little presentation on music acoustics last March. He solved the problem in about a minute. It’s good to have smart friends. Thanks, Micah!
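For the record, here’s my own reconstruction of the answer (I won’t swear this is how Micah derived it): to rotate a w × h image by an angle θ without exposing the corners, you zoom by cos θ plus the aspect ratio times sin θ.

```python
# Minimum zoom so a w x h image rotated by theta still covers the
# original w x h frame (no black edges in the composite).
import math

def cover_zoom(w, h, theta):
    """theta in radians; works for any angle by symmetry."""
    t = abs(theta) % math.pi
    if t > math.pi / 2:
        t = math.pi - t  # fold into [0, pi/2]
    aspect = max(w, h) / min(w, h)
    return math.cos(t) + aspect * math.sin(t)
```

Sanity checks: at θ = 0 the zoom is 1, at 90° it’s the aspect ratio, and for a square image at 45° it’s √2, as you’d expect from the diagonal.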

Crazy coincidence: yesterday, while working on this piece, I thought I’d take a break and listen to something new, and a CD of music by Esa-Pekka Salonen came to mind. I picked up this disc following one of the premiere performances of his fellow Finn Kaija Saariaho’s second opera Adriana Mater in Paris in 2006, but I had never listened to it. I feel like a CD, if I have any reason to believe it will be good, really ought to get a focused and concentrated first listen, and consequently, I have a huge stack of shame of CDs I’ve purchased but haven’t yet had the time to hear properly. Especially with all the hoo-ha about the new music director Gustavo Dudamel taking Salonen’s place at the Los Angeles Philharmonic, I’ve been thinking lately that I really ought to listen to it. So today I picked it up and unwrapped it, and to my amazement, the last piece is also called Insomnia! I’d already been planning to call this new piece of mine something about insomnia for a few days, so I swear I didn’t steal the idea, unless it was buried somewhere deep in my subconscious. Anyway, I just listened to it, and it’s a really nice piece, though somewhat more detailed than mine. Isn’t that nuts?