(Christian Marclay + Guitar Hero)/2
As previously posted, I had the pleasure of doing some shows with my friend Yan Jun 颜峻 from Beijing recently. While digging up some links to introduce him to Boston-area friends, I found this clip of a performance we did, together with Beijing-based Bruce Gremo, at the 2008 Shanghai eArts Festival (shot by Amsterdam-based artist Emile Zile, whom I met after the concert; read his account here). We performed at the gracious invitation of Defne Ayas and Davide Quadrio of Arthub Asia, who curated the Final Cut section of the eArts Festival. That section transpired in and around Xujiahui Park in Shanghai and also featured performances by B6, Aaajiao, Feng Mengbo 冯梦波, Dead J, Alizia Borsari, and Elliott Sharp, among others (Elliott Sharp appears in the second half of this clip).
Here we’re performing a video score by Christian Marclay entitled Screen Play, which consists of a bunch of black and white footage from old movies, mostly evocative of sound in some way (telephones, slamming doors, ocean waves, dancers, phonograph records, etc.), overlaid with simple, abstract shapes in bright colors. The piece is about half an hour long. There are no clear indications of how the score should be interpreted; rather, it serves as an inspiration, a framework for improvisation.
As I watch this clip now, my first reaction is, “Wow, it worked!” It’s become something of an established practice to do these kinds of live, improvised accompaniments to new video or old films, but in my observation, there’s one problem inherent in the format: synchronization. No matter how skilled the performer, it takes a certain amount of time to register changes in the video and interpret them as sound. So in preparing for this performance, I specifically set myself the task of finding a solution, and reviewing our work two and a half years later, I’m pretty pleased with the results.
Synchronization requires anticipation. This was one of my primary lessons when I studied conducting back at St. Olaf. In 4/4 time, if you want the orchestra to come in loud on the one, you need to make a big gesture on beat four of the previous measure; you need to stay a beat ahead. In traditional music notation, sound is represented on a grid in which the x axis is time and the y axis is pitch, so it’s easy to peek ahead on the timeline. Or in waveform representations, x is time and y is amplitude. But a video, unlike a graphic representation of sound on a page, is a time-based medium, and x and y can’t help you; time is time! There’s no way to look ahead and prepare for what’s coming next.
To address this issue, I took a tip from some of my favorite videogames, Frequency, Amplitude, Guitar Hero, and Rock Band, developed by Harmonix Music Systems (just up the road here in Cambridge, MA, in fact; I just gave a talk there last month). In these games, as I imagine anyone reading this is already well aware, notes are represented by little colored dots coming towards you on the screen, and when they reach a marked point near the bottom, you hit a button on your controller to sound the note. Notes come at you on a timeline, so it’s easy to look ahead and prepare for the notes to come, just like in traditional sheet music. This is a true video score.
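The timing math behind that note highway is simple enough to sketch. The function below is a hypothetical illustration of the idea, not anything from Harmonix’s actual games; the lookahead window and highway length are made-up parameters:

```python
def note_position(note_time, current_time, lookahead=3.0, highway_len=1.0):
    """Map a note's timestamp to a position on a note highway.

    Returns highway_len when the note first appears (lookahead seconds
    early), 0.0 when it reaches the hit line, and None when the note is
    outside the visible window (already played, or not yet due to appear).
    """
    dt = note_time - current_time
    if dt < 0 or dt > lookahead:
        return None  # off the highway: don't draw it
    return dt / lookahead * highway_len

# A note due 1.5 seconds from now sits halfway down a 3-second highway:
# note_position(10.5, 9.0) -> 0.5
```

Because position is just a linear function of time-until-the-note, the player can read the whole upcoming phrase at a glance, which is exactly the kind of peek-ahead that ordinary video playback denies you.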
To approximate this kind of prescience in Christian Marclay’s piece, I wrote a Jitter patch (the first time I used Jitter for a project, in fact) that plays back the movie in four separate windows, each offset from the next by one second. So I was able to see every event in the film coming three seconds away and count down to activation: 3-2-1-play!
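The patch itself was built in Max/Jitter, but the frame arithmetic behind the four windows is easy to sketch in Python. The function name and parameters here are illustrative, not a transcription of the patch:

```python
def window_frames(now_frame, fps=30.0, n_windows=4, offset_sec=1.0):
    """Frame index shown in each of the offset preview windows.

    Window 0 is the performance window (the movie's actual "now");
    window k previews the movie offset_sec * k seconds into the future,
    so with four windows the performer sees three seconds ahead.
    """
    step = int(round(fps * offset_sec))  # frames per one-second offset
    return [now_frame + k * step for k in range(n_windows)]

# At frame 900 of a 30 fps movie, the four windows show:
# window_frames(900) -> [900, 930, 960, 990]
```

In practice this means decoding the same movie at four playback positions at once, which is exactly what running four offset players in a Jitter patch amounts to.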
The window in my Jitter patch that displays the largest image (the actual current time) also doubled as my control surface for generating sound. To play along with the video, I was literally drawing on it with my mouse. The timbres I was playing employed very simple synthesis techniques, lots of bright cross modulation, and a bit of granular synthesis. The idea was that my buzzy tones would correspond to the bright, abstract graphics in the score, whereas Bruce (performing on his amazing homemade digital flute controller, the Cilia) would evoke the representational black and white clips, and Yan Jun (working with lo-fi electronics and voice) was more of a moody glue, bridging the two worlds.
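For the curious, here’s a minimal NumPy sketch of the cross (ring) modulation behind those buzzy tones. The original synthesis ran in Max/MSP; the frequencies, duration, and function name here are arbitrary examples, not the actual patch:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def cross_mod(carrier_hz, mod_hz, dur=0.5, sr=SR):
    """Ring-modulate two sine oscillators.

    Multiplying sines at f1 and f2 puts energy at the sum (f1 + f2) and
    difference (f1 - f2) frequencies, which is what gives these tones
    their bright, clangorous, buzzy character.
    """
    t = np.arange(int(sr * dur)) / sr
    return np.sin(2 * np.pi * carrier_hz * t) * np.sin(2 * np.pi * mod_hz * t)

tone = cross_mod(440.0, 310.0)  # inharmonic sidebands at 750 Hz and 130 Hz
```

Mapping mouse position on the video window to parameters like these two frequencies is one simple way to turn the score window itself into a playable surface.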
I’m a big fan of Christian Marclay. His solo show at the Seattle Art Museum in 2004 is one of the best exhibitions I’ve ever seen, a fecund amalgamation of wit, inventiveness, and historical awareness. He represents the full range of what a sound artist can be. He makes sound, of course, in performances, recordings, and installations. But he also makes silent pieces about sound, or about the ephemera surrounding contemporary sound production, and he also makes video pieces that suggest the contours of sound in another medium.
This playfulness is evident in Screen Play in the choice of images, their clever editing, and their relationship to the abstract graphics. He’s clearly toying with different ideas of sonic representation in the way these graphics are deployed, at times stretching five lines across the screen to evoke a music staff, at times drawing a waveform as on an oscilloscope, at times merging into the underlying scene (as when a bright yellow ball becomes attached to a man’s spoon as he’s slurping soup).
I realize that for Christian Marclay, this synchronization issue is probably not a problem at all. Screen Play was conceived for the kind of freely improvising musician exemplified by downtown New Yorkers like Elliott Sharp. For a certain type of resourceful performer, the video is a way to nudge the music along, to create an overall contour and form that may not have otherwise emerged, and which provides the potential for greater large scale contrast and recurrence than an unmediated free improv jam. It’s kind of like a time-based version of a graphic score, such as Earle Brown’s December 1952, Cornelius Cardew’s Treatise, or Christian Wolff’s Edges.
However, there are a few sudden transitions in Screen Play, in particular a sequence that alternates a slamming door with a full screen of saturated color, that strike me as contrary to this ethos. That bit, and a few others like it, seem to call out for big contrasts and tight synchronization, and I think at these moments one could legitimately criticize the score for setting up an expectation that the performer cannot fulfill. But I’m happy to report that, by applying a simple technique from videogame design, we nonetheless nailed it.
Using my Jitter patch to perform this score actually felt a lot like playing a videogame. It gets at what I consider to be the heart of gaming: to develop a skill and apply it to new challenges. This aspect of gaming is very much like performing music; from a certain point of view, any musical performance can be considered a game. I’d estimate that this modified approach to performing Screen Play lies somewhere near the midpoint between downtown New York free improvisation and Guitar Hero, and I think there’s a lot more interesting work to be done along this continuum.
Thanks to Defne Ayas and Mathieu Borysevicz, I think, for the pics. And thanks to Arthub Asia for the invitation to participate!