(Christian Marclay + Guitar Hero)/2

As previously posted, I had the pleasure of doing some shows with my friend Yan Jun 颜峻 from Beijing recently. As I was digging up some links to introduce him to Boston area friends, I found this clip of a performance we did, together with Beijing-based Bruce Gremo, at the 2008 Shanghai eArts Festival (shot by Amsterdam-based artist Emile Zile, who I met after the concert; read his account here). We performed at the gracious invitation of Defne Ayas and Davide Quadrio of Arthub Asia, who curated the Final Cut section of the eArts Festival, which transpired in and around Xujiahui Park in Shanghai and also featured performances by B6, Aaajiao, Feng Mengbo 冯梦波, Dead J, Alizia Borsari, and Elliott Sharp, among others (Elliott Sharp is featured in the second half of this clip).

Here we’re performing a video score by Christian Marclay entitled Screen Play, which consists of a bunch of black and white footage from old movies, mostly evocative of sound in some way (telephones, slamming doors, ocean waves, dancers, phonograph records, etc.), overlaid with simple, abstract shapes in bright colors. The piece is about half an hour long. There are no clear indications of how the score should be interpreted; rather, it serves as an inspiration, a framework for improvisation.

As I watch this clip now, my first reaction is, “Wow, it worked!” It’s become something of an established practice to do these kinds of live, improvised accompaniments to new video or old films, but in my observation, there’s one problem inherent in the format: synchronization. No matter how skilled the performer, it takes a certain amount of time to register changes in the video and interpret them as sound. So in preparing for this performance, I specifically set myself the task of finding a solution, and reviewing our work two and a half years later, I’m pretty pleased with the results.

Synchronization requires anticipation. This was one of my primary lessons when I studied conducting back at St. Olaf. In 4/4 time, if you want the orchestra to come in loud on the one, you need to make a big gesture on four of the previous measure; you need to stay a beat ahead. In traditional music notation, sound is represented on a grid in which the x axis is time and the y axis is pitch, so it’s easy to peek ahead on the timeline. Or in waveform representations, x is time and y is amplitude. But a video, unlike a graphic representation of sound on a page, is a time-based medium, and x and y can’t help you; time is time! There’s no way to look ahead and prepare for what’s coming next.

Christian, Bruce, and Yan Jun

To address this issue, I took a tip from some of my favorite videogames, Frequency, Amplitude, Guitar Hero, and Rock Band, developed by Harmonix Music Systems (just up the road here in Cambridge, MA, in fact; I just gave a talk there last month). In these games, as I imagine anyone reading this is already well aware, notes are represented by little colored dots coming towards you on the screen, and when they get to a certain point towards the bottom of the screen, you hit a button on your controller to sound the note. Notes are coming at you on a timeline, so it’s easy to look ahead and prepare for new notes to come, just like in traditional sheet music. This is a true video score.

To approximate this kind of prescience in Christian Marclay’s piece, I wrote a Jitter patch (the first time I used Jitter for a project, in fact) that plays back the movie in 4 separate windows, each window offset by one second. So I was able to see every event in the film coming from three seconds away and count down to activation: 3-2-1-play!
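The patch itself was built in Jitter, but the underlying idea is just a frame delay line. As a minimal sketch of the same idea in Python (the frame rate and three-second look-ahead here are parameters, not anything hard-coded in the original patch):

```python
from collections import deque

def lookahead_windows(frames, fps=30, seconds=3):
    """Yield (now, +1s, +2s, +3s) tuples from a stream of frames.

    Mimics the four offset playback windows: the first element is the
    frame at the "current time" (delayed behind the raw feed), and the
    remaining elements preview one, two, and three seconds ahead.
    """
    buf = deque()
    offsets = [i * fps for i in range(seconds + 1)]  # 0s, 1s, 2s, 3s
    for frame in frames:
        buf.append(frame)
        # Once the buffer spans the full look-ahead, emit a tuple and
        # advance the current time by dropping the oldest frame.
        if len(buf) > offsets[-1]:
            yield tuple(buf[o] for o in offsets)
            buf.popleft()
```

In practice this means the window showing "now" lags the source by three seconds, while the farthest-ahead window shows the raw feed.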

My Screen Play Jitter Patch

The window in my Jitter patch that displays the largest image (the actual current time) also doubled as my control surface for generating sound. To play along with the video, I was literally drawing on it with my mouse. The timbres I was playing employed very simple synthesis techniques, lots of bright cross modulation, and a bit of granular synthesis. The idea was that my buzzy tones would correspond to the bright, abstract graphics in the score, whereas Bruce (performing on his amazing homemade digital flute controller, the Cilia) would evoke the representational black and white clips, and Yan Jun (working with lo-fi electronics and voice) was more of a moody glue, bridging the two worlds.

I’m a big fan of Christian Marclay. His solo show at the Seattle Art Museum in 2004 is one of the best exhibitions I’ve ever seen, a fecund amalgamation of wit, inventiveness, and historical awareness. He represents the full range of what a sound artist can be. He makes sound, of course, in performances, recordings, and installations. But he also makes silent pieces about sound, or about the ephemera surrounding contemporary sound production, and he also makes video pieces that suggest the contours of sound in another medium.

This playfulness is evident in Screen Play in the choice of images, their clever editing, and their relationship to the abstract graphics. He’s clearly toying with different ideas of sonic representation in the way these graphics are deployed, at times stretching five lines across the screen to evoke a music staff, at times drawing a waveform as on an oscilloscope, at times merging into the underlying scene (as when a bright yellow ball becomes attached to a man’s spoon as he’s slurping soup).

Jitter Patch in Action

I realize that for Christian Marclay, this synchronization issue is probably not a problem at all. Screen Play was conceived for the kind of freely improvising musician exemplified by downtown New Yorkers like Elliott Sharp. For a certain type of resourceful performer, the video is a way to nudge the music along, to create an overall contour and form that may not have otherwise emerged, and which provides the potential for greater large scale contrast and recurrence than an unmediated free improv jam. It’s kind of like a time-based version of a graphic score, such as Earle Brown’s December 1952, Cornelius Cardew’s Treatise, or Christian Wolff’s Edges.

However, there are a few sudden transitions in Screen Play, in particular a sequence that alternates a slamming door with a full screen of saturated color, that strike me as contrary to this ethos. That bit, and a few others like it, seem to call out for big contrasts and tight synchronization, and I think at these moments one could legitimately criticize the score for setting up an expectation that the performer cannot fulfill. But I’m happy to report that, by applying a simple technique from videogame design, we nonetheless nailed it.

Using my Jitter patch to perform this score actually felt a lot like playing a videogame. It gets at what I consider to be the heart of gaming: to develop a skill and apply it to new challenges. This aspect of gaming is very much like performing music; from a certain point of view, any musical performance can be considered a game. I’d estimate that this modified approach to performing Screen Play lies somewhere near the midpoint between downtown New York free improvisation and Guitar Hero, and I think there’s a lot more interesting work to be done along this continuum.

On the One!

Thanks to Defne Ayas and Mathieu Borysevicz, I think, for the pics. And thanks to Arthub Asia for the invitation to participate!

It’s for this experimental film that nobody knows about and which I’m still figuring out what’s going to go in my experimental film.

Hey, look at this thing I made:

Study for Insomnia from Ben Houge on Vimeo.

This is what I spent the last 4 days doing, instead of everything else I should have been doing. I’m calling it “Study for Insomnia.”

It began as a demo for artist Cindy Ng 吴少英, whose video work is presented in the same room as my 路口 sound installation at Art+Shanghai (you have until November 1 to see the show, so hurry!). She was asking how my piece was constructed (audio “phrases” strung together and overlapped in real-time by a computer, providing constant variation), and I suggested that it would be possible to use the same technique to deploy her video work. I made a quick demo to show her last weekend before we collaborated on one of her “Ink Walk” performances, and she was intrigued to the point that we’re now planning a full-on collaboration. I’ve continued to flesh out my little demo over the last few days into something I’m pretty happy with, and it ought to serve as a pretty good template for whatever we end up doing together.

Like most of my electronic pieces, there’s no pure synthesis going on; everything is manipulated from source recordings, in this case, video and audio recordings of rumpled bed sheets (yes, the inspiration came while lying awake a few nights ago). The unpredictable patterns that result are similar to other kinds of natural phenomena from which I draw inspiration (e.g., rain, fireworks, traffic), chaotic on a small scale, but consistent on a large scale. Like my other installation work, the piece can continue indefinitely, with no beginning or end; here, for convenience, I’ve recorded only a brief excerpt.

I guess you could call this my first video piece. I’ve been using Jitter, the video-processing add-on for Max/MSP, for over a year, although this is the first time I’m presenting the results publicly. The first time I used it was in the software I developed for my performance (together with Yan Jun 颜峻 and Bruce Gremo) of Christian Marclay’s Screen Play at the Shanghai eArts Festival in October 2008, but it was used only to prepare cues for myself, not projected for the audience to see. (In fact, in the very first version of this little demo, I used a snippet from Screen Play as video source, a shot of rolling waves.) I’ve also used Jitter in some consulting I did for a videogame company last winter, analyzing audio signals to generate game levels.

I’m not sure yet if this will ultimately serve as the background texture of a bigger piece later on, or if it’s fine just as it is, as a super ambient standalone piece. I could see it working as a backdrop for a live performance. Or it could work in a gallery setting (ideally with 4 channels of sound). For what it’s worth, in terms of scale and mood, the 4-day process of putting this program together felt very much like writing a pop song.

What’s happening in the program is pretty simple. I’m picking little bits of my original bed sheet video and slowing them down, rotating them, and fading in and out from black. There are four asynchronous layers of this activity happening at once. The speeds are different, and there are slight hue offsets for each phrase. I adjust the contrast, hue, and saturation, and I add some feedback. For each video phrase, I pick an audio phrase from my original wave file (recorded separately from the video), and fade in and out, in time with the video. I’m also analyzing the average brightness of each video stream and using that to control the pitch of a bank of filters that the audio runs through before hitting the speakers. For each phrase I pick a different pitch multiplier (over roughly 4 octaves of a just intonation minor scale, not that it matters). Hue, feedback, filter Q, cutoff frequency, and probably a few other things are slowly modulated by random offsets to keep things interesting. I found there are lots of settings that produce different outcomes, all acceptable, so I implemented the old John Cage dictum: “Say we have a problem and 100 solutions; instead of choosing just one of them, we use them all.”
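For the curious, here’s a rough sketch of the phrase-picking step in Python (the actual program is a Max/MSP/Jitter patch; the exact parameter ranges and the name `pick_phrase` here are illustrative, not taken from the patch):

```python
import random

# Just intonation natural minor ratios relative to the tonic (one octave).
MINOR_JI = [1/1, 9/8, 6/5, 4/3, 3/2, 8/5, 9/5]

def pick_phrase(source_dur, rng=random):
    """Pick one phrase's parameters: where to read in the source media,
    how fast to play it back, a hue offset, and a filter pitch
    multiplier drawn from roughly 4 octaves of a just intonation
    minor scale."""
    length = rng.uniform(4.0, 12.0)               # phrase length in seconds (assumed range)
    start = rng.uniform(0, max(0.0, source_dur - length))
    octave = 2 ** rng.randint(0, 3)               # roughly 4 octaves
    return {
        "start": start,
        "length": length,
        "speed": rng.uniform(0.1, 0.5),           # slowed-down playback (assumed range)
        "hue_offset": rng.uniform(-0.05, 0.05),
        "pitch_mult": octave * rng.choice(MINOR_JI),
    }
```

Each of the four asynchronous layers would call something like this whenever its current phrase ends, which is where the constant variation comes from.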

I ran into a tricky little trigonometry problem while working on this piece. I had to figure out the zoom ratio while rotating the images so that I wouldn’t go off the image and add weird black edges to the composite. I pondered long and hard, and I think I was at least looking in the right direction, but I eventually had to enlist the help of my pal Micah Sittig, who teaches physics over at SMIC Private School, and to whose class I gave a little presentation on music acoustics last March. He solved the problem in about a minute. It’s good to have smart friends. Thanks, Micah!
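For the record, here’s one standard way to solve it (not necessarily Micah’s derivation): when a w-by-h image is rotated by θ inside a w-by-h frame, the frame, seen from the image’s rotated coordinates, spans a bounding box of (w·|cos θ| + h·|sin θ|) by (w·|sin θ| + h·|cos θ|), and the image must be zoomed just enough to contain that box. A Python sketch:

```python
import math

def cover_zoom(w, h, theta):
    """Zoom factor so a w-by-h image rotated by theta (radians)
    still covers its original w-by-h frame with no black corners."""
    c, s = abs(math.cos(theta)), abs(math.sin(theta))
    # The frame's bounding box in the image's rotated coordinates
    # must fit inside the scaled image in both dimensions.
    return max((w * c + h * s) / w, (w * s + h * c) / h)
```

Sanity checks: a square rotated 45° needs a zoom of √2, and no rotation needs no zoom at all.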

Crazy coincidence: yesterday while working on this piece, I thought I’d take a break and listen to something new, and a CD of music by Esa-Pekka Salonen came to mind. I picked up this disc following one of the premiere performances of his fellow Finn Kaija Saariaho’s second opera Adriana Mater in Paris in 2006, but I had never listened to it. I feel like a CD, if I have any reason to believe it will be good, really ought to have a focused and concentrated first listening, and consequently, I have a huge stack of shame of CDs I’ve purchased but haven’t yet had the time to devote to a proper first listen. Especially with all the hoo-ha about the new music director Gustavo Dudamel taking Salonen’s place at the Los Angeles Philharmonic, I’ve been thinking lately I really ought to listen to it. So today I picked it up and unwrapped it, and to my amazement, the last piece is also called Insomnia! I’d already been planning to call this new piece of mine something about insomnia for a few days, so I swear I didn’t steal the idea, unless it was buried somewhere deep in my subconscious. Anyway, I just listened to it, and it’s a really nice piece, though somewhat more detailed than mine. Isn’t that nuts?

Fat Art Lessons

(Before I dive in, let me draw your attention to a recap of the Fat Art show I did for the China Music Radar blog! Also, if you want a thorough description of the installation I did for the show in collaboration with Chen Hangfeng, check out my previous post on the subject.)

I’m trying to imagine what my reaction would be if I were to check out the Fat Art show as an impartial observer. According to the show introduction, “Music to My Eyes is an art exhibition with a difference: in each of the works created for the project, sound is an integral part of the visual presentation.” But it’s really not such a unique concept; I’ve seen many shows that try to do more or less the same thing, one at Duolun a few years back, that tent annex at the Shanghai Biennial in 2004, a recent Shanghai MOCA show, etc.

Not only is it not such an original idea, but it’s also not particularly well-advised; every time I go to one of these multimedia installation spectaculars, the result is cacophony, where no piece has the sonic space it requires to say its piece, and if a good piece is even to be found, it’s usually lost in the din. Even the current Nam June Paik show at Beijing’s Central Academy of Fine Arts suffers from this problem (it’s the worst of both worlds, in fact; the sound is thin and weak, yet still manages to bleed through from piece to piece).

And now, I, too, have fallen victim.

A critical aspect of my piece is that it’s processing real-time sound coming from two microphones positioned around the gallery. It’s an audio corollary to a kaleidoscope, performing the same kind of function that Hangfeng’s video piece does, fracturing the everyday world into something unexpected and, I think, beautiful. So when noise from other pieces in the show makes it impossible to tell that the sound on the speakers is a manipulated live feed, or even to distinguish what sound is coming from my installation as opposed to the piece next door, the point of the piece is lost. In this context, I have to say, my piece fails.

But of course, some of life’s most important lessons are born of failure, so I’ve done my best to derive some helpful maxims, some logistical, some practical, some aesthetic, to keep in mind for future work. The comments that follow are not necessarily limited to Fat Art, but are culled from my experiences and frustrations with a number of sound art exhibitions over the years, issues that were very much on my mind while installing our piece.

Most important: make sure you’re prepared to handle the unique challenges presented by a show of all sound-producing works.
I’ve never seen a show of all sound-producing works in which the pieces didn’t bleed into each other and diminish the experience. I think this is simply because most galleries and show organizers are not equipped to deal with sound. To successfully pull off a show like this requires the expertise of an acoustician and an audio engineer.

An understanding of the sound that will be generated by each piece is essential in determining the layout of the show, not just from description, but from actually hearing it (a challenge, of course, when the pieces are being developed specifically for the show, but one that must be addressed). Someone with some acoustical background should work to improve the acoustics of the gallery space (typically an afterthought) and help to sonically isolate pieces (typically a stab in the dark). An audio engineer’s expertise is needed when laying cables and positioning speakers, as well, to ensure that, for example, power cables and audio cables are not all run side by side (thus incurring interference). Finally, someone with trained ears also needs to be there to mix and set levels for the entire show in a systematic manner.

Given the proliferation of sound art, I’d say there’s a need for a new type of specialized “sound curator” to emerge to handle issues related to sound art and sound in galleries.

Not everything in a show about sound needs to generate sound.
There seems to be a pervasive assumption that if one sound-producing work is good, then a whole show of them is even better. But even apart from the practical considerations of acoustical isolation, the issue of aesthetic isolation remains. Exhibitions often derive their strength from a dialog between pieces, but in most new media shows, where pieces are often corralled into little cattle stalls (as with a recent exhibit at the South River Art Center), the goal is usually just the opposite.

A show about music or sound doesn’t need to include exclusively sound-producing works. There are so many mute images that nonetheless suggest rhythm or sound or music in their form or subject matter, and a sound installation might well benefit from proximity to visual works. I remember seeing an excellent Christian Marclay show at the Seattle Art Museum in 2004 that pulled this balance off expertly. Liu Ye’s paintings didn’t really need to have music by the artists depicted playing in the same room on an endless loop; maybe another piece in the show could have served as a soundtrack. A record label like Modern Sky could also capitalize on its album art, for example, the fine prints Jonathan Leijonhufvud created for the latest ReTROS album (actually on sale just around the corner at the Today Art Museum gift shop).

Bring your own gear.
We initially agreed that Hangfeng and I would provide all of our own equipment. This would allow us to work with the final hardware and ensure everything was functioning properly before arriving in Beijing. But then, in an effort to cut costs, it was decided that the museum would furnish all the gear instead. So I was greeted with a pair of “Vocal King” karaoke monitors when I arrived at the space, not exactly the “Tannoy 5A or equivalent” I had specified. And all of the mismatched projectors that had shown up for the various pieces that required them (including six for our piece) had to be sent back, as they were not up to snuff. Getting the cheapest gear ended up costing more money, time, and frustration in the end.

On the other hand, I believe Mathieu Borysevicz located all of his own gear, which allowed him to get up and running a lot sooner. I’m going to insist more firmly on this in the future.

Pay attention to acoustics.
According to the Fat Art magazine that doubles as the show’s catalogue, the Xinjiang artist Aniwar intended to create “a realm of complete and utter silence,” in which the only sound would be “the rise and fall of the breath, the pounding of blood in the veins, the roar of silence in the ears.” His comments closely mirror John Cage’s often repeated account of his visit to an anechoic chamber (in which, instead of silence, he was surprised to hear two sounds: the high frequency buzz of his nervous system and the low frequency throb of his circulatory system [a claim which always struck me as somewhat scientifically specious]) to an extent that I doubt is coincidental.

But if you want to build an anechoic chamber, you don’t turn to an abstract painter. His main technique was to line the walls with bolts of felt. In the end, the installation doesn’t even look finished, with bolts of felt standing next to the entrance, which suggests to me that when the other pieces started making sound (including a video looping right outside his room’s open door), he kind of gave up on the idea.

Given this approach, complete silence was clearly not going to happen. But when I popped in to check on progress halfway through installation, I was nonetheless struck by the sudden change in acoustical space. Museums (especially the renovated warehouses and factories popular in China) are generally not conceived with a consideration for acoustics, and with all the construction and yelling going on as people were setting up their pieces, his room did have a markedly different feel. I think a lot of the time people don’t pay attention to the acoustical characteristics of the space they’re in until it changes (unless it’s really bad). So this could be something fun to play with in the future, either as an artistic parameter on its own, or just to help set off some other sound-producing element.

Mapping space and time
It’s actually a testament to the catchiness of Ge Fei’s piece that I still don’t mind hearing it on the CD accompanying the Fat Art magazine, even after a week of it seeping through the walls of our installation and interfering with my own sound. You have to read the magazine to realize that the sound is actually derived directly from the painting by Xu Ruotao next to which it was looping. The methods used are not described, but I imagine the technique was to use a tool along the lines of Metasynth to transform an image into a sound. The result was a five minute file that endlessly looped on a portable music player.

From my perspective, the act of mapping between different sets of data is at the core of digital art, full of fascinating challenges and possibilities. (This kind of mapping, in fact, inspired the name of this very blog.) But from my conversations with others (even other artists conversant in new media), it seems there are many assumptions regarding the mapping of images and sound that go unquestioned, though they are far from the only approaches possible. I guess this stems from our general familiarity with the two most common methods of representing sound pictorially: music notation (x equals time, y equals pitch) and waveform displays (x equals time, y equals amplitude). But there is no innate characteristic of the x axis (or the y axis, or the color depth or brightness of any pixel or any other parameter) that signifies time. Ge Fei’s suggestion that the painting is five minutes long is therefore completely arbitrary (and I’d say, having spent some time with the painting myself, perhaps a bit generous).
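To make the assumption concrete, here’s a toy sketch of that conventional mapping in Python (my guess at the Metasynth-style approach, not Ge Fei’s actual method): columns become time, rows become oscillator pitches, and brightness becomes amplitude.

```python
import math

def image_to_audio(pixels, duration, sr=8000, fmin=110.0, fmax=1760.0):
    """Naive image sonification: x axis -> time, y axis -> pitch,
    brightness -> amplitude.

    `pixels` is a 2D list [row][col] of brightness values in 0..1,
    row 0 at the top (highest pitch). Returns `duration` seconds of
    mono samples; note that `duration` is a free choice, exactly the
    arbitrariness discussed above.
    """
    rows, cols = len(pixels), len(pixels[0])
    # One exponentially spaced oscillator frequency per row.
    freqs = [fmax * (fmin / fmax) ** (r / max(1, rows - 1)) for r in range(rows)]
    n = int(duration * sr)
    out = []
    for i in range(n):
        t = i / sr
        col = min(cols - 1, int(cols * i / n))  # map sample position to image column
        sample = sum(pixels[r][col] * math.sin(2 * math.pi * freqs[r] * t)
                     for r in range(rows))
        out.append(sample / rows)               # naive normalization
    return out
```

Nothing in the image itself dictates `duration`, `sr`, or the frequency range; every one of those mapping decisions is imposed from outside, which is the point.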

Experiencing this piece made me ponder that there are a lot of ways to map space to time in a real-time system, something to potentially explore in future work. It seems to me it would have been more effective if the mapping were happening continuously in real-time, so that you could experience the sound as you experience a painting: as long as you want to, making your own beginning and end as you come and go. You could even use some head-tracking routine to generate music based on the area of the canvas being examined.

Plan for adequate ventilation.
Our room was a small, custom-built hexagon inside one of the main galleries. I’m not sure if it’s due to the quality of construction materials used, or due to the lack of ventilation, but after a while the room started to stink, to the point where I saw a few people enter, take one whiff, make a face, and leave immediately. In the end the imperfect solution was to put the curtain aside to air the room out when the museum was closed.

This would have been less of an issue if the walls weren’t feebly trying to block out sound from other pieces; they might have been made from a more porous material. But if the walls must serve as soundproofing, then issues of ventilation (one of the trickiest issues in building a sound booth, as I learned when we installed one at my former office at Sierra) must also be addressed.

A more whimsical thought: while our experience brought the subject of smell to the fore, my friend Defne has also been collaborating with a perfumer to create the smell of the moon for her upcoming Futurist event. Tagging the subject for future research…

Even if you don’t need to be there, be there.
There’s really no reason I should have had to be on site for much of the set-up of our piece; there were workers there to hang cables, paint walls, etc. Most of the time I felt I could have been more productive fine-tuning my Max patch back at the hotel room than hanging around on-site amid the cacophony and astringent fumes of construction.

But not only was I able to catch some mistakes in installation (you can’t run power cables and audio cables next to each other); when I was there, our piece’s needs simply got more attention. If something wasn’t happening, and I started doing it myself, help would suddenly materialize.

This really isn’t unique to putting on an art show; it’s general project management, just as true for a videogame. In fact, I was struck several times by the similarities between setting up for a big event like this one, and getting a videogame out the door.

And a few longstanding maxims were reinforced.
No loops!
If I have a mantra, this is it, finely honed from 12 years of audio development for videogames (i.e., real-time, digital systems). I think anyone who decides to loop a piece to make it run indefinitely in a gallery fundamentally doesn’t understand the medium of installation. I’ll expound upon this more in another post; basically a loop is the least creative answer to a very interesting question.

The refrigerator door effect
The only pieces that were really interactive at the show were Wang Bo’s and (maybe, depending on how it was supposed to be working) Yan Lei’s. Wang Bo’s piece included some of his cartoon characters rendered in life-sized plastic that cried out in pain when struck. This type of interaction, so distressingly common in digital artworks, is analogous to a refrigerator door: when you open the door, the light goes on, a simple one-to-one correspondence that, once observed, offers very little in terms of replay value. Further, behind these plastic figures, an animation of the same characters being menaced by a monstrous figure also looped, so that the piece actually broke two of my cardinal rules. And it must be said that the basic audio elements on such incessant display were poorly balanced in volume and timbre, offering no illusion that they were emanating from organic personages in a common acoustical space.

Let film be film, and let installation be installation
I was planning a big post on this topic to register my disappointment with the SH Contemporary show last fall; maybe I’ll still get around to it. What I noticed there was that almost all of the video art, except for Bill Viola and a Korean artist whose name I’ve forgotten, was unduly beholden to the conventions of film. I don’t want to get mired down in semantics, but for me the most useful distinction is that film (including “films” shot on digital cameras) is about providing surrogate eyes, occurring in a dark room that is designed to make you forget you’re in a room at all; by contrast, video installation exists in a space, as an object.

To me, Mathieu’s piece falls squarely into the former category; I think it’s an eloquent film, and here as well as in other works of his that I’ve seen, he shows a particular knack for multiple channel narrative. But since his film so clearly presents a narrative arc, I found it frustrating to encounter his piece at the top of the stairs to the exhibition’s second floor, where you’re almost guaranteed to start watching the film somewhere in the middle, then watch through to the end, then keep watching from the beginning until you get to the point where you came in, then try to piece the whole thing together into a coherent narrative in your head.

I’ve been proposing a simple solution to this problem for years: a countdown timer to the next show time! I’ve yet to see anyone try this.

I actually thought Sun Lei’s and Pei Li’s pieces both worked better as installations, even though they were also looping, since they were dramatically flat, more a series of tableaux than a story. It doesn’t really matter when you come and go.

In closing
Hope this doesn’t all come off as too grumpy; as my surliest composition professor, Richard Karpen, once said, if everyone simply applauds and says, “Great piece,” you never make any progress. In the end, despite some frustrations, it was a fun and rewarding experience, and I got a chance to work with many lovely people in the process, deepening my relationship with Hangfeng, getting to know other artists like Sun Lei and Pei Li, plus all the indefatigable folks who organized the show, Karen (particularly spry in addressing unforeseen challenges during set-up, and an unwobbling pivot throughout development) and her lovely assistant Lauren, Shen Yue and the tireless Ji Su from Modern Sky, the gregarious Liu Yitao from TAM, and so many others. Let’s do it again sometime!