Three Studies for Disklavier and Mechanical Chihuahua

[Note: this article was originally drafted on July 23, 2015.]

After teaching at Berklee College of Music’s campus in sunny Valencia, Spain, for the past two years, I’m gearing up to head back to the US this fall, where I’ll be returning to my prior post at Berklee’s Boston campus (although I’m switching departments, from Film Scoring to Electronic Production and Design). Since graduating our latest class of master’s students last week, I’ve been spending time archiving some of my projects from the past two years. One of the quirkier among these is my Three Studies for Disklavier and Mechanical Chihuahua from early 2014.

I composed these super short pieces (about twenty seconds altogether) at the invitation of my friend the mad genius Ranjit Bhatnagar (we had the pleasure of bringing him over to Valencia as a visiting artist in the fall of 2013). He issued an open call for compositions that he could use as part of his installation Short Ride in a Fast Chihuahua, which was presented at the Qubit Machine Music festival, February 12-14, 2014, in New York City. The constraints were extreme: he needed compositions of exactly 26 regular beats at a tempo of 320 bpm, so that each beat could be triggered by a yip from his mechanical Chihuahua (detected via microphone). That comes out to just over six seconds, if you add some time for the resonance of the instrument. A composer was free to place as many notes of varying velocities on each beat as desired.

A Disklavier, as you may know, is a fully functional acoustic Yamaha grand piano that can also be controlled remotely, whether via real-time signals or a pre-recorded file (using the MIDI protocol). It’s like a fancy player piano. I suppose the first thing that occurs to most composers writing for this instrument is to seize the chance to devise something that would be impossible for a human performer to execute, perhaps along the lines of Conlon Nancarrow’s masterful player piano studies from the ’50s and ’60s or the more recent black MIDI phenomenon.

Given Ranjit’s constraints, a sophisticated rhythmic study of the type that Nancarrow frequently undertook was impossible, but harmony was wide open. So I thought it would be interesting to explore the relationship between harmony and timbre (which can kind of be thought of as the same thing) and see if I could perform a kind of additive synthesis on an acoustic piano.

Getting Max to talk to the Disklavier

This was the process for composing my first and third studies. First, I wrote a simple contrapuntal passage in two (first study) or three (third study) voices. Then I wrote a Max patch that, based on each note I composed, calculated a bunch of additional notes ascending from the original note, to be played at the same time, decreasing in volume as pitch increases. I believe I used a couple of different formulas, but all based on the way the harmonic series works (i.e., fundamental frequency multiplied by a regular sequence of numbers starting with 1). Since each “partial” generated in this way is not a simple sine wave, but another piano tone with its own rich timbre, the music gets dense in a hurry, but there’s a hard upper limit—the piano’s high C8—that keeps things from getting too crazy. There’s also necessarily a lot of rounding off of pitches going on, so that they land on a pitch the piano can play, with the result that on the upper end of the piano, most of the keys are being played, although, crucially, at different velocities.
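I no longer have the original Max patch to hand, but the note-expansion logic can be sketched in Python; the exact formula, partial count, and velocity curve here are my assumptions, not the patch’s. For each composed note, generate “partials” at integer multiples of the fundamental frequency, round each to the nearest piano key, stop at the piano’s high C8 (MIDI note 108), and scale the velocity down as the partial number rises:

```python
import math

def expand_note(midi_note, velocity, num_partials=16):
    """Expand one composed note into harmonic-series 'partials',
    rounded to the nearest piano key, fading as pitch rises."""
    f0 = 440.0 * 2 ** ((midi_note - 69) / 12)  # fundamental in Hz
    notes = []
    for n in range(1, num_partials + 1):
        freq = f0 * n  # harmonic series: f0 times 1, 2, 3, ...
        partial = round(69 + 12 * math.log2(freq / 440.0))  # nearest key
        if partial > 108:  # hard upper limit: the piano's high C8
            break
        vel = max(1, round(velocity / n))  # quieter as partials ascend
        notes.append((partial, vel))
    return notes

# e.g. expanding middle C (MIDI 60) at velocity 100
print(expand_note(60, 100))
```

Note how the rounding behaves exactly as described above: the third partial of middle C lands on G5, the fifth on E6, and by the upper octaves nearly every key gets hit, distinguished only by velocity.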

So the result is that, for each of my six-second, twenty-six-beat compositions, my program spat out around seven hundred notes for the piano to play, based on the fifty or so notes I originally composed. My original compositions were actually quite simple and modal, avoiding big leaps in pitch, in an effort to help all of the different pitches perceptually fuse into one complex timbre as they move around in parallel. (I may have been thinking a bit of the first of Stravinsky’s Three Pieces for String Quartet.)

Original sketches for studies #2 and #1 (I don’t think I wrote out #3). (BTW, the scrawls on the top of the page are from shortly after I fractured my humerus at the end of 2012, hence a little sloppy.)
First Study
Third Study

And I was perhaps even a bit surprised to hear that it actually works! If you focus on, for example, the middle voice of the third study, which starts on the fourth beat with a diatonic run from C below middle C up to F, it doesn’t sound like a normal piano sound, but something brighter, almost more like a Rhodes. The same is true, maybe a bit less pronounced, in the bass part of the first study.

To hear the difference, compare with the second study. This one was composed the old-fashioned way; I wrote the parts out on paper and just wanted to see what I could do by employing the whole range of the piano at once, so maybe this one’s a bit closer to a Nancarrow piece. Here, the timbre of the piano is untransformed; it still sounds like a regular piano, albeit a very busy one.

Recording the Disklavier on the Ann Kreis Scoring Stage

Berklee staff engineer and assistant professor and all-around friendly and quite capable guy Chris Wainright was the engineer; as I recall, his description of this music was that it evoked an ice cream truck being pushed off a cliff. We used a pair of Royer 121 ribbon mics (if you see photos of the session, there’s a third mic set up, but we only used two for this recording) and recorded on Berklee Valencia’s Ann Kreis Scoring Stage into an Avid System 5 mixer. Ostensibly, this session arose out of Chris’s desire to test a faulty ribbon mic, although I’m not sure how ideally my music ultimately suited this purpose. We didn’t use any reverb or EQ or any other post-processing; I thought these recordings were dense enough as they were.

Of course the idea of using a piano to synthesize new timbres has been explored before. I would perhaps be remiss not to mention the work of the Austrian composer Peter Ablinger who got a lot of attention for his piece “DEUS CANTANDO,” which presents the arresting phenomenon of a speaking piano, in 2009. His custom mechanical piano, designed by Winfried Ritsch, appears to have much greater resolution than a Disklavier (each note can be pulsed up to 16 times per second), allowing him to recreate the sound of a child’s voice reading “the text of the 2009 Declaration of the International Environmental Criminal Court that was founded at the World Venice Forum pursuant to an initiative of Adolfo Pérez Esquivel and the Dalai Lama” (all of this according to this Ars Electronica citation).

Thanks so much to Ranjit for the invitation to do this wacky thing! I invite you all to check out his fantastic work (maybe start with his brilliant Twitter bot Pentametron, which retweets, in rhymed pairs, tweets that happen to be in iambic pentameter).

29 Giraffes

So you’ve got one more week to view my solo show over at the Axiom Center for New and Experimental Media (through November 6). The centerpiece of the exhibition is my 6-channel, real-time, algorithmic sound installation Kaleidoscope Music, the history, aesthetics, and inner workings of which are amply documented elsewhere (Dig Boston feature, Artforum critic’s pick, Kickstarter project, and several exhaustive blog posts, for starters).

Kaleidoscope Music at Axiom

Rather, what I’m here to tell you about today are the prints from my 29 Giraffes series that are also included in the show. I usually do a pretty good job of documenting my pieces when they go up, but I never got around to blogging about this series when it was first exhibited at [the studio] in Shanghai, back in August 2009 (check out the press release, flyer, and this fine review from That’s Shanghai magazine). So allow me to take a few moments to fill you in.

First exhibition of 29 Giraffes at [the studio] in Shanghai in 2009

29 Giraffes represents my first foray into visual art. The earliest source file I could find dates from July 29, 2007. You can see some of the images on Flickr and additional images (including some early tests) on Facebook.

These images were a natural extension of my work in sound, and the original goal was simply to create an album cover for my CD Radiospace 040823 (as featured last night on Gregory Taylor’s radio program Remember Those Quiet Evenings!). That piece performs algorithmically modulated granular synthesis on a live radio signal, sort of sandblasting the sound into new patterns and textures. I got the idea in my head that the album art ought to be generated the same way, and this stubborn notion delayed the release of the album (recorded in 2004) by about five years. It took a while to conceive of how to translate the process into the visual domain, and then another long while before I realized I could use software I already knew (Max/MSP, specifically the lcd object, before I learned Jitter) to pull it off, and then an extended period of experimentation and testing before I felt the results were worth sharing with anyone.

After all that effort, I kind of hate the image I rather hastily chose for an album cover, but I was in a hurry to get the CD pressed in time for the exhibition opening. If I had waited a little longer, this is the image I would have used (and still hope to use for a reissue one of these days).

Giraffe 2009719144455
In the same way that Radiospace samples bits of radio, these images sample fragments of digital photographs. The final images manipulate photographs of neon lights from Shanghai’s Nanjing Dong Lu pedestrian corridor (the stretch of street where the Sofitel in which I recorded Radiospace 040823 is located), not too far from the Bund (taken on a photo shoot with Jutta for my birthday in 2007, four years and one day ago). The software excises little chunks from these images and statistically arranges them into new patterns, according to various parameters that I can set (min/max size of the image fragment, location in the original image, density, opacity, etc.). The final compositions comprise one or more layers (sometimes quite a few) of these statistical passes (horizontal or vertical), which I think of as analogous to brushstrokes, over the black digital canvas.
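The original software was written in Max/MSP (with the lcd object), but the statistical-pasting idea can be sketched in plain Python; everything here, from the grayscale 2D-list “images” to the parameter names, is my own stand-in for the real patch. One pass cuts random chunks out of a source image and blends them onto a black canvas:

```python
import random

def scatter_pass(source, canvas, n_fragments=200,
                 min_size=2, max_size=10, opacity=1.0):
    """One statistical 'brushstroke': paste random chunks of the
    source image onto the canvas. Images are 2D lists of grayscale
    ints (0-255); a real version would work on RGB photographs."""
    sh, sw = len(source), len(source[0])
    ch, cw = len(canvas), len(canvas[0])
    for _ in range(n_fragments):
        fh = random.randint(min_size, max_size)  # fragment height
        fw = random.randint(min_size, max_size)  # fragment width
        sy = random.randrange(sh - fh)  # where to cut from the source
        sx = random.randrange(sw - fw)
        cy = random.randrange(ch - fh)  # where to paste on the canvas
        cx = random.randrange(cw - fw)
        for y in range(fh):
            for x in range(fw):
                src = source[sy + y][sx + x]
                dst = canvas[cy + y][cx + x]
                # blend the fragment over the canvas per the opacity
                canvas[cy + y][cx + x] = round(
                    opacity * src + (1 - opacity) * dst)
    return canvas

random.seed(1)
source = [[random.randrange(256) for _ in range(64)] for _ in range(64)]
canvas = [[0] * 64 for _ in range(64)]  # the black digital canvas
scatter_pass(source, canvas)
print(sum(v > 0 for row in canvas for v in row), "lit pixels")
```

Layering several such passes with different densities and opacities, and constraining the paste positions to hand-drawn distributions rather than uniform ones, gets you toward the “brushstroke” effect described above.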

The boundaries of these digital brushstrokes into which fragments of photographs are statistically pasted are derived from curves I’ve drawn by hand into tables with a mouse. My earliest studies involved Gaussian patterns and other types of statistical distributions, but I eventually decided I wanted to incorporate a more tactile, hand-drawn element. I felt at the time the need to emphasize that these works weren’t simply the cold, rational, impersonal result of some obscure mathematical formula. Rather, I was involved in an intuitive and iterative process with my software, guiding the generation of new material, and then responding to it to see if I liked it or not, shaping its evolution much as I imagine an artist in a more traditional medium would.

When I moved to Shanghai in 2004, I read that Shanghai was the second largest city in the world, behind Mexico City. These images convey something of the density of urban life I experienced in one of the world’s most bustling metropolises, the exhilarating disorientation and striking juxtapositions. I think of this work in terms similar to those Robert Hughes used to describe the Merz collages of Kurt Schwitters:

Their common theme was the city as compressor, intensifier of experience. So many people, and so many messages: so many traces of intimate journeys, news, meetings, possession, rejection, with the city renewing its fabric of transaction every moment of the day and night…

Kurt Schwitters, Merz 410: Irgendsowas (1922)
Kurt Schwitters, Merz 410: “Irgendsowas” (1922)

The final images have nothing to do with giraffes. When I started developing software to manipulate digital images, my earliest test subject was a photo of a giraffe I took while visiting my parents in Kenya in 2005. I started using the term “giraffe” as shorthand for the whole project, since it was quicker to explain to my Ubisoft coworkers that I was staying in over the lunch hour to work on my “giraffes,” rather than my “algorithmic image manipulation software” or whatever. There aren’t 29 of them either; the number was chosen as arbitrarily as the name, and I kept both to emphasize the idea that arbitrariness (or artistic intuition) is a key part of the piece.

The original giraffe photo I used as the basis for my first visual studies
An early Giraffe study, lines between random points in Gaussian distributions, with colors drawn from the original giraffe image

In addition to the first show at [the studio] and the current exhibition, several of these images were included in my solo show “The Point of Departure” at the True Color Museum in Suzhou about a year ago (read the little blurb I wrote about the series at that time here). I also sell these images as limited edition prints on archival paper, and several are already in private collections. If you’re interested, by all means, drop me a line! I guess these are the images CNNGo was referring to when they described my work as “very pretty.”

Going over my notes from the time of the first show, I’m reminded of several follow-up avenues still unexplored. The unanimous feedback I received from that first exhibition was that people would like to see these images larger; at the time I was constrained by what I could fit on one screen at once, but now that I’ve gotten into Jitter, I should look into that. Also, right before I left Shanghai last year, I collected a bunch of additional potential source images from backlit fashion advertisements, and I had the idea of doing digital collages based on awareness of different body parts, but I haven’t jumped on that yet either. As Morton Feldman said, “Now that things are so simple, there’s so much to do!”

(Christian Marclay + Guitar Hero)/2

As previously posted, I had the pleasure of doing some shows with my friend Yan Jun 颜峻 from Beijing recently. As I was digging up some links to introduce him to Boston area friends, I found this clip of a performance we did, together with Beijing-based Bruce Gremo, at the 2008 Shanghai eArts Festival (shot by Amsterdam-based artist Emile Zile, who I met after the concert; read his account here). We performed at the gracious invitation of Defne Ayas and Davide Quadrio of Arthub Asia, who curated the Final Cut section of the eArts Festival, which transpired in and around Xujiahui Park in Shanghai and also featured performances by B6, Aaajiao, Feng Mengbo 冯梦波, Dead J, Alizia Borsari, and Elliott Sharp, among others (Elliott Sharp is featured in the second half of this clip).

Here we’re performing a video score by Christian Marclay entitled Screen Play, which consists of a bunch of black and white footage from old movies, mostly evocative of sound in some way (telephones, slamming doors, ocean waves, dancers, phonograph records, etc.), overlaid with simple, abstract shapes in bright colors. The piece is about half an hour long. There are no clear indications of how the score should be interpreted; rather, it serves as an inspiration, a framework for improvisation.

As I watch this clip now, my first reaction is, “Wow, it worked!” It’s become something of an established practice to do these kinds of live, improvised accompaniments to new video or old films, but in my observation, there’s one problem inherent in the format: synchronization. No matter how skilled the performer, it takes a certain amount of time to register changes in the video and interpret them as sound. So in preparing for this performance, I specifically set myself the task of finding a solution, and reviewing our work two and a half years later, I’m pretty pleased with the results.

Synchronization requires anticipation. This was one of my primary lessons when I studied conducting back at St. Olaf. In 4/4 time, if you want the orchestra to come in loud on the one, you need to make a big gesture on four of the previous measure; you need to stay a beat ahead. In traditional music notation, sound is represented on a grid in which the x axis is time and the y axis is pitch, so it’s easy to peek ahead on the timeline. Or in waveform representations, x is time and y is amplitude. But a video, unlike a graphic representation of sound on a page, is a time-based medium, and x and y can’t help you; time is time! There’s no way to look ahead and prepare for what’s coming next.

Christian, Bruce, and Yan Jun

To address this issue, I took a tip from some of my favorite videogames, Frequency, Amplitude, Guitar Hero, and Rock Band, developed by Harmonix Music Systems (just up the road here in Cambridge, MA, in fact; I just gave a talk there last month). In these games, as I imagine anyone reading this is already well aware, notes are represented by little colored dots coming towards you on the screen, and when they get to a certain point towards the bottom of the screen, you hit a button on your controller to sound the note. Notes are coming at you on a timeline, so it’s easy to look ahead and prepare for new notes to come, just like in traditional sheet music. This is a true video score.

To approximate this kind of prescience in Christian Marclay’s piece, I wrote a Jitter patch (the first time I used Jitter for a project, in fact) that plays back the movie in 4 separate windows, each window offset by one second. So I was able to see every event in the film coming from three seconds away and count down to activation: 3-2-1-play!
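The actual patch was built in Jitter, but the offset logic is simple enough to sketch in Python (the function name, frame rate, and window count here are my own illustration): each of the four windows displays the same film shifted one second further into the future, so window 0 is “now” and window 3 is the three-second preview.

```python
def window_frames(playhead_s, fps=25.0, n_windows=4, offset_s=1.0):
    """Frame index each window should display: window 0 shows the
    'now' frame; windows 1..3 preview 1, 2, 3 seconds ahead."""
    return [int((playhead_s + i * offset_s) * fps)
            for i in range(n_windows)]

# 10 s into the film at 25 fps: the current frame plus three previews
print(window_frames(10.0))
```

In the patch this amounted to four playback chains reading the same movie at fixed offsets, which is what let me count down 3-2-1-play to every event.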

My Screen Play Jitter Patch

The window in my Jitter patch that displays the largest image (the actual current time) also doubled as my control surface for generating sound. To play along with the video, I was literally drawing on it with my mouse. The timbres I was playing employed very simple synthesis techniques, lots of bright cross modulation, and a bit of granular synthesis. The idea was that my buzzy tones would correspond to the bright, abstract graphics in the score, whereas Bruce (performing on his amazing homemade digital flute controller, the Cilia) would evoke the representational black and white clips, and Yan Jun (working with lo-fi electronics and voice) was more of a moody glue, bridging the two worlds.

I’m a big fan of Christian Marclay. His solo show at the Seattle Art Museum in 2004 is one of the best exhibitions I’ve ever seen, a fecund amalgamation of wit, inventiveness, and historical awareness. He represents the full range of what a sound artist can be. He makes sound, of course, in performances, recordings, and installations. But he also makes silent pieces about sound, or about the ephemera surrounding contemporary sound production, and he also makes video pieces that suggest the contours of sound in another medium.

This playfulness is evident in Screen Play in the choice of images, their clever editing, and their relationship to the abstract graphics. He’s clearly toying with different ideas of sonic representation in the way these graphics are deployed, at times stretching five lines across the screen to evoke a music staff, at times drawing a waveform as on an oscilloscope, at times merging into the underlying scene (as when a bright yellow ball becomes attached to a man’s spoon as he’s slurping soup).

Jitter Patch in Action

I realize that for Christian Marclay, this synchronization issue is probably not a problem at all. Screen Play was conceived for the kind of freely improvising musician exemplified by downtown New Yorkers like Elliott Sharp. For a certain type of resourceful performer, the video is a way to nudge the music along, to create an overall contour and form that may not have otherwise emerged, and which provides the potential for greater large scale contrast and recurrence than an unmediated free improv jam. It’s kind of like a time-based version of a graphic score, such as Earle Brown’s December 1952, Cornelius Cardew’s Treatise, or Christian Wolff’s Edges.

However, there are a few sudden transitions in Screen Play, in particular a sequence that alternates a slamming door with a full screen of saturated color, that strike me as contrary to this ethos. That bit, and a few others like it, seem to call out for big contrasts and tight synchronization, and I think at these moments one could legitimately criticize the score for setting up an expectation that the performer cannot fulfill. But I’m happy to report that, by applying a simple technique from videogame design, we nonetheless nailed it.

Using my Jitter patch to perform this score actually felt a lot like playing a videogame. It gets at what I consider to be the heart of gaming: to develop a skill and apply it to new challenges. This aspect of gaming is very much like performing music; from a certain point of view, any musical performance can be considered a game. I’d estimate that this modified approach to performing Screen Play lies somewhere near the midpoint between downtown New York free improvisation and Guitar Hero, and I think there’s a lot more interesting work to be done along this continuum.

On the One!

Thanks to Defne Ayas and Mathieu Borysevicz, I think, for the pics. And thanks to Arthub Asia for the invitation to participate!

Statement of Purpose

At PechaKucha Boston earlier this week I presented the US premiere of my Statement of Purpose. I primarily think of it as a composition, but you might also call it a performance piece. I suppose “multimedia lecture” might be most accurate. It was written in September 2008 for presentation at PechaKucha Shanghai and thus adheres to the PechaKucha format: 20 slides of 20 seconds each. In Boston as in Shanghai, I think it seemed to go over pretty well.

Statement of Purpose is consciously indebted to John Cage’s Lecture on Nothing, one of the pieces collected in his book Silence, which I first read many years ago. The thing that struck me about these early Cage lectures and essays is that, in many cases, the message is in the form of the work, not the content. Rather than just describing his ideas about rhythmic structure, he demonstrates them; you experience them directly.

So in my piece, which I have described as an update of Lecture on Nothing for the digital era, I’ve adopted a mobile structure, as opposed to a linear lecture format. I take great inspiration from Alexander Calder, because in his mobiles, the individual elements are fixed, but the relationships between them are in constant flux. So here my lecture is arranged topically, around nodes of ideas. The main idea-nodes are:

Aspects of Music and Audition
Stasis in Sound
Dynamism and Interactivity
The Current State of Videogames
Non-Linear Structure
The Nature of Multimedia

On each of these topics, I wrote a bunch (around six to ten, I think) of one sentence statements, single ideas that could be presented in any order. Then I wrote a program that generates a script by randomly picking one of these idea-nodes, picking some of the ideas associated with it, picking another idea-node, etc. Pauses are added between each statement to vary the density of the lecture over time (using a random walk, aka a “drunk” function or brown noise), in the same way that a tide or a rainstorm has a changing contour over time. Indications about when to clear my throat, gesture to the screen, take a swig of beer, etc. are also algorithmically scattered throughout the script, as a kind of textural element, subverting the ephemera of a typical lecture scenario.
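A bounded random walk of this kind is easy to sketch in Python; the starting value, step size, and bounds below are my own illustrative choices, not the values from the original script generator. Each pause drifts up or down from the last one, so the lecture’s density changes gradually, like a tide, rather than jumping around:

```python
import random

def pause_lengths(n, start=3.0, step=1.0, lo=0.5, hi=10.0):
    """Generate n pause durations (in seconds) with a bounded random
    walk (a 'drunk' function, i.e. brown noise), so the density of
    statements drifts over time instead of varying independently."""
    walk, out = start, []
    for _ in range(n):
        walk += random.uniform(-step, step)  # take one drunken step
        walk = min(hi, max(lo, walk))        # clamp to the allowed range
        out.append(round(walk, 2))
    return out

random.seed(0)
print(pause_lengths(8))
```

Contrast this with independent uniform draws, which would have no memory: the walk’s correlation from one pause to the next is what gives the contour its tidal quality.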

I also interspersed a purely musical element, consisting of a set of low drones plus a set of brief melodies in a higher register (outlining an A mixolydian scale) all sung on a textless “ooh.” It’s pretty arbitrary; I thought the piece could use it, and I like the texture that results. But it also serves to focus attention on the abstract structure of the piece, rather than the content, and to suggest that the piece as a whole may be considered in musical terms.

There’s another type of behavior, too, statistically less likely to occur. While 16 of the 20 slides use the above formulations, the remaining 4 are shuffled riffs on standard salutations and closing statements: “Hello,” “Good evening,” “My name is Ben Houge,” “Thank you for your attention,” “Good night,” etc. The idea is that through repetition and dislocation, these phrases become formal (rather than syntactical) elements; it’s very similar to what I’m doing with radio broadcasts in Radiospace. Having another type of behavior helps vary and articulate the overall form. I also just think it’s funny, and I sensed that the audience was similarly amused. Humor is like music, in that it plays with audience expectations, as when I end my piece with a cordial, “Hello, everyone.”

The slides were generated using very similar techniques to those I employed in my 29 Giraffes series, but substituting text for little chunks of photographs. The colors, in fact, are algorithmically extracted from the same Nanjing Dong Lu source material I used in my Giraffes. Here again, the emphasis is more on the texture that emerges from all this superimposed text, rather than on the text itself; as with the algorithmically generated script, the slides communicate through form, rather than content.

The whole piece has an audio accompaniment, too, one 20 second audio clip per slide. To create this backdrop, I processed a recording of myself reading the text of the piece using a bunch of custom software I had lying around at the time, programs I had developed for other pieces. You can identify bits of Psalmus, Study for Eventual World Domination (my contribution to The Bike Bin Project), Radiospace, and a granular synthesis demo I did as a videogame audio engine prototype. Looking back, the evocations of these pieces that crop up (as of the Giraffes) provide a nice snapshot of my digital workspace in September 2008, which was part of the idea.

To assemble all of these elements, I selected the 20 slides I wanted to use of the many I had generated, then I wrote a program to shuffle them. Same for the 20-second audio segments I generated. In the end, it’s a combination of arbitrary decisions and procedurally generated bits, which is really how just about any artwork comes together, digital or otherwise.

The result is that ideas come and go, freely floating. I’ve referred to a lot of my pieces as “meditations,” and the term is certainly apt here. Ideas recur, sometimes in different media (text from the slides may pop up again in the spoken presentation or recorded backdrop). They “interpenetrate,” to use one of John Cage’s favorite terms. They reinforce each other, and they add up to a way of thinking, which is very much my way of thinking, a network, a web of ideas, all connected.

It’s a good time for me to revisit this piece. Especially in the past year, I’ve been thinking a lot about the meaning that can be conveyed through pure structure. I think this has come to the fore as I’ve been increasingly active in visual media. In music, we take this for granted; you could say that music traditionally conveys meaning through structure alone. Music is the most abstract of the arts; representation or mimesis in the pre-recording era was by far the exception (think of the timpani evoking thunder in the “Scène aux champs” of Berlioz’s Symphonie Fantastique). In some cases you can say what a piece is “about,” because you know something of the circumstances of its composition, or because of a prefatory note by the composer. But principally, music’s meaning is all in the relationships of different frequencies, rhythms, velocities, timbres, etc., and, more importantly, the calculus of how this all changes over time. You would never say that even as abstract a composition as The Art of the Fugue is meaningless.

So coming, as I do, from a background in music, it’s only natural for me to approach my visual art in the same way, applying the same types of structures that I use in my sound work to visual information, and it’s been surprising to see how the conversation unfolds differently. A prominent arts person (don’t worry, no one you know) came to see my show in Suzhou last fall, and I was kind of amazed when she asked me what my piece was trying to convey. A musician would never think to ask such a question. As Elvis Costello said, if I could have written the song with any other words than the words I used in the song, I would have written a different song, wouldn’t I?

Of course there was a bit of a conscious impulse to poke a hole in the sometimes punctilious proceedings of a standard PechaKucha event (I have my Seattle School cohorts to thank for any vestigial confrontational aesthetic). As when I sneakily built an ambient electronic piece from mildly acrimonious pre-show chatter at Opensound a few weeks ago, I like the idea of snapping people into a different state of awareness with some new or unexpected realization. I also like the pacing of it; PechaKucha is usually about people cramming as much as they can into their 6:40, but my script actually includes indications to pause for as long as 10 seconds. But both times I’ve presented this work, the audience seemed to get it and dig it; it’s not just some avant-garde stunt. The message was conveyed.

Statement of Purpose was my first project after leaving Ubisoft at the end of August 2008. The deadline was tight, less than a month, as I recall, and I liked the idea of doing a new piece completely from scratch to emphasize my new trajectory as a full time, independent artist. I remember staying up all night to get it done, with an urgency that had been missing from my corporate gig for quite a while. I consciously wanted to make a statement about the main issues I was setting out to address in my work, my mission, as I considered it (and still do). Check out some documentation from that performance, and a video excerpt below.

I originally wanted to generate my slides and script in real-time using custom software, which I feel is technically still in keeping with the PechaKucha format, but in Boston as well as in Shanghai, the organizers very understandably wanted to stick to a standard set-up for all speakers. This is still something I want to explore, though, particularly the idea of giving cues to a performer on the fly, exploring the idea of real-time score generation (which is exactly what happens in a music videogame like Guitar Hero, and which I’ve already started to explore in pieces like my Zhujiajiao Drinking Game, more commonly referred to as Beer Hero). I’ve been contemplating a revision of this piece for a long time, to include this real-time score idea, write some new modules, add some Chinese text, incorporate multiple screens of real-time generated imagery, and blow past the 6:40 PechaKucha time limit to create a full, hour-long presentation. If anyone would like to sponsor and/or host such an event, please let me know!

EndWar Audio Post-Mortem

For those who are curious, here are the slides from my Boston Post-Mortem presentation last week. It was a conscious affectation to only capitalize the first word of every heading, and I now regret it, but I’m too lazy to go back and change them all. Also, I’m sorry the font size changes so much from slide to slide. (For more on EndWar audio in prose version, check out this EndWar audio page I put up a while back.)

The talk focused mainly on the audio deployment mechanisms we developed for the game, since I think that was some of the most innovative and fun work we did, and hopefully also the most portable to other projects people may be working on. I spent about half of the time going into some detail about our music system, which I feel was one of EndWar’s key audio innovations. It’s a little tricky to share the music demo I did (mocked up in Max/MSP) online, but I found some gameplay footage on YouTube that showcases the same set of music in the final game. (Music composed by Alistair Hirst and Matt Ragan of Omni Audio!)

It’s actually an instant replay of an online match on PS3. This means that, unlike in the actual game, the player is controlling the camera movement here. This provides a good opportunity to hear how the music evolves depending on what’s going on. There are a few sounds that seem to come out of nowhere; they would normally accompany interface events, but in replay mode, the interface is suppressed. Note also the guitar squeals that indicate that you or your enemy has lost a unit (a different squeal for each).

Here’s another video of actual gameplay footage on the Xbox.

Note that the volume periodically ducks down quite suddenly; that’s because this person is using the game’s voice command feature to control his or her troops. When you pull the trigger to talk, other sound ducks down for clarity, but of course in this excerpt, you don’t hear the player talking, hence the dropouts.

This video goes from the little intro movie to the main menu music to the loading music to some actual gameplay. The loading music in particular illustrates our music system’s scalability: it was done with just a few kilobytes of audio data in memory, algorithmically permuted, nothing streaming. Feel free to search for other gameplay excerpts on YouTube, too.
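To give a rough flavor of that kind of memory-light permutation, here is a minimal Python sketch (the real system ran inside the game’s audio engine, not Python, and the cell count, step count, and all names here are invented for illustration):

```python
import random

def permute_cells(num_cells, steps, rng):
    """Endlessly reorder a small pool of in-memory audio cells,
    avoiding immediate repeats, so a few kilobytes of material
    never play back in a static order."""
    order, prev = [], None
    for _ in range(steps):
        prev = rng.choice([i for i in range(num_cells) if i != prev])
        order.append(prev)
    return order

# Six short cells, 32 scheduling steps:
playlist = permute_cells(num_cells=6, steps=32, rng=random.Random(7))
```

Because only the ordering is computed at runtime, the memory cost stays fixed at the size of the cells themselves, with no streaming required.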

Anyway, the talk went really well, with a big turnout, an attentive crowd, and a lot of interesting conversations afterwards. Thanks to Darius for inviting me to share my work, and thanks to everyone who attended!

The Point of Departure

My solo show is confirmed, so stoked, here’s the full press release…

The Point of Departure: Ben Houge Solo Exhibition
November 6-December 5, 2010
True Color Museum 本色美术馆
219 Tongda Rd
(at the intersection of Jiushenggang Rd, near Guoxiang)
Wuzhong District, Suzhou, China
苏州市吴中区通达路 219 号本色美术馆(近郭巷)

Composer and digital media artist Ben Houge presents the culmination of his six-month residency at Suzhou’s True Color Museum with a solo show entitled “The Point of Departure.” The focal point of this exhibition is a new, real-time 18-channel video installation entitled Self-Portrait, Dusk, at the Point of Departure, an ambient work that applies concepts from videogame design and granular synthesis to video, to make a poignant moment last forever. Also included in the show are selected videos and digital prints providing a survey of Ben’s visual output over the past two years.

An afternoon-long digital music festival will celebrate the opening of the exhibition on Saturday, November 6, from 1pm until 7pm. The lineup includes performances by Hangzhou-based digital artists Yao Dajuin 姚大钧 and Wang Changcun 王长存, as well as Shanghai’s Xu Cheng 徐程 (of Torturing Nurse). Ben’s live performances are known to vary widely in style, and he will celebrate this diversity by performing three different sets of music: an ambient electronic set, a synth-pop set of original songs, and an acoustic set of favorite tunes by artists including John Cage and Jay Chou 周杰伦.

Long active in new music circles in China and the US, Ben Houge has been increasingly visible in galleries in recent years, with work exhibited at Art+Shanghai Gallery, OV Gallery, and [the studio] in Shanghai, as well as at the Today Art Museum in Beijing. His video Shanghai Traces, originally exhibited at OV Gallery’s Make Over show last spring, was shortlisted for the Guggenheim’s YouTube Play Biennial and has recently been acquired for permanent installation at Shanghai’s Glamour Bar. Ben has performed around eastern China and at all of Shanghai’s primary live music venues, as well as at the Shanghai eArts Festival, the Mini Midi Festival, Hangzhou’s 2Pi Festival, the Zendai Museum of Modern Art, the Shanghai Conservatory, the South River Art Center, and several NOIShanghai events. This summer he toured Germany with trumpet player Justin Sebastian. Prior to embarking on a full-time career as an artist, Ben spent twelve years designing audio for videogames, most recently serving as audio director of Tom Clancy’s EndWar (Xbox 360/PS3) at Ubisoft Shanghai. The concepts of non-linear, real-time, algorithmic and procedural structure he honed as a videogame developer serve as the point of departure for his more recent work in a broader cultural arena. Much more information about Ben is on his website:

This exhibition and music festival mark Ben’s final public appearances in Shanghai for the immediate future, as he relocates to the USA for much of 2011. The artist would also like to point out that a train from Shanghai to Suzhou takes less than half an hour these days, and a round-trip ticket is less than 100 RMB. So don’t miss this unique opportunity to experience the various facets of Ben Houge’s evolving oeuvre in one idyllic setting!

In an ancient city renowned for its cultural heritage, True Color Museum is Suzhou’s key destination for contemporary art. Founded by the intrepid music business entrepreneur Chen Hanxing 陈翰星 in 2008 as one of the leading privately owned art museums in China, True Color Museum has exhibited artwork by leading artists from China and around the world, most recently in the acclaimed “Nature of China: Contemporary Art Documenta” exhibition last summer and in Taiwan’s Hsiau Jungching 萧荣庆 solo show (ongoing through November 11). The beautiful museum compound, designed by Chen Hanxing, is a destination in itself, and the museum’s active artist residency program has nurtured the careers of many established and emerging artists. Additional information is available on the museum’s website:


The solo exhibition The Point of Departure 《起航》 distills composer and digital media artist Ben Houge’s (霍杰明) six-month residency at Suzhou’s True Color Museum. The focal point of the exhibition is his new real-time, 18-channel video installation, Self-Portrait, Dusk, at the Point of Departure 《起航点黄昏自画像》, which fuses concepts from videogames and granular synthesis into the work to create a lastingly moving atmosphere. The exhibition also includes other digital media works from Ben’s explorations in this area over the past two years.

To celebrate the opening on Saturday, November 6, an electronic music festival will run from 1 pm to 7 pm, with performances by Hangzhou sound artists Yao Dajuin 姚大钧 and Wang Changcun 王长存, as well as Shanghai’s Xu Cheng 徐程 (of Torturing Nurse). Ben Houge’s live performances have always ranged widely in style, and he will show that diversity in three sets of different styles: one of ambient electronic music, one of synth-pop songs, and one of favorite songs by Jay Chou 周杰伦 and John Cage.

Long active in new music circles in China and the US, Ben Houge has been exhibiting more and more often: at Art+Shanghai Gallery, OV Gallery, and [the studio] in Shanghai, as well as at the Today Art Museum in Beijing. His video Shanghai Traces 《上海轨迹》, shown in OV Gallery’s “Make Over” exhibition last spring, was shortlisted for the Guggenheim’s YouTube Play biennial and has recently become a permanent installation at Shanghai’s Glamour Bar. He has performed throughout eastern China and at all of Shanghai’s major live music venues, as well as at the Shanghai eArts Festival, the Mini Midi Festival, Hangzhou’s 2Pi Festival, the Zendai Museum of Modern Art, the Shanghai Conservatory, the South River Art Center, and elsewhere. This summer he toured Germany with trumpeter Justin Sebastian. Before becoming a full-time artist, Ben spent twelve years designing audio for videogames, most recently serving as audio director of Tom Clancy’s EndWar (Xbox 360/PS3) at Ubisoft Shanghai. The concepts of non-linear, real-time, algorithmic, and procedural structure he honed as a videogame developer give his work in a broader cultural arena a new point of departure. For more about Ben, follow this link:

In an ancient city rich in cultural heritage, True Color Museum is renowned for contemporary art: a private museum founded in 2008 by entrepreneur Chen Hanxing 陈翰星. It has exhibited work by pioneering artists from China and around the world, most recently in this summer’s “Nature of China: Contemporary Art Documenta” exhibition and in a solo show by Taiwan’s Hsiau Jungching 萧荣庆 (on view through November 11). Designed by Chen Hanxing, the museum’s distinctive compound is a destination in itself, and its artist residency program has provided a strong platform for many established and emerging artists. For more about True Color Museum, follow this link:

I, Too, Became a Dissident

Here it is: the ambient, algorithmic art video the Shanghai Cultural Bureau doesn’t want you to see!

Transportation Is Getting a New Look from Ben Houge on Vimeo.

This piece was first exhibited as part of the “Re-Visioning History” show that opened on May 22, 2010, at OV Gallery in Shanghai. Less than a week later, representatives of the Cultural Bureau came in and shut down the show, walking off with a print from my video.

It’s not clear why my work was singled out for confiscation. The show was centered around the work of Zhang Dali 张大力 and Ren Hong 任虹, with several other artists (myself included) invited to present new work based on a collection of historical propaganda posters on loan from Madame Mao’s Dowry. Most analysts agree, and I wouldn’t argue, that my work was probably the least politically oriented in the show. It may simply have come down to my work being most portable and closest to the door. Oddly, they only took one of the two prints I made from the video, ignoring the video itself.


The principal objection seems to have been to the work of Zhang Dali. His work was drawn from a 7-year project called “A Second History,” in which he used his connections to gain access to the national archives in Beijing. He managed to locate the original, historical photographs that were used as the basis for various propaganda posters, which he then incorporated into work that presented both versions of history side by side, in digital prints, silk screens, and paintings. He had already exhibited this work several times without major incident, most recently at the Guangdong Museum of Art, though the rumor is that after the Guangdong show he was pegged as someone to keep an eye on. The most plausible explanation I’ve heard for the OV Gallery show’s closure is that the powers that be didn’t want to come across as enemies of culture by officially censoring the show (especially with heightened foreign attention during the Shanghai Expo), so instead they got the gallery on technicalities like selling catalogues and exhibiting foreign artists without a license. So in all likelihood it had nothing to do with my art (which does not mean I’m not mentioning it in every grant application I write from this point forward).

You can read more about the incident in That’s Shanghai, Shanghaiist, and the Wall Street Journal.

And then suddenly on June 22, one month after the original opening, the gallery was permitted to reopen. They even returned my piece, which is actually a bit disappointing; I kind of liked the idea of it hanging over some Cultural Bureau functionary’s desk somewhere. There was a little reopening party on June 26, and the show’s run has been extended through August 5, to make up for the period that the gallery was closed.

All of the hoopla surrounding my piece’s confiscation and the gallery closure has somewhat deflected attention from the work itself, with which, in fact, I am quite pleased.

Transportation Is Getting a New Look is a real-time, algorithmic collage of snippets from a 1970’s Chinese propaganda poster entitled “Safeguard the Orderliness of the Revolution: Transportation Is Getting a New Look 革命秩序维护好,交通战线换新貌.” The point of departure was the idea of a city wall covered with posters. Old posters fade and are torn down, new posters cover them up, and a new, unpredictable form emerges from the remnants, an evolving public collage.

Safeguard the Orderliness of the Revolution: Transportation Is Getting a New Look 革命秩序维护好,交通战线换新貌

My video employs custom computer software (developed in Jitter) to algorithmically emulate this process. The program excises sections of the original propaganda poster and pastes them onto a new digital canvas in constantly varying configurations. The composition unfolds in six overlapping “phrases” of about one to two minutes, each of which defines an area to be statistically filled with snippets of the original image in varying sizes and densities. Sometimes the program focuses on one part of the source image, resulting in a consistent shape or repeated gesture. Sometimes the differences are greater: a small detail may be enlarged, or an image may be reduced to a texture or color. As new images are overlaid, the foreground is constantly receding into the background; the present forms a canvas for the future.
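The phrase mechanism might be sketched like this in Python (the actual piece was built in Jitter; the region sizes, snippet counts, and all names here are illustrative assumptions, not the piece’s real values):

```python
import random

def phrase(canvas, source, snippet_count, size_range, rng):
    """One one-to-two-minute 'phrase': statistically fill a region of
    the canvas with square snippets cut from the source image."""
    cw, ch = canvas
    sw, sh = source
    # The area this phrase will fill (corner plus extent).
    rx, ry = rng.uniform(0, cw / 2), rng.uniform(0, ch / 2)
    rw, rh = rng.uniform(cw / 4, cw / 2), rng.uniform(ch / 4, ch / 2)
    placements = []
    for _ in range(snippet_count):
        size = rng.uniform(*size_range)       # snippet edge length, px
        sx = rng.uniform(0, sw - size)        # where to cut from the source
        sy = rng.uniform(0, sh - size)
        dx = rng.uniform(rx, rx + rw)         # where to paste on the canvas
        dy = rng.uniform(ry, ry + rh)
        placements.append({"src": (sx, sy, size), "dst": (dx, dy)})
    return placements

p = phrase(canvas=(1920, 1080), source=(800, 1200),
           snippet_count=40, size_range=(50, 300), rng=random.Random(2014))
```

Biasing the cut location (`sx`, `sy`) toward one part of the source for a stretch of phrases is what would yield the consistent shapes and repeated gestures described above.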

As with Shanghai Traces, I feel this is a particularly good pairing of subject and medium. The theme of the show, and the subject of this video, is the process of the present becoming the past, forming history. The medium of generative video provides an apt opportunity to evoke this process by means of another process. The video explores the unique properties of the digital medium; you couldn’t obtain the same results (the repetition of images in varying sizes, scales, and degrees of fadedness, not to mention the systematic evolution over time) using traditional paper collage or any other medium.

The video explores modes of propaganda. Slogans on posters, plastered in profusion, represent perhaps the most common form of propaganda: persuasion through sheer repetition, with no attempt at a reasoned argument. You see that happening in this piece, as the program tends to pick from roughly the same area of the source image for stretches at a time. But while working on this piece, I was also reviewing the work of the Russian Constructivist and Suprematist artists of the early 20th century, who were using pure, abstract forms to convey ideas about relationships and society: structural propaganda, still very much intended to alter society. The blocks of images that comprise my video at times resemble some sketches of Kasimir Malevich. The fragmentation of images in my work can at times feel violent as figures are chopped in half and hand-holding friends are sundered. On the other hand, one figure from the source poster can be cloned indefinitely to form an anonymous crowd or disintegrate into an abstract texture.

I don’t deny it: this is a long excerpt to post on Vimeo. But I also wanted to showcase the large scale ebb and flow of the piece. The pacing of this work is slower than other works of mine, such as Shanghai Traces, and things can take longer to come into focus. The viewer must constantly reevaluate the evolving, emergent structure of the composition. There’s a constant fluctuation between foreground and background as the images slowly fade out, and the rate of fade itself is constantly varying. Independent elements may suddenly coalesce into a balanced structure, only to be ruptured by some new element, which may seem out of place until it becomes a key component of some new structure, or it may simply get covered up and forgotten. The eyes and brain are constantly popping between phases of meaning and order, continuously addressing what to me is one of the most important and fundamental structural questions (something I grapple with in audio as well as visual pieces): “What makes things the same, and what makes them different?”

This is not to say you have to watch the video for a long time to appreciate it; part of the point of the piece is that its generative nature allows you to make your own beginning and ending as you enter or leave the installation, so feel free to start the video somewhere in the middle and watch for as long as you want. The piece also works in an excerpt as brief as a single frame, as you can see in this Flickr gallery.


The video was originally exhibited without sound. For this excerpt, I’ve added some ambient light traffic, recorded from my rooftop studio at True Color Museum, Suzhou, China, where I’m doing an artist residency through the end of July 2010.

BTW, if you read Chinese, you’ll spot some odd characters popping in from time to time. These are from the second batch of simplified Chinese characters (known as “二简”) that were announced in 1977 and then rescinded in 1986, which indicates the time frame in which this poster was originally published. 皃 has since been reverted to its original form 貌, 乙+心 is now 意, 尸+一 is now 展, etc. 片 is also now written slightly differently. For an exhaustive accounting of which characters were changed when, check out this site!

Self-Portrait, Dusk, at the Point of Departure

Hey, wow, new video! This piece is a bit quirky and personal, so I should probably fill in a bit of context. But first, imagine that you are viewing this piece on a huge bank of 24 TV screens, the sole light source in a huge, black warehouse, which is how I would ideally like to present it. [Note that you can turn HD on/off in the video below; it will load faster with HD off, but if you’re up for it, turn HD on, click the icon to the right of the play bar to make it full screen, and turn scaling off.]

Self-Portrait, Dusk, at the Point of Departure from Ben Houge on Vimeo.

I find myself constantly refuting the notion that art made with computers is somehow cold, impersonal, rational, unfeeling, etc., etc. In general, I refute the idea of absolutes in art, that a work must be, for example, either rational or emotional. In my work, both elements are present, and this one swings perhaps farther than most to the emotional side.

All art (including digital art) has some kind of inspiration, and in this case I was inspired during my trip to St. Paul, MN, last winter by the intricate patterns formed by barren tree branches, and how those patterns would shift with just the slightest change in perspective or movement of the branches. I think the first time I consciously started paying attention to tree branch patterns, I was looking out the window of Famous Dave’s on 7th in St. Paul, where I was having lunch with my parents, my brother, my sister-in-law, and my two little nieces. Later I noticed that the same kinds of patterns were occurring right outside my brother’s living room window. I spent a lot of time last winter in St. Paul sitting in the stuffed chair of my brother’s living room, working on my computer, opposite this window (to the point that the chair came to be referred to as “Uncle Ben’s office”), and as I gazed at the branches outside, I kind of started to identify a bit with this tangled mess of branches and what they might represent.

I was working on a couple of video projects during my two months in St. Paul, notably Shanghai Traces, and also collecting source material for my foolhardily ambitious plan to produce backdrop videos for my live pop show. I really wanted to capture some of the unique topographical features of winter in St. Paul (i.e., snow), but I could never seem to find just the right combination of meteorological conditions and presence of mind to go out and actually tape them. So in the end I spent the last 10 minutes of my St. Paul visit standing in my brother’s snow-covered front lawn, videotaping those branches as the sun was setting, just before I hugged everyone goodbye and my brother drove me to the airport.

So my new video installation takes those ten minutes and makes them last forever. A lot of still art can be said to freeze a moment in time, but that’s not the same thing as prolonging a moment indefinitely. In a photograph, for example, whatever was happening at the moment when the photograph was taken is not happening anymore; it’s been stopped. But here, the moment is still happening, and it will never stop happening. It’s not the same thing as looping a video segment, either. In a loop, it would happen repeatedly, which is not the same thing as happening continuously. As in Shanghai Traces, I think this is a really good pairing of subject and medium.

How is this miraculous feat accomplished? By using the same techniques I’ve developed to make sound continue indefinitely in videogames over the past thirteen years or so: shuffling, staggering, offsetting, layering. These techniques are some of the most fundamental in my toolbox, but they’re endlessly applicable to a wide range of real-time organizational challenges. In this piece, each of the 24 screens is independently picking a section of the video to play for a certain amount of time, then picking a new section to play, and so on. The duration and position in the original video are not completely random, but constrained by previous behavior, so that the overall distribution of images across all 24 channels is constantly shifting. It’s very similar to the granular synthesis techniques I’ve used in my audio works, mixing together little chunks of a larger sound to kind of homogenize it into a steady texture (see the sustained textures in Radiospace for a good example).
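A minimal Python sketch of that per-screen behavior, under my own assumptions about the ranges involved (the actual piece chooses durations and drift amounts by its own rules, in real-time video software rather than Python):

```python
import random

class ScreenVoice:
    """One of the 24 screens: repeatedly picks a segment of the ten-
    minute source video to play. The next start point drifts from the
    previous one instead of jumping anywhere, so the distribution of
    images across all screens shifts gradually."""
    def __init__(self, source_len=600.0, rng=None):
        self.rng = rng or random.Random()
        self.source_len = source_len
        self.pos = self.rng.uniform(0, source_len)

    def next_segment(self):
        dur = self.rng.uniform(2.0, 8.0)     # seconds to hold this segment
        drift = self.rng.gauss(0.0, 20.0)    # constrained move, not a leap
        self.pos = (self.pos + drift) % self.source_len
        return self.pos, dur

voices = [ScreenVoice(rng=random.Random(i)) for i in range(24)]
segments = [v.next_segment() for v in voices]
```

Running 24 of these voices independently is the video analogue of the granular layering described above: each grain is plausible on its own, and the ensemble homogenizes into a steady texture.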

As is quite obvious, the original video was shot without a tripod, which gives the piece a performative element (not that the world needs another flimsy performance video document). The unsteadiness in my hand as I’m holding the camera is the other subject of the piece, creating motion and the subtle changes of perspective that (in addition to passing breezes) animate the primary subject matter. It focuses attention back on the person holding the camera and the minor endurance test of holding the camera still for 10 minutes in below freezing weather. This idea of endurance echoes comments by Richard Karpen and Mike Min (that the drama of a performance arises from the struggle of a person pushing against his or her limitations). In other words, the motion of the camera in the video is a visualization of my own failure to hold it still, despite my best efforts, which you are free to view as a metaphor for the attempt to hold back time itself.

The end result displays all kinds of interesting formal and textural qualities, byproducts of the same behavior being multiplied across 24 screens. The original video was shot at dusk, so there’s a gradual transition from yellow to blue hues; as my piece runs, the various screens are constantly changing their position along the spectrum, forming new groups and contrasts. The motion of the different screens prompts a different organizational tendency, a kind of counterpoint, sometimes seeming to move together, at other times in contrary motion. When screens pop to a new image, a rhythmic texture emerges as well. The eye is drawn to the sudden popping of a screen to a new point in the source video, but because the new image shares the same perspective as the previous one, it can create a kind of paradox; you know something’s changed, but you’re not sure what. The eye and brain are constantly engaged (although on this small video rendering it may be hard to tell; again, think of a big bank of TV monitors), as the viewer is constantly challenged to re-evaluate what’s the same and what’s different as groups form and dissolve.

The audio for the piece is basically just the audio from each of the 24 individual screens mixed together. It happens to include the sounds of several different transportation mechanisms, which nicely underscores the idea of imminent departure. Occasionally you’ll notice the audio cutting out or in at the same time as one of the screens popping to a new image, reinforcing the structure of the piece. I wasn’t completely happy with the sound I captured on the camera’s little built-in microphones, so I wanted to filter it a bit, and once I got into filtering, I really liked the mood I got by notching certain harmonic sets of frequencies. But I also really liked the neutrality of the unfiltered sound, and I couldn’t decide if this was too much meddling or not, so in the end I have it both ways, with the notch filters algorithmically fading in and out. The filters’ base frequency changes at longer intervals, which gives the piece a higher level structure and periodically refreshes the ears by establishing a new tonal center. For a public installation, I would revisit the filtering behavior; ideally, if I could present this piece in the big warehouse I’m dreaming of, I’d tune the piece to the room’s resonances.
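The filtering behavior could be sketched like so (a Python sketch; the harmonic spacing, filter count, and fade period are guesses for illustration, not the piece’s actual values):

```python
import math

def notch_frequencies(base_hz, count=6):
    """A harmonic set of notch-filter center frequencies over a base."""
    return [base_hz * n for n in range(1, count + 1)]

def notch_depth(t, fade_period=60.0):
    """Slow 0-to-1 envelope fading the notch filters in and out."""
    return 0.5 - 0.5 * math.cos(2 * math.pi * t / fade_period)

# The base frequency changes at longer intervals, refreshing the
# tonal center; between changes, the depth envelope does the fading.
freqs = notch_frequencies(110.0)
```

Tuning `base_hz` to a room’s measured resonances would be the natural way to adapt this sketch to a specific installation space.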

I have no idea when I’ll actually have a chance to mount this as a public installation. Ideally, it should be displayed on a big bank of 24 TV screens mounted in an 8 x 3 array in a huge, dark, empty space. (If it strikes you that such a bank of TV’s would resemble the banks of monitors displaying airline departure times at an airport, you might be interested to know that in fact I did the first draft of this software while waiting overnight at Dulles International Airport in Washington, DC, for my connecting flight to Nairobi last January 29, seated across from just such a bank of monitors.) I like the idea of encountering it first from a distance, the images gradually coming into focus as you approach, with the bank of screens generating the only light in a room so big and dark you can’t see the walls. There should be a bench in front of the screens, or pillows, so people can hang out for a while, or maybe some stuffed armchairs, like at my brother’s place!

This is kind of the worst possible combination for a digital installation: expensive, but subtle. Typically, if someone invests in a big, 24-screen video wall, I guess they want something big and flashy, not quiet and contemplative like this. But if anyone would like to be the first to present it, that honor is yours for the taking!

I’d like to dedicate this piece to the Minnesota Houges, with love and gratitude.

Shanghai Traces

My newest video piece, Shanghai Traces, is just finishing its 7+ week run at OV Gallery in Shanghai.

It was part of the Make Over show, curated by OV’s Rebecca Catching as a response to the remarkable face lift that Shanghai has been undergoing for many months in anticipation of hosting the World Expo this year. Construction has been even more madcap than usual, with new subway lines being built, streets being repaved, and housing blocks being repainted several lanes deep (typically dingy bricks painted brick red, with the gaps between them painted caulk white). The Shanghai apartment I vacated last December was 23 floors up, more or less just across the river from the manic construction at the Expo grounds, and the dust that accumulated was amazing. I was considering bottling it and selling it, like Mount St. Helens ash.

Unfortunately, I managed to miss the entire Make Over show, as I was traveling first in the USA, then in Kenya (more on that later), but it seems to have been pretty well received; I’ll try to get some press up here soon. For those who, like me, were unable to check it out in person, here’s a sample rendering of my piece:

It’s a real-time, algorithmic (or, if you prefer, generative) video running continuously in Jitter on a computer screen in the gallery, which means that this video represents a unique configuration of images that will never occur again. I find the topic particularly well-suited to the medium of real-time video, as this is the way people move through a city, constantly coming and going; you never know who you’re going to bump into on the sidewalk. It’s a never-ending process. The same thing’s happening in the video: images fall in endless variation, their traces mingle, and then they disappear.

Here’s the blurb I wrote for the show catalogue:

Perhaps to a greater extent than most cities, many of Shanghai’s residents are just passing through. From the poorest migrant worker to the flushest CEO, people come from all over seeking opportunity, adventure, and fortune. Shanghai Traces is a meditation on the manner in which these passing characters leave their mark on the city. The tumbling images are cobbled together algorithmically in real-time, mirroring the interactions of the city’s inhabitants in ever-changing patterns and configurations. Some stay, many move on, but every life leaves a trace, however fleeting.

My video ran as a companion piece to furniture designer Jutta Friedrichs’s sculpture Paved Landscape. She conducted a series of interviews with street vendors who are being evicted from the city center during the Expo as part of the city’s beautification plan, and brought home an assortment of their brightly colored wares. She photographed each item for me to incorporate into my video, and then she encased them in resin and set the resin in a concrete shell with a plant inset. Here’s a photo:

Jutta Friedrichs's Paved Landscape

She also compiled a book describing the project and collecting the interviews she conducted. (Contrary to what That’s Shanghai reported, that part of the project was hers, not mine.)

Shanghai Traces exemplifies a lot of the ideas I’ve been using in my audio work, in videogames as well as installations. The computer is selecting parameters to keep things unpredictable, but items aren’t falling completely randomly; rather, you can notice that certain parts of the screen develop their own rhythms, sometimes speeding up or slowing down. There’s a list with a range of values for each item; you’ll notice that some items never change color, or that some items tend to be consistently bigger than others.
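That per-item constraint idea might look something like this sketch (Python; the item names, ranges, and palettes are all invented for illustration, not taken from the piece):

```python
import random

# Invented per-item ranges: each object type keeps its own fall rhythm,
# size band, and palette (a single-color palette never changes color).
ITEM_RANGES = {
    "kettle": {"period": (1.0, 1.8), "size": (40, 60), "colors": ["red", "orange"]},
    "stool": {"period": (3.0, 6.0), "size": (90, 130), "colors": ["blue"]},
    "basin": {"period": (2.0, 2.5), "size": (60, 110), "colors": ["green", "pink"]},
}

def spawn(name, rng):
    """Pick this instance's parameters from the item's own ranges."""
    r = ITEM_RANGES[name]
    return {
        "item": name,
        "period": rng.uniform(*r["period"]),  # seconds between falls
        "size": rng.uniform(*r["size"]),      # some items consistently bigger
        "color": rng.choice(r["colors"]),
    }

item = spawn("stool", random.Random(5))
```

Constraining the randomness per item, rather than drawing everything from one global distribution, is what lets each part of the screen develop its own recognizable rhythm.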

In addition to the video itself, I also made some limited edition, hi-res prints (88.9 cm x 66.7 cm), captured from the video and printed on archival paper, which are available for purchase from OV Gallery. They make excellent gifts. Here are two examples:

Shanghai Traces 2010119121916
Shanghai Traces 2010119120335

See some more on my Flickr feed!

The piece was originally conceived as a video-only piece, with no sound. I think in this silent form it works very well in a gallery setting, especially in a group show. After certain past experiences, I’ve gotten very picky about sound in galleries where other pieces are present. Nonetheless, when I started thinking about posting the video online, I decided to add an audio component, so folks wouldn’t waste time wondering what was wrong with the sound instead of just watching. So I implemented a simple idea I’ve been wanting to play with for a long time: from a stream of speech, little excerpts are plucked out and kind of frozen in time, suspended and fading out. In this case the stream of speech is a continuous babble of fragments chosen statistically from recordings of the interviews Jutta conducted. There are 30 layers of this behavior going on at once, creating a nice, ambient din. I like the way the delay time of the little frozen fragments matches the rate at which the vendor objects are falling down the screen. Further, the frozen bits sometimes take on a mechanical quality, evoking the jackhammers that have been enthusiastically tearing up the streets as part of the city’s renewal.
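A rough Python sketch of that 30-layer freezing behavior (all timing ranges here are my own illustrative guesses; the real version operates on live audio, not an event list):

```python
import random

def freeze_events(recording_len, total_time, layers=30, rng=None):
    """Each of 30 layers repeatedly plucks a short excerpt from the
    interview recordings and lets it hang, suspended and fading out."""
    rng = rng or random.Random()
    events = []
    for layer in range(layers):
        t = rng.uniform(0.0, 2.0)  # stagger the layers' start times
        while t < total_time:
            events.append({
                "layer": layer,
                "at": t,                                  # when it sounds
                "excerpt": rng.uniform(0, recording_len), # source position
                "length": rng.uniform(0.1, 0.5),          # plucked bit, s
                "tail": rng.uniform(2.0, 8.0),            # frozen fade, s
            })
            t += rng.uniform(0.5, 3.0)
    return events

events = freeze_events(recording_len=180.0, total_time=60.0,
                       rng=random.Random(0))
```

With 30 asynchronous layers of short, overlapping tails, no single voice is intelligible for long, which is what produces the ambient din described above.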

Like any good project, this one leaves me with lots of ideas for further work. I love the density of this piece, but after this experience I’m really curious to do some smaller scale studies, to really get into details of maybe just pairs of objects, and to finely hone the combinations of color and movement (which was kind of the opposite of the objective here). It would also be fun to contrast different types of behavior as part of a larger piece, perhaps spread across multiple video screens.

Big thanks to Rebecca for instigating this exhibition and inviting Jutta and me to participate! Thanks also to Jutta for being a sounding board for ideas, and for helping with the source photographs and audio!

It’s for this experimental film that nobody knows about and which I’m still figuring out what’s going to go in my experimental film.

Hey, look at this thing I made:

Study for Insomnia from Ben Houge on Vimeo.

This is what I spent the last 4 days doing, instead of everything else I should have been doing instead. I’m calling it “Study for Insomnia.”

It began as a demo for artist Cindy Ng 吴少英, whose video work is presented in the same room as my 路口 sound installation at Art+Shanghai (you have until November 1 to see the show, so hurry!). She was asking how my piece was constructed (audio “phrases” strung together and overlapped in real-time by a computer, providing constant variation), and I suggested that it would be possible to use the same technique to deploy her video work. I made a quick demo to show her last weekend before we collaborated on one of her “Ink Walk” performances, and she was intrigued to the point that we’re now planning a full-on collaboration. I’ve continued to flesh out my little demo over the last few days into something I’m pretty happy with, and it ought to serve as a pretty good template for whatever we end up doing together.

Like most of my electronic pieces, there’s no pure synthesis going on; everything is manipulated from source recordings, in this case, video and audio recordings of rumpled bed sheets (yes, the inspiration came while lying awake a few nights ago). The unpredictable patterns that result are similar to other kinds of natural phenomena from which I draw inspiration (e.g., rain, fireworks, traffic), chaotic on a small scale, but consistent on a large scale. Like my other installation work, the piece can continue indefinitely, with no beginning or end; here, for convenience, I’ve recorded only a brief excerpt.

I guess you could call this my first video piece. I’ve been using Jitter, the video-processing add-on for Max/MSP, for over a year, although this is the first time I’m presenting the results publicly. The first time I used it was in the software I developed for my performance (together with Yan Jun 颜峻 and Bruce Gremo) of Christian Marclay’s Screen Play at the Shanghai eArts Festival in October 2008, but it was used only to prepare cues for myself, not projected for the audience to see. (In fact, in the very first version of this little demo, I used a snippet from Screen Play as video source, a shot of rolling waves.) I’ve also used Jitter in some consulting I did for a videogame company last winter, analyzing audio signals to generate game levels.

I’m not sure yet if this will ultimately serve as the background texture of a bigger piece later on, or if it’s fine just as it is, as a super ambient standalone piece. I could see it working as a backdrop for a live performance. Or it could work in a gallery setting (ideally with 4 channels of sound). For what it’s worth, in terms of scale and mood, the 4-day process of putting this program together felt very much like writing a pop song.

What’s happening in the program is pretty simple. I’m picking little bits of my original bed sheet video and slowing them down, rotating them, and fading in and out from black. There are four asynchronous layers of this activity happening at once. The speeds are different, and there are slight hue offsets for each phrase. I adjust the contrast, hue, and saturation, and I add some feedback. For each video phrase, I pick an audio phrase from my original wave file (recorded separately from the video), and fade in and out, in time with the video. I’m also analyzing the average brightness of each video stream and using that to control the pitch of a bank of filters that the audio runs through before hitting the speakers. For each phrase I pick a different pitch multiplier (over roughly 4 octaves of a just intonation minor scale, not that it matters). Hue, feedback, filter Q, cutoff frequency, and probably a few other things are slowly modulated by random offsets to keep things interesting. I found there are lots of settings that produce different outcomes, all acceptable, so I implemented the old John Cage dictum: “Say we have a problem and 100 solutions; instead of choosing just one of them, we use them all.”
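The pitch-multiplier idea is easy to sketch outside of Max. The exact just intonation ratios I used don’t matter much (as noted above), but here’s roughly what picking one multiplier per phrase from 4 octaves of a JI natural minor scale looks like; the ratio set and function names are my choices for this illustration, not a transcription of the patch:

```python
import random

# One common just intonation natural minor scale (an assumption;
# the original patch's exact ratios aren't documented here).
JI_MINOR = [1/1, 9/8, 6/5, 4/3, 3/2, 8/5, 9/5]

def pitch_multipliers(octaves=4):
    """All available multipliers: each scale ratio doubled per octave."""
    return [ratio * 2**octave
            for octave in range(octaves)
            for ratio in JI_MINOR]

def pick_phrase_pitch():
    """Choose one multiplier per phrase, as the filter bank tuning."""
    return random.choice(pitch_multipliers())
```

Multiplying a base filter frequency by the chosen value retunes the whole bank for that phrase, which is all the patch needs.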

I ran into a tricky little trigonometry problem while working on this piece. I had to figure out the zoom ratio while rotating the images so that I wouldn’t go off the image and add weird black edges to the composite. I pondered long and hard, and I think I was at least looking in the right direction, but I eventually had to enlist the help of my pal Micah Sittig, who teaches physics over at SMIC Private School, and to whose class I gave a little presentation on music acoustics last March. He solved the problem in about a minute. It’s good to have smart friends. Thanks, Micah!
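For anyone curious, the problem has a tidy closed-form answer. I don’t have Micah’s derivation written down, but the standard approach is to rotate the frame instead of the image and require its bounding box to fit inside the zoomed image, which gives a per-axis constraint and a max over the two:

```python
import math

def cover_zoom(width, height, angle_rad):
    """Zoom factor keeping a W x H image covering a W x H frame
    after rotation by angle_rad.

    Rotating the frame by -angle, its bounding box is
    (W|cos| + H|sin|) wide and (W|sin| + H|cos|) tall; the zoomed
    image must be at least that big on each axis.
    """
    c = abs(math.cos(angle_rad))
    s = abs(math.sin(angle_rad))
    return max(c + (height / width) * s,   # horizontal constraint
               c + (width / height) * s)   # vertical constraint
```

Sanity checks: no rotation needs no zoom, and a square image rotated 45 degrees needs a factor of the square root of 2, which matches the familiar diagonal argument.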

Crazy coincidence: yesterday while working on this piece, I thought I’d take a break and listen to something new, and a CD of music by Esa-Pekka Salonen came to mind. I picked up this disc following one of the premiere performances of his fellow Finn Kaija Saariaho’s second opera Adriana Mater in Paris in 2006, but I had never listened to it. I feel like a CD, if I have any reason to believe it will be good, really ought to have a focused and concentrated first listening, and consequently, I have a huge stack of shame of CDs I’ve purchased, but haven’t yet had the time to devote to a proper first listen. Especially with all the hoo-ha about the new music director Gustavo Dudamel taking Salonen’s place at the Los Angeles Philharmonic, I’ve been thinking lately I really ought to listen to it. So today I picked it up and unwrapped it, and to my amazement, the last piece is also called Insomnia! I’d already been planning to call this new piece of mine something about insomnia for a few days, so I swear I didn’t steal the idea, unless it was buried somewhere deep in my subconscious. Anyway, I just listened to it, and it’s a really nice piece, though somewhat more detailed than mine. Isn’t that nuts?