The Return of Synth-pop

Allow me to share a few words concerning my synth-pop debut at the sold-out Antidote Electronic Music Festival in the Shanghai water town suburb of Zhujiajiao last Saturday.

This was the first time I’ve done a solo set like this since a three-song open mic night performance at the Art Bar in Seattle on July 17, 1997. And by “set like this” I mean full-on synth-pop, with me singing and playing some keyboard parts live on top of bright, intricate, rhythmic backing tracks (essentially the Depeche Mode recipe).

First step was to properly record all of the songs, and the last one, “Prebound,” was finished just over a week before the festival. I’ve started doing three mixes of each new song I record: one full version, one karaoke version (you never know), and one “music minus one” version, in which I mute the main keyboard tracks that I want to play live.

Once all five songs in my short set were written and recorded, I wrote three patches in Max/MSP to help me pull them off live.

The first patch behaves very similarly to Windows Media Player or Winamp or whatever: simple transport controls (play, stop, pause, resume), with a big slider at the bottom to instantly access any part of the song (primarily for rehearsing). I use this to play back the “music minus one” mix.
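For the curious, the transport logic boils down to something like this (a Python sketch, purely for illustration; the real thing is a Max/MSP patch, and these class and method names are made up):

```python
# A rough stand-in for my playback patch's transport controls:
# play, stop, pause, resume, plus a "seek" for the big slider.

class Transport:
    def __init__(self, length_ms):
        self.length_ms = length_ms   # total song length in milliseconds
        self.position = 0            # current playback position in ms
        self.playing = False

    def play(self):                  # start from the top
        self.position = 0
        self.playing = True

    def stop(self):                  # stop and rewind
        self.playing = False
        self.position = 0

    def pause(self):                 # stop, but keep the position
        self.playing = False

    def resume(self):                # pick up where we paused
        self.playing = True

    def seek(self, ms):              # the big slider: jump anywhere instantly
        self.position = max(0, min(ms, self.length_ms))

    def tick(self, elapsed_ms):      # advance the clock while playing
        if self.playing:
            self.position = min(self.position + elapsed_ms, self.length_ms)
            if self.position >= self.length_ms:
                self.playing = False
```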

Then I wrote a simple sample-playback synth in Max. I wasn’t about to haul all of my synthesizers out and set them up on stage, so I sampled the eight or so sounds I required. When MIDI note information comes in from my five-octave M-Audio Axiom keyboard controller (connected via USB), my program maps the sampled notes across the full keyboard range, with a simple attack/decay envelope applied. The result is generally not quite as dynamic or vibrant as the original sound, but close enough.
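The math behind that mapping is simple enough to sketch in a few lines of Python (the actual synth is a Max patch; these function names are my own invention): transposing a sample by n semitones just means playing it back at a rate of 2^(n/12), and the envelope is a plain linear attack/decay ramp.

```python
# Illustrative sketch of the sample-mapping logic in my sampler patch.

def playback_rate(incoming_note, sampled_note):
    """Semitone transposition expressed as a playback-speed ratio."""
    return 2.0 ** ((incoming_note - sampled_note) / 12.0)

def nearest_sample(incoming_note, sampled_notes):
    """Pick the sampled note closest to the incoming MIDI note,
    so no note has to be transposed further than necessary."""
    return min(sampled_notes, key=lambda s: abs(s - incoming_note))

def ad_envelope(t, attack, decay):
    """Simple linear attack/decay envelope: t in seconds, gain 0..1."""
    if t < 0:
        return 0.0
    if t < attack:
        return t / attack
    if t < attack + decay:
        return 1.0 - (t - attack) / decay
    return 0.0
```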

Last I wrote a patch that would track my current position in each song and load different sounds into my sampler at the necessary times. (My playback patch outputs the current song position in milliseconds, making this pretty easy to do.) If you start playing in the middle of a song, it’s smart enough to look back and see what the current patch should be and load that. I didn’t use any of Max’s sequencing objects for this, just a simple collection.
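In Python terms (the real version is a Max coll, but the logic is the same), the lookback is just a sorted search: find the last patch change at or before the current position.

```python
# Sketch of the patch-change tracker: given the song position in
# milliseconds, look back through a sorted list of (time, patch) pairs
# to find whichever patch should currently be loaded.

import bisect

def current_patch(changes, position_ms):
    """changes: list of (time_ms, patch_name) tuples, sorted by time.
    Returns the patch in effect at position_ms, or None if we're
    before the first change."""
    times = [t for t, _ in changes]
    i = bisect.bisect_right(times, position_ms) - 1
    if i < 0:
        return None
    return changes[i][1]
```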

None of this sounds super impressive, I guess. Altogether it probably took me about four days to do. I suppose a lot of people would have done this in a sequencer like Cubase, and I imagine that could work well enough (although I find Cubase a terrible patch librarian). The main advantage for me was that everything could be completely automated, so that on stage I just had to type 1, 2, 3, 4, or 5 to load one of my five songs, then hit the space bar to start, and all of my patch changes happened automatically, so I could focus instead on singing in tune.

The show went alright, not bad for a matinee. I was up first, so folks were still trickling in, but those who were there seemed to dig it well enough. There were a few flubs that I need to iron out with more practice. It was a terrible idea to follow “口口口口口口口口” with “Prebound”; after shouting at the top of my voice, it was very hard to keep that low falsetto in tune. Also, I borrowed a keyboard stand (from the friendly folks over at Resist! Resist!, resplendent in their fine debut performance!), and it was a little short for me, so my whole posture felt out of whack (though I tried to pass it off as an intentionally splayed and petulant rock stance); it was really silly of me not to practice and perform with my own keyboard stand.

Next steps: practice, buy my own keyboard stand, add more songs to the set, and update my sampler to handle keyboard splits! Then when all that’s solid, I’m going to work on real-time algorithmically generated visuals, but that’s a ways off, I think.

Big thanks to Michael and the Antidote crew for inviting me to participate, a super swell time!

口口口口口口口口

Hey, check out my new song, “口口口口口口口口.”

Perhaps some explanation is in order, especially for those readers living outside China (such as the Russian spambots that frequent my blog—hi, fellas!). The Chinese character 口 is written “kou” in Chinese pinyin, and it’s pronounced like “comb” without the “mb.” It’s pronounced with a falling-rising tone, classified as the third of Mandarin’s four tones. Especially when first learning Chinese, I would sometimes involuntarily bob my head while speaking to help me reproduce the proper inflection.

I’m no professional etymologist, but basically 口 represents a kind of archetypal mouth. It’s not used to refer to the mouth of a person (that would be 嘴, or zui, also third tone), but it’s used for doors (门口), entrances and exits (入口 and 出口, respectively), mouths of rivers, and such. It also functions as a component (or “radical”) of more complicated characters, usually indicating that the character has something to do with the mouth (as in the aforementioned 嘴), or that it’s an interjection of some kind.

Most foreigners living in China will be familiar with the phenomenon of receiving a Chinese text message on a phone that doesn’t recognize Chinese characters. The text is therefore displayed as a series of boxes, which, as the astute reader will have noted, closely resembles the character 口. So I used to make a little joke of feigning ignorance on the subject, delighting myself and friends with the apparent ability to read a text message composed entirely of 口’s. And that, pretty much, is the idea behind this song.

That’s Torturing Nurse’s Xu Cheng 徐程 on guitar. I wanted to do a noise solo, thinking about garbled communications and such, and while pondering how to accomplish this, I decided, well, why not just turn to the pros? I had gone to Xu Cheng’s house for a Torturing Nurse rehearsal in 2006 (Torturing Nurse being Shanghai’s seminal noise band, who just celebrated their 5th birthday 2 weeks ago) and recorded some of his guitar playing for the piece Mobile 3 that I performed at that year’s 2Pi Festival in Hangzhou. I asked Xu Cheng if I could reappropriate some of this material for this song, and he graciously agreed. I think it fits the song super well (in fact, I think the end result is probably more successful than Mobile 3).

I had a bit of an agenda for harmony in this song. A lot of pop music I hear seems to rely on volume, distortion, and aggressive delivery to convey, you know, angst or tension or whatever. But to me, harmony is the real source of angst in, for example, Nirvana’s best songs, distinguishing them from, you know, Warrant, or so many punk bands that posture angst on top of common practice era chord progressions that could have been lifted right from the pages of Mozart. So especially given that my sound palette mostly revolves around shiny synthesizer tones, I wanted to try to get the frustration and uncertainty of the song across harmonically.

The song is basically built around a whole tone scale, though I switch whole tone scales a few times. This shifting between whole tone scales (there are only two) happens with increasing frequency in the longer, louder second verse section, trying to settle into something that won’t be pinned down, but at the very end approximating enough of a major scale to suggest a half cadence. There’s usually a constant drone in the background, and tension derives from these two musical ideas trying to fit together somehow, to forge some meaningful relationship. I think this is the neat thing about harmony; it’s not just a metaphor for something not fitting in; it is literally the same thing.

At the same time there’s only one chord type in the whole song, a major triad, and it’s always presented in root position. I did something similar in “Hack Coo!” from Stranger Personals, a setting of personal ads from The Stranger for voice and piano, where almost defiantly optimistic major chords are lost in a cascade of other notes, depriving them of their tonal moorings. Since the roots of the chords conform to alternating whole tone scales, but the chord type is major, the hegemony of the whole-tone scale is constantly being thwarted by the fifth of the chord. At the same time, the constant transposition of this immutable voicing causes harmony to move towards the realm of timbre (like a pipe organ, or Ravel’s doubling of horn with piccolos on the upper partials in Bolero), so that the chord starts to fuse into a single musical entity.
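If you want to verify the arithmetic behind that thwarting, a few lines of Python will do it (my own illustration, obviously not part of the song): a perfect fifth is seven semitones, an odd interval, so the fifth of any major triad always lands in the opposite whole tone scale, while the major third (four semitones, even) stays put.

```python
# The two whole tone scales, as pitch classes 0-11.
WT0 = {0, 2, 4, 6, 8, 10}   # C  D  E  F# G# A#
WT1 = {1, 3, 5, 7, 9, 11}   # C# D# F  G  A  B

def major_triad(root):
    """Pitch classes of a root-position major triad."""
    return [root % 12, (root + 4) % 12, (root + 7) % 12]

def fifth_escapes(root):
    """True if the triad's fifth falls outside the root's whole tone scale
    (it always does: seven semitones flips the scale's parity)."""
    scale = WT0 if root % 12 in WT0 else WT1
    return (root + 7) % 12 not in scale
```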

The bridge breakdown is the only part of the song that exists entirely in a whole tone scale, with no perfect fifths to get in the way. While this keeps it from resolving in a traditional tonal way, the fact that it belongs all to one scale provides a kind of respite from the conflicts of the rest of the song, creating this brief cocoon of tentative intimacy before exploding again.

Around the time of his death earlier this year, I was rereading George Perle’s The Listening Composer, in which he points out the prominent role that symmetrical structures play in the music of Berg, Varese, Stravinsky, Bartok, and other giants of the early 20th century (an aspect that unifies these rather diverse composers). [A symmetrical structure is basically an interval sequence that eventually gets you back where you started. If you move by half-steps or fourths or fifths, you get the whole 12-tone chromatic scale; but if you move by whole steps, you get the 6-tone whole tone scale; a half-step plus a whole-step will get you the octatonic scale, etc.] So I still had these ideas on the brain, although I’m sure this song wouldn’t have earned much more than an eye roll from Mr. Perle. While the verses are mostly wandering adrift in whole tone land, the chorus and breakdown shout-out sections are working through different cyclical structures; for example the breakdown repeats the same material at (negative) minor third transpositions until arriving back on the initial pitch. The chorus pattern basically short-circuits a circle of fifths progression by the introduction of a minor third, so that the figure leads straight to the tritone transposition and back again. (For another, prime example of symmetrical partitioning in a pop lick, check out Prince’s “P Control” from 1995’s The Gold Experience.)
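Here’s the bracketed idea in executable form, a little Python function (mine, not Perle’s) that walks an interval pattern until it cycles back to its starting pitch class:

```python
# Generate a symmetrical structure: repeat an interval pattern (in
# semitones, mod 12) from a starting pitch class until the cycle closes.

def interval_cycle(start, *intervals):
    cycle = [start % 12]
    pc = start % 12
    while True:
        for step in intervals:
            pc = (pc + step) % 12
            if pc == cycle[0]:
                return cycle        # we're back where we started
            cycle.append(pc)
```

So whole steps give the six-note whole tone scale, half steps give all twelve chromatic notes, alternating half and whole steps give the octatonic scale, and descending minor thirds give the four-note cycle of the breakdown.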

My original concept for the vocal delivery of the song was to have it kind of shouted, kind of a rap or Sprechstimme, to keep things floating and unresolved, and at several points during production, I fought the urge to turn it into a conventional melody. There is kind of a hidden melody, a simple, slow-moving ascending figure in the fuzzy drone part during the verses, but I only vaguely follow the contour of it, not matching any pitches. As is probably quite apparent, I was thinking very much of Elvis Costello, in particular “Pump It Up” and “Playboy to a Man.” That squawking sound is something I’d heard Prince do (and Elvis, on rare occasion), but I never figured out how to do it (by inhaling) until going back to the source, some old James Brown recordings. Of course the shout-out stuff is very Prince inspired.

Can we have that 800 number again?

口口口口口口口口

Excellent.

[Addendum 9/22/2010: I forgot to mention there’s also an oblique reference to Sylvia Plath’s poem “Metaphors.” Extra points if you can find it!]

DJ Biff Jorgensen, SPS

Here’s a sample of what you get if you ask me to DJ:

Lusine, Ask You
Laurie Anderson (featuring Lou Reed), In Our Sleep
Vampire Weekend, Oxford Comma
Duran Duran, Big Bang Generation
Salif Keita, Africa
Prince, 3121
Jolin, 说爱你
Vanessa Paradis, Divine Idylle
Johnny Cash, Daddy Sang Bass
Carl Stone, Flint’s
DJ [Yamatsuka] Eye, Moth
Pierre Henry/Fatboy Slim, Psyche Rock
Wayne Horvitz & Pigpen, ‘Cause I’m in Love Yeah
Serge Gainsbourg, Love on the Beat
Big Audio Dynamite, Rush
Ben Houge, Jessica’s Scissors
Aphex Twin, 4
Christian Marclay, Frederic Chopin
Karlheinz Stockhausen, “Struktur XV” from Kontakte
Jay Chou, 牛仔很忙

I made my unwitting DJ debut at the launch party for the Mommy Foundation at CANART last Friday. I was originally asked to just do my thing, so I was planning to play the same type of ambient stuff I played at 2 Kolegas and D-22 in Beijing last month, basically presenting some of my real time ambient sound installation pieces as performances. But I increasingly got the sense that more danceable party fare was what was desired (especially after I heard the Antidote folks had already turned them down), so I ended up just playing a bunch of my favorite MP3’s. I actually prepared way more than what’s in this list, but didn’t have time to get to a lot of it; in particular, there should have been more Chinese rock. Also some Curtis Roads. And Christophe. And Naked City. Wait, I want to do it over!

I think I’m going to write a Max patch to help with transitions and stuff in the future, though that means I’ll have to rerip a bunch of music as MP3’s instead of WMA’s. Anyway, let me know if you want me to spin at your bat mitzvah or whatever; I’m officially on the market!

I did take advantage of the fact that a bunch of guys pretty much cleared the room by playing about 10 levels of Rock Band halfway through the evening to test the waters with some of my installation pieces, and actually, for a mellower, end-of-party scenario, Radiospace in particular focus grouped pretty well in a gallery environment, especially when I let folks tune to their own radio broadcasts.

And let me share a related observation (not necessarily leveled at this particular event). It seems a lot of arts folks, when planning art parties, after carefully coordinating all of the other details of the event to deliver a certain aesthetic experience, will nonetheless go for the knee-jerk, “Ooh, and we need a DJ!” solution when it comes to planning their party’s sonic environment. I encourage everyone to be equally vigilant when selecting the sound for their arts events, and to seek out music and performers who are aesthetically matched with the rest of the experience. There’s such a tremendous range of possibilities for party music other than the electronic dance music default, and art folks in particular should be sensitive to this fact. It always strikes me as odd when people who delve deep into visual aesthetics and issues and movements don’t exhibit the same curiosity and rigor towards sound.

Consult your qualified music consultant today!

Fat Art Lessons

(Before I dive in, let me draw your attention to a recap of the Fat Art show I did for the China Music Radar blog! Also, if you want a thorough description of the installation I did for the show in collaboration with Chen Hangfeng, check out my previous post on the subject.)

I’m trying to imagine what my reaction would be if I were to check out the Fat Art show as an impartial observer. According to the show introduction, “Music to My Eyes is an art exhibition with a difference: in each of the works created for the project, sound is an integral part of the visual presentation.” But it’s really not such a unique concept; I’ve seen many shows that try to do more or less the same thing, one at Duolun a few years back, that tent annex at the Shanghai Biennial in 2004, a recent Shanghai MOCA show, etc.

Not only is it not such an original idea, but it’s also not particularly well-advised; every time I go to one of these multimedia installation spectaculars, the result is cacophony, where no piece has the sonic space it requires to say its piece, and if a good piece is even to be found, it’s usually lost in the din. Even the current Nam June Paik show at Beijing’s Central Academy of Fine Arts suffers from this problem (it’s the worst of both worlds, in fact; the sound is thin and weak, yet still manages to bleed through from piece to piece).

And now, I, too, have fallen victim.

A critical aspect of my piece is that it’s processing real-time sound coming from two microphones positioned around the gallery. It’s an audio analogue of a kaleidoscope, performing the same kind of function that Hangfeng’s video piece does, fracturing the everyday world into something unexpected and, I think, beautiful. So when noise from other pieces in the show makes it impossible to tell that the sound on the speakers is a manipulated live feed, or even to distinguish what sound is coming from my installation as opposed to the piece next door, the point of the piece is lost. In this context, I have to say, my piece fails.

But of course, some of life’s most important lessons are born of failure, so I’ve done my best to derive some helpful maxims, some logistical, some practical, some aesthetic, to keep in mind for future work. The comments that follow are not necessarily limited to Fat Art, but are culled from my experiences and frustrations with a number of sound art exhibitions over the years, issues that were very much on my mind while installing our piece.

Most important: make sure you’re prepared to handle the unique challenges presented by a show of all sound-producing works.
I’ve never seen a show of all sound-producing works in which the pieces didn’t bleed into each other and diminish the experience. I think this is simply because most galleries and show organizers are not equipped to deal with sound. To successfully pull off a show like this requires the expertise of an acoustician and an audio engineer.

An understanding of the sound that will be generated by each piece is essential in determining the layout of the show, not just from description, but from actually hearing it (a challenge, of course, when the pieces are being developed specifically for the show, but one that must be addressed). Someone with some acoustical background should work to improve the acoustics of the gallery space (typically an afterthought) and help to sonically isolate pieces (typically a stab in the dark). An audio engineer’s expertise is needed when laying cables and positioning speakers, as well, to ensure that, for example, power cables and audio cables are not all run side by side (thus incurring interference). Finally, someone with trained ears also needs to be there to mix and set levels for the entire show in a systematic manner.

Given the proliferation of sound art, I’d say there’s a need for a new type of specialized “sound curator” to emerge to handle issues related to sound art and sound in galleries.

Not everything in a show about sound needs to generate sound.
There seems to be a pervasive assumption that if one sound-producing work is good, then a whole show of them is even better. But even apart from the practical considerations of acoustical isolation, the issue of aesthetic isolation remains. Exhibitions often derive their strength from a dialog between pieces, but in most new media shows, where pieces are often corralled into little cattle stalls (as with a recent exhibit at the South River Art Center), the goal is usually just the opposite.

A show about music or sound doesn’t need to include exclusively sound-producing works. There are so many mute images that nonetheless suggest rhythm or sound or music in their form or subject matter, and a sound installation might well benefit from proximity to visual works. I remember seeing an excellent Christian Marclay show at the Seattle Art Museum in 2004 that pulled this balance off expertly. Liu Ye’s paintings didn’t really need to have music by the artists depicted playing in the same room on an endless loop; maybe another piece in the show could have served as a soundtrack. A record label like Modern Sky could also capitalize on its album art, for example, the fine prints Jonathan Leijonhufvud created for the latest ReTROS album (actually on sale just around the corner at the Today Art Museum gift shop).

Bring your own gear.
We initially agreed that Hangfeng and I would provide all of our own equipment. This would allow us to work with the final hardware and ensure everything was functioning properly before arriving in Beijing. But then, in an effort to cut costs, it was decided that the museum would furnish all the gear instead. So I was greeted with a pair of “Vocal King” karaoke monitors when I arrived at the space, not exactly the “Tannoy 5A or equivalent” I had specified. And all of the mismatched projectors that had shown up for the various pieces that required them (including six for our piece) had to be sent back, as they were not up to snuff. Getting the cheapest gear ended up costing more money, time, and frustration in the end.

On the other hand, I believe Mathieu Borysevicz located all of his own gear, which allowed him to get up and running a lot sooner. I’m going to insist more firmly on this in the future.

Pay attention to acoustics.
According to the Fat Art magazine that doubles as the show’s catalogue, the Xinjiang artist Aniwar intended to create “a realm of complete and utter silence,” in which the only sound would be “the rise and fall of the breath, the pounding of blood in the veins, the roar of silence in the ears.” His comments closely mirror John Cage’s often repeated account of his visit to an anechoic chamber (in which, instead of silence, he was surprised to hear two sounds: the high frequency buzz of his nervous system and the low frequency throb of his circulatory system [a claim which always struck me as somewhat scientifically specious]) to an extent that I doubt is coincidental.

But if you want to build an anechoic chamber, you don’t turn to an abstract painter. His main technique was to line the walls with bolts of felt. In the end, the installation doesn’t even look finished, with bolts of felt standing next to the entrance, which suggests to me that when the other pieces started making sound (including a video looping right outside his room’s open door), he kind of gave up on the idea.

Given this approach, complete silence was clearly not going to happen. But when I popped in to check on progress halfway through installation, I was nonetheless struck by the sudden change in acoustical space. Museums (especially the renovated warehouses and factories popular in China) are generally not conceived with a consideration for acoustics, and with all the construction and yelling going on as people were setting up their pieces, his room did have a markedly different feel. I think a lot of the time people don’t pay attention to the acoustical characteristics of the space they’re in until it changes (unless it’s really bad). So this could be something fun to play with in the future, either as an artistic parameter on its own, or just to help set off some other sound-producing element.

Mapping space and time
It’s actually a testament to the catchiness of Ge Fei’s piece that I still don’t mind hearing it on the CD accompanying the Fat Art magazine, even after a week of it seeping through the walls of our installation and interfering with my own sound. You have to read the magazine to realize that the sound is actually derived directly from the painting by Xu Ruotao next to which it was looping. The methods used are not described, but I imagine the technique was to use a tool along the lines of Metasynth to transform an image into a sound. The result was a five minute file that endlessly looped on a portable music player.

From my perspective, the act of mapping between different sets of data is at the core of digital art, full of fascinating challenges and possibilities. (This kind of mapping, in fact, inspired the name of this very blog.) But from my conversations with others (even other artists conversant in new media), it seems there are many assumptions regarding the mapping of images and sound that go unquestioned, though they are far from the only approaches possible. I guess this stems from our general familiarity with the two most common methods of representing sound pictorially: music notation (x equals time, y equals pitch) and waveform displays (x equals time, y equals amplitude). But there is no innate characteristic of the x axis (or the y axis, or the color depth or brightness of any pixel or any other parameter) that signifies time. Ge Fei’s suggestion that the painting is five minutes long is therefore completely arbitrary (and I’d say, having spent some time with the painting myself, perhaps a bit generous).
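To make that arbitrariness concrete, here’s a toy sonification in Python that maps the same grid of pixel brightnesses to sound events two different ways, columns-as-time or rows-as-time; neither is any more “the sound of the image” than the other. (Pure illustration on my part; the magazine doesn’t describe Ge Fei’s actual method.)

```python
# Map a 2D image (brightness values 0..1) to a list of sound events,
# treating either axis as time. The choice of time axis, total duration,
# and frequency range are all arbitrary parameters of the mapping.

def sonify(pixels, x_is_time=True, duration=5.0, f_lo=100.0, f_hi=2000.0):
    """Returns a list of (onset_seconds, frequency_hz, amplitude) events."""
    rows, cols = len(pixels), len(pixels[0])
    if not x_is_time:
        # Transpose the grid so the y axis becomes the time axis instead.
        pixels = [[pixels[r][c] for r in range(rows)] for c in range(cols)]
        rows, cols = cols, rows
    events = []
    for t in range(cols):            # one axis becomes time...
        for f in range(rows):        # ...the other becomes frequency
            amp = pixels[f][t]
            if amp > 0:
                freq = f_lo + (f_hi - f_lo) * f / max(rows - 1, 1)
                events.append((duration * t / cols, freq, amp))
    return events
```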

Experiencing this piece made me ponder that there are a lot of ways to map space to time in a real-time system, something to potentially explore in future work. It seems to me it would have been more effective if the mapping were happening continuously in real-time, so that you could experience the sound as you experience a painting: as long as you want to, making your own beginning and end as you come and go. You could even use some head-tracking routine to generate music based on the area of the canvas being examined.

Plan for adequate ventilation.
Our room was a small, custom-built hexagon inside one of the main galleries. I’m not sure if it’s due to the quality of construction materials used, or due to the lack of ventilation, but after a while the room started to stink, to the point where I saw a few people enter, take one whiff, make a face, and leave immediately. In the end the imperfect solution was to put the curtain aside to air the room out when the museum was closed.

This would have been less of an issue if the walls weren’t feebly trying to block out sound from other pieces; they might have been made from a more porous material. But if the walls must serve as soundproofing, then issues of ventilation (one of the trickiest issues in building a sound booth, as I learned when we installed one at my former office at Sierra) must also be addressed.

A more whimsical thought: while our experience brought the subject of smell to the fore, my friend Defne has also been collaborating with a perfumer to create the smell of the moon for her upcoming Futurist event. Tagging the subject for future research…

Even if you don’t need to be there, be there.
There’s really no reason I should have had to be on site for much of the set-up of our piece; there were workers there to hang cables, paint walls, etc. Most of the time I felt I could have been more productive fine-tuning my Max patch back at the hotel room than hanging around on-site amid the cacophony and astringent fumes of construction.

But not only was I able to catch some mistakes in installation (you can’t run power cables and audio cables next to each other), but when I was there, our piece’s needs simply got more attention. If something wasn’t happening, and I started doing it myself, help would suddenly materialize.

This really isn’t unique to putting on an art show; it’s general project management, just as true for a videogame. In fact, I was struck several times by the similarities between setting up for a big event like this one, and getting a videogame out the door.

And a few longstanding maxims were reinforced.
No loops!
If I have a mantra, this is it, finely honed from 12 years of audio development for videogames (i.e., real-time, digital systems). I think anyone who decides to loop a piece to make it run indefinitely in a gallery fundamentally doesn’t understand the medium of installation. I’ll expound upon this more in another post; basically a loop is the least creative answer to a very interesting question.

The refrigerator door effect
The only pieces that were really interactive at the show were Wang Bo’s and (maybe, depending on how it was supposed to be working) Yan Lei’s. Wang Bo’s piece included some of his cartoon characters rendered in life-sized plastic that cried out in pain when struck. This type of interaction, so distressingly common in digital artworks, is analogous to a refrigerator door: when you open the door, the light goes on, a simple one-to-one correspondence that, once observed, offers very little in terms of replay value. Further, behind these plastic figures, an animation of the same characters being menaced by a monstrous figure also looped, so that the piece actually broke two of my cardinal rules. And it must be said that the basic audio elements on such incessant display were poorly balanced in volume and timbre, offering no illusion that they were emanating from organic personages in a common acoustical space.

Let film be film, and let installation be installation
I was planning a big post on this topic to register my disappointment with the SH Contemporary show last fall; maybe I’ll still get around to it. What I noticed there was that almost all of the video art, except for Bill Viola and a Korean artist whose name I’ve forgotten, was unduly beholden to the conventions of film. I don’t want to get mired down in semantics, but for me the most useful distinction is that film (including “films” shot on digital cameras) is about providing surrogate eyes, occurring in a dark room that is designed to make you forget you’re in a room at all; by contrast, video installation exists in a space, as an object.

To me, Mathieu’s piece falls squarely into the former category; I think it’s an eloquent film, and here as well as in other works of his that I’ve seen, he shows a particular knack for multiple channel narrative. But since his film so clearly presents a narrative arc, I found it frustrating to encounter his piece at the top of the stairs to the exhibition’s second floor, where you’re almost guaranteed to start watching the film somewhere in the middle, then watch through to the end, then keep watching from the beginning until you get to the point where you came in, then try to piece the whole thing together into a coherent narrative in your head.

I’ve been proposing a simple solution to this problem for years: a countdown timer to the next show time! I’ve yet to see anyone try this.

I actually thought Sun Lei’s and Pei Li’s pieces both worked better as installations, even though they were also looping, since they were dramatically flat, more a series of tableaux than a story. It doesn’t really matter when you come and go.

In closing
Hope this doesn’t all come off as too grumpy; as my surliest composition professor, Richard Karpen, once said, if everyone simply applauds and says, “Great piece,” you never make any progress. In the end, despite some frustrations, it was a fun and rewarding experience, and I got a chance to work with many lovely people in the process, deepening my relationship with Hangfeng, getting to know other artists like Sun Lei and Pei Li, plus all the indefatigable folks who organized the show, Karen (particularly spry in addressing unforeseen challenges during set-up, and an unwobbling pivot throughout development) and her lovely assistant Lauren, Shen Yue and the tireless Ji Su from Modern Sky, the gregarious Liu Yitao from TAM, and so many others. Let’s do it again sometime!

Kaleidoscope Wallpaper

My collaborative installation with Chen Hangfeng, which I guess we’re calling Kaleidoscope Wallpaper, or maybe just Kaleidoscope, is up and running at the Today Art Museum. We’ve been here in Beijing setting it up since April 11th, and the show opened to the public on April 17th. After a weekend of opening activities, including a lecture by Hangfeng and me yesterday, he’s headed back to Shanghai, while I’m sticking around to perform another laptop set at D-22 next Thursday night. The show runs through May 3. Get a taste of what our piece sounds like here.

I wrote at length about the work as it existed when I performed it at Brainwave Communication last month. Most of what I wrote then is still true, but I thought I’d take a moment to bring you up to date with the final piece, as it is running in the museum right now, across the pedestrian walkway from where I sit typing this at Unconditional Love Coffee. Before reading the following, you might want to brush up.

When I did the piece as a concert piece at Brainwave Communication (and again last week at 2 Kolegas in Beijing), I was providing the high level brain (such as it is) of the piece, deciding when to turn things on/off, setting volumes, etc. Since I’m not on hand to perform this role as the piece runs continuously as a gallery installation, I wrote another high level “brain” to serve this purpose. At periodic intervals (between 45 seconds and 2 minutes), it essentially flips a coin to decide if each of the 3 main filter behaviors (steady chords, independent swoops, or rhythmic pulses) should be on or off. When the steady chords are selected, another choice is made to turn on one, two, or three layers of chords, which may overlap independently. If all three behaviors turn up negative, a fourth, unique behavior is triggered: the source sound is turned up to prominence, giving listeners a chance to hear what’s behind the filters more clearly and make the connection to the real-time audio feed from the mics. I also take this opportunity to pick a new scale (harmony still functioning as described in my original post) for when the filters come in again.
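The brain’s decision logic is simple enough to sketch in Python (the installation itself is all Max/MSP; the behavior names here are my shorthand for the three filter behaviors described above):

```python
# One scheduling decision for the installation "brain": flip a coin for
# each of the three filter behaviors; if all come up off, trigger the
# fourth scene (raw mic feed up, pick a new scale for the next spurt).

import random

def next_decision(rng=random):
    wait = rng.uniform(45.0, 120.0)            # seconds until next decision
    behaviors = {
        "steady_chords": rng.random() < 0.5,
        "swoops": rng.random() < 0.5,
        "pulses": rng.random() < 0.5,
    }
    if behaviors["steady_chords"]:
        # one, two, or three independently overlapping chord layers
        behaviors["chord_layers"] = rng.randint(1, 3)
    raw_feed_scene = not any(
        behaviors[k] for k in ("steady_chords", "swoops", "pulses"))
    if raw_feed_scene:
        behaviors["new_scale"] = True          # re-pick the harmony
    return wait, behaviors, raw_feed_scene
```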

So the basic idea of four “scenes” is still kind of true, but they’re not completely equal. The fourth scene, which is the unmanipulated audio stream from the mics, serves as a buffer between larger spurts of filter behavior (i.e., the other three scenes), and also between different harmonic areas. This unfiltered audio signal is actually constantly present, though at a lower volume most of the time, with random functions making it louder at some times than others, independent of the other filter behaviors.

Most of my work since Brainwave Communication has actually been on the audio manipulation before it gets to the filters, which still pretty much function as they did back then. The audio signal path is basically this:

The signal comes in from two mics (one inside, one outside the gallery).
The volumes are adjusted (i.e., cranked way up).
The two channels are mixed together by a random function; each mic gets a different, constantly changing mix.
The channels are compressed (like, severely, to try to handle anything that might be coming in).
Each channel gets two random delays, which are constantly changing, up to six seconds. When the rate of delay changes, a natural artifact is that the pitch also changes, which results in a cool, kind of scrubbing effect. The delays are intended to fracture time, analogous to how a kaleidoscope fractures space. The volume of the delayed signals also fades in and out according to a random function.
Then the signal goes to the filters, and the output of the filters, plus a bit of the unfiltered signal (volume varying, as mentioned above), is sent out to the speakers.
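Two of the quirkier links in that chain, the constantly drifting mic mix and the gliding delays, might be clearer in code. Here’s a rough Python sketch of each, one sample at a time; the actual versions are Max/MSP objects, and every constant here (drift amounts, glide rate, retarget probability) is illustrative.

```python
import random

class RandomMix:
    """Blend two mic channels by a slowly drifting random weight;
    each output channel gets its own instance, hence its own mix."""
    def __init__(self):
        self.weight = random.random()

    def step(self, mic_a, mic_b):
        # Drift the mix weight a little each sample, clamped to [0, 1].
        self.weight = min(1.0, max(0.0, self.weight + random.uniform(-0.05, 0.05)))
        return self.weight * mic_a + (1.0 - self.weight) * mic_b

class GlidingDelay:
    """Delay line (up to 6 seconds) whose delay time glides toward
    random targets; changing the delay time on the fly is what
    produces the pitch-scrubbing artifact."""
    def __init__(self, sr=44100, max_s=6.0):
        self.buf = [0.0] * int(sr * max_s)
        self.write = 0
        self.delay = self.target = random.uniform(0, len(self.buf) - 1)

    def step(self, x):
        self.buf[self.write] = x
        self.write = (self.write + 1) % len(self.buf)
        # Occasionally pick a new target delay, then glide toward it;
        # while gliding, the read point moves faster or slower than
        # real time, shifting the pitch of the delayed signal.
        if random.random() < 0.001:
            self.target = random.uniform(0, len(self.buf) - 1)
        self.delay += 0.01 * (self.target - self.delay)
        read = int(self.write - self.delay) % len(self.buf)
        return self.buf[read]
```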

I actually spent almost two weeks working on two additional features that I eventually cut from the piece, although I’ll probably be able to put them to good use in the future. I was starting to fear that it would be too quiet in the gallery at times, that there wouldn’t be enough of a raw signal for the filters to work on. So I developed this idea of short-term and long-term memory, to supplement the present in which the piece was originally designed to operate. The short-term memory would periodically record the signal from the mics into a set of buffers, triggered when the incoming signal crossed a certain threshold, so that only loud sounds were captured, which would then be periodically played back. This memory would be erased every night when the computer is turned off, so you’d only hear sounds recorded earlier in the day. The long-term memory would sense when the incoming signal had stayed beneath a certain threshold for a certain amount of time, and then supplement that signal with material recorded earlier; the plan was to record during the opening week festivities, so you could also sometimes hear back to the beginning of the installation. Kind of interesting ideas, but in practice, they only served to homogenize the output, so that the piece always sounded more or less the same, and they obscured the main idea of the piece, which is that the sound is being processed in real time, like a kaleidoscope.
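For the record, the short-term memory worked roughly like this, sketched in Python. (The cut feature was a Max/MSP patch; the threshold, buffer count, and buffer length here are all illustrative.)

```python
import random

class ShortTermMemory:
    """Record into a pool of buffers only when the input is loud,
    and forget everything at the end of the day."""
    def __init__(self, threshold=0.5, n_buffers=8, buf_len=44100):
        self.threshold = threshold
        self.n_buffers = n_buffers
        self.buf_len = buf_len
        self.buffers = []      # the day's remembered loud moments
        self.recording = None  # buffer currently being filled, if any

    def step(self, sample):
        if self.recording is None and abs(sample) > self.threshold:
            self.recording = []  # a loud sound triggers a capture
        if self.recording is not None:
            self.recording.append(sample)
            if len(self.recording) >= self.buf_len:
                self.buffers.append(self.recording)
                self.buffers = self.buffers[-self.n_buffers:]  # keep newest
                self.recording = None

    def recall(self):
        # Periodically play back one remembered loud moment.
        return random.choice(self.buffers) if self.buffers else None

    def sleep(self):
        self.buffers = []  # nightly erasure when the computer shuts down
```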

There are a couple of areas I’ve tagged for future research. (For me at least, a good piece is a kind of snapshot of a way of thinking, and it ought to foment ideas for future development.) In general I want to have more coordination between the different behaviors in the piece. An early idea I never got around to implementing was that the filter swoops would sometimes move in little duets, if they happened to start moving at the same time. Another idea is to expand the rhythmic behavior, so that not all six filters are necessarily going at once, and that they’re not all necessarily going at the same beat multiple. In general I want to expand the beat multiple idea, to allow for more sophisticated, non-integer tempo relationships. And the big thing I really want to push towards is a more organic high level evolution of the piece, so that it’s not just a timer deciding when things should change, but that the behaviors somehow decide themselves when to give way to something new. I have some strategies in mind I want to try out for how to accomplish this, but probably for the current installation, a timer works best; it’s definitely safer, to guarantee the piece doesn’t get stuck doing something really boring.

So that’s basically it. I’ve got to say that it’s not as successful in the final installation as I had hoped, since the other pieces nearby really interfere with what was meant to be a subtle, slowly changing, ambient experience. I’ve never been to a show of all sound-producing works that handles this challenge well, and Music to My Eyes is no exception. But I’ll rant more about that another time. If you want to hear the piece, it’s up at the Today Art Museum through May 3, and it’s free, so swing on by. Or come hear me play D-22 next Thursday to hear the piece done as a performance!

Song and Me

I’m getting all the boring singer/songwriter patter out of the way here, where it’s easy to ignore, so I won’t bore everyone at this Sunday’s gig. If you want, just skip to the end of this post for a peek at Sunday’s set list. Don’t forget the details: this Sunday, March 22, 2009, at Yu Yin Tang, 1731 Yan’an Xi Lu (near Kaixuan Lu), 8pm, 30 RMB, opening for 10!

I’ve been writing pop songs since about 6th grade. That would be around 1986, when I was about 11. My first song was called “Blue Eyes,” co-written with fellow missionary kid Andy Laesch. We had a band we called Center of the Earth, which we, as pious MK’s, eventually decided had infernal undertones, so we renamed our duo Outer Space. We wrote a bunch of songs of which I could still hum a few bars, with titles like “Electricity” and “Midnight Spooks,” and we recorded them into a little boom box, doubling up on vocal duties, with me accompanying on the little Casiotone keyboard I got for my 9th birthday, shortly after my family moved to Liberia.

I saw it one day—blue eyes
I knew it right away—blue eyes
So clear from the start—blue eyes
It brought love to my heart—blue eyes

Oh, oh, blue eyes
Oh, oh, blue eyes
Blue eyes

Even before that, I remember putting together an instrumental, auto-chord extravaganza, featuring such titles as “Just Noise???”, now lost to the ages.

I kept writing songs in junior high and high school, while attending the International Christian Academy in Bouaké, Côte d’Ivoire, switching to Christian themes, along the lines of the Petra, Michael W. Smith, Steve Taylor, Benny Hester, Steve Camp, White Heart, and Randy Stonehill cassettes I was listening to at the time. By far the standout hit of those boarding school years was “Rainbow,” co-written with “Guitar Man” Dan Pinkston (who now has a DMA in music and teaches at Simpson College). We performed this snappy tune with our band The Utensils (which at times also included staff members Kurt Werner and Brad Trosen on bass) around our school campus, in chapel, or just for pals.

My songwriting hit a new apex with “Epilogue,” mostly composed on the plane trip from Africa back to the US in the summer of 1990, following my 10th grade year. I spent my last two years of high school in Seward, NE, and I would often play my little songs on my friend Kathryn’s piano (much more often, in fact, than I actually had a willing audience). “Epilogue” was generally the most warmly received (unless I tossed in some Richard Marx).

I got my first synthesizer in the summer of 1990, the mighty Roland D-20 workstation (with a built-in 8 track sequencer + drum machine), and I set about sequencing synth-pop arrangements of my tunes, producing the better part of two “albums” in these two years. You will never hear them. The first, Nine Generic Love Songs, included “Epilogue” and was written during my junior year for the girl for whom I yet pined back at boarding school. The second was pulled together during my sophomore year of college and was eventually entitled Titled Untitled, comprising 17 songs mostly written during my senior year of high school, although two songs dated from my boarding school years (including a synth-pop version of “Rainbow”), and a few newer tunes also slipped in. While working on the first master in January 1994, I felt embarrassed that I was spending so much time on such ancient material, and that my college girlfriend was unrepresented, so I added the track “I Tell Her Everything,” by far the best thing on the album.

I did some really wacky stuff in high school, digging deep into the synthesis potential of my Roland D-20 and experimenting with odd meters, sudden harmonic shifts to distantly related key areas, microtonality, polytonality, even random procedures, a sign of things to come, I guess, as in the instrumental track “Genevieve” (the middle name of a girl I smooched at show choir camp), which dates from late 1991. The percussion tracks were recorded as a series of overdubs with the volume turned off, so I didn’t know where I was playing in relation to the beat or previous takes, an idea I think I got from a Keyboard magazine article.

At first I recorded my sequences and overdubs on a little cassette 4-track I borrowed from my high school band teacher. Later in college I bought a second-hand Tascam 238 Syncaset 8-track tape recorder and made new recordings; I continued to remix and rerecord these songs for quite a while, eventually bolstering Nine Generic Love Songs with four thematically related “outtakes,” and finally producing a digital master after moving to Seattle in 1996. Good practice, I guess.

I continued to write songs after commencing studies at St. Olaf College, but as a composition major, I was also starting to branch out into other kinds of writing. I would often try to slip some of the new ideas I was learning in music theory class into my songs, such as a German augmented sixth chord in “One-sided” (written as a homework assignment) and common tone modulation by way of an augmented chord in “The Verge of a Girlfriend.” The instrumental track “Jim/James” was the result of a homework assignment to write a minuet and trio (co-written by classmate JP Moninger, my partner on the assignment), and also snuck onto Titled Untitled. Most of my new songs were for my college sweetheart, with a few exceptions. I once wrote a grunge song for the cover band in which I played, Dirty Bath, entitled “Kill Fred,” a hateful diatribe against an incompetent sound engineer we had at one gig.

Put a gun to his head
Kill Fred
Make him bleed; it’s so red
Kill Fred

It was actually kind of a funny song (in Phrygian mode, which we had recently been studying).

After college, I moved to Seattle, and I kept writing songs, almost exclusively, and perhaps somewhat neurotically, about the college girlfriend who broke my heart in the end. Eventually I started to write about other people, but almost invariably the subjects would revolve around my striking out with girls, though I tried to maintain a modicum of wit about it. One happy exception was “First Dance,” composed for the wedding of Cheryl (my former French teacher from boarding school) and James Cloyd.

At any point since high school, if you had asked me what my next album was going to be called, I would have been able to tell you. After Titled Untitled, I had planned a sprawling quadruple album entitled Our Unique Culture, which would bring me totally up to date with everything worthwhile I’d ever written, or even started to write. In college, I was working on an album called Whatever, which later became Stark Originality. I’m sure there were other album titles I’ve forgotten. At one point I planned a rock album. Then in the early Seattle years, it was a 12-song concept album about the aforementioned college sweetheart entitled I’ll Never Make the Same Mistake Twice Again (hmmm, bitter much?). Then I thought I’d better just sweep everything I had into one collection and move on to something new; at first I was going to polish and rerecord everything and call the compilation Jot Down a Quick Note, later shortened to Jot Quicky. But in the end, I just burned CD-R’s of whatever half-baked demos I had lying around for friends, christening the 15-song compilation Dumb Songs and Demos.

After a while, somewhere around 2000, I just stopped writing songs. I’d gotten busy composing the string quartet soundtrack to the computer game Arcanum (2001), which took me most of 2000 to complete. I was also writing a lot of choral music for the church choir I was in. I’d continue to have ideas for songs, but I’d never flesh them out or record them. At the time I felt demoralized by indifference, but in retrospect, I was doing almost nothing to get my songs heard by anyone outside the circle of my immediate acquaintances. The last song I wrote in Seattle was “Kiss Locally,” sparked by the way my pal Mike was able to breathe new life into some of my older tunes with killer rock arrangements. (And these arrangements are finally seeing the light of day as 3 Heart-Shaped Cookies, available now!)

In retrospect, my last year or two in Seattle was ripe for a pop renaissance. I had finally assembled a perfect little computer-based home studio, my longstanding goal since moving to Seattle, rounded out by the acquisition of the Roland JP-8000 and JV-2080 that I was able to retain from my studio at Sierra when they finally shuttered their Bellevue office. And I was starting to perform regularly in a new band, Subpoenaed Lemur, at the instigation of my dear pal Korby; I don’t know if I can truthfully say we were garnering a following, but we were playing around town quite a bit and having a blast. But at the same time I was in the throes of my master’s degree in composition at UW, while continuing to work full time in the games biz, leaving little time for pop dalliances. It’s an irony of history that just as the last pieces of my home studio fell into place, I had shifted focus almost entirely to doing computer music in Max/MSP. And to this day I feel shame that I never pulled my weight in the band; continued respect to Korby for doing all the booking, preparing all our backing tracks, and running all the rehearsals.

When I decided to move to China in 2004, I had to figure out what to do with my studio gear, and each option seemed like a losing proposition. I could sell it all and lose money and regret it later; I could pay to store it as its value steadily declined; or I could pay to ship everything over to China. In the end I brought it all with me to Shanghai, where it languished in unopened boxes for about 3 years, as I continued to focus on computer music.

I guess what got me writing songs again was a trip to Vietnam with Jutta in October 2007. It sounds silly and cliché, but it was a time of intense emotion, and I didn’t know how else to express what I was feeling than in a song. By the time we returned to Shanghai, “My Heart is a River in Flood” was pretty much sketched out, though it took a few more months to work out some harmonic details and record it. In the meantime I had started writing “EndWar,” which despite its commercial provenance was genuinely the result of good, old fashioned passion and inspiration (working on one game for 3.5 years will do that to you). And “Jessica’s Scissors” ensued shortly from a brazen bar boast; our friend Jessica, an instructor at the Vidal Sassoon academy, was celebrating her birthday at Logo, and I offered to write her a song in exchange for a free haircut, perhaps not the best bargain I’ve ever struck. All of this was enough to finally pull my studio gear out of mothballs and wire everything up.

And so pop songwriting has once again finally come to the fore. I’ve had this idea of doing an album about my experience living in Shanghai ever since I got here, but it’s languished on the back burner for years. But now I’m committed to finishing it in 2009; once I’m back from setting up this installation in Beijing next month, I will be all about Shanghai Travelogue. All the new songs I’ve been writing recently (two are already lined up and ready to record) are going towards that release.

Despite this long history of pop songcraft, I can probably count on one hand the number of times I’ve performed a set of my pop songs in public (excluding the songs we did with Subpoenaed Lemur: “Love on TV,” “Kiss Locally,” “Late Life,” and “Our Newfound Skill”). I did a short ½ hour set on some festival line-up in college, a few tunes at another open mic night in college, 3 more at an open mic night shortly after moving to Seattle at the Art Bar on 2nd…maybe that’s it? (After the Art Bar performance, the host, who I think may have been Ted Narcotic, commented, “Hmm, you’ve got kind of a Tiny Tim/John Tesh thing going on there, don’t you?”)

So perhaps this Sunday will be my first full gig of original pop songs ever. Took me long enough!

Here’s the set list, with approximate dates of composition. If a song title is highlighted, click on it to listen!

Love on TV* (1997)
Our Newfound Skill (1998)
Late Life* (~1999)
I Tell Her Everything (1994)
Like Vaseline (~1999)
Kiss Locally* (2003)
Cold (2009)
I and My Neurosis (~1999)
First Dance (~1998)
My Heart is a River in Flood (2007)
I’m Not Drinking Alone (When I’m Thinking of You) (~1997)
Jessica’s Scissors (2008)
EndWar (2008)

* included in the brand new rock’n’roll EP 3 Heart-Shaped Cookies

This show will be “quasi-acoustic,” meaning that I’ll be singing and accompanying myself on a keyboard with no computer trickery. It would have been fully acoustic if Yu Yin Tang had a piano. Later this year I’m planning to make the leap to full-fledged synth-pop performances. I always felt ashamed to be performing alone with only sequenced accompaniment (despite the fact that Depeche Mode and Nine Inch Nails have made quite profitable careers based on this approach). Now that I’ve witnessed the “electronica” revolution of 1999, followed by my discovery of China’s karaoke culture, I think it’s time for me to overcome those old reservations.

Wow, what a long, boring post. Thank you very much for listening. Good night. Enjoy your steak.

EndWar Audio Demystified

I finally got around to adding an EndWar audio page to my website, which collects some of the more salient information regarding my work on this game for the better part of the past four years.  Check it out, if you care.

I also added some of the more insightful press blurbs pertaining to EndWar audio that I could find (omitting the mean ones) on my press page.  Generally, folks seem to be pleased with the audio (if they notice it at all).

Also, Ubisoft has launched a contest, inviting everyone to create user videos for the “EndWar” song I wrote and performed in collaboration with 99 Men (obligatory 3 Heart-Shaped Cookies plug), which means now anyone can download and enjoy this lusty refrain. Hurry, you have until February 17 to submit your entry!

And let me add a preemptive disclaimer about the following video, now making the rounds on YouTube in conjunction with the aforementioned contest. This performance was absolutely not intended for public consumption. It was a last minute stab at cheap entertainment for a team party.  I came straight from working at my desk, and I didn’t rehearse a lick, which explains the ill-timed invitations to clap or sing along, as well as the ill-advised air guitar.  Also, Mike (of 99 Men) transposed the song down from e minor to d minor, which rendered the bridge too low for falsetto and too high for full voice (Mike sings the bridge on the recording).  That said, the Shanghai skyline backdrop is kinda cool.

3 Heart-Shaped Cookies

I just uploaded my new EP (I guess it’s my first real album) to a bunch of sites for digital distribution. It will take a few weeks, I guess, to percolate through the system, but theoretically you should soon be able to download it on iTunes, Napster, Rhapsody, eMusic, and Amazon. Drop me a line if you wish to be notified when this happens. I hope to follow it up with a physical release in the coming weeks.

3 Heart-Shaped Cookies

 

I just spent a long time setting up this special webpage for the album, so I won’t duplicate all the info here. It’s actually a collection of 3 older tunes I did with my pal Mike Caviezel (mastermind behind the band 99 Men) back in Seattle. I teamed up again with Mike to produce a song that plays over the end credits in EndWar, so it seemed a good time to get these prior collaborations out in the open.

And though it certainly wasn’t the most important thing going on in the world yesterday, Tom Clancy’s EndWar, my life for the past 3.5 years, was finally released (release date chosen by the French). Go buy it!

The Power of Music

I just got back from giving a presentation on sound art at Raffles Design Institute on the campus of Donghua University, about a block from Yu Yin Tang. The hallways are emblazoned with photographs of people like Jean-Paul Gaultier and Stella McCartney, and the glass door to each room is inscribed with maxims such as “Globalization is possible when a brand is built into a cultural stereotype,” “Brands are relationships; there’s nothing else,” “Success financially is a measure of creative success; it is the same in all art” (that one was Stella’s), and my favorite, “The power of music is branding.” It was even creepier than the thought of classrooms full of students being serious about fashion marketing.

But my personal interactions were all quite agreeable. I was there at the behest of the effervescent multimedia instructor and aspiring DJ Raquel Assis, to speak to her “New Media and Environments” class. My presentation seemed to be quite well-received, although I always feel a bit like I’m cheating, since all I have to do is say, “I’ve been making videogames for the past 12 years,” and everyone snaps to attention.

Most of the questions afterwards came from other professors sitting in. One asked how living in China has influenced my work, which was an interesting question, because the piece I had just played was “Radiospace,” which on the surface (in this particular rendering) has lots of snippets of Chinese speech and pop songs, but is actually not about content at all. It’s a real-time program that uses radio broadcasts as source material for audio manipulation, and was actually completed back in Seattle, where the program mangles Britney Spears instead of Jay Chou. A lot of my work is kind of ambivalent about content, focusing instead on structure, and that’s particularly true of this piece.

But back to the question, while I am an enthusiastic student of Chinese culture, I don’t hear a specific Chinese influence on my work in any overt sense, like using pentatonic scales or whatever (I wouldn’t have to come to China to pick that up, anyway). When I hear something in Chinese music that is relevant to the issues that concern me, I do, of course, take note (I cited a recording of a Buddhist ritual in a Shanghai temple on the French Ocora label, “Chine Fanbai: Chant liturgique bouddhique” [I guess that’s 梵呗], as an example of a static musical behavior). But I’m probably more influenced by the rush of modernity, trying to parse and correlate multiple streams of information, the hum and buzz of a huge, constantly evolving city like Shanghai.

Another question was about the limits of randomness in a closed computer system, and I replied that art made from random numbers is just like art made from pipe cleaners. There’s nothing particularly interesting about pipe cleaners, but I’m sure somebody can make some really neat things out of them; the art emerges from form and relationships. There’s plenty of randomness in a computer. You can progress from sample transforms to synthesis, continuing to insert random choices at every stage of your audio-generating function until you’re sending a stream of random numbers directly to your sound card, the very definition of white noise. If you’re not content with the randomness within your computer, then you can introduce the outside world into the system by hooking up a camera or microphone, which yields information that is potentially more random yet also more consistent than a random number generator.

In all these cases, what’s more important than generating noise (random numbers) is deciding how that noise is used, what the random numbers are hooked up to, how they’re constrained, and what happens in between random choices. If you’re getting random information from a camera, you have to decide where you’re pointing your camera. All works, no matter how aleatory, are framed in some way; they come with some context or expectation as to the circumstances in which they will be experienced. That’s where you see the hand of the artist, and this is why even performances of John Cage’s 4’33” tend to sound pretty similar, despite the fact that ostensibly any sound in the world is admissible.

I’m not sure that I answered either question very clearly, especially considering how long it took me to summarize my responses for posting here.

Two very shy girls came up afterwards and said they had done a sound art piece called “Uneasy” that is designed to make you feel uncomfortable. They’re going to send me an MP3. I can’t wait to hear it!

Below are my annotated notes (yes, that’s notes on notes, or, if you will, the derivative of notes; I think this is what Mike Min meant when he was babbling about calculus in art), and I’m sorry, I’m not going to take the time to clean them up into essay form for you.

Introduction

Hi, I’m Ben Houge

Example of my work, provide some context

[play Breaking New Ground]
Set it up: SICIW, 100% Design, arctic concept
Imagine 12 speakers

Take a vote: is it music? [Response: no]

Done in Max/MSP
6 wind generators
3 chime generators
1 insect-flute generator

All the parameters of music are still there
Strong harmonic basis

So you could consider it a piece of music
Unlike most traditional music, no beginning and end; runs all day, and start/stop is determined by viewers individually entering/leaving, like sitting on a park bench
No loops, just algorithmic behaviors, many elements up to chance
Site-specific spatial element; can’t listen to it in your home on a CD player; have to go there

Background, how I got here

Piano lessons
Got a synthesizer in high school
St. Olaf College, major in Music Theory and Composition
Focused on electronic music, csound

UW, MM Composition
Sound synthesis, algorithmic processes, aleatory music, Max/MSP

Got into videogames
In Seattle for 8 years
Sierra Entertainment
Such games as LSL7, KQ8, Arcanum, Half-Life: Op4

[play Arcanum main theme]
It’s a nice enough little theme, but I was not happy with the implementation of my music in the final game; everything looped
It’s probably around this time that I really became convinced that the future of game audio was in finding unique deployment methods appropriate to the medium

In Shanghai for 4 years
Ubisoft
Tom Clancy’s EndWar, out today!

Parallel artistic trajectory

Early on in my career I was a content provider: music composition, sound design, dialog editing, lip syncing, etc.
Eventually became an audio director, where my job was simply to make the game sound good
Producing very little original content, rather directing other content providers, e.g., working with pals from Seattle on EndWar music (though I still got my song in there for the end credits).

First thought of games as a stepping stone into film, but soon discovered what a fascinating world it was.
Whereas film is codified and calcified and super competitive, grammar of games is still being defined; more fun challenges

[Play sci-fi ambience, built entirely from random and statistical deployment of synthesized sounds]

First heard of John Cage in college; several years later, while working in games, read Silence.
Recommend to all aspiring videogame audio designers
Started making the connections, also reading Feldman
Sound Currents, Seattle school

Began finding a focus for my art, and my extracurricular work, which had previously been mostly sacred choral music, began to center more and more on the issues I was facing in the evolving medium of games.
Sierra paid for my Master’s, continuing education program; since I had already been working in games for a while, I knew where I wanted to focus
During my Master’s I realized my mission wasn’t to write a symphony or smart little chamber piece, but to connect these dots.

[Play A Reading from _____/Variations on _____, live radio performance on KEXP’s Sonarchy]

Is this music? At the time, I would have said yes, but BMI didn’t think so.

Why “sound art”?

Most open term, least expectations.

Throughout music history, there’s an increasing tendency to incorporate new sounds into music
Mozart’s Turkish cymbals, hunting horns, etc.
Technological advancements: saxophone, sirens, electronic instruments (Theremin, ondes Martenot)
Recording technology, musique concrète, tape music, computer-generated sound, sampling, DJ’s
Now experience almost all music in recorded format anyway

Now just about any sound can be considered of musical use (pop music ahead of classical music in that sense)
So what’s the important distinction?

Not bound by medium; more likely to experiment with site-specific sound producing configurations.
Not necessarily electronic, speakers, Trimpin, acousmatic
Sound art is not necessarily even sound producing, Christian Marclay’s instrument sculptures and manipulated album covers
Also mention his video work and our recent Screen Play performance

An aspect I find particularly intriguing is the idea that sound art is in some way an “object,” rather than a “piece.”
Music has almost exclusively been about an organization of sound with a beginning and end, unfolds as an event.
But from my game experience, I’ve become concerned with organized sound that is indefinite, that continues until a user decides when to leave.
In this regard it’s more like an ambience, like sitting on a park bench and listening for a while, then leaving when you’ve had enough.
And similarly, it’s like looking at a painting in a museum; so “sound art” seems apt.

Because of all this, my work is just one possibility of what sound art might be
Most of my work has been sound producing, but I’m expanding
Working on giraffes [algorithmically generated digital prints], expressing the same ideas of organization in different media
I kind of still consider this music, too, but most people reasonably wouldn’t, so sticking with “sound art” for now.
Also coming soon: video works.

My issues

Dynamic behaviors
No loops!
“Just loop it” is the dumbest answer to a very interesting question, first thing people think of, cocktail party response
A more interesting answer involves algorithmic behaviors (do not fear algorithms; it’s just a method of doing something, like a fugue)
Also, do not fear randomness; very fertile artistic medium, lots to do with it.
Randomness as an artistic medium; more than just a random number generator
Levels of linearity; usually a continuum, not on-off; find the right granularity for your project

Shuffling, additive systems
Combinatoriality [I’ve stolen this word from 12-tone theory; I like my definition better]: exponential increase in possible outcomes when you mix several layers of indeterminate behavior
Responsive; deprived of pre-rendered dramatic trajectory, allows you to switch on a dime
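A quick back-of-envelope illustration of that combinatorial explosion, in Python (the layer count and layer size are arbitrary):

```python
import math

# Three independent layers, each freely reordering 8 elements:
per_layer = math.factorial(8)  # 40320 orderings for one layer
combined = per_layer ** 3      # distinct global outcomes, ~6.6e13
```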

Multimedia
This ability to switch allows you to closely follow another stream of information; this is what happens in a videogame
Mapping from one stream to another; multimedia works, real-time
Coordination between different streams

[Play Mobile 3]

Explain that this is a live performance at the 2pi festival 2006 [and not one of my greatest works, but gets the idea across]
Also served as a prototype for EndWar music system

Granular synthesis
Also interested in what happens when you have lots of similar objects doing slightly different things
Small variations, lend interest, thwarting computer’s ability to be too perfect, the warmth of an acoustic performance
Heterophony, flocking behavior, Zhang Yimou, Curse of the Golden Flower

[Play Radiospace]

Having a big visual art show soon, visit my web site for the latest.
Also, I just started a blog. Visit!

Questions?

[Bonus: Play “EndWar” main menu music for big finish]

Breaking New Ground

Background

Breaking New Ground is an installation I did in collaboration with Jutta Friedrichs of MÜ Furniture to represent 100% Design at Shanghai International Creative Industry Week. It ran from October 16-21, 2008, way up on Lingshi Rd., just north of Shanghai Circus World. My job was the audio component: a sound installation of six discrete channels, synthesized in real time and deployed algorithmically.

Jutta quit her job as an award-winning product designer a few months ago to launch her own furniture company, MÜ. Her public debut was at the 100% Design Shanghai show last June (and she’s racked up something like 15 glowing magazine write-ups in the short time since). 100% Design is an annual design show based in London, and it seems to be a pretty big deal in design circles. This was the first year they mounted one in Shanghai.

Shanghai International Creative Industry Week is exactly what it sounds like. I don’t know how long they’ve been doing it, but last year’s was held at a big, renovated slaughterhouse on the north side of town, super funky building, but kind of an incoherent and sprawling exhibition. This year’s event was held in a brand new complex, one of those just-add-water Shanghai developments that apparently wants to be known as “The Factory.” A month before the exhibition Jutta had to wear a hardhat and climb piles of sand to visit the space, and the morning of the show landscaping was still being installed.

The Concept

Anyway, Jutta’s pals with the 100% Design folks, and they wanted to have a big display at SICIW to remind people of their presence in Shanghai as a champion of cutting edge design, and in particular to plug next year’s exhibition. So they invited her to design their space (a large, tall cement room, 240 square meters), and she came up with a kind of arctic theme, with furniture (highlights selected from the previous 100% Design show) variously floating on or bursting up through shards of Styrofoam masquerading as ice.

For my sound component, the basic idea was wind and wind chimes, to support the arctic theme. There’s also this kind of insect-flute thing that occasionally floats in over the top. When we first got the sound up and running and I looked around the room, I was very pleased at how well the sound and visuals collaborated to evoke a unique ambience that was quite distinct from the rest of the show.

The Set-up

I programmed the sound behavior in Max/MSP, and it exemplifies the maxims I’ve developed from doing game audio design for the last 12 years. The sounds are completely synthesized in real time and deployed algorithmically, to ensure that nothing ever loops. The six channels are completely independent (technically superior to Dolby Digital, since Dolby is a compressed format, and I’m sending out 6 discrete channels of uncompressed PCM audio data). The goal was to create an ambient, natural-sounding environment that’s consistent yet unpredictable, analogous to what you might hear sitting on a bench in a park (if the park was in Antarctica).

The sound was generated in Max, came out of a MOTU UltraLite sound card, ran through a Yamaha MG16/6FX mixer (not strictly necessary, but handy for testing and setting up), then out to some big BAL AP1600 amps, then out to the speakers. There were actually 12 speakers, with each channel output from the computer routed to two different speakers. The speakers used were these small, black Bose speakers with no model numbers, suspended along a ledge that ran around the perimeter of the 240 square meter room, about 3 or 4 meters off the ground, a perfect height for this piece. Except for the sound card (which I had to buy at the last minute), the rest of the equipment was rented.

What’s Going On

Harmony

Though it’s not immediately apparent, there is a firm harmonic underpinning to the work. It’s all in just intonation (i.e., small-number frequency ratios, no temperament). There’s a fundamental frequency (a low A = 110 Hz) that changes on a random timer, something like every 1-2 minutes. It can pick from among 5 different pitch multipliers (1, 1.125, 1.25, 1.375, 1.5), which correspond basically to scale degrees 1, 2, 3, a sharp 4 (not represented on a piano keyboard, but corresponding to the 11th harmonic), and 5. A low wind drone (filtered white noise) constantly plays this pitch, doubled at the octave, to anchor the rest of the sound.
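The harmonic scheme might be rendered in Python something like this (the actual piece is a Max/MSP patch; these names and the code are invented for illustration):

```python
import random

BASE_HZ = 110.0                               # the low A fundamental
MULTIPLIERS = [1.0, 1.125, 1.25, 1.375, 1.5]  # degrees 1, 2, 3, #4 (11th harmonic), 5

def pick_fundamental():
    """Choose a new harmonic center from the five just-intoned degrees."""
    return BASE_HZ * random.choice(MULTIPLIERS)

def retune_delay_ms():
    """In the piece this is a random timer of roughly 1-2 minutes."""
    return random.uniform(60_000, 120_000)
```

Every other sound in the piece derives its pitch from the value `pick_fundamental()` would return, which is why a change of multiplier refreshes everything at once.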

All other sounds are multiples of this pitch, and when it changes, it really refreshes the whole piece. I first discovered this effect in 2000/2001, when I was developing a DirectMusic score for a cancelled Xbox project called Jonny Drama (making extensive use of VBScripting to try to get some interesting juxtapositions to arise from asynchronous music deployment). It feels kind of like you’re cleaning out your ear after you’ve become accustomed to hearing one harmonic center for a long period of time.

Wind

There are six independent wind generators, one for each channel of audio output. The synthesis couldn’t be simpler: a noise generator and a resonant band-pass filter (noise~ and reson~). I use a random walk (drunk) to pick harmonics of the fundamental frequency (6th-16th), changing on a random timer (the timer itself uses a random walk, so changes are sometimes more frequent than others, creating a drunk envelope). Lower frequencies are weighted to be more likely than higher ones. The first version I did had the pitch constantly changing, but Jutta felt (and I agreed) that the effect was too creepy, a kind of ghost-like moaning. So I decided to have each wind generator swoop to a new pitch, then sit there for a while, with all pitches being multiples of the fundamental; this keeps the whole piece stable and consonant while still evoking wind that blows whithersoever it will.
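Here is one hypothetical sketch of a wind generator’s control logic in Python (the original uses Max’s noise~, reson~, and drunk objects; this class, its numbers, and the low-frequency weighting trick are invented for the example):

```python
import random

class WindGenerator:
    def __init__(self, fundamental=110.0):
        self.fundamental = fundamental
        self.harmonic = 6        # start low; small steps keep the walk nearby
        self.interval_ms = 5_000 # current retrigger interval

    @staticmethod
    def drunk(value, step, lo, hi):
        """Random walk: move by at most +/- step, clamped to [lo, hi]."""
        return max(lo, min(hi, value + random.randint(-step, step)))

    def next_pitch(self):
        # Walk among harmonics 6-16 of the fundamental. Drawing two
        # candidates and keeping the lower one weights the walk toward
        # low frequencies, as described above.
        a = self.drunk(self.harmonic, 2, 6, 16)
        b = self.drunk(self.harmonic, 2, 6, 16)
        self.harmonic = min(a, b)
        # The timer itself also drunk-walks, so change density ebbs
        # and flows (a "drunk envelope").
        self.interval_ms = self.drunk(self.interval_ms, 1_000, 2_000, 10_000)
        return self.fundamental * self.harmonic
```

Because every generator multiplies the same fundamental by a small integer, all six channels stay consonant with each other no matter how independently they wander.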

I spent way too much time on a subtle function that allows the wind to swoop to a new pitch in a more natural way. Rather than simply sliding from one pitch to another, it picks a new pitch somewhere close to the desired pitch, then gradually hones in, kind of like a pendulum coming to rest. It’s hardly perceptible in the final piece, but I’m happier knowing it’s there. When the wind’s pitch is swooping, I open up the Q on the filter a bit, so it’s a noisier sound, becoming more pitch-focused only when there’s an important pitch to sound.
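One hypothetical way to render that “pendulum coming to rest” behavior: rather than gliding straight to the goal, intermediate pitch targets overshoot it by a decaying, alternating amount before settling. This function and its parameters are invented; the piece implements the idea inside a Max patch.

```python
def swoop_targets(start_hz, target_hz, steps=4, decay=0.5):
    """Return a series of pitch targets that hone in on target_hz,
    overshooting by a shrinking amount on alternating sides."""
    offset = (start_hz - target_hz) * decay
    targets = []
    for _ in range(steps):
        targets.append(target_hz + offset)
        offset *= -decay            # shrink and flip the overshoot
    targets.append(target_hz)       # finally land on the true pitch
    return targets
```

Swooping from 440 Hz down to 220 Hz, for example, produces targets on alternating sides of 220 with ever-smaller deviations, ending exactly on pitch.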

Chimes

The chimes are also super simple. When I started work on this piece, I began by trying to develop a software synthesizer modeled on my beloved Roland JP-8000 (still recovering from a power surge at the 2006 Ubisoft company party), but I quickly realized this was not the best investment of my limited time, so I stopped with the basics: simple synthesis objects, amplitude envelope, filter. So in the end all chimes are just filtered triangle waves.

The chime deployment is really the interesting part. There are 3 chime generators, each associated with a pair of wind generators. They’re on random timers to decide when to start. When they start, they pick one of the associated wind generators, and from then on the density is linked to the wind’s pitch (perceived velocity). They have a drunk envelope (i.e., drunk target and drunk time to get there) that tells them whether to get louder or quieter, and if they get quiet beyond a certain threshold they turn themselves off. The chimes are where you can really appreciate the “cleaning out the ear” effect I described earlier, if the fundamental multiplier changes while the chimes are playing.
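The drunk-envelope gating described above might look like this in Python (an invented stand-in for Max drunk/line objects; the constants are illustrative):

```python
import random

class ChimeEnvelope:
    OFF_THRESHOLD = 0.05  # below this amplitude, the generator shuts off

    def __init__(self):
        self.level = 0.3    # current amplitude, 0..1
        self.target = 0.5   # drunk target the level drifts toward
        self.active = True

    def step(self):
        """Drift toward the drunk target; below the threshold, switch off."""
        if not self.active:
            return
        self.level += (self.target - self.level) * 0.1
        if random.random() < 0.1:   # occasionally wander to a new target
            self.target = min(1.0, max(0.0, self.target + random.uniform(-0.3, 0.3)))
        if self.level < self.OFF_THRESHOLD:
            self.active = False
```

The appeal of this scheme is that the generators retire themselves: nothing external has to decide when a chime layer should stop, it simply fades past the threshold and goes silent until its random start timer fires again.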

Insect-flute

The insect-flute is probably the least successful element of the piece. The piece felt like it needed another layer, but deciding exactly what that layer should be was tricky, especially given the relatively short time frame. Jutta had this concept of “bees in a bag,” to create a feeling of percolating excitement bubbling over, with a bee every now and then escaping. That’s a rich concept that could probably be a piece unto itself, but in this implementation the sound is a little too similar to the wind, so it sits in a middle ground: not quite merging with the other sounds into a new aggregate, yet not quite setting them off in some kind of interesting contrast. I had the idea of something really solid and statuesque that the other sounds would kind of waft around, but it was hard to fit that into this ambient conception; it would be fun to do a variation as a concert piece with an acoustic trumpet or something.

Anyway, “bees in a bag” instantly suggested granular synthesis, which I’ve been doing a lot of lately. I created some drunk envelopes to constrain the various parameters and put limits on them: if the activity falls below a certain threshold, the generator turns itself off, but if it rises above a certain threshold, the grains lengthen, the pitch and volume variation narrows, and it bursts into song! The “songs” are based on Markov chains, derived from some short melodies I wrote for this piece. This was the last element I added, and it caused me a lot of grief to debug (and I can lay a small part of the blame on a bug in Max 5.02 that was fixed in 5.05). In the final implementation, I feel the Markov chains still sound too random; I’ve just scratched the surface of how they can be used, and plan to explore them further in subsequent pieces (using second-order chains, coordinating melody and rhythm, lots of stuff to try).
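A first-order Markov melody in its most minimal form looks like this; note that this transition table is invented for illustration, not the one derived from the piece’s actual melodies:

```python
import random

# Each note maps to the notes allowed to follow it; repeated
# entries would weight a transition more heavily.
TRANSITIONS = {
    "A": ["C", "E", "A"],
    "C": ["E", "A"],
    "E": ["A", "C", "E"],
}

def markov_melody(start="A", length=8):
    """Walk the transition table: each note depends only on the previous one."""
    note, melody = start, [start]
    for _ in range(length - 1):
        note = random.choice(TRANSITIONS[note])
        melody.append(note)
    return melody
```

A second-order chain, one of the extensions mentioned above, would key the table on the last two notes instead of one, giving the output more local memory and making it sound less random.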

Also, I was using a sample of me blowing on a bottle of Qingdao beer as a basis for granular synthesis, and the sound is too similar to the wind blowing; it would have been nice to have something more contrasting.

Observations

The first thing I noticed from doing this project is that I need a new computer. The program brought my four-year-old single-core Pentium 4 laptop to its knees, with frequent audio dropouts, but the newer laptop one of the organizers volunteered for the exhibition didn’t even break a sweat, averaging around 20-30% CPU.

I also confirmed once again that you can’t just send an email and expect your technical needs to be covered. Despite my stressing numerous times that we needed a computer and a 6-channel sound card, no computer was to be found when the contracted audio team arrived to set up. As far as I can tell, it was only through sheer luck that they happened to be setting up the 12 speakers in pairs to accommodate 6 discrete sends (although I had originally requested 8). The organizer eventually found a laptop, but I had to go out and buy a new sound card for this show at my own expense (thanks to my former Ubisoft colleague Zhang Lei for helping me track down the card on short notice). Next time I’ll insist on talking with the audio contractors myself in advance, or better yet, supply my own equipment and let them rent it from me. At least I was wary enough to start on-site set-up two days earlier than planned, to prepare for just such an eventuality.

I was surprised that an ambient installation was a bit of a tough sell; towards the beginning, folks kept wanting me to turn it up. I think that’s maybe just what everyone in Shanghai is used to hearing: ugly sounds blaring out of storefronts. Subtlety’s a hard sell, but from my experience in games (and singing in choirs, and going to bars, and many other contexts), volume doesn’t guarantee excitement or interest or anything, really, other than volume; after a while, you just tune it out. A well-made piece doesn’t need to keep trying to draw attention to itself. In the end, I think people came to appreciate what the piece was about, and visitors to the exhibition seemed to like it.

Future Directions

There’s a bunch of other stuff I would have liked to do in this piece, but there wasn’t enough time. But I think a sign of a good piece is that it points you in a clear direction for future work.

In general, I want to work towards more coordinated systems. I think that’s the next challenge once you’ve got some interesting dynamic behaviors going on: adding more layers and coordinating them. There was some coordination going on already (chime density linked to wind velocity), but there could have been more (e.g., connecting chime and insect-flute start/stop times to some aspect of wind behavior instead of random timers). I also would have liked to make the 6 wind generators work together in a kind of coordinated network, to get the feeling of gusts of wind moving through a space.

The insect-flute thing might have been more interesting if there were several layers of it going on, coordinated to do chorale-like things, or even counterpoint. I want to experiment with that more, but adding two more layers of processor-intensive granular synthesis would have been way beyond my laptop’s capabilities, so I’d have to use a different synthesis technique.

It would have been nice to have a more sophisticated synthesizer for my chimes; FM in particular would be an obvious choice for such sounds. I’ll keep working on my virtual JP-8000.

As I mentioned, I barely scratched the surface with Markov chains, but I’m planning to explore them in detail in a subsequent work.