Kaleidoscope Wallpaper

My collaborative installation with Chen Hangfeng, which I guess we’re calling Kaleidoscope Wallpaper, or maybe just Kaleidoscope, is up and running at the Today Art Museum. We’ve been here in Beijing setting it up since April 11, and the show opened to the public on April 17. After a weekend of opening activities, including a lecture by Hangfeng and me yesterday, he’s headed back to Shanghai, while I’m sticking around to perform another laptop set at D-22 next Thursday night. The show runs through May 3. Get a taste of what our piece sounds like here.

I wrote at length about the work as it existed when I performed it at Brainwave Communication last month. Most of what I wrote then is still true, but I thought I’d take a moment to bring you up to date with the final piece, as it is running in the museum right now, across the pedestrian walkway from where I sit typing this at Unconditional Love Coffee. Before reading the following, you might want to brush up.

When I performed the piece as a concert work at Brainwave Communication (and again last week at 2 Kolegas in Beijing), I was providing the high-level brain (such as it is) of the piece: deciding when to turn things on and off, setting volumes, etc. Since I’m not on hand to perform this role while the piece runs continuously as a gallery installation, I wrote another high-level “brain” to serve this purpose. At periodic intervals (between 45 seconds and 2 minutes), it essentially flips a coin to decide whether each of the three main filter behaviors (steady chords, independent swoops, or rhythmic pulses) should be on or off. When the steady chords are selected, another choice is made to turn on one, two, or three layers of chords, which may overlap independently. If all three behaviors turn up negative, a fourth, unique behavior is triggered: the source sound is turned up to prominence, giving listeners a chance to hear what’s behind the filters more clearly and make the connection to the real-time audio feed from the mics. I also take this opportunity to pick a new scale (harmony still functioning as described in my original post) for when the filters come in again.
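For anyone who likes to see the logic spelled out, here’s a rough Python sketch of that decision loop. The real thing is a Max/MSP patch, so everything below (the state dictionary, the placeholder scale names, the 50/50 coin flips) is my own shorthand rather than the actual implementation:

```python
import random

# Placeholder names for the five just-intonation scales; the actual scales
# aren't specified in the post.
SCALES = ["scale_1", "scale_2", "scale_3", "scale_4", "scale_5"]

def brain_tick(state):
    """One decision cycle of the installation's high-level 'brain':
    flip a coin for each of the three filter behaviors, and if none of them
    comes up, foreground the raw mic feed and pick a new scale."""
    state["chords"] = random.random() < 0.5
    state["swoops"] = random.random() < 0.5
    state["pulses"] = random.random() < 0.5

    if state["chords"]:
        # One, two, or three independently overlapping chord layers.
        state["chord_layers"] = random.randint(1, 3)

    if not (state["chords"] or state["swoops"] or state["pulses"]):
        # Fourth behavior: bring the unprocessed signal to prominence and
        # choose a new scale for when the filters come back in.
        state["dry_prominent"] = True
        state["scale"] = random.choice(SCALES)
    else:
        state["dry_prominent"] = False

    # Seconds to wait before the next decision (45 seconds to 2 minutes).
    return random.uniform(45.0, 120.0)
```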

So the basic idea of four “scenes” is still kind of true, but they’re not completely equal. The fourth scene, which is the unmanipulated audio stream from the mics, serves as a buffer between larger spurts of filter behavior (i.e., the other three scenes), and also between different harmonic areas. This unfiltered audio signal is actually constantly present, though at a lower volume most of the time, with random functions making it louder at some times than others, independent of the other filter behaviors.

Most of my work since Brainwave Communication has actually been on manipulating the audio before it gets to the filters, which themselves still function pretty much as they did back then. The audio signal path is basically this:

The signal comes in from two mics (one inside, one outside the gallery).
The volumes are adjusted (i.e., cranked way up).
The two channels are mixed together by a random function; each mic gets a different, constantly changing mix.
The channels are compressed (like, severely, to try to handle anything that might be coming in).
Each channel gets two random delays, constantly changing, of up to six seconds. When the delay time changes, a natural artifact is that the pitch also changes, which results in a cool, kind of scrubbing effect (see the sketch after this list). The delays are intended to fracture time, analogous to how a kaleidoscope fractures space. The volume of the delayed signals also fades in and out according to a random function.
Then the signal goes to the filters, and the output of the filters plus a bit of the unfiltered signal (volume varying, as mentioned above) is sent out to the speakers.
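The drifting delays are the step that’s hardest to picture from prose, so here’s a rough numpy sketch of why a delay whose time keeps changing bends the pitch of whatever passes through it. This is an offline approximation of the idea, not the actual Max/MSP delay objects, and the parameter values are invented:

```python
import numpy as np

SR = 44100

def drifting_delay(x, max_delay_s=6.0, drift_hz=0.05, sr=SR, seed=0):
    """Read the input through a delay whose time wanders randomly between
    0 and max_delay_s. While the delay time is changing, the read position
    moves faster or slower than real time, so the pitch bends: the
    'scrubbing' artifact described above."""
    rng = np.random.default_rng(seed)
    n = len(x)
    # Piecewise-linear random delay trajectory (in samples).
    n_points = max(2, int(n / sr * drift_hz) + 2)
    targets = rng.uniform(0, max_delay_s * sr, n_points)
    delay = np.interp(np.arange(n), np.linspace(0, n - 1, n_points), targets)
    # Fractional (linearly interpolated) read from the input buffer.
    read_pos = np.clip(np.arange(n) - delay, 0, n - 1)
    i0 = np.floor(read_pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    frac = read_pos - i0
    return (1 - frac) * x[i0] + frac * x[i1]
```

In the installation each channel gets two of these running in parallel, with their output levels fading in and out on their own random curves.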

I actually spent almost two weeks working on two additional features that I eventually cut from the piece, although I’ll probably be able to put them to good use in the future. I was starting to fear that it would be too quiet in the gallery at times, that there wouldn’t be enough of a raw signal for the filters to work on. So I developed this idea of short-term and long-term memory, to supplement the present in which the piece was originally designed to operate. The short-term memory would periodically record the signal from the mics into a set of buffers, triggered whenever the incoming signal crossed a certain threshold, so that only loud sounds would be recorded, to be played back periodically later. This memory would be erased every night when the computer is turned off, so you’d only hear sounds recorded earlier in the day. The long-term memory would sense when the incoming signal had dropped beneath a certain threshold for a certain amount of time, and then supplement that signal with material recorded earlier; the plan was to record during the opening week festivities, so you could also sometimes hear back to the beginning of the installation. Kind of interesting ideas, but in practice they only served to homogenize the output, so that the piece always sounded more or less the same, and they obscured the main idea of the piece, which is that the sound is being processed in real time, like a kaleidoscope.
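Even though the feature was cut, the short-term memory is easy to sketch. Here’s a minimal Python version of the idea, assuming audio arrives in blocks with a measured level; the buffer count, threshold, and method names are all invented for illustration:

```python
import random
import collections

class ShortTermMemory:
    """Threshold-triggered sampler sketch (this feature was ultimately cut).

    Records incoming audio blocks into a small pool of buffers whenever the
    input level crosses a threshold, and hands back a random stored buffer
    for occasional playback. Clearing the pool corresponds to the
    'erased every night' behavior, since the gallery computer is shut down
    after hours."""
    def __init__(self, n_buffers=8, threshold=0.2):
        self.buffers = collections.deque(maxlen=n_buffers)
        self.threshold = threshold

    def listen(self, block, level):
        if level > self.threshold:          # only keep the loud moments
            self.buffers.append(block)

    def recall(self):
        return random.choice(self.buffers) if self.buffers else None

    def erase(self):                        # end of day: forget everything
        self.buffers.clear()
```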

There are a couple of areas I’ve tagged for future research. (For me at least, a good piece is a kind of snapshot of a way of thinking, and it ought to foment ideas for future development.) In general I want more coordination between the different behaviors in the piece. An early idea I never got around to implementing was that the filter swoops would sometimes move in little duets, if they happened to start moving at the same time. Another idea is to expand the rhythmic behavior, so that not all six filters are necessarily going at once, and so that they’re not all necessarily going at the same beat multiple. In general I want to expand the beat multiple idea, to allow for more sophisticated, non-integer tempo relationships. And the big thing I really want to push towards is a more organic high-level evolution of the piece, so that it’s not just a timer deciding when things should change, but the behaviors somehow decide for themselves when to give way to something new. I have some strategies in mind for how to accomplish this, but for the current installation a timer probably works best; it’s definitely safer, since it guarantees the piece doesn’t get stuck doing something really boring.

So that’s basically it. I’ve got to say that it’s not as successful in the final installation as I had hoped, since the other pieces nearby really interfere with what was meant to be a subtle, slowly changing, ambient experience. I’ve never been to a show of all sound-producing works that handles this challenge well, and Music to My Eyes is no exception. But I’ll rant more about that another time. If you want to hear the piece, it’s up at the Today Art Museum through May 3, and it’s free, so swing on by. Or come to D-22 next Thursday to hear the piece done as a performance!

Silence or Brainwave Communication or Sneak Preview?

Here’s a quick preview of what I’m going to be presenting at this Sunday’s “Silence or Silence or Brainwave Communication” show, which you should totally attend (details here).

I’m currently working on a collaborative project with my pal Chen Hangfeng 陈航峰, an installation for the Today Art Museum in Beijing, going up next month; he’s doing the visual part, and I’m doing the audio part. The point of departure was the notion of a kaleidoscope, which fragments and transforms the world around you into something unexpected, strange, and beautiful. To achieve the same kind of effect in sound, I’m writing a program in Max/MSP that will take a live audio signal coming from microphones positioned around the gallery and manipulate it in funky ways. In the end it will have four different behaviors (what I’m calling “scenes”) that can be deployed independently on the two speakers in the little room we’re building. I’m planning to preview two of these at my performance this Sunday.

The basic sound production method is to use resonant filters to emphasize certain very specific frequencies present in the original signal. When the incoming signal includes one of the frequencies I’m looking for (and in the kind of noisy, ambient sound I’m pumping into the computer, this is quite likely), the filter will be excited, and you’ll hear a tone at that frequency that also retains the amplitude contour of the signal going into it. It’s not a particularly new or difficult idea—I think Jean-Claude Risset was one of the first to put it to musical use in the ’60s—but it’s still effective.
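To make the idea concrete, here’s a rough Python/scipy sketch of a resonant filter bank. The actual piece uses Max/MSP filter objects; the ring time (the r value) and the normalization below are just ballpark choices of mine:

```python
import numpy as np
from scipy.signal import lfilter

SR = 44100

def resonate(x, freq, r=0.9995, sr=SR):
    """Two-pole resonator: rings at `freq` whenever the input has energy
    there, while roughly following the input's amplitude contour. The
    closer r is to 1, the narrower and more tone-like the resonance."""
    theta = 2 * np.pi * freq / sr
    b = [(1 - r**2) * np.sin(theta)]          # rough gain normalization
    a = [1, -2 * r * np.cos(theta), r**2]
    return lfilter(b, a, x)

def filter_bank(x, freqs):
    """Six of these in the piece, one per kaleidoscope 'mirror'; the
    frequencies come from the just-intonation scheme described below."""
    return sum(resonate(x, f) for f in freqs)
```

Feed it something broadband and noisy (gallery chatter, traffic) and each resonator rings at its own pitch while tracking the loudness of whatever excites it.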

Setting up the filters (six of them, referencing the hexagon forms seen in kaleidoscopes) is pretty straightforward; the guts of the piece are in how and when I set those frequencies.

I’ve got an overarching harmonic plan based on just intonation that’s a bit complicated, but it basically involves picking a new scale for each “scene change.” I’ve got five scales to choose from, and the different scales imbue the changing scenes with different moods or feelings. A fundamental pitch is selected from the chosen scale. Then each of the filters picks a multiple of that fundamental pitch (based on the scale) as its base pitch. Then it does these shorter volleys of some behavior (the specifics of which vary from scene to scene), which involve choosing another multiple (still based on the same scale) from that pitch. So you’ve got a scale degree multiplied by a scale degree multiplied by a scale degree, which is kind of analogous to the way images are reflected back and forth in the mirrors of a kaleidoscope.
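Here’s a tiny Python sketch of that “multiple of a multiple of a multiple” idea. The post doesn’t list the five actual scales, so the ratios and the root frequency below are stand-ins, not the real ones:

```python
import random

# Stand-in just-intonation scale: frequency ratios above a root.
JI_SCALE = [1/1, 9/8, 5/4, 3/2, 5/3]
ROOT_HZ = 55.0  # assumed root frequency, not from the piece

def volley_frequency():
    """Scale degree times scale degree times scale degree:
    pick a fundamental from the scale, give the filter a base pitch that's
    a scale-based multiple of it, then pick another scale-based multiple
    of that for the current volley."""
    fundamental = ROOT_HZ * random.choice(JI_SCALE)
    base_pitch = fundamental * random.choice(JI_SCALE)
    return base_pitch * random.choice(JI_SCALE)
```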

I’ll be previewing two of these scene behaviors this Sunday. One is a kind of swooping, sustained thing, where each of the filters behaves independently, coming in and out at different times, hovering around different octave offsets, very mellow and ambient. The number of swoops per volley, the sustain time, the sweep time, and the time between volleys are all controlled algorithmically, slowly changing over time. The other behavior is a more rhythmic thing, with different patterns pulsing on the different filters, playing off each other, and with patterns, density, tempo multipliers, and pauses all gradually changing under algorithmic control. Still mellow and ambient, but with a beat!
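If it helps to see what “controlled algorithmically” means in practice, here’s a toy Python sketch of how one filter’s swoop parameters might be drawn for each volley. The ranges are invented for illustration, and in the real patch these values drift slowly over time rather than being re-rolled independently:

```python
import random

def next_swoop_volley():
    """Draw the parameters for one filter's next volley of swoops.
    All ranges here are illustrative, not the piece's actual values."""
    return {
        "n_swoops": random.randint(1, 4),        # how many swoops this volley
        "sweep_s": random.uniform(2.0, 8.0),     # how long each glide takes
        "sustain_s": random.uniform(4.0, 15.0),  # how long to hold the target
        "rest_s": random.uniform(10.0, 40.0),    # silence before the next volley
        "octave_offset": random.choice([-1, 0, 1, 2]),
    }
```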

I’ve done a couple of pieces exploring this notion of algorithmically generated rhythmic patterns, and my first efforts were pretty unsatisfactory, notably in a piece called “Study for Eventual World Domination,” which I did at the 2Pi Festival in Hangzhou in 2006. Some of the patterns were interesting, but they kept coming out in a stream that became boring after a while, homogeneous in its constant randomness, since it lacked any mid-level coherence. So I’ve come up with a system that still generates algorithmic patterns, but stores them in a list, so that they can recur and play off of other patterns, and so far this method is proving much more satisfactory. It’s actually a cross between the way I store radio signals in buffers in Radiospace (randomly choosing one of the six buffers to rewrite every 20 seconds, allowing for this same kind of mid-level coherence) and the way I chose cells to play and juxtapose in the EndWar music system.
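In code, the mechanism might look something like this Python sketch: a small pool of patterns that recur, with one slot periodically overwritten. The slot count, pattern length, and density are made-up values, and the real patterns carry more information than simple on/off steps:

```python
import random

class PatternPool:
    """Pool of algorithmically generated rhythmic patterns.

    Instead of streaming out endless new random patterns (the problem with
    'Study for Eventual World Domination'), patterns live in a small list so
    they can recur and play off one another; every so often one slot is
    overwritten with a fresh pattern, the way Radiospace rewrites one of its
    six radio buffers every 20 seconds."""
    def __init__(self, n_slots=6, length=16):
        self.length = length
        self.slots = [self._new_pattern() for _ in range(n_slots)]

    def _new_pattern(self, density=0.4):
        # A pattern here is just a list of on/off steps.
        return [random.random() < density for _ in range(self.length)]

    def rewrite_one(self):
        self.slots[random.randrange(len(self.slots))] = self._new_pattern()

    def pick(self):
        return random.choice(self.slots)
```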

To hear the other two scene behaviors, you’ll have to come to Beijing next month and check out the installation! The show opens April 17. I’ll post more about all the specific details later. In the meantime, see you Sunday, 8pm, Yu Yin Tang!