Kaleidoscope Wallpaper

My collaborative installation with Chen Hangfeng, which I guess we’re calling Kaleidoscope Wallpaper, or maybe just Kaleidoscope, is up and running at the Today Art Museum. We’ve been here in Beijing setting it up since April 11th, and the show opened to the public on April 17. After a weekend of opening activities, including a lecture by Hangfeng and me yesterday, he’s headed back to Shanghai, while I’m sticking around to perform another laptop set at D-22 next Thursday night. The show runs through May 3. Get a taste of what our piece sounds like here.

I wrote at length about the work as it existed when I performed it at Brainwave Communication last month. Most of what I wrote then is still true, but I thought I’d take a moment to bring you up to date with the final piece, as it is running in the museum right now, across the pedestrian walkway from where I sit typing this at Unconditional Love Coffee. Before reading the following, you might want to brush up.

When I did the piece as a concert piece at Brainwave Communication (and again last week at 2 Kolegas in Beijing), I was providing the high-level brain (such as it is) of the piece, deciding when to turn things on and off, setting volumes, etc. Since I’m not on hand to perform this role while the piece runs continuously as a gallery installation, I wrote another high-level “brain” to serve this purpose. At periodic intervals (between 45 seconds and 2 minutes), it essentially flips a coin to decide whether each of the three main filter behaviors (steady chords, independent swoops, or rhythmic pulses) should be on or off. When the steady chords are selected, another choice is made to turn on one, two, or three layers of chords, which may overlap independently. If all three behaviors turn up negative, a fourth, unique behavior is triggered: the source sound is turned up to prominence, giving listeners a chance to hear what’s behind the filters more clearly and make the connection to the real-time audio feed from the mics. I also take this opportunity to pick a new scale (harmony still functioning as described in my original post) for when the filters come in again.
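The coin-flip logic above is simple enough to sketch. The piece itself presumably runs in an audio environment (the post doesn’t say which), so this is just an illustrative Python sketch of the decision-making, with names of my own invention:

```python
import random

BEHAVIORS = ["steady_chords", "independent_swoops", "rhythmic_pulses"]

def next_scene():
    """Flip a coin for each of the three filter behaviors. If all
    come up off, fall back to the fourth scene: raw mic audio turned
    up to prominence, plus a new scale for the next filter entrance."""
    state = {b: random.random() < 0.5 for b in BEHAVIORS}
    if state["steady_chords"]:
        # one, two, or three chord layers, overlapping independently
        state["chord_layers"] = random.randint(1, 3)
    if not any(state[b] for b in BEHAVIORS):
        state["raw_audio_prominent"] = True
        state["pick_new_scale"] = True
    return state

def next_interval():
    """Hold each scene somewhere between 45 seconds and 2 minutes."""
    return random.uniform(45.0, 120.0)
```

Run often enough, the all-negative case comes up about one time in eight, which sets the rough pacing of the “raw audio” buffer scenes.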

So the basic idea of four “scenes” is still kind of true, but they’re not completely equal. The fourth scene, which is the unmanipulated audio stream from the mics, serves as a buffer between larger spurts of filter behavior (i.e., the other three scenes), and also between different harmonic areas. This unfiltered audio signal is actually constantly present, though at a lower volume most of the time, with random functions making it louder at some times than others, independent of the other filter behaviors.

Most of my work since Brainwave Communication has actually been on the audio manipulation before the signal gets to the filters, which still pretty much function as they did back then. The audio signal path is basically this:

The signal comes in from two mics (one inside, one outside the gallery).
The volumes are adjusted (i.e., cranked way up).
The two channels are mixed together by a random function; each mic gets a different, constantly changing mix.
The channels are compressed (like, severely, to try to handle anything that might be coming in).
Each channel gets two random delays, which are constantly changing, up to six seconds. When the delay time changes, a natural artifact is that the pitch also shifts, which results in a cool, kind of scrubbing effect. The delays are intended to fracture time, analogous to how a kaleidoscope fractures space. The volume of the delayed signals also fades in and out according to a random function.
Then the signal goes to the filters, and the output of the filters, plus a bit of the unfiltered signal (volume varying, as mentioned above), is sent out to the speakers.
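The pitch-scrubbing artifact in the delay step is the classic side effect of reading from a delay line whose delay time is gliding: the read head moves relative to the write head, so the delayed audio is re-pitched, like a tape changing speed. A minimal Python sketch of that one step (the class name and glide constants are mine, not from the piece):

```python
import random

SR = 44100
MAX_DELAY = 6 * SR  # up to six seconds of delay, per the description

class WanderingDelay:
    """A delay line whose delay time glides toward random targets.
    Because the read position moves while audio plays, the delayed
    signal is re-pitched: the 'scrubbing' effect described above."""

    def __init__(self):
        self.buf = [0.0] * MAX_DELAY
        self.write = 0
        self.delay = random.uniform(0.0, MAX_DELAY - 2)
        self.target = random.uniform(0.0, MAX_DELAY - 2)

    def tick(self, x):
        """Write one input sample, return one delayed sample."""
        self.buf[self.write] = x
        self.write = (self.write + 1) % MAX_DELAY
        # glide toward the target delay; the glide rate sets how far
        # the pitch bends (faster glide = bigger pitch shift)
        self.delay += (self.target - self.delay) * 1e-5
        if abs(self.target - self.delay) < 1.0:
            self.target = random.uniform(0.0, MAX_DELAY - 2)
        # fractional read position, linearly interpolated
        pos = (self.write - 1 - self.delay) % MAX_DELAY
        i, frac = int(pos), pos - int(pos)
        return self.buf[i] * (1 - frac) + self.buf[(i + 1) % MAX_DELAY] * frac
```

Each channel in the piece gets two of these, with the delayed signals also fading in and out under random control.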

I actually spent almost two weeks working on two additional features that I eventually cut from the piece, although I’ll probably be able to put them to good use in the future. I was starting to fear that it would be too quiet in the gallery at times, that there wouldn’t be enough of a raw signal for the filters to work on. So I developed this idea of short-term and long-term memory, to supplement the present in which the piece was originally designed to operate. The short-term memory would periodically record the signal from the mics into a set of buffers, triggered when the signal coming in crossed a certain threshold, so that you’d only record loud sounds, which would then be periodically played back. This memory would be erased every night when the computer is turned off, so you’d only hear sounds recorded earlier in the day. The long-term memory would sense when the signal coming in had dropped beneath a certain threshold for a certain amount of time, and then supplement that signal with material recorded earlier; the plan was to record during the opening week festivities, so you could also sometimes hear back to the beginning of the installation. Kind of interesting ideas, but in practice, they only served to homogenize the output, so that the piece always sounded more or less the same, and they obscured the main idea of the piece, which is that the sound is being processed in real time, like a kaleidoscope.
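For the curious, the threshold-triggered recording behind the short-term memory could look something like this. Again a Python sketch under my own assumptions (block-based processing, peak-level thresholding, a fixed-size pool that forgets the oldest recording); the real feature was cut, so this is only the idea, not the implementation:

```python
import random

class ShortTermMemory:
    """Record only loud moments into a pool of buffers, to be played
    back now and then. The pool starts empty each day, since the
    installation computer is switched off overnight."""

    def __init__(self, threshold=0.5, max_buffers=8):
        self.threshold = threshold    # peak level that starts a recording
        self.max_buffers = max_buffers
        self.buffers = []             # finished recordings
        self.recording = None         # recording in progress, or None

    def feed(self, block):
        """Process one block of samples from the mics."""
        peak = max(abs(s) for s in block)
        if self.recording is None and peak >= self.threshold:
            self.recording = []       # a loud sound: start recording
        if self.recording is not None:
            self.recording.extend(block)
            if peak < self.threshold:  # the sound died down: stop
                self.buffers.append(self.recording)
                self.recording = None
                if len(self.buffers) > self.max_buffers:
                    self.buffers.pop(0)  # forget the oldest recording

    def recall(self):
        """Occasionally play back one remembered loud sound."""
        return random.choice(self.buffers) if self.buffers else None
```

The long-term memory would be the inverse trigger: playback starts when the input stays *below* a threshold for too long, drawing on material recorded during the opening week.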

There are a couple of areas I’ve tagged for future research. (For me at least, a good piece is a kind of snapshot of a way of thinking, and it ought to foment ideas for future development.) In general I want to have more coordination between the different behaviors in the piece. An early idea I never got around to implementing was that the filter swoops would sometimes move in little duets, if they happened to start moving at the same time. Another idea is to expand the rhythmic behavior, so that not all six filters are necessarily going at once, and they’re not all necessarily going at the same beat multiple. In general I want to expand the beat multiple idea, to allow for more sophisticated, non-integer tempo relationships. And the big thing I really want to push towards is a more organic high-level evolution of the piece, so that it’s not just a timer deciding when things should change, but the behaviors somehow decide themselves when to give way to something new. I have some strategies in mind for how to accomplish this, but for the current installation, a timer probably works best; it’s definitely safer, since it guarantees the piece doesn’t get stuck doing something really boring.
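To make the non-integer beat-multiple idea concrete: instead of every filter pulsing at some integer multiple of one base tempo, each filter’s period could be the base period divided by an arbitrary ratio. A hypothetical sketch (this is unimplemented future work, so the function and ratios are purely illustrative):

```python
def pulse_times(base_period, ratio, duration):
    """Onset times (in seconds) for a pulse whose period is
    base_period / ratio; ratio need not be an integer."""
    period = base_period / ratio
    times, t = [], 0.0
    while t < duration:
        times.append(round(t, 6))
        t += period
    return times

# two filters in a 3:2 relationship, a third at an irrational ratio,
# so its pulses never quite line up with the others again
a = pulse_times(1.0, 2, 4.0)         # every 0.5 s
b = pulse_times(1.0, 3, 4.0)         # every ~0.333 s
c = pulse_times(1.0, 2 ** 0.5, 4.0)  # every ~0.707 s
```

Integer ratios give the familiar locked polyrhythms; irrational ones give patterns that phase indefinitely, which suits a slowly changing ambient piece.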

So that’s basically it. I’ve got to say that it’s not as successful in the final installation as I had hoped, since the other pieces nearby really interfere with what was meant to be a subtle, slowly changing, ambient experience. I’ve never been to a show of all sound-producing works that handles this challenge well, and Music to My Eyes is no exception. But I’ll rant more about that another time. If you want to hear the piece, it’s up at the Today Art Museum through May 3, and it’s free, so swing on by. Or come hear me play D-22 next Thursday to hear the piece done as a performance!