Silence or Brainwave Communication or Sneak Preview?

Here’s a quick preview of what I’m going to be presenting at this Sunday’s “Silence or Brainwave Communication” show, which you should totally attend (details here).

I’m currently working on a collaborative project with my pal Chen Hangfeng 陈航峰, an installation for the Today Art Museum in Beijing, going up next month; he’s doing the visual part, and I’m doing the audio part. The point of departure was the notion of a kaleidoscope, which fragments and transforms the world around you into something unexpected, strange, and beautiful. To achieve the same kind of effect in sound, I’m writing a program in Max/MSP that will take a live audio signal coming from a microphone positioned around the gallery and manipulate it in funky ways. In the end it will have four different behaviors (what I’m calling “scenes”) that can be deployed independently on the two speakers in the little room we’re building. I’m planning to preview two of these at my performance this Sunday.

The basic sound production method is to use resonant filters to emphasize certain very specific frequencies present in the original signal. When the incoming signal includes one of the frequencies I’m looking for (and in the kind of noisy, ambient sound I’m pumping into the computer, this is quite likely), the filter will be excited, and you’ll hear a tone at that frequency that also retains the amplitude contour of the signal going into it. It’s not a particularly new or difficult idea—I think Jean-Claude Risset was one of the first to put it to musical use in the ’60s—but it’s still effective.
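To give a rough idea of how one of these filters behaves, here’s a little sketch in Python rather than Max; the frequencies, bandwidths, and gain scaling are placeholders rather than values from the actual patch, but the two-pole resonator is the same basic idea as a standard resonant bandpass filter:

```python
import numpy as np

def resonate(x, freq, sr=44100, q=80.0):
    """Two-pole resonant bandpass filter: it rings at `freq` whenever the
    input has energy near that frequency, following its amplitude contour."""
    r = np.exp(-np.pi * (freq / q) / sr)       # pole radius from bandwidth freq/q
    theta = 2 * np.pi * freq / sr              # pole angle from center frequency
    b0 = 1.0 - r                               # rough input gain normalization
    a1, a2 = 2 * r * np.cos(theta), -r * r
    y = np.zeros_like(x)
    for n in range(len(x)):
        y[n] = b0 * x[n] + a1 * y[n - 1] + a2 * y[n - 2]
    return y

# Six filters on the same (stand-in) input, summed; placeholder frequencies.
mic = np.random.randn(44100)
out = sum(resonate(mic, f) for f in (220.0, 275.0, 330.0, 385.0, 440.0, 495.0))
```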

Setting up the filters (six of them, referencing the hexagonal forms you see in kaleidoscopes) is pretty straightforward; the guts of the piece are in how and when I set those frequencies.

I’ve got an overarching harmonic plan based on just intonation that’s a bit complicated, but it basically involves picking a new scale for each “scene change.” I’ve got five scales to choose from, and the different scales imbue the changing scenes with different moods or feelings. A fundamental pitch is selected from the chosen scale. Then each of the filters picks a multiple of that fundamental pitch (based on the scale) as its base pitch. Then each filter plays shorter volleys of some behavior (the specifics of which vary from scene to scene), which involve choosing yet another multiple (still based on the same scale) of that pitch. So you’ve got a scale degree multiplied by a scale degree multiplied by a scale degree, which is kind of analogous to the way images are reflected back and forth in the mirrors of a kaleidoscope.
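In pseudocode, the pitch selection works roughly like this; the scale ratios and reference frequency below are made up for illustration (the piece’s actual five scales aren’t listed in this post):

```python
import random

# Hypothetical just-intonation scales as frequency ratios; placeholders only.
SCALES = {
    "bright": [1, 9/8, 5/4, 3/2, 5/3],
    "dark":   [1, 16/15, 6/5, 3/2, 8/5],
    # ...the real piece has five scales to choose from
}

BASE_HZ = 55.0  # assumed reference frequency, not from the piece

def scene_change():
    """On a scene change: pick a scale, pick a fundamental from it, then give
    each of the six filters its own scale-derived multiple as a base pitch."""
    scale = random.choice(list(SCALES.values()))
    fundamental = BASE_HZ * random.choice(scale)
    bases = [fundamental * random.choice(scale) for _ in range(6)]
    return scale, bases

def volley_pitch(base, scale):
    """Within a volley, a filter chooses yet another multiple from the same
    scale: degree times degree times degree, like reflections in the mirrors."""
    return base * random.choice(scale)
```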

I’ll be previewing two of these scene behaviors this Sunday. One is a kind of swooping, sustained thing, where each of the filters behaves independently, coming in and out at different times, hovering around different octave offsets, very mellow and ambient. The number of swoops per volley, the length of the sustain, the length of the sweep, and the time between volleys are all controlled algorithmically, slowly changing over time. The other behavior is more rhythmic: different patterns pulsing on the different filters, playing off each other, with the patterns, density, tempo multipliers, and pauses all gradually changing under algorithmic control. Still mellow and ambient, but with a beat!
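The “controlled algorithmically, slowly changing over time” part amounts to something like a random walk on each control value, so parameters drift between volleys instead of jumping to a fresh random value every time. A rough sketch, with parameter names and ranges that are purely illustrative:

```python
import random

def drift(value, lo, hi, step):
    """Random-walk a control value within bounds, so it changes slowly
    over time instead of being re-randomized from scratch."""
    return min(hi, max(lo, value + random.uniform(-step, step)))

# Illustrative parameters for the swooping scene (names and ranges are placeholders).
swoop = {"swoops_per_volley": 3, "sustain_s": 4.0, "sweep_s": 2.0, "rest_s": 6.0}

def next_volley(p):
    """Nudge the scene's parameters a little before each new volley."""
    p["swoops_per_volley"] = round(drift(p["swoops_per_volley"], 1, 6, 1))
    p["sustain_s"] = drift(p["sustain_s"], 1.0, 8.0, 0.5)
    p["sweep_s"] = drift(p["sweep_s"], 0.5, 4.0, 0.3)
    p["rest_s"] = drift(p["rest_s"], 2.0, 12.0, 1.0)
    return p
```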

I’ve done a couple of pieces exploring this notion of algorithmically generated rhythmic patterns, and my first efforts were pretty unsatisfactory, notably in a piece called “Study for Eventual World Domination,” which I did at the 2Pi Festival in Hangzhou in 2006. Some of the patterns were interesting, but they kept coming out in a stream that became boring after a while, homogeneous in its constant randomness, since it lacked any mid-level coherence. So I’ve come up with a system that will still generate algorithmic patterns, but store them in a list, so that they can recur and play off other patterns, and so far this method is proving much more satisfactory. It’s actually a cross between the way I’m storing radio signals in buffers in Radiospace (randomly choosing one of the six buffers to rewrite every 20 seconds, allowing for this same kind of mid-level coherence) and how I was choosing cells to play and juxtapose in the EndWar music system.
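Conceptually, the pattern store is just a small pool of generated patterns: most volleys reuse one, and every so often a slot gets overwritten with a fresh pattern, so material recurs without freezing in place. The slot count and rewrite rate below are placeholders, not the piece’s actual values:

```python
import random

def random_pattern(length=16, density=0.4):
    """Generate a fresh rhythmic pattern: 1 = pulse, 0 = rest."""
    return [1 if random.random() < density else 0 for _ in range(length)]

# A small pool of stored patterns. Reusing them lets patterns recur and play
# off one another (mid-level coherence); occasionally rewriting a slot keeps
# the material evolving, like rewriting one of the buffers in Radiospace.
pool = [random_pattern() for _ in range(6)]

def next_pattern(rewrite_prob=0.15):
    if random.random() < rewrite_prob:
        pool[random.randrange(len(pool))] = random_pattern()
    return random.choice(pool)
```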

To hear the other two scene behaviors, you’ll have to come to Beijing next month and check out the installation! The show opens April 17. I’ll post more about all the specific details later. In the meantime, see you Sunday, 8pm, Yu Yin Tang!