Breaking New Ground
Breaking New Ground is an installation I did in collaboration with Jutta Friedrichs of MÜ Furniture to represent 100% Design at Shanghai International Creative Industry Week. It ran from October 16-21, 2008, way up on Lingshi Rd., just north of Shanghai Circus World. My job was the audio component, a sound installation of six discrete channels synthesized in real-time and deployed algorithmically.
Jutta quit her job as an award-winning product designer a few months ago to launch her own furniture company, MÜ. Her public debut was at the 100% Design Shanghai show last June (and she’s racked up something like 15 glowing magazine write-ups in the short time since). 100% Design is an annual design show based in London, and it seems to be a pretty big deal in design circles. This was the first year they mounted one in Shanghai.
Shanghai International Creative Industry Week is exactly what it sounds like. I don’t know how long they’ve been doing it, but last year’s was held at a big, renovated slaughterhouse on the north side of town; the building was super funky, but the exhibition itself was kind of incoherent and sprawling. This year’s event was held in a brand new complex, one of those just-add-water Shanghai developments that apparently wants to be known as “The Factory.” A month before the exhibition Jutta had to wear a hardhat and climb piles of sand to visit the space, and the morning of the show landscaping was still being installed.
Anyway, Jutta’s pals with the 100% Design folks, and they wanted to have a big display at SICIW to remind people of their presence in Shanghai as a champion of cutting edge design, and in particular to plug next year’s exhibition. So they invited her to design their space (a large, tall cement room, 240 square meters), and she came up with a kind of arctic theme, with furniture (highlights selected from the previous 100% Design show) variously floating on or bursting up through shards of Styrofoam masquerading as ice.
For my sound component, the basic idea was wind and wind chimes, to support the arctic theme. There’s also a kind of insect-flute thing that occasionally floats in over the top. When we first got the sound up and running and I looked around the room, I was very pleased at how well the sound and visuals collaborated to evoke a unique ambience, quite distinct from the rest of the show.
I programmed the sound behavior in Max/MSP, and it exemplifies the maxims I’ve developed from doing game audio design for the last 12 years. The sounds are completely synthesized in real time and deployed algorithmically, to ensure that nothing ever loops. The six channels are completely independent (technically superior to Dolby Digital, since Dolby is a compressed format, and I’m sending out 6 discrete channels of uncompressed PCM audio data). The goal was to create an ambient, natural-sounding environment that’s consistent yet unpredictable, analogous to what you might hear sitting on a bench in a park (if the park was in Antarctica).
The sound was generated in Max, came out of a MOTU UltraLite sound card, ran through a Yamaha MG16/6FX mixer (not strictly necessary, but handy for testing and setting up), then out to some big BAL AP1600 amps, then out to the speakers. There were actually 12 speakers, with each channel output from the computer routed to two different speakers. The speakers used were these small, black Bose speakers with no model numbers, suspended along a ledge that ran around the perimeter of the 240 square meter room, about 3 or 4 meters off the ground, a perfect height for this piece. Except for the sound card (which I had to buy at the last minute), the rest of the equipment was rented.
What’s Going On
Though it’s not immediately apparent, there is a firm harmonic underpinning to the work. It’s all in just intonation (i.e., small-number frequency ratios, no temperament). There’s a fundamental frequency (a low A = 110 Hz) that changes on a random timer, something like every 1-2 minutes. It can pick from among 5 different pitch multipliers (1, 1.125, 1.25, 1.375, 1.5), which correspond basically to scale degrees 1, 2, 3, a sharp 4 (not represented on a piano keyboard, but corresponding to the 11th harmonic), and 5. A constant low wind drone (filtered white noise) always plays this pitch, doubled at the octave, to anchor the rest of the sound.
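Since the patch itself lives in Max/MSP, here’s a rough Python sketch of that harmonic scheme. The multipliers are the just-intonation ratios named above (1/1, 9/8, 5/4, 11/8, 3/2); the function names are my own, not anything from the actual patch:

```python
import random

BASE_HZ = 110.0  # the low A fundamental

# Scale degrees 1, 2, 3, sharp 4 (the 11th harmonic), and 5,
# as small-number just-intonation ratios.
MULTIPLIERS = [1/1, 9/8, 5/4, 11/8, 3/2]  # = 1, 1.125, 1.25, 1.375, 1.5

def new_fundamental(rng=random):
    """Pick a new fundamental, as the piece does every 1-2 minutes."""
    return BASE_HZ * rng.choice(MULTIPLIERS)

def drone_pitches(fundamental):
    """The wind drone plays the fundamental doubled at the octave."""
    return (fundamental, fundamental * 2)
```

In a running system, `new_fundamental` would be called from a random timer, and every other generator would read the shared fundamental so the whole piece re-centers at once.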
All other sounds are multiples of this pitch, and when it changes, it really refreshes the whole piece. I first discovered this effect in 2000/2001, when I was developing a DirectMusic score for a cancelled Xbox project called Jonny Drama (making extensive use of VBScripting to try to get some interesting juxtapositions to arise from asynchronous music deployment). It feels kind of like you’re cleaning out your ear after you’ve become accustomed to hearing one harmonic center for a long period of time.
There are six independent wind generators, one for each channel of audio output. The synthesis couldn’t be simpler: a noise generator and a resonant band-pass filter (noise~ and reson~). I use a random walk (drunk) to pick harmonics of the fundamental frequency (6th-16th), changing on a random timer (and the timer itself uses a random walk, so sometimes changes are more frequent than others, creating a drunk envelope). Low frequencies are weighted to be more likely than high ones. The first version I did had the pitch constantly changing, but Jutta felt (and I agreed) that the effect was too creepy, a kind of ghost-like moaning. So I decided to have each wind generator swoop to a new pitch, then sit there for a while. With all pitches being multiples of a fundamental, the whole piece stays very stable and consonant, while still evoking wind that blows whithersoever it will.
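A Python sketch of one wind generator’s pitch logic, under the constraints described above (drunk walk over harmonics 6-16, lower harmonics favored). The step sizes and the downward-pull probability are my assumptions, not values from the patch:

```python
import random

class WindGenerator:
    """One wind voice: a drunk walk over harmonics 6-16 of the fundamental."""

    def __init__(self, fundamental=110.0, rng=None):
        self.rng = rng or random.Random()
        self.fundamental = fundamental
        self.harmonic = 8  # current harmonic number

    def step(self):
        # Drunk walk: move at most +/-2 harmonics per step,
        # clamped to the 6th-16th harmonic range.
        self.harmonic += self.rng.choice([-2, -1, 0, 1, 2])
        self.harmonic = max(6, min(16, self.harmonic))
        # Weight low frequencies: occasionally pull high walks back down.
        if self.harmonic > 11 and self.rng.random() < 0.3:
            self.harmonic -= 1
        return self.fundamental * self.harmonic
```

In the installation each of the six generators would run its own walk on its own drunk timer, all reading the same shared fundamental.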
I spent way too much time on a subtle function that allows the wind to swoop to a new pitch in a more natural way. Rather than simply sliding from one pitch to another, it picks a new pitch somewhere close to the desired pitch, then gradually homes in, kind of like a pendulum coming to rest. It’s hardly perceptible in the final piece, but I’m happier knowing it’s there. When the wind’s pitch is swooping, I open up the Q on the filter a bit, so it’s a noisier sound, becoming more pitch-focused only when there’s an important pitch to sound.
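The pendulum-settling idea can be sketched like this: each step lands somewhere within a shrinking window around the target, so the overshoot decays toward zero. The step count and decay factor here are illustrative, not the values from the actual patch:

```python
import random

def swoop(start_hz, target_hz, steps=8, decay=0.5, rng=None):
    """Approach target_hz like a pendulum coming to rest.

    Each intermediate pitch lands within a window around the target,
    and the window shrinks by `decay` every step.
    """
    rng = rng or random.Random(0)
    spread = abs(target_hz - start_hz)  # how far off we may land
    path = []
    for _ in range(steps):
        path.append(target_hz + rng.uniform(-spread, spread))
        spread *= decay  # tighten the window
    path.append(target_hz)  # finally settle exactly on the target
    return path
```

Fed to a ramp generator (like Max’s line~), this gives a wavering approach instead of a straight glissando.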
The chimes are also super simple. When I started work on this piece, I began by trying to develop a software synthesizer modeled on my beloved Roland JP-8000 (still recovering from a power surge at the 2006 Ubisoft company party), but I quickly realized this was not the best investment of my limited time, so I stopped with the basics: simple synthesis objects, amplitude envelope, filter. So in the end all chimes are just filtered triangle waves.
The chime deployment is really the interesting part. There are 3 chime generators, each associated with a pair of wind generators. They’re on random timers to decide when to start. When they start, they pick one of the associated wind generators, and from then on the density is linked to the wind’s pitch (perceived velocity). They have a drunk envelope (i.e., drunk target and drunk time to get there) that tells them whether to get louder or quieter, and if they get quiet beyond a certain threshold they turn themselves off. The chimes are where you can really appreciate the “cleaning out the ear” effect I described earlier, if the fundamental multiplier changes while the chimes are playing.
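The drunk amplitude envelope with a shut-off threshold can be sketched as follows; the threshold, starting level, and step size are assumptions for illustration:

```python
import random

class ChimeGenerator:
    """Chime loudness as a drunk envelope that shuts itself off when quiet."""

    OFF_THRESHOLD = 0.05  # below this, the generator turns itself off

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.level = 0.5
        self.active = True

    def step(self):
        if not self.active:
            return 0.0
        # Drunk envelope: wander the level up or down a little each step.
        self.level += self.rng.uniform(-0.1, 0.1)
        self.level = min(1.0, max(0.0, self.level))
        if self.level < self.OFF_THRESHOLD:
            self.active = False  # quiet past the threshold: shut down
            self.level = 0.0
        return self.level
```

A scheduler on a random timer would re-activate a dormant generator, attach it to one of its pair of wind generators, and scale chime density from that wind’s pitch.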
The insect-flute is probably the least successful element of the piece. It felt like the piece needed another layer, but deciding exactly what that layer should be was tricky, especially given the relatively short time frame. Jutta had this concept of “bees in a bag,” to create a feeling of percolating excitement bubbling over, with one bee every now and then escaping. It’s a rich concept and could probably be a piece unto itself, but in this implementation the sound is a little too similar to the wind, so it sits in a middle ground: not quite merging with the other sounds into a new aggregate, yet not quite putting them in some kind of interesting contrast. I had the idea of something really solid and statuesque that the other sounds would kind of waft around, but it was hard to fit that into this ambient conception; it would be fun to do a variation as a concert piece with an acoustic trumpet or something.
Anyway, bees in a bag instantly suggested granular synthesis, which I’ve been doing a lot of lately. I created some drunk envelopes to constrain the various parameters and put limits on them: if the envelope falls below a certain threshold, the generator turns itself off, but if it rises above a certain threshold, the grains lengthen, the pitch and volume variation narrows, and it bursts into song! The “songs” are based on Markov chains, derived from some short melodies I wrote for this piece. This was the last element I added, and it caused me a lot of grief to debug (a small part of the blame goes to a bug in Max 5.02 that was fixed in 5.05). In my final implementation, I feel the Markov chains still sound too random; I’ve just scratched the surface of how they can be used, and plan to explore them further in subsequent pieces (second-order chains, coordinating melody and rhythm, lots of stuff to try).
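A first-order Markov chain over a short melody, the technique described above, can be sketched like this. The seed melody here is a placeholder, not one of the melodies actually used in the piece:

```python
import random
from collections import defaultdict

def build_chain(melody):
    """Map each note to the list of notes that followed it in the melody."""
    chain = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, rng=None):
    """Walk the chain: each next note is drawn from the current note's followers."""
    rng = rng or random.Random()
    note = start
    out = [note]
    for _ in range(length - 1):
        followers = chain.get(note)
        if not followers:          # dead end (melody's last note):
            note = start           # restart from the opening note
        else:
            note = rng.choice(followers)
        out.append(note)
    return out

melody = [60, 62, 64, 62, 60, 67, 64, 62, 60]  # placeholder MIDI pitches
song = generate(build_chain(melody), 60, 16)
```

Because a first-order chain only remembers the previous note, the output wanders; a second-order chain (keying on the previous two notes) would preserve more of the source melody’s shape, which is the direction mentioned above.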
Also, I was using a sample of me blowing on a bottle of Tsingtao beer as the basis for granular synthesis, and the sound is too similar to the blowing wind; it would have been nice to have something more contrasting.
The first thing I noticed from doing this project is that I need a new computer. This program brought my four-year-old single-core Pentium 4 laptop to its knees, with frequent audio dropouts, but the newer laptop one of the organizers volunteered for use at the exhibition didn’t even break a sweat, averaging around 20-30% CPU usage.
I also confirmed once again that you can’t just send an email and expect your technical needs to be covered. Despite my stressing numerous times that we needed a computer and a 6-channel sound card, no computer was to be found when the contracted audio team arrived to set up. As far as I can tell, it was only through sheer luck that they happened to be setting up the 12 speakers in pairs, which accommodated 6 discrete sends (although I had originally requested 8). The organizer eventually found a laptop, but I had to go out and buy a new sound card for the show at my own expense (thanks to my former Ubisoft colleague Zhang Lei for helping me track one down on short notice). Next time I’ll insist on talking with the audio contractors myself in advance, or better yet, supply my own equipment and let them rent it from me. At least I was wary enough to start on-site setup two days earlier than planned, to prepare for just such an eventuality.
I was surprised that an ambient installation was a bit of a tough sell; toward the beginning, folks kept wanting me to turn it up. I think that’s just what everyone in Shanghai is used to hearing: ugly sounds blaring out of storefronts. Subtlety’s a hard sell, but from my experience in games (and singing in choirs and going to bars and many other contexts), volume doesn’t guarantee excitement or interest or anything, really, other than volume; after a while, you just tune it out. A well-made piece doesn’t need to keep trying to draw attention to itself. In the end, I think people came to appreciate what the piece was about, and visitors to the exhibition seemed to like it.
There’s a bunch of other stuff I would have liked to do in this piece, but there wasn’t enough time. But I think a sign of a good piece is that it points you in a clear direction for future work.
In general, I want to work towards more coordinated systems. I think that’s the next challenge, once you’ve got some interesting dynamic behaviors going: adding more layers and coordinating them. There was some coordination going on already (chime density linked to wind velocity), but there could have been more (e.g., connecting chime and insect-flute start/stop times to some aspect of wind behavior instead of random timers). I also would have liked to make the 6 wind generators work together in a kind of coordinated network, to get the feeling of gusts of wind moving through a space.
The insect-flute thing might have been more interesting if there were several layers of it going on, coordinated to do chorale-like things, or even counterpoint. I want to experiment with that more, but adding two more layers of processor-intensive granular synthesis would have been way beyond my laptop’s capabilities, so I’d have to use a different synthesis technique.
It would have been nice to have a more sophisticated synthesizer for my chimes; FM in particular would be an obvious choice for such sounds. I’ll keep working on my virtual JP-8000.
As I mentioned, I barely scratched the surface with Markov chains, but I’m planning to explore them in detail in a subsequent work.