Three Studies for Disklavier and Mechanical Chihuahua

[Note: this article was originally drafted on July 23, 2015.]

After teaching at Berklee College of Music’s campus in sunny Valencia, Spain, for the past two years, I’m gearing up to head back to the US this fall, where I’ll be returning to my prior post at Berklee’s Boston campus (although I’m switching departments, from Film Scoring to Electronic Production and Design). Since graduating our latest class of master’s students last week, I’ve been spending time archiving some of my projects from the past two years. One of the quirkier among these is my Three Studies for Disklavier and Mechanical Chihuahua from early 2014.

I composed these super short pieces (about twenty seconds altogether) at the invitation of my friend the mad genius Ranjit Bhatnagar (we had the pleasure of bringing him over to Valencia as a visiting artist in the fall of 2013). He issued an open call for compositions that he could use as part of his installation Short Ride in a Fast Chihuahua, which was presented at the Qubit Machine Music festival, February 12-14, 2014, in New York City. The constraints were extreme: he needed compositions of exactly 26 regular beats at a tempo of 320 bpm, so that each beat could be triggered by a yip from his mechanical Chihuahua (detected via microphone). That comes out to just under five seconds of music (26 beats × 60/320 seconds per beat ≈ 4.9 seconds), or just over six seconds if you add some time for the resonance of the instrument. A composer was free to place as many notes of varying velocities on each beat as desired.

A Disklavier, as you may know, is a fully functional acoustic Yamaha grand piano that can also be controlled remotely, whether via real-time signals or a pre-recorded file (using the MIDI protocol). It’s like a fancy player piano. I suppose the first thing that occurs to most composers writing for this instrument is to seize the chance to devise something that would be impossible for a human performer to execute, perhaps along the lines of Conlon Nancarrow’s masterful player piano studies from the 50’s and 60’s or the more recent black MIDI phenomenon.

Given Ranjit’s constraints, a sophisticated rhythmic study of the type that Nancarrow frequently undertook was impossible, but harmony was wide open. So I thought it would be interesting to explore the relationship between harmony and timbre (which can kind of be thought of as the same thing) and see if I could perform a kind of additive synthesis on an acoustic piano.

Getting Max to talk to the Disklavier

This was the process for composing my first and third studies. First, I wrote a simple contrapuntal passage in two (first study) or three (third study) voices. Then I wrote a Max patch that, based on each note I composed, calculated a bunch of additional notes ascending from the original note, to be played at the same time, decreasing in volume as pitch increases. I believe I used a couple of different formulas, but all based on the way the harmonic series works (i.e., fundamental frequency multiplied by a regular sequence of numbers starting with 1). Since each “partial” generated in this way is not a simple sine wave, but another piano tone with its own rich timbre, the music gets dense in a hurry, but there’s a hard upper limit—the piano’s high C8—that keeps things from getting too crazy. There’s also necessarily a lot of rounding off of pitches going on, so that they land on a pitch the piano can play, with the result that on the upper end of the piano, most of the keys are being played, although, crucially, at different velocities.
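
If you’re curious, the logic boils down to something like the following Python sketch (a reconstruction of the idea, not the actual Max patch; the taper formula and partial count are placeholders, since, as I said, I tried a couple of different formulas):

```python
from math import log2

def midi_to_hz(m):
    return 440.0 * 2 ** ((m - 69) / 12)

def hz_to_nearest_key(f):
    # round a frequency to the nearest equal-tempered piano key
    return round(69 + 12 * log2(f / 440.0))

def expand_note(midi_note, velocity, num_partials=16, top=108):
    """Expand one composed note into a harmonic-series stack of piano notes."""
    f0 = midi_to_hz(midi_note)
    stack = []
    for n in range(1, num_partials + 1):
        key = hz_to_nearest_key(f0 * n)    # fundamental times 1, 2, 3, ...
        if key > top:                      # the piano's top C caps the stack
            break
        vel = max(1, round(velocity / n))  # quieter as the partials ascend
        stack.append((key, vel))
    return stack

# e.g. C below middle C at velocity 100 becomes a dozen-plus simultaneous notes
print(expand_note(48, 100))
```

With fifty-odd composed notes each fanning out into a dozen or so keys, you get to several hundred notes per study pretty quickly.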

So the result is that, from the fifty or so notes I originally composed, my program spit out around seven hundred notes for the piano to play in each of my six-second, twenty-six-beat compositions. My original compositions were actually quite simple and modal, avoiding big leaps in pitch, in an effort to help all of the different pitches perceptually fuse into one complex timbre as they move around in parallel. (I may have been thinking a bit of the first of Stravinsky’s Three Pieces for String Quartet.)

Original sketches for studies #2 and #1 (I don’t think I wrote out #3). (BTW, the scrawls on the top of the page are from shortly after I fractured my humerus at the end of 2012, hence a little sloppy.)
First Study
Third Study

And I was perhaps even a bit surprised to hear that it actually works! If you focus on, for example, the middle voice of the third study, which starts on the fourth beat with a diatonic run from C below middle C up to F, it doesn’t sound like a normal piano sound, but something brighter, almost more like a Rhodes. The same is true, maybe a bit less pronounced, in the bass part of the first study.

To hear the difference, compare with the second study. This one was composed the old-fashioned way; I wrote the parts out on paper and just wanted to see what I could do by employing the whole range of the piano at once, so maybe this one’s a bit closer to a Nancarrow piece. Here, the timbre of the piano is untransformed; it still sounds like a regular piano, albeit a very busy one.

Recording the Disklavier on the Ann Kreis Scoring Stage

Berklee staff engineer and assistant professor and all around friendly and quite capable guy Chris Wainright was the engineer; as I recall, his description of this music was that it evoked an ice cream truck being pushed off a cliff. We used a pair of Royer 121 ribbon mics (if you see photos of the session, there’s a third mic set up, but we only used two for this recording) and recorded on Berklee Valencia’s Ann Kreis Scoring Stage into an Avid System 5 mixer. Ostensibly, this session arose out of Chris’ desire to test a faulty ribbon mic, although I’m not sure how ideally my music ultimately suited this purpose. We didn’t use any reverb or EQ or any other post-processing; I thought these recordings were dense enough as they were.

Of course the idea of using a piano to synthesize new timbres has been explored before. I would perhaps be remiss not to mention the work of the Austrian composer Peter Ablinger, who got a lot of attention in 2009 for his piece “DEUS CANTANDO,” which presents the arresting phenomenon of a speaking piano. His custom mechanical piano, designed by Winfried Ritsch, appears to have much greater resolution than a Disklavier (each note can be pulsed up to 16 times per second), allowing him to recreate the sound of a child’s voice reading “the text of the 2009 Declaration of the International Environmental Criminal Court that was founded at the World Venice Forum pursuant to an initiative of Adolfo Pérez Esquivel and the Dalai Lama” (all of this according to this Ars Electronica citation).

Thanks so much to Ranjit for the invitation to do this wacky thing! I invite you all to check out his fantastic work (maybe start with his brilliant Twitter bot Pentametron, which retweets, in rhymed pairs, tweets that happen to be in iambic pentameter).

The Tomb of the Grammarian Lysias

[Note: this little essay was originally published on April 24, 2015, FWIW.]

One of my primary compositional projects of 2014 was a setting of Constantine P. Cavafy’s poem The Tomb of the Grammarian Lysias for voice and audience members’ mobile devices. Cavafy has been described as the first great modernist poet of Greece, and he wrote this poem sometime between 1905 and 1915 while living in Alexandria. I premiered my setting at the joint International Computer Music Conference/Sound and Music Computing Conference in Athens last September and performed it again at the Beijing Central Conservatory of Music’s MusicAcoustica festival in October. Then in January I presented it in a new Web Audio version at the first Web Audio Conference in Paris, co-hosted by IRCAM and Mozilla.

Soundcheck at the University of Athens. Photo by Charles Nichols.

The soloist sings without amplification, and the accompaniment of the piece consists entirely of fragments of his (i.e., my) voice deployed algorithmically from the mobile devices of the audience. As the voice of the soloist (singing in Greek) is recorded and transmitted to the phones and tablets of audience members, a heterophonous, chant-like texture emerges. In works like this, I explore crowd-deployed speaker networks as a highly flexible and portable alternative to traditional electro-acoustic sound reinforcement infrastructure. This underexplored configuration, which I have been investigating in the App Choir ensemble I founded here at Berklee’s Valencia campus, allows engaged audience members to enable the performance of a piece in a way that is somewhere between active participant and passive listener, and an unexpected intimacy results as the sound of a performer’s sung voice emerges from a listener’s very personal device. After the Paris performance, someone commented that this was one of the rare concerts at which the audience wasn’t fiddling with their phones and tweeting or texting, since they were actually using their phones to allow the music to happen.

The audience during my performance at the Web Audio Conference (Mozilla headquarters). The sound man is bored, since he had nothing to do during my piece. Image by Paul Adenot.

I started composing this setting simply because I was inspired by the poem and recent circumstances, and I was also curious to explore this unique format, with no particular performance prospect in mind. When I saw the call for scores for the International Computer Music Conference in Athens on the theme of “From Digital Echoes to Virtual Ethos,” it seemed such a perfect match: my setting was in Greek, based on a tuning system first articulated by Ptolemy in the second century AD, consisting entirely of digital echoes of a soloist’s voice on audience members’ mobile devices. My piece was accepted and included in the opening concert of the conference on September 14, 2014, in the historic Ceremonial Hall of the University of Athens.
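
For the curious, the flavor of that tuning is easy to sketch. The ratios below are those of Ptolemy’s intense (syntonic) diatonic, the scale most often meant when Ptolemy’s name comes up in tuning discussions; take this as an illustration of the kind of just-intonation ratios involved rather than a transcript of the piece’s actual tuning tables:

```python
from fractions import Fraction

# Ptolemy's intense (syntonic) diatonic scale as just-intonation ratios
# above a tonic; an illustration only, not necessarily the exact tables
# used in the piece.
PTOLEMY_INTENSE_DIATONIC = [
    Fraction(1, 1), Fraction(9, 8), Fraction(5, 4), Fraction(4, 3),
    Fraction(3, 2), Fraction(5, 3), Fraction(15, 8), Fraction(2, 1),
]

def scale_frequencies(tonic_hz):
    """Frequencies of the scale degrees above a given tonic."""
    return [round(float(tonic_hz * r), 2) for r in PTOLEMY_INTENSE_DIATONIC]

print(scale_frequencies(220.0))   # a scale built on A3
```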

The poem might seem an odd choice, but I was drawn to it for several reasons. It describes a space, variably navigable, and presents a non-prioritized list of different types of writings, which, together with the Greek term γραφές that has been translated as “variant readings,” suggested an intriguing approach to text setting that incorporates some kind of variable, real-time process, such as I often employ in my work.

I just received word that a paper I co-authored with Javier Sanchez (who helped tremendously with the code architecture) describing this composition has been accepted to the ISEA 2015 conference in Vancouver this August, so I’ll save the technical discussion for then. [Update: You can read that paper here!] For now I’d like to share a bit about the particular constellation of personal associations that kept this text in my mind as 2013 drew to a close.

In French, a bachelor party is an “enterrement de vie de garçon:” the burial of the carefree life of a young man. While it’s an amusing term, the funerary imagery underscores what a dramatic transition the nuptial proceedings represent. (Although since Jutta and I had just a short time with friends in Berlin before our wedding in August 2013, we ended up simply combining our bachelor and bachelorette parties into a group dinner.) We honeymooned in Greece, the first visit for both of us. On the island of Santorini, we bought books of Greek poetry at Atlantis Books in Oia, following the recommendations of the proprietor. We watched the sun set from the roof of the bookstore and drank assyrtiko, and I read Constantine P. Cavafy’s Ithaca to a small group of tourists.

The caldera of Santorini. That’s Oia off to the right.

Then on November 22, my grandfather died. In all of his 106 years, it is unlikely anyone ever referred to him as a grammarian. But I was reminded of something Laurie Anderson said, that when her father died, it was like a whole library burned down. Art Houge was of 100% Norwegian stock, and I’m pretty sure he had never been to Greece, maybe not even to Europe. But I had recently read an article about the exceptional longevity of people living on certain Greek islands, and I thought of the photographs we saw adorning cremation boxes in an isolated chapel on the hills of Amorgos.

A cemetery on Amorgos

I probably read “The Tomb of the Grammarian Lysias” for the first time on Amorgos, or if not, shortly after. I can’t remember if it immediately struck me as well suited to a musical setting, but at some point the commemorative aspect of the poem became associated with a recording of Greek Orthodox music I bought at the Nikos Xilouris shop in Athens on the final leg of our trip. I decided to set it in the original Greek, so I asked my student Niko to read the poem for me to record, and I wrote a simple app to play individual phrases at the touch of a button, which I used to practice reciting the text on my twenty-minute walk to work every day, gradually getting a sense of the inflections and formulating a melody.

In my video game work and sound installations, I explore sound as a landscape, rather than a narrative, or maybe landscape as a kind of narrative. This poem describes a space, a library within a city, with indications of the relative positions of things. There’s a description of the kinds of texts found in the library, listed in an arbitrary order, as though selected from browsing the stacks. Lists, like a litany or a mantra, can evoke a kind of interior landscape, a state or mood. I remember Psalm 150, which I set for The Esoterics using similar techniques in 2004, in which the psalmist exhorts us to praise the Lord with a long list of instruments, but I’m sure we’re not intended to follow the instructions sequentially.

The poem provides a space that is a memorial, a place you come to remember, as books and writing are ways of remembering. I consider my setting, composed at a time of significant life transition and dedicated to the memory of my grandfather, as a way to pass among these books.

Λυσίου Γραμματικού Tάφος
Κωνσταντίνος Π. Καβάφης

Πλησιέστατα, δεξιά που μπαίνεις, στην βιβλιοθήκη
της Βηρυτού θάψαμε τον σοφό Λυσία,
γραμματικόν. Ο χώρος κάλλιστα προσήκει.
Τον θέσαμε κοντά σ’ αυτά του που θυμάται
ίσως κ’ εκεί — σχόλια, κείμενα, τεχνολογία,
γραφές, εις τεύχη ελληνισμών πολλή ερμηνεία.
Κ’ επίσης έτσι από μας θα βλέπεται και θα τιμάται
ο τάφος του, όταν που περνούμε στα βιβλία.

The Tomb of the Grammarian Lysias
Constantine P. Cavafy

Very near to the right of the entrance to the library
of Beirut, we buried wise Lysias,
a grammarian. The spot is very well suited.
We placed him near to those things that he may
still remember there – commentaries, texts, technologies,
variant readings, volumes filled with Hellenistic studies.
And also this way, his tomb will be seen and honored
by us, when we pass by the books.

Translated by Niko Paterakis and reproduced with kind permission.

Amorgos

 

Food Opera Manifesto

On May 22, 2012, I realized a longstanding dream when I collaborated with local chef Jason Bond (of the widely acclaimed Bondir Restaurant in Cambridge) on a new multimedia composition entitled Food Opera: Four Asparagus Compositions. Chef Bond crafted a four course, asparagus-based tasting menu, and I provided real-time algorithmic accompaniment, responding to cues from diners and servers to score a meal as I would a video game.

We premiered our work at the Harvard University Graduate School of Design’s 40K Studio to invited members of the Harvard community, as part of a program curated by Jutta Friedrichs, Elisabeth MacWillie, and Sara Hendren, students (at the time) in the new program in Art, Design and the Public Domain. The event was widely acclaimed in such news outlets as NPR, Grub Street, and the Australian Broadcasting Corporation’s First Bite, in addition to this nice write-up on Harvard’s site.

Food Opera: Four Asparagus Compositions; photos by Melissa Rivard & Andrew Janjigian

(Thanks to Melissa Rivard & Andrew Janjigian for the photos!)

We’ve just announced a second food opera event entitled Sensing Terroir: A Harvest Food Opera on Tuesday, November 13, this time open to the public, and the complete information for that event (including reservation hotline; seating is pretty limited) is over on the Bondir website. This time around we’re partnering with Artists in Context to tell the story of local farming and sustainable food sourcing, incorporating field recordings and interviews with regional suppliers into the emergent soundscape to investigate dining as a communicative medium. The event is once again being produced by the deft and intrepid Jutta Friedrichs, and Stephan Moore (Merce Cunningham’s former soundman, Issue Project Room curator, and accomplished composer in his own right) will also be supporting with sound design and custom speakers.

But before we get into that, I first want to take a moment to pull together some key concepts surrounding this unique project.

Friends can testify that I’ve been talking about this idea for years; the first specific conversation I can pinpoint was sometime in 2006. I’ve long appreciated fine food, and somewhere along the line I realized that enjoying a well-crafted meal was an inherently time-based experience, akin to ballet, music, or film, but tailored to the sense of taste. This is true not only in the succession of courses, but in the way a course evolves, as flavors meld, textures break down, and hot and cold converge to room temperature. Even psychologically, our perception of a new dish changes as we become accustomed to it. Once I acknowledged this, the desire to compose music to accompany a meal, just like a dance or film score, followed naturally.

For a long time I called this concept a “dinner symphony” or “restaurant symphony.” The etymology of the word “symphony” conveys the idea of several elements coming together in a harmonious way. But I came to feel that the primary association of the word is with sound as a unified medium, whereas “opera” (which literally means merely “work”) has more of a multimedia/multisensory association, which more accurately evoked what I was after with this project. Hence, “food opera.” (I’ve noticed, however, that this term causes problems when I’m discussing the idea in Chinese, where the word for opera, 歌剧, explicitly connotes singing; I’m open to suggestions for a less misleading translation.)

Super-caramelized white asparagus; photos by Melissa Rivard & Andrew Janjigian

This is perhaps the first time in history that it’s been possible to create a customized, responsive sonic food pairing for each individual diner, a genre that has only recently become feasible with any degree of refinement, thanks to the development of responsive digital systems and advancements in speaker technology. At the heart of this endeavor is the goal of respecting the integrity of the ancient social institution of communal dining. This is fundamentally different from the notion of dinner theater, in which some action is taking place away from the table; instead, in a food opera, the plate is the stage. It’s also very different from having a string quartet sitting next to your table, or a violinist or mariachi band wandering through the restaurant. In order to not impinge on the dining experience, the sound must be electronically mediated; the very presence of a live musician distorts the calculus of the meal to an unacceptable degree.

Instead, diners are free to talk, uninhibited, just as they would at any other meal. Food opera supports spontaneous interaction. In this way, music is on equal footing with the food. Chef Bond talks about how, in his observation, the awareness and appreciation of food happens intermittently, during pauses in the conversation. In our collaboration, music is not foregrounded, and it should absolutely not distract from the rest of the experience. Instead, it has an ambient quality; to borrow Brian Eno’s formulation, it should be as ignorable as it is interesting.

There’s an additional challenge in scoring a meal, one that is different from writing music for film or ballet, but actually quite similar to my work composing video game music over the past sixteen years: the element of indeterminacy. You can’t know in advance how long a diner will take to finish a course, or when the next dish will come. There’s an element of interactivity not only in food consumption rate, but also in the trajectory a diner chooses through the choices on a menu. So the concept from the beginning was to use video game scoring techniques to provide a customized soundtrack for each diner. Here again, the logistical challenges in asking a live performer to provide such nuanced, personalized accompaniment become clear; this experience could not exist without computers and speakers.
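
To make the video game analogy a bit more concrete, here is a toy sketch of the kind of event-driven logic I mean (everything here is hypothetical and vastly simplified compared to the actual patch): the music for a course keeps generating material indefinitely, and only a cue from a server or diner moves things along.

```python
import random

# Toy event-driven "meal scorer" (hypothetical names, not the actual Food
# Opera code): each course has its own pool of musical gestures, and the
# score lingers on a course indefinitely until a cue arrives.
COURSE_GESTURES = {
    "egg":       ["soft pad", "low drone", "sparse bells"],
    "soup":      ["warm cluster", "slow arpeggio"],
    "asparagus": ["bright figure", "shimmer", "held fifth"],
}

def score_meal(courses, cue_stream):
    """Yield one gesture per beat, advancing courses only when cued."""
    course_index = 0
    for cue in cue_stream:                  # one item per musical beat
        if cue == "next_course" and course_index < len(courses) - 1:
            course_index += 1               # the diner finished; move on
        course = courses[course_index]
        yield course, random.choice(COURSE_GESTURES[course])

# Simulated evening: most beats carry no cue; two "next_course" cues arrive
# whenever the diner happens to finish.
cues = [None] * 4 + ["next_course"] + [None] * 3 + ["next_course"] + [None] * 2
for course, gesture in score_meal(["egg", "soup", "asparagus"], cues):
    print(f"[{course}] {gesture}")
```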

I feel that one thing taste and sound have in common is that they’re both inherently abstract, unlike a painting or a sculpture. (Perhaps this is why, as I’ve joked with composer friends in the past, the only thing worse than a bad music review is a bad restaurant review; these experiences are hard to distill into words.) Before the advent of recording technology, representational sound was by far the exception (think of the timpani evoking rolling thunder at the end of “Scène aux champs” in Berlioz’ Symphonie fantastique). Most music is about the abstract relationships between pitches and rhythms that add up to melody and harmony, and it’s similarly almost impossible to think of a taste that is about anything other than itself.

Chef Jason Bond and me; photos by Melissa Rivard & Andrew Janjigian

So in this project, our focus is on perception and experience, not on preconception or association. The last thing I’d want to do is to bring out a sea shanty to evoke seafood, for example, or, I don’t know, a polka to evoke Polish sausage. (This kind of ethnic shorthand is in fact the most widespread kind of food/music pairing, and to me the least interesting.) This is why I sought out a collaborator who eschews traditional dishes in favor of exploring new forms and combinations; it would be much less interesting to compose music for something like a Caesar salad, with which most people already have some kind of context or expectation. Instead, in the course of our first collaboration, Chef Bond and I realized that many of the same abstract structural notions and terminology apply to cooking as well as to music, and we built on such overlapping concepts as texture, color, density, contrast, pungency, development, and form.

For the Harvard show, due to the relatively small scale, servings were fairly synchronized, and all of the activity was mixed down to six channels of sound, spaced along a long, narrow table with about 10 seats on either side. As each new dish was rolled out, there was a gradual crossfade from one dish to the next, as the servers made their way along the table. However for next month, we’re doing something closer to my original conception, which is for each diner to have a unique channel of audio, and for seatings to be unsynchronized, so that different tables are starting and stopping at different times; at any given moment, each table will be at a different point in the overall meal arc. There will be a central computer system that coordinates the music of all tables to a common pulse and key (again, video game techniques to the rescue). It’s a feature of the piece that sound will blend across adjacent tables; the entire restaurant will be transformed into a lush and active soundscape. The plan is to be able to provide diners with a unique recording of their meal after the fact.
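
The coordination scheme is roughly this (a bare-bones sketch with assumed names, not the production system): a central conductor owns the pulse and the key, and each table’s generator decides only what to play within those shared constraints.

```python
import random

# Bare-bones sketch of a central conductor keeping independent per-table
# generators on a common pulse and key (names and structure are assumptions).
class Conductor:
    def __init__(self, bpm=72, key=("D", "dorian")):
        self.bpm, self.key, self.beat = bpm, key, 0

    def tick(self):
        self.beat += 1
        return self.beat, self.key

class TableGenerator:
    def __init__(self, table_id):
        self.table_id = table_id
        self.course = 0     # each table moves through the meal at its own pace

    def play(self, beat, key):
        # the table chooses *what* to play; *when* and *in what key* come
        # from the conductor, so sound can blend across adjacent tables
        degree = random.choice([1, 2, 3, 5, 6])
        return f"table {self.table_id}, course {self.course}: degree {degree} in {key[0]} {key[1]} at beat {beat}"

conductor = Conductor()
tables = [TableGenerator(i) for i in range(3)]
for _ in range(4):                          # four beats of the shared pulse
    beat, key = conductor.tick()
    for t in tables:
        print(t.play(beat, key))
```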

Lynette Roth and Jeffrey Schnapp at the first food opera, Harvard GSD, May 2012; photos by Melissa Rivard & Andrew Janjigian

I did a study back in Shanghai in 2010 with my friend the excellent chef Caroline Steger. We planned a three course meal and invited friends to talk about their impressions. There was scallop with wasabi sabayon, pumpkin soup garnished with lime/cumin-toasted seeds, and asparagus with hollandaise sauce. Based on these conversations, I composed several studies, and one page of music for woodwind trio wound up in last May’s event. I basically analyzed the page I had written, then wrote an algorithm to generate endless variations of it. Other sections used different generative, procedural, or algorithmic techniques, continuing indefinitely while avoiding repetition (a key concern in video game composition).
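
The variation generator itself was conceptually simple; the general approach looks something like this (a hedged sketch with placeholder motif names, not the actual woodwind trio material): break the analyzed page into short motifs and recombine them endlessly, never repeating the same one back to back.

```python
import random

# Sketch: endless variations from a small pool of motifs analyzed out of one
# written page, avoiding immediate repetition. Motif names are placeholders.
MOTIFS = ["a", "a'", "b", "b'", "cadence"]

def variations(motifs, length=16, seed=None):
    rng = random.Random(seed)
    out, last = [], None
    for _ in range(length):
        m = rng.choice([x for x in motifs if x != last])  # no back-to-back repeats
        out.append(m)
        last = m
    return out

print(variations(MOTIFS, length=12, seed=2012))
```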

There’s been a lot of recent interest in the overlap of food and the more traditional arts (or perhaps I should say, those arts that are more traditionally considered arts). Just a few examples off the top of my head include the Science and Cooking series at Harvard (through which I first met Chef Bond), Ferran Adrià’s opening El Bulli to guests at Documenta 12, Marina Abramović’s Volcano Flambé (in which her soothing voiceover accompaniment, delivered via iPod, described the textures of the dessert [“crunchy, creamy, cold…”], and which I got to taste at Park Avenue Winter last year, and which may have made me ill), Heston Blumenthal’s Sound of the Sea (also featuring iPod accompaniment), and Paul Pairet’s multimedia tour de force Ultraviolet, which I got to check out when I was back in Shanghai last summer (it was amazing, and I have lots to say about it, but perhaps we should save that for another time). I feel like what we’re doing is part of a larger movement that is reevaluating the aesthetic potential of taste.

Here’s Jutta’s video, documenting the first food opera!

Food Opera – Four Asparagus Compositions from Jutta Friedrichs on Vimeo.

So to bring us back to the present, the next food opera happens on Tuesday, November 13, at Bondir in Cambridge. It’s a five course $125 prix fixe menu with drink pairings. That figure is not out of line for either a five course meal at a fine restaurant or an opera ticket, so you can think of it as two for the price of one!

In case you’re still not sold on this liberal conception of opera, it seems fitting to close with a link to an article entitled “The New Opera,” written by Gavin Borchert almost 10 years ago in the Seattle Weekly, in which he likens video game development to the early days of opera, with composers “exploring untested ways of combining music, story, and visual spectacle.” Sounds about right!

I leave you with the menu from Food Opera: Four Asparagus Compositions last May:

Soft-cooked pullet egg
Smoked asparagus froth, spiced syrup

Hot white asparagus soup
Miso-hazelnuts, nori, togarishi marshmallow

Warm green asparagus gel
Shaved asparagus, bonito, calamondin lime, madura long pepper, black garlic candy, spruce shoots, Okinawa sugar, sea salt, angelica, largo, matcha ice…

Super-caramelized white asparagus
Bran ash, sesame jelly, lemon mousseline, ginger cake

Mignardises asperges

Arcanum at 60

[Update: In addition to the PC Gamer citation, below, Forbes listed Arcanum as one of the 12 best video game soundtracks of all time in September 2012.]

The March 2012 issue of PC Gamer magazine includes their list of the top 100 games of all time, and Arcanum, for which I composed the soundtrack and designed most of the sound effects, came in at #60. Here’s the entire citation:

“Arcanum is a thesis in player-character depth. My first trip through, I murdered Virgil the second he showed up, then went lone-wolf as a half-elf magic user with a penchant for cheap harlotry and booze. My second romp: I rolled as a charismatic capitalist gnome, collecting NPCs to do my technological dirty work. Having the Industrial Revolution stirred in with a reactionary, magic-using population is a setting unrivaled in RPGs. And that music…”

I was really thrilled to see the music called out explicitly (emphasis theirs). I think the only other soundtracks that received mention in their list were Peggle (Beethoven’s Ode to Joy) and Homeworld.

The Arcanum soundtrack is the biggest game score I’ve composed, and it’s been great to see the consistent interest the music has generated over the years, from Paul R. Coats’s saxophone transcriptions shortly after the game was released to the Lively Arcanum group that has been performing this music around Moscow and St. Petersburg more recently (I think this is their official website?).

The music has enjoyed a busy second life in the concert hall. I’ve presented the music live in a number of situations, beginning with the first Sound Currents concert in Seattle with odeonquartet in 2003, and later as part of a game audio panel at Cornish College of the Arts (also featuring Marty O’Donnell, Alistair Hirst, and Scott Selfon). I arranged a string orchestra version that was premiered at the Gewandhaus in Leipzig as part of the “Third Symphonic Game Music Concert” in 2005, and last November I presented it alongside some of my more recent ambient electronic work at a concert with students at St. Olaf, a pairing that I found quite complementary. It’s been played on the radio a few times, and I once got a BMI report informing me that it had been played on television in Finland.

I’ve received a steady trickle of fan mail, and one particularly touching note came from a first lieutenant stationed in Afghanistan who wrote about the cathartic role the music played for him. It’s really an honor to feel that my music has impacted people’s lives in this way.

I believe that the main reason for this music’s durability is that it was composed with a clear vision, a direct response to the unique environment conceived by the game’s creators, Tim Cain, Leonard Boyarsky, and Jason Anderson. In Arcanum, a Tolkienesque world of magic has undergone an industrial revolution. To evoke this historical anachronism, I composed music inspired by the modes and contours of early sacred polyphony, but orchestrated it for string quartet, an ensemble that came into its own around the time of the Enlightenment. This decision stood in contrast to the common choice of epic, orchestral music for role-playing games, and I feel its popularity is linked to its unconventionality.

Last week I was talking to some new students about their first writing assignment for the spring semester. Before they composed a note, I asked them to write down on paper their concept for the music. I want to get them thinking about what the music is meant to convey and what means might best convey it, while at the same time discouraging the kinds of habits that are easy to cultivate when writing with the fingers instead of the head. For a game soundtrack as much as for the concert stage, music that endures must have something to say.

If you want the full scoop on the Arcanum soundtrack (including links to recordings and scores), check out my Arcanum soundtrack page. I’ve also compiled other reviews of the Arcanum soundtrack over on my press page for your convenience.

Jay Chou and the Bastion OST

I’ve been playing a lot of Bastion lately, the indie game by Supergiant that has popped up on a whole bunch of Best of 2011 year-end lists. One of the music tracks has a lick in it that sounded oddly familiar, but I couldn’t place it at first. Then suddenly one day it hit me: Taiwanese pop superstar Jay Chou 周杰伦.

Regular visitors to my website probably know of my marginally unseemly fixation on Jay Chou; I even wrote an article for Time Out Shanghai in 2010 entitled “Why I Love Jay Chou.” He’s a trans-media pop star (as all the biggest ones seem to be these days), recording albums, starring in movies, hawking toothpaste and motorcycles (at $6 million, a record endorsement for an Asian artist). Western audiences who missed him in foreign fare such as Curse of the Golden Flower 满城尽带黄金甲 might know him best for his Hollywood debut as Kato in Michel Gondry’s Green Hornet last year. Studying the lyrics to his songs was my primary method for learning Mandarin, and I still harbor dreams of releasing a tribute CD one of these days. He was even the subject of my first ever post on this blog.

Check out this video for the second track on his November’s Chopin 十一月的萧邦 album from 2005, “蓝色风暴” (Blue Storm). (Note that Chopin is more commonly rendered 肖邦 in mainland China, but 萧邦 seems to work too, as discussed here.) Pay particular attention to the closing moments, from about 4:36.

Now check out this track from the Bastion soundtrack, by Darren Korb, starting around 0:26.

Given the slender overlap between Jay Chou’s and Bastion’s respective fan bases, I don’t think it’s so far-fetched that I might be the first person to notice this. Clearly, both artists are using the same loop from some sample library. I have no idea which library, but after conferring with some of my Berklee colleagues, the consensus is that the instrument in question is most likely a bouzouki, a fretted Greek lute.

I’m straining to remember, but I don’t think I’ve ever used a canned loop in one of my compositions. (I may have used some stock phrases on King’s Quest back in 1998, but slowed way, way down beyond recognition to create an ominous background texture.) I’m totally down with the idea of creating a meta composition out of several streams of patterns or recorded material, the way that Charles Ives or Luciano Berio or David Shea might weave a larger fabric out of existing sounds; in fact, I think this is pretty much the job description for a video game audio lead. But using a stock loop out of a sample library just takes all the fun out of it. At the very least, if you want to keep it fresh, roll your own loops. Moreover, when creating a composition out of layered loops, it’s too easy to ignore the contrapuntal interactions between the different layers; you can miss the chance to think through all the alternate configurations of notes that might make your musical point more purposefully.

Most critically, there’s a regularity and periodicity that really feels anathema to the subtle irregularities of human performance, and it’s too common to come up with something artificial and rigid, chopped up evenly along the bar lines: every eight beats (or whatever) another layer comes in or out. And when a layer stops, it stops abruptly, with none of the resonance or decay of a natural sound, since it must be truncated precisely on the bar line, in order to seamlessly connect back to the beginning of the phrase. When I was fielding composer demos back at Ubisoft, this characteristic was grounds for immediate rejection.

The same objection applies on a macro scale, too, in game music implementations that simply loop a piece of music indefinitely (Bastion’s primary mode of musical organization). At best, this kind of repetition can lead gamers to tune out the music, reducing its impact, and at worst, it leads to active irritation. In any event, the power of music to support the emerging drama of a narrative is lost. In fact, the desire to eliminate loops and fades (two of the most common signifiers that you’re listening to a game soundtrack) was a guiding impetus behind the design of the EndWar music system.

At least for Jay, the bouzouki sample is only a minor flourish, in a fairly ridiculous duet with DTMF touch tones, almost a punch line at the end of an eclectic song that started with Gregorian chant.

But anyway, let’s get back to my Jay fetish. Here’s a clip of me sitting in with the house band at Harry’s Bar in Suzhou on a few very loose renditions of Jay tunes towards the end of 2010. (Be patient; you’ve got to breach the Great Firewall for this clip.)

For more, don’t miss my Best of Jay Chou playlist on Spotify!

New Christmas Music

I had a whole essay planned about this, but I don’t think I’m going to get around to it in time to do anyone any good this year, so for now let me simply post my list of somewhat obscure Christmas music that I’ve been casually researching over the years. Thanks to my pal Andrew Sempere for encouraging me to share this! Beginning in the late 90’s, I had the idea to pick up a new album of new (to me) Christmas music every year, and here’s where it’s led me.

Olivier Messiaen, Vingt Regards sur l’Enfant-Jésus. If I could pick only one Christmas CD for the rest of my days, it would be Pierre-Laurent Aimard’s masterful recording of this masterpiece (cheating really; it’s a double CD set), running the gamut from serene to ecstatic. I don’t listen to much solo piano music, but this piece and this performance are so kaleidoscopic, it might as well be an orchestra or a synthesizer.
John Adams, El Niño. Combining traditional Christmas texts with Latin American poetry and stories from apocryphal gospels (e.g., St. Thomas), filling in the story with details like Joseph’s reaction when he came home and found his fiancée great with child. Fantastic recording with Dawn Upshaw and the late Lorraine Hunt Lieberson.
Tallis Scholars, Christmas Carols and Motets. In Dulci Jubilo, Lo How a Rose E’er Blooming, and other, lesser-known early Christmas music.
Michael Praetorius and friends, Mass for Christmas Morning. A Lutheran mass as it might have sounded around 1620, with music mostly by Praetorius, but a bit of music by Martin Luther himself. A fascinating speculation, although the dynamic range of this recording is extreme, and the loud bits are a bit distorted.
Hector Berlioz, L’Enfance du Christ. Turning the traditional story into the stuff of opera, in true Romantic fashion, full of political intrigue in the court of Herod and a beguiling trio for two flutes and harp.
George Crumb, A Little Suite for Christmas, A.D. 1979. In the vein of Makrokosmos, and “Lully Lullay, Thou Little Tiny Child” gets a treatment similar to “Death and the Maiden” in Black Angels.
Olivier Messiaen, La Nativité du Seigneur. A relatively early work by Messiaen, a suite for organ. One of these pieces was how I met my good friend Erik Floan; take it from me, evangelical pastors: Messiaen gets butts in pews!
Dale Warland Singers, December Stillness. One of the finest choral CD’s I’ve ever heard, impeccably sung and pristinely recorded. Too much gorgeousness to sum up, with Stravinsky, Penderecki, Paulus, Poulenc, Rachmaninov, though the best pieces are probably by folks you’ve never heard of.
Handel’s Messiah. One year I figured I had to break down and buy it, overplayed as it is (I even sang it a few years in Shanghai with the International Festival Chorus, billed as “Concert of Classical Favorites by Handel” to avoid alerting the censors, who nonetheless called off the show one year). I like Sir John Eliot Gardiner’s recording, although it’s also a bit fuzzy on the loud parts.
J. S. Bach, Advent Cantatas BWV 36, 61, 62. This CD is also pure gorgeousness. BWV 61 and 62 are both based on the chorale tune Nun Komm, Der Heiden Heiland, and BWV 61 holds a special place in my heart, as I sang the bass recitative “Siehe, ich stehe vor der Tür” when we did this cantata at Holy Trinity on Mercer Island (under the direction of the aforementioned Erik Floan).
J. S. Bach, Christmas Oratorio. Masterpiece! Again I go with Sir John, also featuring Anne Sofie von Otter!
Ralph Vaughan Williams, Hodie. I actually never bought this one, but I checked it out from the library; a late work, and I’ve been thinking I need to give it another go.
Benjamin Britten, A Ceremony of Carols. This was last year’s choice, and it wasn’t really new to me, as I’ve sung some of it, and I’ve heard “This Little Babe” sung several times quite masterfully by St. Olaf’s Manitou Singers (including on this year’s Christmas at St. Olaf live simulcast). I got the recording with King’s College Choir and Stephen Cleobury, and I can’t say I’m overly taken with it; I’m tempted to conclude that it’s simply too tricky for children to sing (especially those quick licks in “Wolcum Yole!”).
Oh, and I would be remiss to exclude the various St. Olaf Christmas Festival CD’s to which I have returned over the years.

Spotify makes this whole endeavor less of an adventure, and I’m actually in the middle of an existential crisis about what it means to own a recording. But anyway, here are a couple of new discoveries that I’m in the process of exploring this year:
John Harbison, The Flight into Egypt. Did you know that a Christmas cantata won the 1987 Pulitzer Prize for music?
John Harbison, Christmas Vespers
Hugo Distler, Kleine Adventsmusik
Krzysztof Penderecki, Symphony No. 2 “Christmas,” based on Silent Night (!)
(I’ve been on a bit of a Harbison kick lately, hearing him lecture at MIT a few weeks ago, then bumping into him on the subway a few days later; looking forward to his sixth symphony premiere by the BSO in January!)

Speaking of Spotify, I’m working on a Christmas playlist, if anyone cares to listen in.

And a couple of pop tunes to round things out.
Aimee Mann, One More Drifter in the Snow. Not just a good Christmas album, but a good Aimee Mann album, including a new song by Mann and one by hubby Michael Penn.
Amy Grant, A Christmas Album. This has been a Houge family Christmas tradition for about as long as I can remember paying attention, so it seems uncharitable to omit it here.
Elvis Presley, Blue Christmas. I get a mild, perverse pleasure out of this. It contains the theologically inscrutable lines, “Santa knows that we’re God’s children/That makes everything right/So say your prayers to the Lord above/Because Santa Claus comes tonight.”
Steve Taylor, “Winter Wonderland.” Mariachi style!
Prince, “Another Lonely Christmas.” The B-side to “I Would Die 4 U,” this is a pretty stupid song about a guy whose girl died on Christmas, but a pretty ridiculously intense performance.
Erasure, “She Won’t Be Home.” Don’t judge me.
They Might Be Giants, “Santa’s Beard.” “I saw my baby wearing Santa’s beard/I wish he would go, he’s breaking up my home.”

God bless us, everyone!

29 Giraffes

So you’ve got one more week to view my solo show over at the Axiom Center for New and Experimental Media (through November 6). The centerpiece of the exhibition is my 6-channel, real-time, algorithmic sound installation Kaleidoscope Music, the history, aesthetics, and inner workings of which are amply documented elsewhere (Dig Boston feature, Artforum critic’s pick, Kickstarter project, and several exhaustive blog posts, for starters).

Kaleidoscope Music at Axiom

Rather, what I’m here to tell you about today are the prints from my 29 Giraffes series that are also included in the show. I usually do a pretty good job of documenting my pieces when they go up, but I never got around to blogging about this series when it was first exhibited at [the studio] in Shanghai, back in August 2009 (check out the press release, flyer, and this fine review from That’s Shanghai magazine). So allow me to take a few moments to fill you in.

First exhibition of 29 Giraffes at [the studio] in Shanghai in 2009
29 Giraffes represents my first foray into visual art. The earliest source file I could find dates from July 29, 2007. You can see some of the images on Flickr and additional images (including some early tests) on Facebook.

These images were a natural extension of my work in sound, and the original goal was simply to create an album cover for my CD Radiospace 040823 (as featured last night on Gregory Taylor’s radio program Remember Those Quiet Evenings!). That piece performs algorithmically modulated granular synthesis on a live radio signal, sort of sandblasting the sound into new patterns and textures. I got the idea in my head that the album art ought to be generated the same way, and this stubborn notion delayed the release of the album (recorded in 2004) by about five years. It took a while to conceive of how to translate the process into the visual domain, and then another long while before I realized I could use software I already knew (Max/MSP, specifically the lcd object, before I learned Jitter) to pull it off, and then an extended period of experimentation and testing before I felt the results were worth sharing with anyone.

After all that effort, I kind of hate the image I rather hastily chose for an album cover, but I was in a hurry to get the CD pressed in time for the exhibition opening. If I had waited a little longer, this is the image I would have used (and still hope to use for a reissue one of these days).


Giraffe 2009719144455
In the same way that Radiospace samples bits of radio, these images sample fragments of digital photographs. The final images manipulate photographs of neon lights from Shanghai’s Nanjing Dong Lu pedestrian corridor (the stretch of street where the Sofitel in which I recorded Radiospace 040823 is located), not too far from the Bund (taken on a photo shoot with Jutta for my birthday in 2007, four years and one day ago). The software excises little chunks from these images and statistically arranges them into new patterns, according to various parameters that I can set (min/max size of the image fragment, location in the original image, density, opacity, etc.). The final compositions are built up from one or more layers (sometimes quite a few) of these statistical passes (horizontal or vertical), which I think of as analogous to brushstrokes, over the black digital canvas.
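
In spirit, the process resembles the following Pillow sketch (only in spirit; the originals were Max patches built around the lcd object, and the filenames and parameter values here are arbitrary): excise small rectangles from a source photo and paste them, with varying size and opacity, onto a black canvas in layered passes.

```python
import random
from PIL import Image

def giraffe_pass(src, canvas, n_fragments=500, min_size=4, max_size=40, opacity=180):
    """One 'brushstroke': paste n_fragments random chunks of src onto canvas."""
    w, h = src.size
    for _ in range(n_fragments):
        fw = random.randint(min_size, max_size)
        fh = random.randint(min_size, max_size)
        sx = random.randint(0, w - fw)              # where to sample in the source
        sy = random.randint(0, h - fh)
        fragment = src.crop((sx, sy, sx + fw, sy + fh)).convert("RGBA")
        fragment.putalpha(opacity)
        dx = random.randint(0, canvas.width - fw)   # where to paste on the canvas
        dy = random.randint(0, canvas.height - fh)  # (uniform here; the real piece
        canvas.alpha_composite(fragment, (dx, dy))  # shapes this with drawn curves)

source = Image.open("nanjing_dong_lu.jpg").convert("RGBA")  # placeholder filename
canvas = Image.new("RGBA", (1600, 900), (0, 0, 0, 255))
for _ in range(3):                                  # several layered brushstrokes
    giraffe_pass(source, canvas)
canvas.convert("RGB").save("giraffe_sketch.png")
```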

The boundaries of these digital brushstrokes into which fragments of photographs are statistically pasted are derived from curves I’ve drawn by hand into tables with a mouse. My earliest studies involved Gaussian patterns and other types of statistical distributions, but I eventually decided I wanted to incorporate a more tactile, hand-drawn element. I felt at the time the need to emphasize that these works weren’t simply the cold, rational, impersonal result of some obscure mathematical formula. Rather, I was involved in an intuitive and iterative process with my software, guiding the generation of new material, and then responding to it to see if I liked it or not, shaping its evolution much as I imagine an artist in a more traditional medium would.
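
Concretely, a hand-drawn curve stored in a table can serve as a probability weighting for where fragments land along an axis, something like this (a rough analogue of the Max table approach, with made-up values):

```python
import random
from bisect import bisect_left
from itertools import accumulate

# Treat a hand-drawn curve (stored as a table of values, much like Max's
# table object) as a weighting for where fragments get pasted along one
# axis. The curve values here are invented for illustration.
drawn_curve = [0, 1, 3, 8, 12, 9, 4, 2, 1, 0]   # e.g. drawn with the mouse

def sample_position(curve, width, rng=random):
    """Pick an x position whose likelihood follows the drawn curve."""
    cumulative = list(accumulate(curve))
    r = rng.uniform(0, cumulative[-1])
    bin_index = bisect_left(cumulative, r)
    # map the chosen bin onto pixel coordinates, with jitter inside the bin
    bin_width = width / len(curve)
    return int(bin_index * bin_width + rng.uniform(0, bin_width))

positions = [sample_position(drawn_curve, 1600) for _ in range(5)]
print(positions)
```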

When I moved to Shanghai in 2004, I read that Shanghai was the second largest city in the world, behind Mexico City. These images convey something of the density of urban life I experienced in one of the world’s most bustling metropolises, the exhilarating disorientation and striking juxtapositions. I think of this work in terms similar to those Robert Hughes used to describe the Merz collages of Kurt Schwitters:

Their common theme was the city as compressor, intensifier of experience. So many people, and so many messages: so many traces of intimate journeys, news, meetings, possession, rejection, with the city renewing its fabric of transaction every moment of the day and night…

Kurt Schwitters, Merz 410: “Irgendsowas” (1922)

The final images have nothing to do with giraffes. When I started developing software to manipulate digital images, my earliest test subject was a photo of a giraffe I took while visiting my parents in Kenya in 2005. I started using the term “giraffe” as shorthand for the whole project, since it was quicker to explain to my Ubisoft coworkers that I was staying in over the lunch hour to work on my “giraffes,” rather than my “algorithmic image manipulation software” or whatever. There aren’t 29 of them either; the number was chosen as arbitrarily as the name, and I kept both to emphasize the idea that arbitrariness (or artistic intuition) is a key part of the piece.

The original giraffe photo I used as the basis for my first visual studies
An early Giraffe study, lines between random points in Gaussian distributions, with colors drawn from the original giraffe image

In addition to the first show at [the studio] and the current exhibition, several of these images were included in my solo show “The Point of Departure” at the True Color Museum in Suzhou about a year ago (read the little blurb I wrote about the series at that time here). I also sell these images as limited edition prints on archival paper, and several are already in private collections. If you’re interested, by all means, drop me a line! I guess these are the images CNNGo was referring to when they described my work as “very pretty.”

Going over my notes from the time of the first show, I’m reminded of several follow-up avenues still unexplored. The unanimous feedback I received from that first exhibition was that people would like to see these images larger; at the time I was constrained by what I could fit on one screen at once, but now that I’ve gotten into Jitter, I should look into that. Also, right before I left Shanghai last year, I collected a bunch of additional potential source images from backlit fashion advertisements, and I had the idea of doing digital collages based on awareness of different body parts, but I haven’t jumped on that yet either. As Morton Feldman said, “Now that things are so simple, there’s so much to do!”

Mobile 4

I just got back about three weeks ago from a wonderful, inspiring, and very successful visit to San Diego.  I was in town at the invitation of the San Diego Museum of Art to premiere my new piece Mobile 4 at the museum’s Summer Salon Series.  While I was in the neighborhood, I also had the opportunity to sample some of the energy and diversity of the city’s eclectic arts community.  And I learned what a California Burrito is.

Mobile 4 is a cross between a sound installation and a chamber music piece, scored for ten channels of real-time electronic sound plus a Laotian mouth organ called a khaen (performed by Christopher Adler), guitar (Colin McAllister), and accordion (me).  It was designed to be an ambient experience in a gallery, with musicians and speakers scattered throughout the room, rather than as a concert with a beginning and end.  There was no central stage to serve as a focal point; instead listeners were free to come and go.  The electronic sound was continuous throughout the evening, and for about 45 minutes, we three musicians joined in, adding a living layer to the installation.  Then as we finished in gradual succession, we all just got up and wandered away, perusing the paintings.

Quince, Cabbage, Melon, and Cucumber

The walls of SDMA’s Gallery 16 are lined with Renaissance and Baroque Spanish art, providing an ideal setting: El Greco, Bermejo, Hieronymus Bosch’s Christ Taken Captive, Francisco de Zurbarán’s Agnus Dei (~1640), and my favorite, Juan Sánchez Cotán’s Quince, Cabbage, Melon, and Cucumber (~1602), with which I feel Mobile 4 demonstrates a particular aesthetic affinity.

Specifically, I was in town at the invitation of Alexander Jarman, curator of the museum’s Summer Salon Series.  (Alexander is also an accomplished artist in his own right; while I was in town I got to see some of his collages-in-progress, beautiful stuff!)  I met Alexander while he and his colleague Paul were in Suzhou last year for the opening of an exhibition of works from the San Diego Museum of Art’s collection at the Suzhou Museum (as reported here).  I was just finishing up my residency at the True Color Museum in Suzhou, and the museum director invited them over for an evening of traditional Chinese music, tea, and conversation, over the course of which they also had a chance to check out my Point of Departure show.  After I moved to Boston last November, Alexander and I kept in touch, and as this year’s Summer Salon Series was coming together, he invited me to present a new work.

We were a pick-up ensemble for this performance; Christopher is a friend of a friend of my good friend the writer Lisa Movius from Shanghai, so I dropped him a line after I read on his website about the interesting work he’s doing.  Christopher has spent a good deal of time in Southeast Asia mastering the khaen, a rich and resonant instrument in traditional music of the region; check out his webpage for more information and some videos and recordings of his performing.  The khaen struck me as a good match for the reedy sound of my accordion, and Christopher recommended his frequent collaborator Colin to round out our ensemble.

I was super pleased with how everything went, and I’ll try to get some documentation up online soon.  I had my back to the gallery entrance, but I was informed that there were a lot of people pressing to get in while we were performing.  The piece seemed particularly well-suited to a gallery setting; I almost wished we hadn’t distributed so many chairs, so that people would have been more encouraged to walk around during the piece and look at paintings.


* * * * *

Mobile 4 was a milestone for me on three fronts: new investigations into mobile structure, a real-time score display, and ten channels of algorithmic sound.

On the structural side, there are two primary behaviors in the piece: one is a sustained tone/drone that is kind of brushed in and out, Morton Feldman style; the other is a set of algorithmic melodies that is constantly being updated.  The drones are simply chosen randomly from the notes of the current scale.  Melodies are chosen from a table of available melodies algorithmically generated from the notes of the current scale.  There are five melodies available at any given time.  Periodically the program will replace one of the old melodies with a new one (randomly choosing from available pitches and durations, with the maximum melody length and the time between melodies varying according to a random walk).
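
In pseudocode (Python standing in for the Max patch, with invented parameter values), those two behaviors look roughly like this:

```python
import random

# Sketch of Mobile 4's two behaviors (parameter values are invented):
# - drones: a random note from the current scale
# - melodies: a pool of five generated melodies, periodically refreshed

def random_melody(scale, max_len, rng):
    length = rng.randint(2, max_len)
    durations = [rng.choice([0.5, 1, 1.5, 2]) for _ in range(length)]
    pitches = [rng.choice(scale) for _ in range(length)]
    return list(zip(pitches, durations))

def pick_drone(scale, rng):
    return rng.choice(scale)

rng = random.Random(4)
scale = [62, 64, 67, 69, 71]            # an example 5-note subset (D E G A B)
pool = [random_melody(scale, 6, rng) for _ in range(5)]

# periodically, one old melody is swapped for a new one, with the maximum
# melody length itself drifting via a random walk
max_len = 6
for step in range(3):
    max_len = max(3, min(9, max_len + rng.choice([-1, 0, 1])))   # random walk
    pool[rng.randrange(len(pool))] = random_melody(scale, max_len, rng)
    print(f"step {step}: drone {pick_drone(scale, rng)}, pool sizes {[len(m) for m in pool]}")
```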

Each sound source (instrument or speaker) behaves independently, but all are aligned to the same rhythmic pulse, and the global statistical balance between the melody and drone behaviors varies according to another random walk.  If we happen to venture too far over on the drone side, we enter a transition phase, in which all sound sources gradually converge onto one of the tones in the current scale, which then becomes a pivot tone, allowing for a common tone modulation to a new, algorithmically-generated scale.

The khaen is a diatonic instrument, so Christopher brought three different instruments (G minor, A minor, and Bb minor), allowing access to the full chromatic scale over the course of the performance, though only one diatonic scale at a time.  This limitation was built into the structure of the piece; when it’s time to transition to a new key area, first we decide if we want to transition to a new khaen, and then we pick a pentatonic subset of the available diatonic scale.  (These aren’t standard pentatonic scales, by the way, but any 5-note subset of the diatonic scale is fair game, creating a lot of interesting variety, sometimes with a major feel, sometimes minor, sometimes with a prominent tritone, etc.).  New scales tend to happen every 3-6 minutes (if we go more than 5 minutes without a transition, I start to nudge one to be more likely to occur).  This has the effect of kind of “cleaning out the ear,” similar to what we did in the main menu music of EndWar.
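
Sketched in code, the transition logic goes something like this (the pitch-class bookkeeping is mine; the actual patch surely differs in its details): choose the next khaen, find a pivot tone common to the old scale and the new diatonic collection, then pick any five-note subset of that collection containing the pivot.

```python
import random
from itertools import combinations

# Sketch of the scale-transition logic described above (details assumed):
# all voices converge on a pivot tone, then a new 5-note subset of the
# chosen khaen's diatonic scale is selected that contains that pivot.
KHAEN_SCALES = {                      # diatonic pitch-class sets
    "G minor":  {7, 9, 10, 0, 2, 3, 5},
    "A minor":  {9, 11, 0, 2, 4, 5, 7},
    "Bb minor": {10, 0, 1, 3, 5, 6, 8},
}

def next_scale(current_subset, rng=random):
    khaen = rng.choice(list(KHAEN_SCALES))
    diatonic = KHAEN_SCALES[khaen]
    # pivot: a tone common to the old subset and the new diatonic scale
    common = list(set(current_subset) & diatonic)
    pivot = rng.choice(common) if common else rng.choice(list(diatonic))
    # any 5-note subset containing the pivot is fair game
    candidates = [set(c) for c in combinations(diatonic, 5) if pivot in c]
    return khaen, pivot, rng.choice(candidates)

current = {2, 4, 7, 9, 11}            # pitch classes of the D E G A B subset above
khaen, pivot, new_subset = next_scale(current)
print(khaen, "pivot:", pivot, "new subset:", sorted(new_subset))
```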

Mobile 4 Setup

The cool thing is that the three live performers plus all of the electronic sound are coordinated, even though the melodic and scalar material is being generated on the fly.  We’re all keeping a common pulse, playing from the same scales, from the same pool of melodies, with the same density of musical material, all converging to the same common tones and modulating together.  The result is that musical material is passed all around the room, allowing for a nice, mid-level coherence that keeps the piece from sounding too random or arbitrary.  Accomplishing this kind of coordination is difficult in a traditional open form piece, where the musical material is written out in advance, and even harder to accomplish when there are real-time processes generating new material all the time.

So the centerpiece of this new work was a system for disseminating algorithmically generated melodic content to acoustic performers.  This was accomplished by means of a Jitter patch I wrote that displays the notes to play in a scrolling musical notation, similar to Guitar Hero, but using traditional notation, scrolling right to left.  Using a computer display to guide acoustic performers is something I’ve been mulling over for years; it’s always seemed a clear opportunity to apply design concepts from video games to issues of open form classical composition.  My first practical investigation was when I performed Christian Marclay’s video piece Screen Play at the Shanghai eArts Festival in 2008 (you can read about that experience in greater detail here).  I fleshed out this idea further in my Zhujiajiao Drinking Game (2009), which provides indications to performers when they should blow on beer bottles (to produce sound), and when they should drink from them (to change the pitch).  Traditional music notation was a logical next step, and I have plans to expand and apply this system to some piano studies as well as larger ensemble pieces.
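
The display logic itself boils down to mapping each upcoming note’s scheduled time onto a horizontal position, so that notes drift right to left toward a fixed “now” line (a simplified sketch; the real thing was a Jitter patch rendering traditional notation):

```python
# Simplified sketch of the scrolling display: each upcoming note's x position
# is a function of how far in the future it is, so notes slide right-to-left
# toward a fixed "now" line as the current beat advances. Values illustrative.
SCREEN_WIDTH = 1280      # pixels
NOW_LINE_X = 160         # where a note should be at the moment it's played
PIXELS_PER_BEAT = 120    # scroll speed

def note_x(note_beat, current_beat):
    """Horizontal position of a note scheduled at note_beat."""
    return NOW_LINE_X + (note_beat - current_beat) * PIXELS_PER_BEAT

# A note due two beats from now starts well to the right of the now line,
# then slides left as current_beat advances.
for current in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(current, note_x(2.0, current))
```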

More Mobile 4 Setup

Ten channels of real-time, coordinated sound is a new milestone for me.  Previously, six channels was my maximum, in my installations Breaking New Ground (2008) and Kaleidoscope Music (2009, and coming soon to Axiom Center in Boston).  Well, technically, under the hood, my Self-Portrait, Dusk, at the Point of Departure (2010) incorporates 36 channels of audio, but it all gets mixed down to 4 channels before being sent out the speakers.  The exciting thing about having ten channels of sound is that you can really start to articulate a spatial texture, where a listener is not parsing individual signals coming from specific points in space, but there’s a spatial density that emerges, a real sense of depth, kind of like what I was exploring in my 18 channel video piece.

This idea of coordinated multiplicity is really important to my work.  In this piece the electronic sound was generated by very basic synthesis, triangle waves for the most part.  I chose them because the timbres blended well with the reedy tones of the accordion and khaen; the overtones would sometimes fuse into a larger aggregate sound, but then fracture off into different points in space.  If you’re wondering, we used the Anchor AN-1000X speakers, which worked well for this piece in a gallery setting.

As the title suggests, Mobile 4 is the fourth in a series of pieces exploring mobile form.  The title refers to the mobile sculptures of Alexander Calder, an important touchstone for my video game audio work.  A system like the music engine I designed for EndWar is analogous to a mobile sculpture, in that the individual elements of the composition are fixed (the short snippets of recorded phrases), but the relationship between them is in constant flux.  The first of my Mobile pieces was composed in 2004 for flute, cello, piano, and soprano, premiered by the Ensemble Sorelle at the Seattle Art Museum.  The second piece is very open-ended, for any number of instruments, based on the cries of street vendors who used to pass through my first Shanghai neighborhood.  Mobile 3, a laptop composition based on recordings of traditional Chinese percussion instruments plus electric guitar, was premiered at the 2Pi Festival in Hangzhou.  In fact, these days most of my pieces contain some element of mobile structure, and I’m not particularly strict about which pieces earn the “Mobile” moniker; I thought long and hard about whether my Zhujiajiao Drinking Game should be titled Mobile 4, but in the end I decided that the social game aspects of that piece predominated.


* * * * *

In the process of preparing my performance/installation, I had ample opportunity to explore SDMA, located in the heart of San Diego’s gorgeous Balboa Park (which I’m told is the largest urban park in the US, including several museums in addition to the SDMA, the San Diego Zoo, the world’s largest outdoor pipe organ, and a lovely cactus garden), and I’ve become quite a fan!  In addition to the pieces mentioned above, I found Samuel van Hoogstraten’s Feigned Letter Rack with Writing Implements particularly revelatory, a seventeenth century Dutch work with a flatness and painterly self-awareness that to me seemed to presage cubism and Magritte (respectively).  It also brought to mind this amazing sequence from Andrei Tarkovsky’s Stalker.

Another work that caught my eye was Jean Hélion’s Composition in Color (1934), evoking as it does the kinds of mobile structures I mentioned above.  And speaking of Calder, there’s a great one near the entrance of the SDMA, which I took as an opportunity for a photo op.

Ben and the other Alexander

I also really dug the Rubén Ortiz-Torres show, the big From El Greco to Dalí exhibition (where I made the happy acquaintance of Spanish Impressionist Joaquín Sorolla), and the work of Gustav Stickley, whose furniture exhibition provoked the question addressed by this year’s Summer Salon Series: “What does a city need?”

In between rehearsals, Alexander was my tireless tour guide to San Diego’s busy arts scene. We checked out Double Break, a new art gallery and shop, not far from Balboa Park. I also got to meet super friendly and passionate Jfre from Disclosed Unlocation (we enjoyed a long, whiskey-fueled rap, together with my friend Ellen, closing down the charming dive bar Nunu’s following my performance), David from Agit Prop, and the busy folks at SD Space 4 Art (a live/work space where we got to check out a dance rehearsal in progress). There’s a lot of great energy in San Diego’s arts community, and it felt fantastic to be part of it, if only for one fleeting week.  And everyone with whom I spoke was full of praise for the enthusiasm and imagination that Alexander brings to the scene, through the Summer Salon Series as well as his other diverse efforts.  I heartily join my voice to the throng!

I also popped in (on Alexander’s recommendation) to the San Diego Museum of Contemporary Art’s aptly titled Phenomenal show, featuring works exploring light and perception by James Turrell, Robert Irwin, Craig Kauffman, Mary Corse, and Larry Bell. And I was totally floored by Jennifer Steinkamp’s absolutely jaw-droppingly wondrous Madame Curie, a huge, seven-channel algorithmic (pre-rendered) video installation, commissioned by MCASD specifically for their space, a tour de force, impeccably executed.  I dig it for its formal beauty, the layering of the digital branches and their gentle algorithmic swaying, the fixed perspective that allows for the evocation of a larger virtual space, the sheer scale of the thing, all in addition to the sly and ominous allusion to Marie Curie’s research into radiation.  Digital video done right!

Jennifer Steinkamp “Madame Curie” at MCASD from lemon verbena on Vimeo.

To round out the trip, I made my first visit to the San Diego Zoo since I was quite young.  Having been on safari numerous times while visiting my parents in Kenya, I couldn’t muster much enthusiasm for large animals in contained spaces, but I really enjoyed the aviaries, and kept thinking of Messiaen notating birdsong.  The lemurs were a highlight as well.

While I was in town, it was great to meet up with my high school pal Ellen, whom I met at show choir camp near Estes Park, CO, in the summer of 1991.  For the ten or so people in the world who have my Titled Untitled cassette from 1993, the song “Missing Ellen” was written for her (read more about such juvenilia here).  We seem to meet up every five years or so like clockwork; the previous visit had been in San Francisco during the Game Developers Conference in 2003 or so, then we met for dinner in Boston last month, and suddenly, three weeks later, our paths crossed again in San Diego!  I’m grateful to her for showing me around Seaport Village and Old Town and generously driving me to/from the airport.

The fantastic last day of the trip ended at the house of a friend of Alexander’s on Mission Beach, with frisbee on the sand, a swim in the chilly Pacific Ocean at dusk, a roller coaster ride, a burrito, and the fireworks from Sea World over Mission Bay, followed up by a bonus second dinner of Vietnamese food.  Can’t wait to go back!


* * * * *

A big thank you to Alexander, Brittany, Greg, and everyone else at the museum for their help and support with this project.  Also a big shout out to Ferino’s Music for repairing my accordion, severely damaged in transit, in time for the gig.  If US Airways ever assures you that their Gate Valet service is safe for musical instruments, don’t you believe them!

What US Airways Did to my Accordion

(Christian Marclay + Guitar Hero)/2

As previously posted, I had the pleasure of doing some shows with my friend Yan Jun 颜峻 from Beijing recently. As I was digging up some links to introduce him to Boston area friends, I found this clip of a performance we did, together with Beijing-based Bruce Gremo, at the 2008 Shanghai eArts Festival (shot by Amsterdam-based artist Emile Zile, who I met after the concert; read his account here). We performed at the gracious invitation of Defne Ayas and Davide Quadrio of Arthub Asia, who curated the Final Cut section of the eArts Festival, which transpired in and around Xujiahui Park in Shanghai and also featured performances by B6, Aaajiao, Feng Mengbo 冯梦波, Dead J, Alizia Borsari, and Elliott Sharp, among others (Elliott Sharp is featured in the second half of this clip).

Here we’re performing a video score by Christian Marclay entitled Screen Play, which consists of a bunch of black and white footage from old movies, mostly evocative of sound in some way (telephones, slamming doors, ocean waves, dancers, phonograph records, etc.), overlaid with simple, abstract shapes in bright colors. The piece is about half an hour long. There are no clear indications of how the score should be interpreted; rather, it serves as an inspiration, a framework for improvisation.

As I watch this clip now, my first reaction is, “Wow, it worked!” It’s become something of an established practice to do these kinds of live, improvised accompaniments to new video or old films, but in my observation, there’s one problem inherent in the format: synchronization. No matter how skilled the performer, it takes a certain amount of time to register changes in the video and interpret them as sound. So in preparing for this performance, I specifically set myself the task of finding a solution, and reviewing our work two and a half years later, I’m pretty pleased with the results.

Synchronization requires anticipation. This was one of my primary lessons when I studied conducting back at St. Olaf. In 4/4 time, if you want the orchestra to come in loud on the one, you need to make a big gesture on beat four of the previous measure; you need to stay a beat ahead. In traditional music notation, sound is represented on a grid in which the x axis is time and the y axis is pitch, so it’s easy to peek ahead on the timeline. Or in waveform representations, x is time and y is amplitude. But a video, unlike a graphic representation of sound on a page, is a time-based medium, and x and y can’t help you; time is time! There’s no way to look ahead and prepare for what’s coming next.


Christian, Bruce, and Yan Jun

To address this issue, I took a tip from some of my favorite videogames, Frequency, Amplitude, Guitar Hero, and Rock Band, developed by Harmonix Music Systems (just up the road here in Cambridge, MA, in fact; I just gave a talk there last month). In these games, as I imagine anyone reading this is already well aware, notes are represented by little colored dots coming towards you on the screen, and when they get to a certain point towards the bottom of the screen, you hit a button on your controller to sound the note. Notes are coming at you on a timeline, so it’s easy to look ahead and prepare for new notes to come, just like in traditional sheet music. This is a true video score.

To approximate this kind of prescience in Christian Marclay’s piece, I wrote a Jitter patch (the first time I used Jitter for a project, in fact) that plays back the movie in 4 separate windows, each window offset by one second. So I was able to see every event in the film coming from three seconds away and count down to activation: 3-2-1-play!
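
Conceptually it’s just one extra second of look-ahead per window. A toy sketch of the idea, with placeholder names (the actual implementation was a Jitter patch):

```python
# Sketch of the look-ahead trick: four views of the same movie, where the
# largest (current-time) window is flanked by previews of what's coming
# 1, 2, and 3 seconds from now.

OFFSETS = [3.0, 2.0, 1.0, 0.0]   # seconds ahead of "now" for each window

def window_positions(current_time):
    """Return the playback position each window should display."""
    return [current_time + offset for offset in OFFSETS]

# e.g. at t = 65.0 s the windows show 68.0, 67.0, 66.0, and 65.0 s
```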

My Screen Play Jitter Patch

The window in my Jitter patch that displays the largest image (the actual current time) also doubled as my control surface for generating sound. To play along with the video, I was literally drawing on it with my mouse. The timbres I was playing employed very simple synthesis techniques, lots of bright cross modulation, and a bit of granular synthesis. The idea was that my buzzy tones would correspond to the bright, abstract graphics in the score, whereas Bruce (performing on his amazing homemade digital flute controller, the Cilia) would evoke the representational black and white clips, and Yan Jun (working with lo-fi electronics and voice) was more of a moody glue, bridging the two worlds.
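
To give a rough sense of the kind of mapping involved (the frequency range, modulation ratio, and names below are placeholders rather than what the patch actually did), a mouse-to-synth sketch might look like this:

```python
import math

# Hypothetical sketch: map mouse position to synthesis parameters, then
# render a short grain using a simple FM-style stand-in for the cross
# modulation mentioned above. All ranges and names are placeholders.

def mouse_to_params(x, y, width=1280, height=720):
    """x controls pitch (about four octaves above 110 Hz), y controls
    modulation depth (buzzier toward the top of the screen)."""
    freq = 110.0 * 2 ** (4.0 * x / width)
    mod_index = 10.0 * (1.0 - y / height)
    return freq, mod_index

def render_grain(freq, mod_index, ratio=1.5, sr=44100, dur=0.1):
    """One short grain: a sine carrier phase-modulated by a related tone."""
    samples = []
    for n in range(int(sr * dur)):
        t = n / sr
        mod = math.sin(2 * math.pi * freq * ratio * t)
        samples.append(math.sin(2 * math.pi * freq * t + mod_index * mod))
    return samples
```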

I’m a big fan of Christian Marclay. His solo show at the Seattle Art Museum in 2004 is one of the best exhibitions I’ve ever seen, a fecund amalgamation of wit, inventiveness, and historical awareness. He represents the full range of what a sound artist can be. He makes sound, of course, in performances, recordings, and installations. But he also makes silent pieces about sound, or about the ephemera surrounding contemporary sound production, and video pieces that suggest the contours of sound in another medium.

This playfulness is evident in Screen Play in the choice of images, their clever editing, and their relationship to the abstract graphics. He’s clearly toying with different ideas of sonic representation in the way these graphics are deployed, at times stretching five lines across the screen to evoke a music staff, at times drawing a waveform as on an oscilloscope, at times merging into the underlying scene (as when a bright yellow ball becomes attached to a man’s spoon as he’s slurping soup).

Jitter Patch in Action

I realize that for Christian Marclay, this synchronization issue is probably not a problem at all. Screen Play was conceived for the kind of freely improvising musician exemplified by downtown New Yorkers like Elliott Sharp. For a certain type of resourceful performer, the video is a way to nudge the music along, to create an overall contour and form that may not have otherwise emerged, and which provides the potential for greater large scale contrast and recurrence than an unmediated free improv jam. It’s kind of like a time-based version of a graphic score, such as Earle Brown’s December 1952, Cornelius Cardew’s Treatise, or Christian Wolff’s Edges.

However, there are a few sudden transitions in Screen Play, in particular a sequence that alternates a slamming door with a full screen of saturated color, that strike me as contrary to this ethos. That bit, and a few others like it, seem to call out for big contrasts and tight synchronization, and I think at these moments one could legitimately criticize the score for setting up an expectation that the performer cannot fulfill. But I’m happy to report that, by applying a simple technique from videogame design, we nonetheless nailed it.

Using my Jitter patch to perform this score actually felt a lot like playing a videogame. It gets at what I consider to be the heart of gaming: to develop a skill and apply it to new challenges. This aspect of gaming is very much like performing music; from a certain point of view, any musical performance can be considered a game. I’d estimate that this modified approach to performing Screen Play lies somewhere near the midpoint between downtown New York free improvisation and Guitar Hero, and I think there’s a lot more interesting work to be done along this continuum.

On the One!

Thanks to Defne Ayas and Mathieu Borysevicz, I think, for the pics. And thanks to Arthub Asia for the invitation to participate!

Meet Yan Jun

I was so pleased to have a good friend from China, the Beijing-based sound artist Yan Jun 颜峻, visiting us in Boston for a few days last week. He had stayed with me before in Shanghai, and I think he’s about the sweetest houseguest I’ve ever had. He’s in the US for a few months doing a residency in New York and a bunch of other shows across the nation, and he had a few days to pop up to Boston to perform with me at Outpost 186 (part of the Living Room Music series organized by saxophonist Michael Dobiel) and Whitehaus Family Record. Some documentation is up on Flickr.

Yan Jun at Outpost 186

You can check out Yan Jun’s busy concert itinerary on his blog. He recently played a bunch of shows in the Bay Area with the likes of Fred Frith (they had previously played together at the Sally Can’t Dance festival at Beijing’s D-22 last year) and Bob Ostertag. Next month he heads to Illinois and Ohio, where he’ll be joined by two other veteran experimental Chinese artists on select dates. Li Jianhong 李剑鸿 is the organizer of the 2Pi Festival in Hangzhou, which I played in 2006 (and I am always happy to draw attention to the arduous translation I did of his account of his 2006 Japan tour). Wang Fan 王凡, one of Chinese underground experimental music’s earliest pioneers, was part of the Fuzhou leg of the Mini Midi Festival in which I also participated last May.

Afterall recently published this interview with Yan Jun, describing him as “the invisible glue holding together the Chinese experimental music scene,” and I’d say that’s pretty apt. In addition to his own performing, he’s an active organizer of events (at venues including UCCA, D-22, the long-running Waterland Kwanyin weekly event at 2 Kolegas, and the Mini Midi experimental stage of Beijing’s sprawling annual Midi Festival) and publisher of CD’s (Waterland Kwanyin, Subjam). He’s also an accomplished writer, with several books of poetry and a fair amount of criticism to his credit. He used to write about experimental music for Rolling Stone in China; a long time ago, to help me with my Chinese study, I set myself the task of translating his review of a new Ronez CD, and I developed a firsthand appreciation for his dense and literary style. (My post also includes my brief history of Rolling Stone magazine in China, if you’re curious.)

YanJun & Ben at Outpost 186

These days he often plays with feedback in his live sets, pointing a shotgun mic at small speakers with objects placed on them, running the signal through an array of stompboxes. I’ve also seen him incorporate spoken word, field recordings, and found objects into his performances and recordings. He’s done a bunch of installation work, too, including a piece called Wormhole Trip at The Shop in Beijing about a year ago (discussed in this Wire article), which involved contact micing all the pipes and ventilation in the space, with rich, resonant results. While he was in Boston, he gave me some of his new music, including a beautiful group recording called Big Can 大罐 made in a huge, abandoned cistern in Zhujiajiao (just outside of Shanghai), Deep Listening Band style, featuring Yan Jun, Hong Qile 洪启乐, Otomo Yoshihide 大友良英, Sachiko M, Yang Ge 杨戈, Xiao Qiang 小强 (Yang Ge’s wife), GOGOJ, Zhao Junyuan 照骏园 and others.

For more on Yan Jun, check out his blog and SoundCloud pages.

I think I first met Yan Jun at the 2Pi Festival in 2005, the same time I met Marqido (now of 10), Li Jianhong, and my good pal Yang Ge, among others. I don’t actually remember if Yan Jun performed that year, but I do remember that when we all went out to dinner afterwards, he was the natural leader, ordering food for two tables (note that ordering food for a large group of people is as refined an art form in China as calligraphy), and providing my first opportunity to taste warm Chinese yellow wine with ginger slices. As I recall, the first time we ever performed together was in early 2008, when we were both attending a NOIShanghai show at the now defunct Live Bar in Shanghai. Organizer Junky (of Torturing Nurse) asked if we wanted to do something, so we responded with an impromptu vocal duet, partially documented below. Since then we’ve collaborated on a performance of Christian Marclay’s Screen Play at the Shanghai eArts Festival, and Yan Jun invited me to join in the 2010 Mini Midi Festival tour he organized in Shanghai, Zhujiajiao, and around Fujian province.

Both shows last week were musically successful and a lot of fun, with good audience turnouts to boot. At Outpost we set up a 4 channel system so that I could present Lukou 路口 and the concert debut of the audio component of my Self-Portrait installation. To keep people on their toes, I also did two Jay Chou 周杰伦 songs, with Michael Dobiel joining in on saxophone. Yan Jun and I also reprised our vocal improv duet, which has become a staple of ours whenever we do a show together.

Veteran Boston area performer Vic Rawlings (cello and electronics) joined us for the Whitehaus show, and his duo set with Yan Jun was the week’s highlight for me, a perfect combination, subtle, austere, serene. Whitehaus resident Atom opened with four miniatures involving spinning jar lids, harmonica, coins, and bowed metal. I presented Kaleidoscope Music, probably the best rendition I’ve ever done live; wish I had recorded it! The original installation version uses a live microphone feed, but I generally find that problematic in live performance due to the risk of feedback, so here (as at Opensound last month) my solution has been to record some sounds from around the venue prior to the show and use that as the basis for my real-time filtering. In this case I captured some pre-show chatter about Stockhausen’s late work, and my set ended with my voice pronouncing the word “awesome” in sextuplicate.

Yan Jun, Vic, Atom, Ben @ Whitehaus

It was also wonderful spending time with Yan Jun between gigs. I’ve barely been in Somerville four months now, and this was my first chance to play tour guide in my new environs. We checked out the Olafur Eliasson show going up at Harvard GSD (where Jutta’s working on her master’s; I’m not sure if I’ve made this clear in previous posts) and browsed various Harvard Square book and CD shops (I restrained myself from making any purchases, but Yan Jun picked up Ligeti’s string quartets, part of that Sony Classical series). I took him to see John Luther Adams’s Veils and Vesper sound installation at Harvard’s Arts @ Garden 29, the fantastic Stan Vanderbeek show at the MIT Media Lab (he picked up Nic Collins and John Cage books at the MIT Press bookstore), and then we hopped across the river to the ICA (unfortunately the fantastic Mark Bradford show had just closed, but there was enough of their collection on display, plus Gabriel Kuri, to keep our attention, not to mention the fine Diller Scofidio + Renfro building itself, on a fine, cusp of spring day). For our hardcore sonic diversion one evening we checked out one of the fine, free NEC concerts, a program of Britten, Nielsen, and Sibelius. I cooked him burritos and Brussels sprouts and omelets. But mostly it was just good to catch up, to hear what our friends are up to, to kick the tires on my Chinese, and to see a familiar face from my former home of six years. Who knows where, but I hope our paths cross again soon!