Research Blog – III

Greetings!

The last months have been quite busy (surprise!), with tours in Finland and Germany, exam season at NMH, travelling back to Canada for an artistic residency and some family time, and now Portugal for another residency. While I’ve been playing a lot with my laptop setup, I haven’t programmed as much as I had planned to in the last months, but I have made some important progress!

The biggest breakthrough since last writing came in a lesson with Eirik Arthur Blekesaune. In discussing potential signal routing options for the mapping I have planned on the Monome, he introduced me to the Just In Time programming library, or JITLib. I had used parts of this library to build step sequencers for projects in the past, but Eirik introduced me to the Node Proxy Definition (or Ndef) syntax, and this was a huge revelation! This approach has many built-in convenience methods for routing audio and control signals, and I’ve spent a good part of the last few months exploring its capabilities in contrast to the server-message approach I had been using until now.
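To give a sense of why this felt like such a revelation, here is a minimal sketch of the Ndef routing style (the proxy names and UGens are placeholders of my own, not code from my actual setup):

```supercollider
(
// a stereo source proxy
Ndef(\src, { SinOsc.ar([220, 331], 0, 0.1) });

// an effect proxy that reads from its \in control
Ndef(\fx, { FreeVerb.ar(\in.ar(0 ! 2), mix: 0.5, room: 0.8) });

// patch the source into the effect and monitor the result
Ndef(\fx) <<> Ndef(\src);
Ndef(\fx).play;
)

// redefine the source on the fly; the proxy crossfades smoothly
Ndef(\src).fadeTime = 3;
Ndef(\src, { Saw.ar([55, 55.5], 0.05) });
```

Everything stays re-patchable and re-definable while the audio keeps running, which is exactly the kind of routing flexibility I’ve been missing.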

Moving forward, I think I’ll end up using a combination of both approaches – while the routing solutions provided by the JITLib are very relevant to my project, I like being able to generate multiple instances of the same synth when working with synthesized sounds (which can create very interesting textures, for example). I think this is better executed by using server messages or node objects that have a fixed envelope and are self-freeing…but perhaps a better grasp of the JITLib (Pdefs and Tdefs, for example) would suggest otherwise.
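For clarity, this is what I mean by self-freeing instances (a toy example; \blip is a made-up name, not one of my actual synth definitions):

```supercollider
(
// a self-freeing "grain": the fixed envelope frees the synth when done
SynthDef(\blip, { |freq = 440, amp = 0.1, dur = 0.4|
    var env = EnvGen.kr(Env.perc(0.01, dur), doneAction: 2);
    Out.ar(0, SinOsc.ar(freq, 0, amp * env) ! 2);
}).add;
)

// layer as many overlapping instances as you like
10.do { Synth(\blip, [\freq, exprand(200, 2000), \dur, rrand(0.2, 2.0)]) };
```

Because each instance cleans up after itself, I can scatter dozens of them without any bookkeeping; that is the texture-building quality I’d hate to give up.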

Other breakthroughs came through spending time this spring using the laptop in compositions. I went to Germany in June to work with the vocal duo Monsters For Breakfast, and each of us created new pieces for our short tour at the end of that month. The compositions ranged from loosely structured improvisations to fixed “traditional” musical material (e.g. rhythms, lyrics). Neither of the vocalists knew my musical vocabulary very well before I arrived, so it was interesting to try to come up with approaches to the electronic elements in their pieces. The process was frustrating at times – we would talk about what a sound could/should be, I would create something in the evening after our rehearsal, and then present it to them the next day. If it still wasn’t the sound we were looking for, I had to repeat the process…and this continued for a few days, working on several compositions concurrently. It was a nice challenge for me, and also pointed me in the direction of working with sounds I wouldn’t normally gravitate towards myself.

This experience repeated itself (to a lesser degree) in Campbellford, Canada during a residency/workshop I attended at the beginning of July. The residency was for musicians who identified as both composers and performers, and we were all tasked with presenting some aspect of our artistic practice to the group during the course of the workshop. I gave a practical demonstration on the potential application of SuperCollider for generative composition. After this lecture, one of the other composers asked me to perform on his composition later that week. We had very limited rehearsal time, but the process was much the same – he would describe the sound he was looking for, I would come up with my idea of that sound, and then we would try to get closer to his intention. These two experiences working with composed material were very interesting for me; the rehearsal/development/prototyping process is very different from working with instrumental music, and can sometimes be a bit frustrating. It has, however, brought up many interesting discussions about how we talk about sounds, especially when working with individuals whose first language is not English.

Another recent highlight came this spring on tour with Monsters For Breakfast. The three of us gave a workshop at the Institut für Musik und Medien in Düsseldorf, presenting our respective processes of composing and performing with this instrumentation, and I also talked about my approach to improvising and composing using SuperCollider. Before the workshop, I was a little anxious, as the director of the department we were visiting is Julian Rohrhuber, one of the developers of SuperCollider…I didn’t want to give the impression that I could teach his students anything that he wasn’t capable of teaching them himself! The workshop was on the weekend, so he wasn’t there after all, and though his students were at a very high level, it seemed that my approach to using this software was interesting and new for them. It was a great experience for me to present this workshop; I left feeling confident in the work I’m doing with this software, and I also felt like everyone involved (the vocal duo, myself, the attendees) had inspired thoughts to offer and share.

To sum up, here are the performances I’ve given with the laptop since my last blog post; most of these concerts consisted of improvised music, but some also included composed material:

25.5 “With|in” premiere @ Only Connect Festival of Sound, Oslo kl. 18

26.5 w/ Fennel @ Victoria Nasjonal Jazzscene, Oslo kl. 20

3.6 w/ Emil Brattested @ Victoria Nasjonal Jazzscene, Oslo kl. 20:30

18.6 w/ Monsters for Breakfast @ Café Duddel for LAB Days, Köln kl. 18

21.6 w/ Monsters for Breakfast @ Spektrum, Berlin kl. 20:30

22.6 w/ Monsters for Breakfast @ LOFT, Köln kl. 20:30

23.6 w/ Monsters for Breakfast @ Onomato Künstlerverein, Düsseldorf kl. 20

24.6 Creative Lab w/ Monsters for Breakfast @ ON Neue Musik Köln, Köln kl. 11

8.7-13.7 Westben Performer-Composer Residency, Campbellford

4.8 w/ Monsters for Breakfast @ No Noise Festival, Porto kl. 15 & 23

My plan going into the fall semester is to continue developing flexible synthesized sounds and more processing algorithms, and to finish mapping the Monome with all of these tools and various ways of controlling their parameters. Of course, I plan to continue performing, and will try to transition to playing with the Monome instead of the laptop as my interface.

A few days ago, I performed two sets at a festival in Porto with Monsters For Breakfast, and we played the second set with a local percussionist, João Pais Filipe. In addition to being a fantastic drummer, he builds his own cymbals and gongs, and as a result has a truly distinctive voice when performing. We joined him at his workshop/rehearsal space after the festival to play again, and the music was something very special – the voices blended well with his percussion, and the gongs, bells, and cymbals responded in interesting ways to digital processing. We all agreed to work more with this quartet in the fall and into the spring (despite the logistics of living spread across the continent), and I’m excited to see what this music can become!

Research Blog – II

Greetings!

It’s been a few months since my last written update, and there are plenty of things to talk about! Since I last updated this blog, I’ve been fortunate to perform with many different groups of instrumentalists and in a variety of settings, and I feel like this practical experience is helping greatly in defining my improvisational approach to laptop performance.

Technically speaking, I’ve programmed a collection of synth definitions in SuperCollider that I use to process live inputs, synthesize sounds, and manipulate pre-recorded audio buffers. Until this point, I’ve been performing by sending server messages from the SuperCollider IDE to activate the synths and manipulate their arguments. This approach can be a bit cumbersome and doesn’t really give me the level of reactivity I’d like to have; it does, however, provide me with the flexibility to alter every synth argument down to the most minute detail.
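For context, the server-message workflow looks something like this (\pitchShift and its arguments are hypothetical stand-ins for my actual synth definitions):

```supercollider
// start an instance of a (hypothetical) SynthDef on node 2000
s.sendMsg("/s_new", \pitchShift, 2000, 0, 1, \ratio, 0.5, \mix, 1.0);

// manipulate a single argument of the running node
s.sendMsg("/n_set", 2000, \ratio, 1.5);

// free the node when the sound is no longer needed
s.sendMsg("/n_free", 2000);
```

Every argument is reachable down to the finest detail, but each change means typing and evaluating a new line, which is where the sluggishness comes from.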

I’ve come to realize that transposing this approach onto a physical interface means that I will have to prioritize either spontaneity or the amount of control I have over each argument. I’m currently in the process of mapping these server messages to a Monome 256 controller, which will let me react to musical situations much more quickly than the live-coding approach allows. However, as the Monome is just a grid of toggle buttons, I have to limit myself to preset arguments for each synth definition – perhaps three “versions” of each synth, as sketched below. I see this as a necessary limitation at the moment, but it will perhaps force me to find creative solutions in performing with a restricted degree of control over synth parameters.
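As a sketch of the direction I’m heading (the OSC prefix and the preset values here are assumptions that depend on the serialosc configuration, and \pitchShift is the same hypothetical SynthDef as above):

```supercollider
(
// three fixed "versions" of one (hypothetical) synth definition
~presets = [
    [\ratio, 0.5, \mix, 1.0],
    [\ratio, 1.5, \mix, 0.7],
    [\ratio, 2.0, \mix, 0.4]
];

// respond to key presses forwarded by serialosc
OSCdef(\gridKey, { |msg|
    var y = msg[2], z = msg[3];           // msg is [path, x, y, z]
    if (z == 1 and: { y < 3 }) {          // key down on rows 0 to 2
        Synth(\pitchShift, ~presets[y]);  // the row selects the preset
    };
}, '/monome/grid/key');
)
```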

Though I haven’t finished the mapping process yet, I’ve been performing a fair amount with the server-messages approach. Since I last updated this blog, I have performed in the following settings:

23.3 – MAUM concert in Levinsalen, performed with 4 instrumentalists

30.3-31.3 – duo concerts in Denmark w/ saxophonist Anders Abelseth

1.4-7.4 – duo concerts in Berlin w/ vocalist Thea Soti (herself using analogue electronics)

10.4 – duo concert w/ pedal steel guitarist Emil Brattested

19.4-30.4 – Nord+Mix workshop in Vilnius, Lithuania, where I performed in 3rd-order ambisonics

4.5 – 6-channel collaborative piece with flute, harp, and 5 dancers

For the coming months, I have quite a bit of work to do, and quite a few things to look forward to! First, I will premiere a performative installation at Sentralen for the Only Connect festival. At the Nord+Mix workshop, I was introduced to the concept of spatial modulation synthesis, which I found very interesting. For this installation, we were asked to work with specific “spaces” in the service hallways of Sentralen in Oslo; I’ll try to fully exploit the idea of space by reading excerpts from the English translation of Georges Perec’s “Espèces d’espaces” (Species of Spaces) while the transient information from the text controls the spatial modulation of my voice.
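The sketch below shows the principle in plain stereo (in the installation itself the panning would be replaced by a proper multichannel or spatial-modulation setup; the threshold and timing values are guesses):

```supercollider
(
// transients in the spoken voice choose a new position for it
{
    var sig = SoundIn.ar(0);                          // the reading voice
    var trig = Onsets.kr(FFT(LocalBuf(512), sig), 0.5);
    var pos = Demand.kr(trig, 0, Dwhite(-1.0, 1.0));  // new point per transient
    Pan2.ar(sig, Lag.kr(pos, 0.05));
}.play;
)
```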

At the end of June, I’m heading to Köln for a series of concerts and workshops with the improvising vocal duo Monsters for Breakfast. Thea Soti, with whom I worked in Berlin in April, makes up one half of the duo, and she has been generous enough to arrange a short tour in Germany along with a few workshops where I will present my approach to using SuperCollider in an improvisational context. In preparation for these concerts, I’m hoping to develop a few more synth definitions that I’ll be able to test out over the course of these concerts and workshops.

I have a few other concerts and workshops coming up as well, but I’ll report on those in the next blog update! Until then….

Research Blog – [Documentation]

Here is where I’ll post documentation from the various projects I’m working with as this two-year study progresses:

Abelseth/McCormick: improvising saxophone/laptop duo

Among Us: dance performance involving buffer playback and manipulation, algorithmic synthesis, real-time processing of flute and harp (6 channels)

Emil Brattested: duo with pedal-steel guitar playing composed and improvised material

Fennel: augmenting the “acoustic” nature of this quartet through modest processing

Monsters For Breakfast: improvising vocal duo augmented by laptop

Monsters For Breakfast w/ João Pais Filipe: improvising vocal duo augmented by laptop and percussion

Nord+Mix Quartet: improvisations with soprano flute, alto flute, and viola; rehearsed in stereo, performed in 3rd-order ambisonics during the Nord+Mix workshop in Vilnius

Quintet: improvising ensemble working with semi-composed material

Thea Soti: improvising voice/laptop duo; Thea is working with hardware electronics

Trio w/ Tove Bagge & Guostė Tamulynaitė: improvisations with prepared piano, synthesizer, and viola

Research Blog – I

Greetings!

This blog is a space for documenting my work during my Master of Music in Performance Technology studies at Norges musikkhøgskole between fall 2017 and spring 2019. I’ll use this space to record both the breakthroughs and challenges I experience during my studies and research as I work towards my Master’s project, to be presented in the spring of 2019.

The original proposal for my master’s project was to create a collection of “improvising” algorithms that could independently interact with improvising instrumentalists. My goal was to use the SuperCollider programming environment to design “instruments” that would use information from analysing the current musical setting to make statistical decisions during performance: when to play, what/how to play, when to stop playing, etc.
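In rough terms, the idea was something like this toy example, where a weighted choice decides each step (the weights and the \blip name are invented for illustration; \blip stands in for any self-freeing SynthDef):

```supercollider
(
// every half second, a weighted coin toss decides whether to play
~decider = Routine {
    loop {
        if ([\play, \rest].wchoose([0.6, 0.4]) == \play) {
            Synth(\blip, [\freq, exprand(200, 2000)]);
        };
        0.5.wait;
    }
}.play;
)

// ~decider.stop;  // the human pulls the plug
```

A real version would replace the fixed weights with values derived from live analysis of the other musicians.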

I have since decided to go in a different direction; given how important the community aspect of music-making is to me, designing an autonomous digital performer would effectively isolate me from rehearsing and performing with other musicians, countering my own values. While I still believe this could be a future direction to explore, I’m now directing my efforts towards designing a digital instrument that I can actively use in performance.

I see the role of the laptop performer as curatorial: not every specific musical decision is made by me in performance, but I choose the frame within which those decisions (or that content) are made. Just as a bandleader makes curatorial decisions about which performers, programs, or venues to work with, the algorithmic programmer makes curatorial decisions concerning degrees of randomness, density, and so on, without necessarily controlling the specific details of each sound event.

With this approach, the laptop performer is in constant dialogue with the software and hardware, both in the “rehearsal” or prototyping stages and in performance. In a live setting, as the computer is left to decide the details of musical events, the laptop performer must decide (from a curatorial perspective) how to contextualise the music created by the computer; this can be done by modifying software parameters, introducing or removing processes, or by simply turning the instrument off.
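A miniature of this curatorial relationship (a toy example, not my actual instrument):

```supercollider
(
// I only choose the density; the computer picks the pitch of every event
SynthDef(\cloud, { |density = 2|
    var trig = Dust.kr(density);                        // random impulses
    var freq = Demand.kr(trig, 0, Dexprand(200, 4000)); // new pitch per impulse
    var env = Decay2.kr(trig, 0.01, 0.3);
    Out.ar(0, SinOsc.ar(freq, 0, 0.1 * env) ! 2);
}).add;
)

x = Synth(\cloud);

// curatorial moves in performance: reframe the texture, don't micromanage it
x.set(\density, 12);   // denser
x.set(\density, 0.5);  // sparser
x.free;                // or simply turn the instrument off
```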

As I develop this instrument and my curatorial approach to laptop performance, I’ll try my best to update this blog regularly with video and audio documentation of various performances, my thoughts on the process, and also some of the SuperCollider code driving certain elements of my “instrument.” More to come soon!