Research Blog – IV

Although it seems I’ve been a bit relaxed with my blog posts this fall, the last months have been very productive and I feel like I’ve made some great progress in my Master’s work. The spring was a very active period for me in terms of performing, and while it deepened what I already knew, it also gave me a better sense of the gaps in my knowledge about synthesis, DSP, and programming. I’ve tried to spend my time this fall addressing those areas, both in my practical work and in my lessons with Eirik Arthur Blekesaune and Øyvind Brandtsegg.

One of the main things I’ve been focusing on this fall is deepening my understanding and practical use of algorithmic composition. At the end of September, German improvising vocalist Mascha Corman asked me to perform a concert with her at the end of October in Bern, knowing very well that I wouldn’t be able to be in Switzerland at the time. She wanted me to create a SuperCollider program that would run independently, listen to her performance, and then process and synthesize sounds based on her input. Her additional requests were that it perform differently each time and not become predictable, but also that it maintain some sort of structural integrity. Essentially, what she was asking for was heading in the direction of an artificially intelligent program that could consistently interact and respond to repeated behaviour without itself becoming predictable.

At the moment (and for the foreseeable future), I’m under-equipped to create such a program – and certainly not within the timeframe of a month! What I did instead was create a program that generates nested patterns of events in order to create event phrases. These phrases are then distributed over the length of a performance to create a larger form that should (presumably) contain an inner logic, even if it is inaudible. Each event triggers the real-time processing of a recorded sound buffer or of the performer’s live input.

This approach was a pretty quick-and-dirty solution for the given deadline. The performance went okay – Mascha was quite pleased, and many things happened that were unexpected (as she wished) but still implied the sense of interaction/machine listening she was after. These were essentially coincidences – there was very little machine listening happening in my code – but her perceptual experience (and that of members of the audience) was very interesting for me. As I’ve been exploring algorithmic approaches to composition this fall, the idea of musical semantics keeps arising, and I find it fascinating to see how people respond to randomness – listeners can sometimes assign a great deal of value to elements that have no human agency behind them whatsoever!

At the same time I was making this program for Mascha (which I called GHOST), I used the core of the piece – the form-creation algorithm – to create a purely acousmatic piece, using just processed sound files as material. The nested-pattern generator (below) assigns a number to the sound files and processes contained within a Dictionary (~sounds), and then arranges the numbers into groups or phrases of patterns, sometimes with different lengths, occasionally repeating phrases, etc. I called this piece “pyramidg,” after a built-in array-reordering method in SuperCollider that was originally at the heart of this piece. After some modifications, I’m now just using the “pyramid” method, which you can see in the line that defines ~segments (I have to acknowledge that Eirik helped out immensely with the creation of this part of the code):

(
// Build the phrase "segments": each segment is a random selection of the
// numbered keys in the ~sounds Dictionary, in random order and of random length.
~makeSegments = {arg numSegments = 20;
	var pSteps, result;
	pSteps = ~sounds.keys.asArray.sort;
	result = numSegments.collect({arg item;
		pSteps.scramble.copyRange(0, rand(pSteps.size - 1));
	});
	result;
};

// pyramid rearranges the segments into one of its "counting" patterns;
// sputter then randomly repeats some of them (up to a length of 25).
~segments = ~makeSegments.value(5).pyramid(9).sputter(0.25, 25);

// Turn the segments into a score of [absolute time, event] pairs.
~makeScoreFromSegments = {arg segments;
	var list;
	var startTime = ~startTime + 10, nextSubsegDelta = 0.0;
	// attach sorted onset times (in seconds) to the events of one segment
	var addDeltaTimesToSegment = {arg seg;
		var times = ({ exprand(30.0, 125.0) } ! seg.size).sort;
		times.put(0, 0); // the first event of a segment starts immediately
		[times, seg].flop;
	};
	list = segments.collect({arg it;
		var subsegs = addDeltaTimesToSegment.value(it).collect({arg jt;
			var delta, segnum;
			#delta, segnum = jt;
			[delta, ~sounds[segnum][\start].value]
		});
		// shift this segment's times so that it follows the previous segment
		subsegs = subsegs.collect({arg subseg;
			[subseg[0] + startTime + nextSubsegDelta, subseg[1]];
		});
		startTime = subsegs.last.first;
		nextSubsegDelta = exprand(30.0, 45.0); // pause before the next segment begins
		subsegs;
	});
	list;
};
)

Mascha and I agreed to try to develop this project further (with some more time to experiment, of course), so I went to Bern at the beginning of December to expand upon the GHOST code. We tested a few of the machine listening classes in SuperCollider and discussed in more detail the kind of performing software she would like to engage with. In the last few days, I have begun creating the new version of GHOST (which I’m calling EIDOLON); it has very little to do with the original code, but it may actually come closer to Mascha’s original request. The heart of this beast is the \analyser SynthDef (Synthesis Definition):

(
SynthDef(\analyser, {
	arg inBus = 0, frames = 1024, thresh = 0.3;
	var in, amp, silence, freq, hasFreq, chain, onsets, density, meanIOI, varianceIOI, time, trig;

	in = SoundIn.ar(inBus);
	amp = Amplitude.kr(in);               // amplitude follower
	silence = DetectSilence.ar(in, 0.01); // reports when the input falls silent
	# freq, hasFreq = Pitch.kr(in, ampThreshold: 0.02, median: 7); // pitch tracking

	chain = FFT(LocalBuf(frames), in);
	onsets = Onsets.kr(chain, thresh, \rcomplex); // onset detection
	# density, meanIOI, varianceIOI = OnsetStatistics.kr(onsets, 2.0); // onset statistics over a 2-second window

	time = Sweep.ar;              // elapsed time since the synth started
	trig = Impulse.kr(density/3); // report more often when the input is denser

	// send everything to the language-side "Listener" via OSC...
	SendReply.kr(trig, '/analysis', [amp, silence, freq, hasFreq, onsets, density, meanIOI, varianceIOI, time]);
	// ...and also write each signal to a control bus for direct mapping
	Out.kr(~ampBus, amp);
	Out.kr(~silenceBus, silence);
	Out.kr(~freqBus, freq);
	Out.kr(~hasFreqBus, hasFreq);
	Out.kr(~onsetsBus, onsets);
	Out.kr(~densityBus, density);
	Out.kr(~meanIOIBus, meanIOI);
	Out.kr(~varianceIOIBus, varianceIOI);
}).add;
)

This part of the code analyses an incoming signal in the following ways:

amp – follows the amplitude of the incoming signal

silence – sends a trigger whenever amplitude falls below a threshold

freq – follows the pitch of the incoming signal

hasFreq – outputs 1 whenever a pitch is detected

onsets – sends a trigger whenever an onset is detected

density – the number of onsets within a certain time window (2 seconds)

meanIOI – the average interonset interval in the time window

varianceIOI – the standard deviation of the interonset intervals

This collection of signals is used in two different ways. First, all of the generated data is written to control-rate busses (Out.kr) to be used as control signals for the various processes applied to the input signal. Second, SendReply.kr sends all of this information via OSC to a “Listener.” The Listener runs the incoming analysis data through a list of conditional statements; these statements initiate and terminate real-time processing and algorithmic patterns of synthesized sounds, and also control certain parameters (global envelope durations, for example). The Listener also keeps track of the data sent in the previous OSC message, so that I can write conditional statements that react to directional tendencies in the data. For example, if the last amplitude value was much smaller than the current one, we can assume the input signal is crescendoing – and an appropriate response can be triggered.
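To make these two uses a little more concrete, here is a minimal sketch of both ideas. This is illustrative only, not the actual EIDOLON code: the bus and message names match the \analyser SynthDef above, but the \densityFollower process and the crescendo threshold are invented for the example.

(
// 1. Control busses as modulation sources: a made-up process whose trigger
// rate follows the onset density written to ~densityBus by \analyser.
SynthDef(\densityFollower, {
	arg out = 0, inBus = 0;
	var density = In.kr(~densityBus);
	var sig = Dust.ar(density.linlin(0, 10, 1, 40)) * SoundIn.ar(inBus);
	Out.ar(out, Pan2.ar(sig));
}).add;

// 2. The "Listener": compare each analysis frame with the previous one.
~previous = nil;
OSCdef(\listener, { arg msg;
	// msg = ['/analysis', nodeID, replyID, amp, silence, freq, hasFreq,
	// onsets, density, meanIOI, varianceIOI, time]
	var amp = msg[3], density = msg[8];
	if(~previous.notNil and: { amp > (~previous[\amp] * 2) }, {
		"amplitude rising – a crescendo response could be triggered here".postln;
	});
	~previous = (amp: amp, density: density);
}, '/analysis');
)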

I’ve only been working on EIDOLON for about a week now, and I haven’t run into any major problems yet. I’m trying to create a large library of processes that the Listener can choose from, and exploring how these processes react to the incoming control data is consuming most of my time at the moment. My plan is to have a functional version by the beginning of January, and then I should be able to begin testing it with performers in Oslo. Mascha is coming to Oslo in February to perform with this program at Kulturhuset, and I’d like to have a working version I can send to her for testing as well.

After finishing an early version of EIDOLON, I’d like to test it quite a bit in order to deal with all the bugs, but I also want to see how it responds to various instruments and performers, and to learn about their experiences performing with it. Will it be “convincing”? Will the outcome be musical? I’ve also been thinking about how it could be possible to develop a version with multiple inputs, which would present several problems to consider: do I analyse the inputs separately or globally? Are processes applied only to the input sources that trigger them? I feel as though creating an EIDOLON that could perform with two or more performers would dramatically increase the amount of CPU power and logic needed to process the incoming data…but it could be interesting to explore!

For what it’s worth, here are the performances I’ve presented this fall since my last blog post; many of these concerts consisted of improvised music, but the two performances at the end of October were my first experiments with algorithmic/generative pieces:

17.9 «Metamorphic Songs» w/ Unni Løvlid et al. @ Ultimafestivalen

26.10 pyramidg @ Norges musikkhøgskole

30.10 GHOST w/ Mascha Corman @ Café Cairo, Bern

16.11 Abelseth/McCormick @ Lillesalen, Oslo

7.12 w/ Mascha Corman @ House Concert, Bern

Research Blog – III

Greetings!

The last months have been quite busy (surprise!), with tours in Finland and Germany, exam season at NMH, travelling back to Canada for an artistic residency and some family time, and now Portugal for another residency. While I’ve been playing a lot with my laptop setup, I haven’t developed it as much as I had planned over the last months, but I have made some important progress!

The biggest breakthrough since last writing came in a lesson with Eirik Arthur Blekesaune. While discussing potential signal routing options for the mapping I have planned on the Monome, he introduced me to the Just In Time programming library, or JITLib. I had used parts of this library to build step sequencers for past projects, but Eirik introduced me to the Node Proxy Definition (or Ndef) syntax, and this was a huge revelation! This approach has many built-in convenience methods for routing audio and control signals, and I’ve spent a good part of the last few months exploring its capabilities in contrast to the server-message approach I’ve been using until now.
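As a small taste of why this clicked for me, here is roughly the kind of routing Ndef makes trivial – a generic sketch, not my actual performance code:

(
Ndef(\input, { SoundIn.ar(0) });                        // the live input as a proxy
Ndef(\verb, { FreeVerb.ar(Ndef.ar(\input, 1), 0.7) });  // another proxy reads and processes it
Ndef(\verb).fadeTime = 4;                               // crossfades when the source is re-evaluated
Ndef(\verb).play;
)

Being able to redefine a proxy’s source while it plays – with the crossfading handled automatically – is exactly the kind of routing convenience I mean.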

Moving forward, I think I’ll end up using a combination of both approaches – while the routing solutions provided by JITLib are very relevant to my project, I like being able to generate multiple instances of the same synth when working with synthesized sounds (which can create very interesting textures, for example). I think this is better executed with server messages or node objects that have a fixed envelope and free themselves…but perhaps a better grasp of JITLib (Pdefs and Tdefs, for example) would suggest otherwise.
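To illustrate what I mean (a generic sketch, not one of my actual synth definitions): a SynthDef with a fixed, self-freeing envelope can be fired off many times to build up a texture.

(
SynthDef(\grainSine, { arg freq = 440, amp = 0.1, dur = 0.5;
	// the node frees itself when the envelope ends
	var env = EnvGen.kr(Env.perc(0.01, 1), timeScale: dur, doneAction: 2);
	Out.ar(0, SinOsc.ar(freq, 0, amp * env) ! 2);
}).add;
)

// many short-lived instances of the same definition become a texture
(
Routine({
	40.do({
		Synth(\grainSine, [\freq, exprand(200, 2000), \dur, rrand(0.2, 1.5)]);
		0.1.wait;
	});
}).play;
)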

Other breakthroughs came through spending time this spring using the laptop in compositions. I went to Germany in June to work with the vocal duo Monsters For Breakfast, and each of us created new pieces for our short tour at the end of that month. The compositions ranged from loosely structured improvisations to fixed “traditional” musical material (e.g. rhythms, lyrics). Neither of the vocalists knew my musical vocabulary very well before I arrived, so it was interesting to try to come up with approaches to the electronic elements in their pieces. The process was frustrating at times – we would talk about what a sound could/should be, I would create something in the evening after our rehearsal, and then present it to them the next day. If it still wasn’t the sound we were looking for, I had to repeat the process…and this continued for a few days, working on several compositions concurrently. It was a nice challenge for me, and also pointed me in the direction of working with sounds I wouldn’t normally gravitate towards myself.

This experience repeated itself (to a lesser degree) in Campbellford, Canada, during a residency/workshop I attended at the beginning of July. The residency was for musicians who identified as both composers and performers, and we were all tasked with presenting some aspect of our artistic practice to the group during the course of the workshop. I gave a practical demonstration on the potential of using SuperCollider for generative composition. After this lecture, one of the other composers asked me to perform on his composition later that week. We had very limited rehearsal time, but the process was much the same – he would describe the sound he was looking for, I would come up with my idea of that sound, and then we would try to get closer to his intention. These two experiences working with composed material were very interesting for me; the rehearsal/development/prototyping process is very different from working with instrumental music, and can sometimes be a bit frustrating. It has, however, brought up many interesting discussions about how we talk about sounds, especially when working with individuals whose first language is not English.

Another recent highlight came this spring on tour with Monsters For Breakfast. The three of us gave a workshop at the Institut für Musik und Medien in Düsseldorf, presenting our respective processes of composing and performing with this instrumentation, and I also talked about my approach to improvising and composing using SuperCollider. Before the workshop, I was a little anxious, as the director of the department we were visiting is Julian Rohrhuber, one of the developers of SuperCollider…I didn’t want to give the impression that I could teach his students anything that he wasn’t capable of teaching them himself! The workshop was on the weekend, so he wasn’t there after all, and though his students were at a very high level, it seemed that my approach to using this software was interesting and new for them. It was a great experience for me to present this workshop; I left feeling confident in the work I’m doing with this software, and I also felt like everyone involved (the vocal duo, myself, the attendees) had inspired thoughts to offer and share.

To sum up, here are the performances I’ve given with the laptop since my last blog post; most of these concerts consisted of improvised music, but some also included composed material:

25.5 “With|in” premiere @ Only Connect Festival of Sound, Oslo kl. 18

26.5 w/ Fennel @ Victoria Nasjonal Jazzscene, Oslo kl. 20

3.6 w/ Emil Brattested @ Victoria Nasjonal Jazzscene, Oslo kl. 20:30

18.6 w/ Monsters for Breakfast @ Café Duddel for LAB Days, Köln kl. 18

21.6 w/ Monsters for Breakfast @ Spektrum, Berlin kl. 20:30

22.6 w/ Monsters for Breakfast @ LOFT, Köln kl. 20:30

23.6 w/ Monsters for Breakfast @ Onomato Künstlerverein, Düsseldorf kl. 20

24.6 Creative Lab w/ Monsters for Breakfast @ ON Neue Musik Köln, Köln kl. 11

8.7-13.7 Westben Performer-Composer Residency, Campbellford

4.8 w/ Monsters for Breakfast @ No Noise Festival, Porto kl. 15 & 23

My plan going into the fall semester is to continue developing flexible synthesized sounds and more processing algorithms, and to finish mapping the Monome with all of these tools and the various ways of controlling their parameters. Of course, I plan to continue performing, and will try to transition to playing with the Monome instead of the laptop as my interface.

A few days ago, I performed two sets at a festival in Porto with Monsters For Breakfast, and we played the second set with a local percussionist, João Pais Filipe. In addition to being a fantastic drummer, he builds his own cymbals and gongs, and as a result has a very distinctive voice when performing. We joined him at his workshop/rehearsal space after the festival to play again, and the music was something very special – the voices blended well with his percussion, and the gongs, bells, and cymbals responded in interesting ways to digital processing. We all agreed to work more with this quartet in the fall and into the spring (despite the logistics of living all over the continent), and I’m excited to see what this music can become!

Research Blog – II

Greetings!

It’s been a few months since my last written update, and there are plenty of things to talk about! Since I last updated this blog, I’ve been fortunate to perform with many different groups of instrumentalists and in a variety of settings, and I feel like this practical experience is helping greatly in defining my improvisational approach to laptop performance.

Technically speaking, I’ve programmed a collection of synth definitions in SuperCollider that I use to process live inputs, synthesize sounds, and manipulate pre-recorded audio buffers. Until this point, I’ve been performing by sending server messages from the SuperCollider IDE to activate the synths and manipulate their arguments. This approach can be a bit cumbersome and doesn’t really give me the level of reactivity I’d like to have; it does, however, provide me with the flexibility to alter every synth argument down to the most minute detail.
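For readers unfamiliar with this workflow, it looks roughly like the following – a sketch with a hypothetical \myProcessor SynthDef standing in for one of my actual synths:

// start a node, change an argument while it plays, then free it
s.sendMsg("/s_new", \myProcessor, x = s.nextNodeID, 0, 1, \inBus, 8, \mix, 0.4);
s.sendMsg("/n_set", x, \mix, 0.9);
s.sendMsg("/n_free", x);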

I’ve come to realize that transposing this approach onto a physical interface means that I will have to prioritize either spontaneity or the amount of control I have over each argument. I’m currently in the process of mapping these server messages to a Monome 256 controller, which will let me react to musical situations much more quickly than the live-coding approach allows. However, as the Monome is just a grid of toggle buttons, I have to limit myself to preset arguments for each synth definition – perhaps three “versions” of each synth, as sketched below. I see this as a necessary limitation at the moment, but it will perhaps force me to find creative solutions when performing with a restricted degree of control over synth parameters.
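In practice, the mapping will boil down to something like this sketch (with hypothetical names – \myProcessor again stands in for a real synth definition): each button simply selects one of a few fixed argument sets.

(
~presets = [
	[\freq, 200, \mix, 0.2],   // "version" 1
	[\freq, 800, \mix, 0.5],   // "version" 2
	[\freq, 2400, \mix, 0.9]   // "version" 3
];
~playPreset = { arg index; Synth(\myProcessor, ~presets[index]) };
)

~playPreset.value(1); // e.g. triggered by a button in the second column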

Though I haven’t finished the mapping process yet, I’ve been performing a fair amount with the server-messages approach. Since I last updated this blog, I have performed in the following settings:

23.3 – MAUM concert in Levinsalen, performed with 4 instrumentalists

30.3-31.3 – duo concerts in Denmark w/ saxophonist Anders Abelseth

1.4-7.4 – duo concerts in Berlin w/ vocalist Thea Soti (herself using analogue electronics)

10.4 – duo concert w/ pedal steel guitarist Emil Brattested

19.4-30.4 – Nord+Mix workshop in Vilnius, Lithuania where I performed in 3rd order ambisonics

4.5 – collaborative 6-channel piece with flute, harp, and 5 dancers

For the coming months, I have quite a bit of work to do, and quite a few things to look forward to! First, I will premiere a performative installation at Sentralen for the Only Connect festival. At the Nord+Mix workshop, I was introduced to the concept of spatial modulation synthesis, which I found very interesting. For this installation, we were asked to work with specific “spaces” in the service hallways of Sentralen in Oslo; I’ll try to fully exploit the idea of space by reading excerpts from the English translation of Georges Perec’s “Espèces d’espaces” while the transient information from the text controls the spatial modulation of my voice.
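As a rough sketch of that idea (reduced here to a simple stereo panner rather than the actual spatial modulation setup): detected transients in the voice can re-trigger a new position.

(
{
	var in = SoundIn.ar(0);
	var chain = FFT(LocalBuf(512), in);
	var trig = Onsets.kr(chain, 0.4);
	var pos = TRand.kr(-1.0, 1.0, trig); // jump to a new position on each transient
	Pan2.ar(in, Lag.kr(pos, 0.05));
}.play;
)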

In the end of June, I’m heading to Köln for a series of concerts and workshops with the improvising vocal duo Monsters for Breakfast. Thea Soti, who I worked with in Berlin in April, makes up one half of the duo, and she has been generous enough to arrange a short tour in Germany along with a few workshops where I will present my approach to using SuperCollider in an improvisational context. In preparation for these concerts, I’m hoping to develop a few more synth definitions that I’ll be able to test out over the course of these concerts and workshops.

I have a few other concerts and workshops coming up as well, but I’ll report on those in the next blog update! Until then….

Research Blog – [Documentation]

Here is where I’ll post documentation from the various projects I’m working with as this two-year study progresses:

Abelseth/McCormick: improvising saxophone/laptop duo

Among Us: dance performance involving buffer playback and manipulation, algorithmic synthesis, real-time processing of flute and harp (6 channels)

Emil Brattested: duo with pedal-steel guitar playing composed and improvised material

Fennel: augmenting the “acoustic” nature of this quartet through modest processing

GHOST/pyramidg/EIDOLON: algorithmic composition, interactive improvising program

Monsters For Breakfast: improvising vocal duo augmented by laptop

Monsters For Breakfast w/ João Pais Filipe: improvising vocal duo augmented by laptop and percussion

Nord+Mix Quartet: improvisations with soprano flute, alto flute, and viola; rehearsed in stereo, performed in 3rd Order Ambisonics during Nord+Mix workshop in Vilnius

Quintet: improvising ensemble working with semi-composed material

Thea Soti: improvising voice/laptop duo; Thea is working with hardware electronics

Trio w/ Tove Bagge & Guostė Tamulynaitė: improvisations with prepared piano, synthesizer, and viola

Research Blog – I

Greetings!

This blog is a space for documenting my work during my Master of Music in Performance Technology studies at Norges musikkhøgskole between fall 2017 and spring 2019. I’ll use this space to record both the breakthroughs and challenges I experience during my studies and research as I work towards my Master’s project, to be presented in the spring of 2019.

The original proposal for my master’s project was to create a collection of “improvising” algorithms that could independently interact with improvising instrumentalists. My goal was to use the SuperCollider programming environment to design “instruments” that would use information from analysing the current musical setting to make statistical decisions during performance: when to play, what/how to play, when to stop playing, etc.

I have since decided to go in a different direction; given how important the community aspect of music making is to me, designing an autonomous digital performer would effectively isolate me from rehearsing and performing with other musicians, countering my own values. While I still believe this could be a future direction to explore, I’m now directing my efforts towards designing a digital instrument that I can actively use in performance.

I see the role of laptop performer as curatorial: not all the specific musical decisions are being made by me in performance, but I choose the frame within which decisions (or content) are made. In the way that a bandleader makes curatorial decisions about which performers, program, or venue to work with, the algorithmic programmer makes curatorial decisions concerning degrees of randomness/density/etc. without necessarily controlling the specific details of each sound event.

With this approach, the laptop performer is in constant dialogue with the software and hardware, both in the “rehearsal” or prototyping stages and in performance. In a live setting, as the computer is left to decide the details of musical events, the laptop performer (from a curatorial perspective) must decide how to contextualise the music created by the computer; this can be done by modifying software parameters, introducing or removing processes, or by simply turning the instrument off.

As I develop this instrument and my curatorial approach to laptop performance, I’ll try my best to update this blog regularly with video and audio documentation of various performances, my thoughts on the process, and also some of the SuperCollider code driving certain elements of my “instrument.” More to come soon!