Alrighty! So my exam concert is in 10 days or so, and I’m doing my best to inch myself across the finish line…I’ll be presenting two projects in the concert: first, saxophonist Anders Abelseth will improvise with my EIDOLON (2.0), and then I’ll improvise with violist Tove Bagge and pianist Guostė Tamulynaitė using my recently-mapped Monome 256 controller (see below).

Quickly (as I have to get back to work!), here is a list of some changes implemented in the second iteration of the EIDOLON – there’s a paper in my process portfolio with a bit more detail about my thought processes and approach:

-changed the decision-making paradigm: my original idea was to trigger processing based on the dynamic/static behaviour of the performer (if the performer is “static,” reinforce that behaviour); now, a “state” variable determines the EIDOLON’s behaviour (support, contrast, ignore, or tacet) based on current and past trends, and these states determine when and how certain sounds appear

-added expandable/unlimited memory by using Lists instead of Arrays; the EIDOLON can now make decisions based on information collected from the entire performance

-added a second layer of analysis/decision-making after the first layer of processing: a Global Analyser which triggers and controls an “interrupt” layer that affects the global output

-all processing synths are now contained in a dictionary that is infinitely expandable, making it much easier to grow the EIDOLON’s sonic vocabulary

-limited the use of all processes/sounds so that performances have less repetition and hopefully more linear formal structures

-modularized nearly everything (processes, memory arrays, OSC responders, etc.) so that the program can be easily expanded to accommodate several instrumental inputs…we’ll see how/if this works in practice, however! Beyond the constant analysis of the input, each instrumental performer logs 2640 values into the program’s “memory” every minute, on top of all the related processing and synthesis. While running the old EIDOLON program with a single performer, I don’t think the average CPU usage ever passed 20%, so it might work. The new program is much more complex than the first version, but I think I’ve improved my scripting technique quite a bit since then, so perhaps it will be efficient enough to bear the load!
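As a rough illustration of the state idea in the first point (this is not the actual EIDOLON code – all names here are hypothetical, and the thresholds are invented), the choice between support/contrast/ignore/tacet might be sketched in sclang like this:

```supercollider
// Hypothetical sketch: compare a recent trend against the long-term
// trend of a memory List (~memory) and choose a behavioural state.
// Assumes ~memory is non-empty and holds normalized analysis values.
~chooseState = {
	var recent, longer, trend;
	recent = ~memory[0..min(79, ~memory.size - 1)].median; // last ~10 s
	longer = ~memory.median;                               // whole performance
	trend = recent - longer;
	case
	{ trend.abs < 0.05 } { \support }  // performer is static: reinforce
	{ trend > 0.05 }     { \contrast } // building: push against it
	{ 0.25.coin }        { \tacet }    // occasionally drop out
	{ true }             { \ignore };
};
```

The returned symbol could then gate which processes in the synth dictionary are eligible to start.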

ALSO: I’ve been working on mapping my Monome – the weeks (months?!) since my last post have been very busy!! Monome is a company that makes hand-made OSC controllers with a minimalist design and deliberately simple interactivity. I have an older Grid model with 256 buttons, each of which sends three values: an x coordinate, a y coordinate, and a 0 or 1. In order to find expressive ways of mapping such a simple instrument, I’ve chosen to design processes that only have three modulatable parameters, which I can control with two independent tap tempos (whose values can be modulated by a decimal shift left or right) and a one-shot trigger.

I’ve also split the 16 x 16 grid into 4 identical channels so that I can perform with multiple inputs. Each channel has an input, slots for 33 distinct processes (which each have 3 preset “levels”), 3 slots for buffers which can be recorded into, 3 processes (x 3 preset “levels”) for each buffer, an output, and a mute button. The control modules have level controls for the synth presets, two tap tempos, a one-shot trigger, a mod key (for updating synth arguments), two decimal modifiers, a fadetime control (for synth envelopes, crossfading, etc.), and a volume control.
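To give a sense of how such a split might be handled (a hedged sketch, not my actual mapping code: the quadrant layout, the `/monome/grid/key` address, and the `~channels` objects are all assumptions for illustration), a key press can be routed to a channel like this:

```supercollider
// Illustrative: split a 16 x 16 grid into four 8 x 8 channels and
// convert a key press into a channel index plus local coordinates.
// serialosc is assumed to deliver '/monome/grid/key' as [x, y, z].
OSCdef(\gridKey, { |msg|
	var x = msg[1], y = msg[2], z = msg[3];
	var channel = (x div: 8) + ((y div: 8) * 2); // 0–3, one per quadrant
	var localX = x mod: 8, localY = y mod: 8;
	if(z == 1) { ~channels[channel].keyDown(localX, localY) };
}, '/monome/grid/key');
```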

I’m very much looking forward to seeing the performative and expressive capabilities of this controller; I’ve put a lot of thought into its ergonomic and visual aspects, and I’m hoping that it will feel like a “real” instrument for me as much as it looks like I’m playing a “real” instrument for the audience.

Greetings! The last months have been dedicated mostly to solitary work on the EIDOLON project, though I should mention that during a week in February, I went to Tbilisi, Georgia with two of my classmates at NMH. During the week, we held workshops on improvisation, mixing techniques, multichannel audio, and SuperCollider with bachelor and master’s students at the academy. We also had one concert of fixed media pieces and another concert with improvised/live electronic performances.

The week was undoubtedly a huge success: the students expressed a very strong desire to learn and explore more, and our performances were very well received. The whole trip was organized by Mako Gviniashvili, and we are in the early stages of planning a long-term continuation of this project that will take place over the next few years. I think it’s very important for us to maintain a connection with the students in Tbilisi, and I think it’s also a great opportunity for us to see how well we understand the concepts we work with on a regular basis!

While in Tbilisi, I gave a short solo performance with my improvising program EIDOLON. I began working on the first iteration of the EIDOLON during the December holiday and managed to finish a functioning version within a few weeks. After several sessions of testing the EIDOLON with various instrumentalists, a few public performances, and discussions with various improvisers and teachers, I’ve collected a long list of changes I’d like to implement. Here are a few:

-use expandable Lists instead of Arrays for memory; short-term memory reads {~list[0..79].median} (10 seconds), medium-term memory reads {~list[0..2399].median} (5 minutes), long-term reads {~list.median}

-find a way to track trends in memory -> could try ~list.differentiate, or write something in sclang

-all threshold values need to be adjustable for each instrument/microphone setup – build a tool that allows for quick discovery of upper/lower limits (could become a general soundcheck tool with an EQ, basic mixer, possibility of saving presets, etc.)

-in addition to analysing microphone input, all transformed sounds, synthesized sounds, etc. should pass through a \globalAnalyser that can also influence transformations of the composite

-the global/universal memory keeps track of all synth ~startStop activity, and can prevent certain sounds from being used too often

-possible additional analysis: spectral entropy? spectral flux?
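The first two points on this list might look something like the following in sclang (a minimal sketch, assuming the List is non-empty; ~log and ~trend are illustrative names, and the window sizes imply roughly 8 analysis frames per second, matching the 80-value/2400-value reads above):

```supercollider
// List-based memory with layered median reads and a crude trend measure.
~list = List.new;
~log = { |val| ~list.addFirst(val) }; // newest value lands at index 0

~shortTerm  = { ~list[0..min(79, ~list.size - 1)].median };   // ~10 s
~mediumTerm = { ~list[0..min(2399, ~list.size - 1)].median }; // ~5 min
~longTerm   = { ~list.median };                               // whole set

// Trend over the short window: reverse so time runs forward, take
// first differences, and average them (positive ≈ rising input).
~trend = {
	~list[0..min(79, ~list.size - 1)].reverse.differentiate.drop(1).mean
};
```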

In light of these considerations (and many others not listed here), I’m now working on a new version of the EIDOLON (2.0!!) which will address these points. The core of the program will be slightly different so as to accommodate multiple instrumental inputs and so that I can integrate a bit more information feedback to produce more informed decisions, especially with regard to repetition, phrasing, and musical form. I also plan on building a layer of “interrupt modules,” to borrow a term from Sam Pluta. These will be a set of processes triggered by the EIDOLON to prevent repetition without development and to occasionally introduce chaotic behaviour in performance.

As expected, the first iteration of the EIDOLON has presented several challenges, and the next weeks will be spent trying to address them. I’ve intentionally kept my spring relatively free so that I can dedicate time to this project and others; I plan to test the EIDOLON 2.0 at the end of this month, after which I’m sure I will have other challenges to address…I still have plans to further develop my live performance setup by mapping processes to a Monome controller. This project has been somewhat on hold over the last year, but I’m looking forward to revisiting it over the next months.

Although it seems I’ve been a bit relaxed with my blog posts this fall, the last months have been very productive and I feel like I’ve made some great progress in my Master’s work. The spring was a very active period for me in terms of performing, and while deepening my knowledge of what I already knew, I also gained a better sense of the gaps in my knowledge about synthesis, DSP, and programming. I tried to spend my time this fall addressing those areas, both in my practical work and in my lessons with Eirik Arthur Blekesaune and Øyvind Brandtsegg.

One of the main things I’ve been focusing on this fall is deepening my understanding and practical use of algorithmic composition. At the end of September, German improvising vocalist Mascha Corman asked me to perform a concert with her at the end of October in Bern, knowing very well that I wouldn’t be able to be in Switzerland at the time. She wanted me to create a SuperCollider program that would run independently, listen to her performance, and then process and synthesize sounds based on her input. Her additional requests were that it perform differently each time and not become predictable, but also that it maintain some sort of structural integrity. Essentially, what she was asking for was heading in the direction of an artificially intelligent program that could consistently interact with and respond to repeated behaviour without becoming predictable.

At the moment (and for the foreseeable future), I’m under-equipped to create such a program – and certainly not within the timeframe of a month! What I did instead was create a program that generates nested patterns of events in order to create event phrases. These phrases are then distributed over the length of a performance to create a larger form that should (presumably) contain an inner logic, even if inaudible. Each event triggers the real-time processing of a recorded sound buffer or the performer’s live input.

This approach was a pretty quick-and-dirty solution for the given deadline. The performance went okay – Mascha was quite pleased, and many things happened in the performance that were unexpected (as she wished) but implied the sense of interaction/machine listening that she desired. These were essentially coincidences – there was very little machine listening happening in my code, but her perceptual experience (and that of members of the audience) was very interesting to me. As I’ve been exploring algorithmic approaches to composition this fall, the idea of musical semantics keeps arising, and I find it fascinating to see how people respond to randomness – sometimes listeners assign a great deal of value to elements that have no human agency behind them whatsoever!

At the same time I was making this program for Mascha (which I called GHOST), I used the core of the piece – the form-creation algorithm – to create a purely acousmatic piece, using just processed sound files as material. The nested-pattern generator (below) assigns a number to the sound files and processes contained within a Dictionary (~sounds), and then arranges the numbers into groups or phrases of patterns, sometimes with different lengths, occasionally repeating phrases, etc. I called this piece “pyramidg,” after a built-in array-reordering method in SuperCollider that was originally at the heart of this piece. After some modifications, I’m now just using the “pyramid” method, which you can see in the ~segments line below (I have to acknowledge that Eirik helped out immensely with the creation of this part of the code):

// NB: some closing brackets and one array element were lost in
// transcription and have been reconstructed below.
~makeSegments = {arg numSegments = 20;
	var pSteps, result;
	pSteps = ~sounds.keys.asArray.sort;
	result = numSegments.collect({arg item;
		pSteps.scramble.copyRange(0, rand(pSteps.size - 1));
	});
	result;
};

~segments = ~makeSegments.value(5).pyramid(9).sputter(0.25, 25);

~makeScoreFromSegments = {arg segments;
	var result, list, start;
	var startTime = ~startTime + 10, nextSubsegDelta = 0.0;
	var addDeltaTimesToSegment = {arg seg;
		var times = ({ exprand(30.0, 125.0) } ! seg.size).sort;
		times.put(0, 0);
		[times, seg].flop;
	};
	list = segments.collect({arg it;
		var subsegs = addDeltaTimesToSegment.value(it).collect({arg jt;
			var delta, segnum;
			#delta, segnum = jt;
			[delta, ~sounds[segnum][\start].value]
		});
		subsegs = subsegs.collect({arg subseg;
			subseg = [
				subseg[0] + startTime + nextSubsegDelta,
				subseg[1] // reconstructed: keep the \start function
			];
		});
		startTime = subsegs.last.first;
		nextSubsegDelta = exprand(30.0, 45.0);
		subsegs;
	});
	list;
};
Mascha and I agreed to try to develop this project further (with some more time to experiment, of course), so I went to Bern at the beginning of December to expand upon the GHOST code. We tested a few of the machine listening classes in SuperCollider and discussed in more detail the kind of performing software she would like to engage with. Over the last few days, I have begun creating the new version of GHOST (which I’m calling EIDOLON), which has very little to do with the original code but may actually come closer to Mascha’s original request. The heart of this beast is the \analyser synthesis definition:

SynthDef(\analyser, {
	arg inBus=0, frames=1024, thresh=0.3;
	var in, amp, silence, freq, hasFreq, chain, onsets, density, meanIOI, varianceIOI, time, trig;
	// NB: several UGen names were lost in transcription; the ones used
	// here are reconstructions that fit the surrounding descriptions.
	in = SoundIn.ar(inBus);
	amp = Amplitude.kr(in);
	silence = DetectSilence.ar(in, 0.01);
	# freq, hasFreq = Pitch.kr(in, ampThreshold: 0.02, median: 7);
	chain = FFT(LocalBuf(frames), in);
	onsets = Onsets.kr(chain, thresh, \rcomplex);
	# density, meanIOI, varianceIOI = OnsetStatistics.kr(onsets, 2.0);
	time = Sweep.kr; // elapsed time in seconds
	trig = Impulse.kr(20); // reporting rate (reconstructed)
	SendReply.kr(trig, '/analysis', [amp, silence, freq, hasFreq, onsets, density, meanIOI, varianceIOI, time]);
	// control-rate busses (~ampBus etc.) assumed to be allocated elsewhere
	Out.kr(~ampBus, amp);
	Out.kr(~silenceBus, silence);
	Out.kr(~freqBus, freq);
	Out.kr(~hasFreqBus, hasFreq);
	Out.kr(~onsetsBus, onsets);
	Out.kr(~densityBus, density);
	Out.kr(~meanIOIBus, meanIOI);
	Out.kr(~varianceIOIBus, varianceIOI);
}).add;

This part of the code analyses an incoming signal in the following ways:

amp – follows the amplitude of the incoming signal

silence – sends a trigger whenever amplitude falls below a threshold

freq – follows the pitch of the incoming signal

hasFreq – sends a trigger whenever pitch is detected

onsets – sends a trigger whenever an onset is detected

density – the number of onsets within a certain time window (2 seconds)

meanIOI – the average interonset interval in the time window

varianceIOI – the variance of the interonset intervals within the time window

This collection of signals is used in two different ways: first, all of this generated data is sent to control-rate busses (via Out.kr) to be used as control signals for various processes applied to the input signal. Second, the synth sends all this information via OSC to a “Listener.” The Listener takes all the incoming analysis data and runs it through a list of conditional statements; these statements initiate and terminate real-time processing and algorithmic patterns of synthesized sounds, along with controlling certain parameters (global envelope durations, for example). The Listener also keeps track of the data sent in the previous OSC message, with the intention that I can create conditional statements that react to directional tendencies of the data. For example, if the last amplitude value was much smaller than the current amplitude value, we can assume the input signal is crescendoing – and an appropriate response can be triggered.
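A minimal sketch of that Listener logic might look like this (illustrative only – ~onCrescendo is a hypothetical response function, and the 1.5 ratio is an invented threshold; the value indices follow the order of the SendReply array):

```supercollider
// Store the previous '/analysis' message and react to the direction
// of change between messages.
~previous = nil;
OSCdef(\listener, { |msg|
	var data = msg[3..]; // drop address, nodeID, and replyID
	if(~previous.notNil) {
		var lastAmp = ~previous[0], amp = data[0];
		// last amplitude much smaller than current: assume crescendo
		if(amp > (lastAmp * 1.5)) { ~onCrescendo.value(amp) };
	};
	~previous = data;
}, '/analysis');
```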

Though I’ve only been working on the EIDOLON for about a week now, I haven’t run into any major problems yet. I’m trying to create a large library of processes that the Listener can choose from, and exploring how these processes react to the incoming control data is consuming most of my time at the moment. My plan is to have a functional version by the beginning of January, and then I should be able to begin testing it with performers in Oslo. Mascha is coming to Oslo in February to perform with this program at Kulturhuset, and I’d like to have a working version I can send to her for testing as well.

After finishing an early version of the EIDOLON, I’d like to test it quite a bit in order to deal with all the bugs, but I also want to see how it will respond to various instruments and performers and I want to get to know their experiences performing with it. Will it be “convincing?” Will the outcome be musical? I’ve also been thinking about how it could be possible to develop a version with multiple inputs, which would present several problems to consider: do I analyse the inputs separately or globally? Are processes applied only to input sources that trigger them? I feel as though trying to create an EIDOLON that would perform with two or more performers would dramatically increase the amount of CPU and logic necessary to process the incoming data…but it could be interesting to explore!

For what it’s worth, here are the performances I’ve presented this fall since my last blog post; many of these concerts consisted of improvised music, but the two performances at the end of October were my first experiments with algorithmic/generative pieces:

17.9 «Metamorphic Songs» w/ Unni Løvlid et al. @ Ultimafestivalen

26.10 pyramidg @ Norges musikkhøgskole

30.10 GHOST w/ Mascha Corman @ Café Cairo, Bern

16.11 Abelseth/McCormick @ Lillesalen, Oslo

7.12 w/ Mascha Corman @ House Concert, Bern


The last months have been quite busy (surprise!), with tours in Finland and Germany, exam season at NMH, travelling back to Canada for an artistic residency and some family time, and now Portugal for another residency. While I’ve been playing a lot with my laptop setup, I haven’t developed as much as I planned to in the last months, but I have made some important progress!

The biggest breakthrough since last writing came in a lesson with Eirik Arthur Blekesaune. In discussing potential signal routing options for the mapping I have planned on the Monome, he introduced me to the Just In Time programming library, or JITLib. I had used parts of this library to build step sequencers for projects in the past, but Eirik introduced me to the Node Proxy Definition (or Ndef) syntax, and this was a huge revelation! This approach has many built-in convenience methods for routing audio and control signals, and I’ve spent a bit of the last few months exploring the capabilities of this approach in contrast to the server message approach I’ve been using in the past.

Moving forward, I think I’ll end up using a combination of both approaches – while the routing solutions provided by the JITLib are very relevant to my project, I like being able to generate multiple instances of the same synth when working with synthesized sounds (which can create very interesting textures, for example). I think this is better executed by using server messages or node objects that have a fixed envelope and are self-freeing…but perhaps a better grasp of the JITLib (Pdefs and Tdefs, for example) would suggest otherwise.
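To illustrate the contrast (synth names and parameter values here are my own invented examples, not code from my setup): Ndefs give persistent, re-routable nodes, while server messages suit short-lived layered voices.

```supercollider
// JITLib/Ndef style: persistent nodes with convenient routing.
Ndef(\input, { SoundIn.ar(0) });
Ndef(\shimmer, { PitchShift.ar(\, 0.2, [1.5, 2.01]) });
Ndef(\shimmer) <<> Ndef(\input); // map the input proxy into \in
Ndef(\shimmer).play;

// Server-message style: many self-freeing instances of one SynthDef,
// useful for layered synthesized textures.
SynthDef(\grain, { |freq = 440|
	var env = Env.perc(0.01, 2).kr(doneAction: 2); // frees itself
	Out.ar(0, SinOsc.ar(freq) * env * 0.1);
}).add;
5.do { Synth(\grain, [\freq, exprand(200, 2000)]) };
```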

Other breakthroughs came through spending time this spring using the laptop in compositions. I went to Germany in June to work with the vocal duo Monsters For Breakfast, and each of us created new pieces for our short tour at the end of that month. The compositions ranged from loosely structured improvisations to fixed “traditional” musical material (e.g. rhythms, lyrics). Neither of the vocalists knew my musical vocabulary very well before I arrived, so it was interesting to try to come up with approaches to the electronic elements in their pieces. The process was frustrating at times – we would talk about what a sound could/should be, I would create something in the evening after our rehearsal, and then present it to them the next day. If it still wasn’t the sound we were looking for, I had to repeat the process…and this continued for a few days, working on several compositions concurrently. It was a nice challenge for me, and also pointed me in the direction of working with sounds I wouldn’t normally gravitate towards myself.

This experience repeated itself (to a lesser degree) in Campbellford, Canada during a residency/workshop I attended at the beginning of July. The residency was for musicians who identified as both composers and performers, and we were all tasked with presenting some aspect of our artistic practice to the group during the course of the workshop. I gave a practical demonstration on the potential application of using SuperCollider for generative composition. After this lecture, one of the other composers asked me to perform on his composition later that week. We had very limited rehearsal time, but the process was much the same – he would describe the sound he was looking for, I would come up with my idea of that sound, and then we would try to get closer to his intention. These two experiences working with composed material were very interesting for me; the rehearsal/development/prototype process is very different from working with instrumental music, and can sometimes be a bit frustrating. It has, however, brought up many interesting discussions about how we talk about sounds, especially when working with individuals whose first language is not English.

Another recent highlight came this spring on tour with Monsters For Breakfast. The three of us gave a workshop at the Institut für Musik und Medien in Düsseldorf, presenting our respective processes of composing and performing with this instrumentation, and I also talked about my approach to improvising and composing using SuperCollider. Before the workshop, I was a little anxious, as the director of the department we were visiting is Julian Rohrhuber, one of the developers of SuperCollider…I didn’t want to give the impression that I could teach his students anything that he wasn’t capable of teaching them himself! The workshop was on the weekend, so he wasn’t there after all, and though his students were at a very high level, it seemed that my approach to using this software was interesting and new for them. It was a great experience for me to present this workshop; I left feeling confident in the work I’m doing with this software, and I also felt like everyone involved (the vocal duo, myself, the attendees) had inspired thoughts to offer and share.

To sum up, here are the performances I’ve given with the laptop since my last blog post; most of these concerts consisted of improvised music, but some also included composed material:

25.5 “With|in” premiere @ Only Connect Festival of Sound, Oslo kl. 18

26.5 w/ Fennel @ Victoria Nasjonal Jazzscene, Oslo kl. 20

3.6 w/ Emil Brattested @ Victoria Nasjonal Jazzscene, Oslo kl. 2030

18.6 w/ Monsters for Breakfast @ Café Duddel for LAB Days, Köln kl. 18

21.6 w/ Monsters for Breakfast @ Spektrum, Berlin kl. 20:30

22.6 w/ Monsters for Breakfast @ LOFT, Köln kl. 20:30

23.6 w/ Monsters for Breakfast @ Onomato Künstlerverein, Düsseldorf kl. 20

24.6 Creative Lab w/ Monsters for Breakfast @ ON Neue Musik Köln, Köln kl. 11

8.7-13.7 Westben Performer-Composer Residency, Campbellford

4.8 w/ Monsters for Breakfast @ No Noise Festival, Porto kl. 15 & 23

My plan going into the fall semester is to continue to develop flexible synthesized sounds, more processing algorithms, and to finish mapping the Monome with all of these tools and various ways of controlling parameters. Of course, I plan to continue performing, and will try to transition to playing with the Monome instead of the laptop as my interface.

A few days ago, I performed two sets at a festival in Porto with Monsters For Breakfast, and we played the second set with a local percussionist, João Pais Filipe. In addition to being a fantastic drummer, he builds his own cymbals and gongs, and as a result has a very unique voice when performing. We joined him at his workshop/rehearsal space after the festival to play again, and the music was something very special – the voices blended well with his percussion, and the gongs, bells, and cymbals responded in interesting ways to digital processing. We all agreed to work more with this quartet in the fall and into the spring (despite the logistics of living all over the continent), and I’m excited to see what this music can become!


It’s been a few months since my last written update, and there are plenty of things to talk about! Since I last updated this blog, I’ve been fortunate to perform with many different groups of instrumentalists and in a variety of settings, and I feel like this pragmatic experience is helping greatly in defining my improvisational approach to laptop performance.

Technically speaking, I’ve programmed a collection of synth definitions in SuperCollider that I use to process live inputs, synthesize sounds, and manipulate pre-recorded audio buffers. Until this point, I’ve been performing by sending server messages from the SuperCollider IDE to activate the synths and manipulate their arguments. This approach can be a bit cumbersome and doesn’t really give me the level of reactivity I’d like to have; it does, however, provide me with the flexibility to alter every synth argument to the most minute detail.

I’ve come to realize that transposing this approach onto a physical interface means that I will have to prioritize either spontaneity or the amount of control I have over each argument. I’m currently in the process of mapping these server messages to a Monome 256 controller, which will let me react to musical situations much more quickly than the live-coding approach. However, as the Monome is just a grid of toggle buttons, I have to limit myself to preset arguments for each synth definition – perhaps three “versions” of each synth. I see this as a necessary limitation at the moment, but it will perhaps force me to find creative solutions when performing with a restricted degree of control over synth parameters.
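The preset idea could be sketched like this (a hypothetical example, not my actual mapping code – the \grain synth and its argument values are invented for illustration):

```supercollider
// Three fixed argument sets ("versions") per synth definition,
// selected by grid buttons instead of typed server messages.
~presets = (
	grain: [
		[\density, 4,  \dur, 0.8 ],  // version 1: sparse, long grains
		[\density, 12, \dur, 0.3 ],  // version 2
		[\density, 40, \dur, 0.05]   // version 3: dense, short grains
	]
);
~trigger = { |name, version|
	Synth(name, ~presets[name][version - 1]);
};
~trigger.value(\grain, 2); // a button press selects \grain, version 2
```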

Though I haven’t finished the mapping process yet, I’ve been performing a fair amount with the server-messages approach. Since I last updated this blog, I have performed in the following settings:

23.3 – MAUM concert in Levinsalen, performed with 4 instrumentalists

30.3-31.3 – duo concerts in Denmark w/ saxophonist Anders Abelseth

1.4-7.4 – duo concerts in Berlin w/ vocalist Thea Soti (herself using analogue electronics)

10.4 – duo concert w/ pedal steel guitarist Emil Brattested

19.4-30.4 – Nord+Mix workshop in Vilnius, Lithuania where I performed in 3rd order ambisonics

4.5 – 6 channel collaborative piece with flute, harp, and 5 dancers

For the coming months, I have quite a bit of work to do, and quite a few things to look forward to! First, I will premiere a performative installation at Sentralen for the Only Connect festival. At the Nord+Mix workshop, I was introduced to the concept of spatial modulation synthesis, which I found very interesting. For this installation, we were asked to work with specific “spaces” in the service hallways of Sentralen in Oslo; I’ll try to fully exploit the idea of space by reading excerpts from the English translation of Georges Perec’s “Espèces d’espaces” while the transient information from the text controls the spatial modulation of my voice.

At the end of June, I’m heading to Köln for a series of concerts and workshops with the improvising vocal duo Monsters for Breakfast. Thea Soti, who I worked with in Berlin in April, makes up one half of the duo, and she has been generous enough to arrange a short tour in Germany along with a few workshops where I will present my approach to using SuperCollider in an improvisational context. In preparation for these concerts, I’m hoping to develop a few more synth definitions that I’ll be able to test out over the course of these concerts and workshops.

I have a few other concerts and workshops coming up as well, but I’ll report on those in the next blog update! Until then….

Here is where I’ll post documentation from the various projects I’m working with as this two-year study progresses:

Abelseth/McCormick: improvising saxophone/laptop duo

Among Us: dance performance involving buffer playback and manipulation, algorithmic synthesis, real-time processing of flute and harp (6 channels)

EIDOLON: interactive improvising SuperCollider program

Emil Brattested: duo with pedal-steel guitar playing composed and improvised material

Fennel: augmenting the “acoustic” nature of this quartet through modest processing

Monsters For Breakfast: improvising vocal duo augmented by laptop

Monsters For Breakfast w/ João Pais Filipe: improvising vocal duo augmented by laptop and percussion

Nord+Mix Quartet: improvisations with soprano flute, alto flute, and viola; rehearsed in stereo, performed in 3rd Order Ambisonics during Nord+Mix workshop in Vilnius

pyramidg: algorithmic acousmatic composition

Quintet: improvising ensemble working with semi-composed material

Thea Soti: improvising voice/laptop duo; Thea is working with hardware electronics

Trio w/ Tove Bagge & Guostė Tamulynaitė: improvisations with prepared piano, synthesizer, and viola


This blog is a space for documenting my work during my Masters of Music in Performance Technology studies at Norges musikkhøgskole between fall 2017 and spring 2019. I’ll use this space to record both the breakthroughs and challenges I experience during my studies and research as I work towards my Master’s project, to be presented in the spring of 2019.

The original proposal for my master’s project was to create a collection of “improvising” algorithms that could independently interact with improvising instrumentalists. My goal was to use the SuperCollider programming environment to design “instruments” that would use information from analysing the current musical setting to make statistical decisions during performance: when to play, what/how to play, when to stop playing, etc.

I have since decided to go in a different direction; given how important the community aspect of music making is to me, designing an autonomous digital performer would effectively isolate me from rehearsing and performing with other musicians, countering my own values. While I still believe this could be a future direction to explore, I’m now directing my efforts towards designing a digital instrument that I can actively use in performance.

I see the role of laptop performer as curatorial: not all the specific musical decisions are being made by me in performance, but I choose the frame within which decisions (or content) are made. In the way that a bandleader makes curatorial decisions about which performers, program, or venue to work with, the algorithmic programmer makes curatorial decisions concerning degrees of randomness/density/etc. without necessarily controlling the specific details of each sound event.

With this approach, the laptop performer is in constant dialogue with the software and hardware, both in the “rehearsal” or prototyping stages and in performance. In a live setting, as the computer is left to decide the details of musical events, the laptop performer (from a curatorial perspective) must decide how to contextualise the music created by the computer; this can be done by modifying software parameters, introducing or removing processes, or by simply turning the instrument off.

As I develop this instrument and my curatorial approach to laptop performance, I’ll try my best to update this blog regularly with video and audio documentation of various performances, my thoughts on the process, and also some of the SuperCollider code driving certain elements of my “instrument.” More to come soon!