Alrighty! So my exam concert is in 10 days or so, and I’m doing my best to inch myself across the finish line…I’ll be presenting two projects in the concert: first, saxophonist Anders Abelseth will improvise with my EIDOLON (2.0), and then I’ll improvise with violist Tove Bagge and pianist Guostė Tamulynaitė using my recently-mapped Monome 256 controller (see below).
Quickly (as I have to get back to work!), here is a list of some changes implemented in the second iteration of the EIDOLON – there’s a paper in my process portfolio with a bit more detail about my thought processes and approach:
-changed the decision-making paradigm: my original idea was to trigger processing based on the dynamic/static behaviour of the performer (if the performer is “static,” reinforce that behaviour); now a “state” variable determines the EIDOLON’s behaviour (support, contrast, ignore, or tacet) based on current and past trends, and these states determine when and how certain sounds will appear
-added expandable/unlimited memory by using Lists instead of Arrays; the EIDOLON can now make decisions based on information collected over the entire performance
-added a second layer of analysis/decision-making after the first layer of processing: a Global Analyser that triggers and controls an “interrupt” layer affecting the global output
-all processing synths are contained in a dictionary that is infinitely expandable, making it much easier to grow the EIDOLON’s sonic vocabulary
-limited the use of all processes/sounds so that performances have less repetition and, hopefully, more linear formal structures
-modularized nearly everything (processes, memory arrays, OSC responders, etc.) so that the program can be easily expanded to accommodate several instrumental inputs…we’ll see how/if this works in practice, however! On top of the constant input analysis, each instrumental performer logs 2640 values into the program’s “memory” every minute (44 per second), plus all of the related processing and synthesis. While running the old EIDOLON program with a single performer, I don’t think the average CPU usage ever passed 20%, so it might work. The new program is much more complex than the first version, but I think I’ve improved my scripting technique quite a bit since then, so perhaps it will be efficient enough to bear the load!
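To make the first two points above a little more concrete, here is a rough sketch of the state-variable idea combined with a growable memory list. This is a hypothetical Python illustration only (the EIDOLON itself is not written this way); the class name, the window size, and the thresholds are all invented for the example.

```python
import random
from statistics import mean

# Hypothetical sketch: a behavioural state is chosen by comparing the
# performer's recent trend against the trend across the whole performance.
STATES = ["support", "contrast", "ignore", "tacet"]

class StateSketch:
    def __init__(self):
        self.memory = []        # a growable list (not a fixed-size array),
                                # so the entire performance is retained
        self.state = "ignore"

    def log(self, activity):
        """Store one analysis value (e.g. a dynamic/static measure)."""
        self.memory.append(activity)

    def decide(self):
        """Pick a state from current vs. long-term trends (toy thresholds)."""
        if len(self.memory) < 8:
            return self.state                 # not enough history yet
        recent = mean(self.memory[-8:])       # current trend
        overall = mean(self.memory)           # trend over the whole performance
        if recent > overall * 1.5:            # performer much more active
            self.state = "contrast"
        elif recent < overall * 0.5:          # performer much more static
            self.state = "support"
        else:
            self.state = random.choice(["ignore", "tacet"])
        return self.state
```

The point of the list-based memory is simply that `memory` never has to be pre-sized: it grows for as long as the performance lasts, and `decide()` can always look at all of it.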
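The expandable process dictionary and the per-process use limits could be sketched in the same spirit. Again, this is a hypothetical illustration, not the actual implementation; `register`, `trigger`, and the use budget are invented names.

```python
# Hypothetical sketch: processes live in a dictionary so new ones can be
# registered freely, and each carries a use budget so that no single sound
# dominates the performance (less repetition, more linear form).

class ProcessBank:
    def __init__(self):
        self.processes = {}               # name -> [function, remaining uses]

    def register(self, name, func, max_uses=3):
        """Add a new process; the dictionary grows without limit."""
        self.processes[name] = [func, max_uses]

    def trigger(self, name, *args):
        """Run a process if it still has uses left; otherwise stay silent."""
        entry = self.processes.get(name)
        if entry is None or entry[1] <= 0:
            return None                   # unknown or exhausted
        entry[1] -= 1
        return entry[0](*args)

bank = ProcessBank()
bank.register("granulate", lambda sig: f"granulated({sig})", max_uses=2)
```

After two triggers, `"granulate"` is spent and further calls return nothing, which is one simple way to keep a sound from recurring all evening.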
ALSO: I’ve been working on mapping my Monome – the weeks (months?!) since my last post have been very busy!! Monome is a company that makes hand-made OSC controllers with a minimalist design and deliberately simple interactivity. I have an older Grid model with 256 buttons; each button sends three values: an x coordinate, a y coordinate, and a 0 or 1 (release or press). In order to find expressive ways of mapping such a simple instrument, I’ve chosen to design processes that have only three modulatable parameters, which I can control with two independent tap tempos (whose values can be modulated by a decimal shift left or right) and a one-shot trigger.
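The tap-tempo-plus-decimal-shift idea can be sketched like this. This is a hypothetical illustration of the behaviour described above, not the actual mapping code: the interval between the last two taps becomes the parameter value, and a decimal shift multiplies or divides it by ten.

```python
# Hypothetical sketch: a tap tempo whose value can be nudged by shifting
# the decimal point left or right, as described in the post.

class TapTempo:
    def __init__(self):
        self.last_tap = None
        self.interval = 1.0          # seconds; default before two taps exist

    def tap(self, now):
        """Register a tap at time `now`; two taps define the interval."""
        if self.last_tap is not None:
            self.interval = now - self.last_tap
        self.last_tap = now

    def shift(self, direction):
        """direction = -1 shifts the decimal left (/10), +1 right (x10)."""
        self.interval *= 10 ** direction

tempo = TapTempo()
tempo.tap(0.0)
tempo.tap(0.5)       # interval is now 0.5 s
tempo.shift(-1)      # decimal shift left -> 0.05 s
```

The appeal of the decimal shift is range: the same two-finger tap can set a value and then jump it an order of magnitude in either direction, which is a lot of mileage from very few buttons.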
I’ve also split the 16 x 16 grid into 4 identical channels so that I can perform with multiple inputs. Each channel has an input, slots for 33 distinct processes (which each have 3 preset “levels”), 3 slots for buffers which can be recorded into, 3 processes (x 3 preset “levels”) for each buffer, an output, and a mute button. The control modules have level controls for the synth presets, two tap tempos, a one-shot trigger, a mod key (for updating synth arguments), two decimal modifiers, a fadetime control (for synth envelopes, crossfading, etc.), and a volume control.
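Routing a raw grid message into one of the four channels might look something like the sketch below. The actual layout is the one described above and is the author's own design; purely for illustration, this sketch assumes each channel occupies a horizontal 4-row strip of the 16 x 16 grid.

```python
# Hypothetical sketch: map a raw Monome message (x, y, state) to one of
# four identical channels, ASSUMING four 4-row strips (illustration only).

ROWS_PER_CHANNEL = 4

def route(x, y, state):
    """Return (channel, local_x, local_y, pressed) for a grid message."""
    channel = y // ROWS_PER_CHANNEL      # 0..3 on a 16-row grid
    local_y = y % ROWS_PER_CHANNEL       # row within the channel's strip
    return channel, x, local_y, state == 1

# e.g. a press at column 3, row 9 lands in channel 2:
print(route(3, 9, 1))   # (2, 3, 1, True)
```

Once every message carries a channel index, the four channels really can be identical: one handler serves all of them, which is the payoff of splitting the grid this way.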
I’m very much looking forward to seeing the performative and expressive capabilities of this controller; I’ve put a lot of thought into its ergonomic and visual aspects, and I’m hoping that it will feel like a “real” instrument to me as much as it looks like I’m playing a “real” instrument to the audience.