How do you introduce students to modular synthesis and are there common misconceptions you encounter?
One of the strengths of modular synthesisers is that they are incredibly customizable, but that can also present challenges when getting started – in a classroom with 20 students, you might have 20 completely unique modular setups. So, my approach to teaching modular synthesis always begins with universal foundations and principles rather than specific gear. I teach using a free virtual synth software called VCV Rack, because it’s an accessible tool that allows all of my students to start on the same page.
It’s also extremely valuable for students to focus on developing their listening skills and their own personal creativity, from the very beginning. It can sometimes be tempting to feel like you have to wait until you ‘know everything’ on a technical level before you start being creative and making music, but this will only slow down your growth. Even with just a few basic tools like oscillators, filters, and simple modulation or hands-on control, you can create a lot of beautiful music!
A significant misconception is that synthesis requires extensive technical expertise – that you need a background in computer programming or electrical engineering to succeed. Of course, synthesis is a technical topic, but anyone who is curious about sound and willing to put in some practice time is capable of learning it. Focus on the fundamentals first, build slowly, and in time all of the technical concepts will start to click.
What fundamental concepts do you think are essential for understanding modular synthesis – or sound synthesis more broadly?
I think it’s important to establish a solid foundation in the fundamental building blocks of synthesis: things like oscillators, control voltage, envelopes, clocks, gates and so on. These are established concepts that, once mastered, will enable you to create music in a broad range of styles on many different kinds of synthesisers.
One idea that can help to unlock a lot of flexibility in synthesis and sound design is: ‘a signal is a signal.’ Inside a typical Eurorack modular system, there’s no inherent difference between signals used for control voltage (modulation) and signals used for audio – the only distinction is the speed at which the signal fluctuates.
This flexibility extends to external inputs as well. When you bring a microphone signal, field recording, or other sound source into your modular system through a preamp, that signal can be treated just like any other in the system – used for modulation, processed through filters, or manipulated in countless other ways.
Understanding this interchangeability can help you move beyond what a specific tool is ‘supposed to do’ and toward developing a unique, personal approach to patching with synthesisers. Of course, blurring these traditional boundaries can be confusing at first. I always encourage students to start by learning the core functions of fundamental tools – but always to adopt a curious mindset about what else these modules can do.
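The 'a signal is a signal' idea can be made concrete with a minimal sketch in Python/NumPy (purely illustrative, not tied to any particular synth or module): the same sine-wave generator serves as an audio oscillator or as a modulator, and only its frequency decides which role it plays.

```python
import numpy as np

SR = 48_000  # assumed sample rate in Hz

def sine(freq_hz, seconds=1.0):
    """One sine-wave generator, usable as audio or as control voltage."""
    t = np.arange(int(SR * seconds)) / SR
    return np.sin(2 * np.pi * freq_hz * t)

carrier = sine(220.0)  # audio rate: heard as a pitch
lfo = sine(2.0)        # sub-audio rate: heard as movement

# Used slowly, the second signal acts as tremolo (amplitude modulation)...
tremolo = carrier * (0.5 + 0.5 * lfo)

# ...while the *same* patch with the modulator sped up into the audio
# range becomes ring modulation, producing sidebands rather than a wobble.
ring_mod = carrier * sine(110.0)
```

Nothing in the multiplication 'knows' whether its input is modulation or audio – the distinction lives entirely in the rate of the signal, which is the point of the principle.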
You’ve previously stated that you learnt modular synthesis to augment your primary instrument of the trumpet. Could you describe your approach to working with this acoustic and electronic hybrid?
When most people think about combining acoustic and electronic instruments, they often default to using electronics to process acoustic sound. With early pieces using MIGSI (Minimally Invasive Gesture Sensing Interface, which I developed with Ryan Gaston in 2014), I was interested in exploring other means of interaction between the trumpet and electronics that were less about altering the trumpet sound, and more about extracting meaningful gestural information from the trumpet to manipulate and activate the electronics. For example, an early composition called Pocket Fig (2015) used valve movements to trigger the playback of pre-recorded samples, and to randomize their duration and playback speed.
Similarly, my approach to integrating acoustic and electronic elements has never focused on pitch detection from the trumpet. Instead, I’ve focused on building systems that can detect the overall level of activity from the performer, or the amount of persistence with a specific gesture (i.e., pressing a valve over and over again), and use that data to guide the electronic processes. The electronics don’t solely rely on input from the trumpet – they typically have some degree of autonomy or unpredictability. The relationship that I’m interested in building is less about having the trumpet control the electronics, and more about giving the electronics a means by which to understand how I’m playing at any moment in time, so that they can respond in a more collaborative and potentially surprising way.
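The general idea of tracking a performer's activity level (rather than pitch) can be sketched as a leaky integrator over discrete gesture events: each event bumps the level, which otherwise decays over time. This is a hypothetical illustration of the concept, not the actual MIGSI implementation; the decay constant and time step are invented.

```python
def activity_level(event_times, decay_per_sec=0.5, step=0.01, duration=1.0):
    """Leaky integrator over gesture events (e.g. valve presses).

    Returns a list of (time, level) samples; the level rises with each
    event and decays exponentially between events.
    """
    events = sorted(event_times)
    level, t, i = 0.0, 0.0, 0
    samples = []
    for _ in range(int(duration / step)):
        level *= decay_per_sec ** step          # exponential decay
        while i < len(events) and events[i] < t + step:
            level += 1.0                        # each event bumps the level
            i += 1
        t += step
        samples.append((t, level))
    return samples

# Rapid repeated presses push the level high; sparse presses keep it low.
busy = activity_level([0.0, 0.05, 0.1, 0.15, 0.2])
calm = activity_level([0.0])
```

A signal like this says nothing about *what* is being played, only *how much* is happening – which is the kind of gestural summary that can guide semi-autonomous electronics without putting them under direct note-by-note control.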
Could you describe your process for developing new pieces?
My process for developing new pieces is very co-creative and emergent. Regardless of whether I’m composing a piece for myself or for others, one unifying aspect is that I always work with some element of improvisation, unpredictability, or surprise. When I’m composing for myself, I often start out by building a sound world or environment – this might be a modular synth patch that receives and responds to my trumpet input, or a collection of samples and a means by which to dynamically interact with them, and so on. At this stage in the process, I don’t make too many decisions about form, structure, or duration of the piece – I’m just collecting ideas, building interactions, and creating options for how the various materials might get combined.
From there, I’ll typically improvise within that sound world until some central ideas or themes start to emerge. The process of improvisation often sparks ideas for new sounds, or reveals the need to refine some interactions with the electronics. If I’m composing a new piece for myself to perform live, a finished score typically consists of a macro structure for the piece, and a number of specific musical or technical landmarks to hit, with open sections of improvisation in between.
Can you share any tips for creating synth patches that are expressive? I understand that you use the trumpet’s signal, but do you have other favoured methods?
Expressivity in synthesis is very personal and subjective, but for me, it centres on designing meaningful interactions and using control voltage in thoughtful ways. I find that expressivity comes from being able to turn on a dime – moving between subtle, nuanced gestures and wild, aggressive sounds – and developing a playful, responsive relationship with your instrument.
One exercise I give my students is to analyse how acoustic instruments behave. Consider a plucked violin string – there’s a complex relationship between how hard you pluck, the resulting volume, brightness, and attack characteristics. All these parameters are naturally coupled together in acoustic instruments, but in synthesis, these relationships need to be deliberately created. When I’m designing patches, I think about how one physical gesture can meaningfully influence multiple parameters simultaneously – perhaps making a sound both brighter and louder as you increase intensity, or having it become both shorter and noisier as it decays. This multi-dimensional or one-to-many approach to control mapping and sound design can create much more expressive and engaging instruments.
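This one-to-many mapping can be sketched as a single function that fans one 'intensity' gesture out to several coupled parameters, mimicking how a harder pluck is at once louder, brighter, and sharper. The parameter names and ranges here are invented for illustration, not taken from any particular synth.

```python
def map_intensity(intensity):
    """Fan a single control value (0..1) out to multiple parameters."""
    x = min(max(intensity, 0.0), 1.0)  # clamp the gesture to 0..1
    return {
        "amplitude": x,                        # louder with intensity
        "cutoff_hz": 200.0 + 7800.0 * x ** 2,  # brighter, non-linearly
        "attack_s": 0.25 - 0.24 * x,           # snappier attack when pushed
        "noise_mix": 0.1 + 0.4 * x,            # grittier at the extreme
    }

soft = map_intensity(0.1)
hard = map_intensity(0.9)
```

The curves matter as much as the couplings: making brightness respond quadratically while attack time shortens linearly, for example, gives the gesture a different 'feel' at the top of its range than at the bottom, which is where a patch starts to behave like an instrument rather than a set of knobs.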
Could you discuss your work in developing new interfaces for musical expression?
My work in developing new interfaces for musical expression began in 2014 with the design of MIGSI and has continued to be a central focus of my creative practice. My interest in this field is not solely about developing new technology, but also about forming a deep and personal performance practice with these new instruments and interfaces. For me, instrument and interface design feels like a natural extension of my creative practice – it feels compositional, improvisational, and emergent. In many ways, I think of MIGSI as a creative piece unto itself, rather than as a ‘tool’ that I use to create work.
In the age of the DAW, do you think the ability to endlessly edit and perfect music has affected creativity?
In my creative practice, I treat the DAW more like another instrument than just a recording and editing tool. I use it as a sketch pad for ideas, or as a way of generating inspiration for new projects. (Have you ever tried grabbing two random audio files from your computer and dropping them into your DAW to play them back simultaneously? Do it! The results are often fascinating.)
That said, while DAWs have certainly made many aspects of editing and mixing more accessible, they can sometimes overwhelm us with options and make it easy to avoid the ‘finish line’. I find it helpful to create intentional limitations, similar to how I approach modular synthesis. Just as I might restrict myself to certain modules or patch points, I might limit myself to specific tools or processes in a DAW to maintain focus on the core musical ideas.
It’s important to remember that these are just tools in service of your creative vision. Just because we can endlessly edit doesn’t mean we should – oftentimes those ‘imperfect’ moments are what give the music its character. I always encourage my students to focus on what they want to express creatively rather than trying to achieve some external standard of perfection.
Have you got strategies for encouraging your students to embrace imperfection and experimentation?
A central concept in my teaching is what I call ‘imperfect action’ – not waiting until you feel completely ready, but starting with whatever tools and understanding you currently have. This connects deeply with Pauline Oliveros’s Deep Listening practice, which has been hugely influential in my approach to both making and teaching music. When you’re truly engaged in deep listening, you begin to hear the beauty in unexpected sounds and moments that might otherwise be judged as ‘mistakes.’
In my classes, whether teaching synthesis or improvisation, we spend a lot of time discussing what perfection even means. Often, the most impactful aspects of music come from those moments of surprise or uncertainty. This is something I’ve experienced countless times in my own practice – those instances where an unexpected sound or ‘error’ leads to a completely new direction.
Are there new or emerging developments in synthesis and music production that you find exciting?
I’m particularly excited by the increasing accessibility of tools for spatial audio composition and performance. In my recent work, especially pieces like Manifold (for quadraphonic sound) and Sublimate (which was composed in a fifth-order ambisonic dome), I’ve been exploring how electronic sound can move through space in ways that acoustic instruments can’t. This creates another dimension in the relationship between acoustic and electronic elements – not just how they interact sonically, but how they can create and inhabit different spaces.
There are also many potentially interesting developments in the fields of machine learning, AI, and music information retrieval that could help to strengthen opportunities for human-machine interaction. While I’m interested in staying abreast of new technological advances, my personal creative practice isn’t necessarily motivated by working with cutting-edge technology for its own sake. Sometimes my creative interests require learning new technology; sometimes it’s more appropriate to use very old instruments; sometimes I reach for common, commercially available tools, and sometimes I find myself designing something entirely new that doesn’t exist yet.
