Principle of Musical Decoupling

Or: How to Make Musical Systems More Flexible and More Complex


In the software world, the term "coupled" indicates that two components are dependent on one another: you can't change one without impacting the other. Decoupling two components pulls them apart in such a way that they can be changed independently, which has many benefits: increased flexibility of the overall system, easier maintenance (fixing a bug in one component doesn't introduce a new bug in another), and easier reuse of components in different contexts. Decoupling involves more work and often more complexity, but the payoff is worth it in all but the smallest systems.


My Principle of Decoupling is this: by striving for independence between the components of a system, we gain the flexibility to do things that would otherwise be too time-consuming or difficult.

How does this principle apply to music? Let's start by taking a look at time. Music is the arrangement of sounds in time, and a composer's job (typically) consists of placing musical events on a timeline. A very coupled way to do this is to define 0 seconds as the start of the piece and let the composer specify things like:

Play a note at 1.5 seconds
Play another note at 2.0 seconds

There is little flexibility here because the musical events are tightly coupled to an actual "real world" timeline.

It is useful in music composition to decouple events from actual time. Instead of "1.5 seconds", the composer specifies an abstract time like "the 3rd beat of the first measure". Then a tempo for performance can be specified independently of the timeline. This increases flexibility: the same musical events can be performed fast or slow, or even at continuously varying tempos, without changing the original composition. This decoupling does introduce some complexity, because the composer needs to think about tempo in addition to placing events on the timeline, but the trade-off is definitely worth it: almost every piece of music uses this type of decoupling.
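
To make this concrete, here is a minimal sketch in Python of how beat-based event times can be converted to real time. The event data and the beat_to_seconds helper are my own illustration, not part of any particular music system:

    # Events are placed at abstract beat positions, decoupled from real time.
    events = [(0, "C4"), (1, "D4"), (2, "E4"), (3, "F4")]

    def beat_to_seconds(beat, tempo_bpm):
        # At 120 BPM, one beat lasts 0.5 seconds.
        return beat * 60.0 / tempo_bpm

    # The same composition can be performed at any tempo:
    for tempo in (60, 120, 180):
        print(tempo, [(beat_to_seconds(b, tempo), p) for b, p in events])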


Here are two methods for applying the principle of musical decoupling:

  1. Introduce compositional abstractions around the physical properties of sound
  2. Take any two properties of a sound that are normally specified together and specify them independently


We just looked at an example of method #1: Time is a physical property of sound. It's intrinsic to the physical laws of the universe. We can't change the flow of time. What we can do is introduce an abstraction around time - beats, measures, and tempo - to decouple compositional ideas from the resulting physical sounds.

Another example of method #1: Introduce an abstraction around pitch. Typically a composer selects a set of pitches from a particular scale for use in a composition, which couples the composition to the chosen scale. If a melody needs to be switched from a major scale to a minor scale, the notes would need to be rewritten. One option for decoupling pitch is to compose with abstract symbols instead of standard music notation pitches. For example, the numbers 1, 2, 3, 4, 5, 6, 7, 8 could represent an octave of ascending pitches in an arbitrary 8-note scale. Any 8-note scale could then be assigned to the numbers, resulting in a completely different sound.
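
Here is a sketch of that idea in Python. The melody is written as abstract degrees, and the scale spellings (one octave of C major and one of C harmonic minor) are just illustrative choices:

    # A melody written as abstract scale degrees, decoupled from any scale.
    melody = [1, 3, 5, 8, 5, 3, 1]

    # Any 8-note scale can be bound to the degrees. These spellings are
    # illustrative: one octave of C major and one of C harmonic minor.
    c_major = ["C4", "D4", "E4", "F4", "G4", "A4", "B4", "C5"]
    c_minor = ["C4", "D4", "Eb4", "F4", "G4", "Ab4", "B4", "C5"]

    def realize(degrees, scale):
        # Degree 1 maps to the first pitch of the scale.
        return [scale[d - 1] for d in degrees]

    print(realize(melody, c_major))  # the melody in C major
    print(realize(melody, c_minor))  # same melody, completely different sound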


What about method #2? A sound event (e.g. a single note) must be specified by many properties. Common properties are onset time (placement in the timeline / the rhythm), timbre (instrument choice), duration, and intensity. In computer music, most synthesis techniques require many additional properties to describe a single sound event. Usually these properties are specified together, so they are coupled.
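
A minimal sketch of such a coupled representation in Python (the field names are my own illustration):

    from dataclasses import dataclass

    # A coupled representation: every property of every sound event
    # is spelled out together in one place.
    @dataclass
    class Note:
        onset: float     # placement in the timeline, in beats
        pitch: str       # e.g. "C4"
        duration: float  # in beats
        intensity: str   # pp, p, mp, mf, f, or ff

    phrase = [
        Note(0.0, "C4", 1.0, "p"),
        Note(1.0, "D4", 1.0, "mp"),
        Note(2.0, "E4", 1.0, "mf"),
    ]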

For simplicity, let's consider only two properties: pitch and intensity. I'll use standard notation for intensity: pp, p, mp, mf, f, ff (very soft to very loud). Consider an ascending melodic line with a crescendo:

(C4 p) (D4 mp) (E4 mf) (F4 f) (G4 ff)

What if we want to reuse this 5-note sequence but apply a decrescendo instead of a crescendo? What if we want to apply the same intensities to a different set of notes? As long as pitch and intensity are coupled, every unique pitch+intensity pair must be written out from scratch.

We can decouple the properties and specify them independently:

P1 = C4 D4 E4 F4 G4
P2 = G4 F4 E4 D4 C4
P3 = C4 D4 E4 D4 C4

I1 = p mp mf f ff
I2 = ff f mf mp p
I3 = p mp mf mp p

From these we can construct 9 different 5-note patterns (P1+I1, P1+I2, P1+I3, P2+I1, ... P3+I3): 45 sound events generated from a list of just 30 property values. If we had a similar set of 3 patterns for duration, we could construct 27 different 5-note patterns: 135 sound events from 45 property values. Each independently specified property multiplies the possibilities, so the number of constructible sound events grows exponentially with the number of decoupled properties.
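
Here is a sketch of this combination step in Python, using the pattern data from the lists above:

    from itertools import product

    pitches = {
        "P1": ["C4", "D4", "E4", "F4", "G4"],
        "P2": ["G4", "F4", "E4", "D4", "C4"],
        "P3": ["C4", "D4", "E4", "D4", "C4"],
    }
    intensities = {
        "I1": ["p", "mp", "mf", "f", "ff"],
        "I2": ["ff", "f", "mf", "mp", "p"],
        "I3": ["p", "mp", "mf", "mp", "p"],
    }

    # Every pitch pattern can be paired with every intensity pattern:
    # 3 x 3 = 9 patterns, each yielding 5 sound events.
    for (pname, p), (iname, i) in product(pitches.items(), intensities.items()):
        print(pname, "+", iname, "=", list(zip(p, i)))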

What if the patterns are different lengths? Say we have these patterns:

C4 D4 E4 F4
p mf f

If we use them to construct sound events, we'll run out of intensities before we use all the pitches. No problem, let's loop the patterns until we get back to where we started:

(C4 p) (D4 mf) (E4 f) (F4 p) (C4 mf) (D4 f)
(E4 p) (F4 mf) (C4 f) (D4 p) (E4 mf) (F4 f)

And then we're back to (C4 p). From 7 property values, we were able to generate 12 sound events.
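
A sketch of this looping behavior in Python: the combined pattern repeats after the least common multiple of the two lengths, lcm(4, 3) = 12 sound events:

    from itertools import cycle, islice
    from math import gcd

    pitches = ["C4", "D4", "E4", "F4"]
    intensities = ["p", "mf", "f"]

    # Loop both patterns until their lengths realign, i.e. for
    # lcm(4, 3) = 12 sound events.
    length = len(pitches) * len(intensities) // gcd(len(pitches), len(intensities))

    events = list(islice(zip(cycle(pitches), cycle(intensities)), length))
    print(events)  # (C4 p) (D4 mf) (E4 f) (F4 p) ... (F4 f)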

Using decoupled patterns does not prevent coupling when it's desired. We just need to be more deliberate about it: if two properties should stay locked together, give their patterns the same length so they always line up exactly.


Hopefully it is clear that long, complicated patterns can be generated from a small number of simple, decoupled property sequences. When using a computer to explore compositional possibilities, decoupling in this way provides a very useful tool for generative music without the need to resort to randomization.


To see this idea in action, try out the "chaining" example with Cosy Online (it may help to look at the other examples first to understand what's going on).



Adam Murray, 2008