Bending Reason: Microtones in the Virtual Realm

The problem with microtones and electronic instruments, as I understand it, is that these instruments are still being created in a cultural context where 12 notes per octave is the standard. Their design reflects that assumption: the social construction of harmony and tonality is baked in from the start. Hardware shows this most clearly, but software can have just as many restrictions, if not more, when it’s coded and designed with that 12-tone goal in mind.

My attempts at getting Reason to be more flexible with its pitch have been somewhat difficult. From the start, this software was only ever meant to construct musical sequences and phrases with 12 notes in mind, a serious problem for those devising systems with more than that. I’ve held off on doing much microtonal work in this software for that reason, but I’m starting to find some creative ways to “hack” Reason’s existing automation features to reach the in-between notes. I devised this approach early on, but it’s quite labor intensive, so I’ve usually taken the easier route of writing in 12-tone in order to access more of what the software has to offer.

Since I enjoy working in the 12 tone ultra plus system, my personal approach to microtonality and just intonation starts in 12-tone equal temperament and asks which existing notes can be “tweaked” to fit a just ratio, both to provide more harmonic tuning and to expand the tonal palette. The major and minor thirds are the obvious candidates: flatten an E relative to C by 14 or so cents and you arrive at the 5/4 ratio. To my ears, 12-tone equal temperament does fine with the melodic essence of the third and fifth harmonics, so I don’t often bother with those as much, but it plainly doesn’t incorporate the 7th, 11th, or 13th harmonics. Those primes offer melodic and harmonic options that really don’t exist in traditional Western music. I will concede that 7 sometimes pokes its head into the mix, and I would argue that many people reach for a minor 7th interval with the 7th harmonic in mind without being aware of it. By altering existing 7th, 4th, and 6th intervals in 12-tone, you can land on those primes relative to your originating pitch, which is what I’m doing in this example.
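To put numbers on that, the deviation of a just ratio from a nearby 12-tone interval comes from the standard cents formula, 1200 × log2(ratio). Here’s a quick back-of-the-envelope sketch in Python (nothing Reason-specific; the interval labels in parentheses are just my shorthand):

```python
import math

def cents_from_ratio(ratio: float) -> float:
    """Size of a just ratio in cents: 1200 * log2(ratio)."""
    return 1200 * math.log2(ratio)

# How far some just ratios sit from a nearby 12-TET interval.
# Positive = the just pitch is sharper than 12-TET; negative = flatter.
just_vs_tet = [
    ("5/4  (major third)",    5 / 4,   400),
    ("7/4  (minor seventh)",  7 / 4,  1000),
    ("11/8 (wide fourth)",   11 / 8,   500),
    ("13/8 (neutral sixth)", 13 / 8,   800),
]

for name, ratio, tet_cents in just_vs_tet:
    offset = cents_from_ratio(ratio) - tet_cents
    print(f"{name}: {offset:+.1f} cents from 12-TET")
```

Running this gives roughly -13.7 cents for 5/4 (the “14 or so” above), -31.2 for 7/4, +51.3 for 11/8, and +40.5 for 13/8, which is the size of the bend each altered interval needs.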

Essentially, the approach was to take the maximum bend value of the pitch wheel (when set to a half-step range) and scale it by percentage values taken from just intonation cent tables. From there I knew what value the wheel needed to reach in order to bend the note to the right pitch; all I had to do was program the automation to trigger it wherever the target note appeared in the sequencer. It would seem that some of the newer Reason instruments use percentage values for the pitch range, which makes things much easier.
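As an illustrative sketch of that arithmetic: I don’t know Reason’s internal automation scale offhand, so this assumes a standard 14-bit MIDI pitch wheel (0 to 16383, centered at 8192) with the instrument’s bend range set to one half step (100 cents). The function names here are my own, not anything from Reason:

```python
def wheel_value(cents: float, bend_range_cents: float = 100.0) -> int:
    """14-bit MIDI pitch-wheel value for a given cents offset.

    Assumes the standard MIDI wheel (0..16383, center 8192) and a bend
    range of +/- bend_range_cents; Reason's own scaling may differ.
    """
    value = 8192 + round(cents / bend_range_cents * 8191)
    return max(0, min(16383, value))  # clamp to the wheel's limits

def wheel_percent(cents: float, bend_range_cents: float = 100.0) -> float:
    """Same offset expressed as a percentage, for instruments that take one."""
    return cents / bend_range_cents * 100.0

# Flatten a 12-TET major third by ~13.7 cents to land on 5/4:
print(wheel_value(-13.7))    # -> 7070
print(wheel_percent(-13.7))  # -> -13.7 (% of a half-step bend)
```

With a half-step bend range the percentage conveniently equals the cents offset, which is presumably why the newer percentage-based instruments make this so much easier.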

I kept it simple for this example. There are three synthesizer loops, all using Reason’s Subtractor synth (pad, bass, lead), accompanied by some recordings of loons in Yellowstone National Park and that weird sound that was coming out of my amp last week. The problem is that performing and improvising with these instruments is very difficult, given everything that has to be done to get them to play microtonal sequences and melodies. What this is forcing me to do, however, is explore more ways to use the non-audio parts of the instruments to create unique sounds, since it’s a lot simpler to stick to basic melodic patterns and sequences.

There are likely a few other ways to achieve this effect, but this makes the most sense to me and allows me to use a single VST at a time.

It’s very basic for now, but I think I’m on to something with this approach. It will be interesting to see how I can combine this with some guitar layers, or perhaps build out an entire electronic project. And if it sounds weird… well… that’s the point.

Let it Ring

As I wrote about earlier, pushing the guitar beyond the sound space it occupies natively (that is, the essence of strings pressed against a surface and amplified acoustically or electrically) has led me to find more creative ways to achieve not just ambience, but dynamic ambience. That’s the kind of ambience we tend to expect from electronic instruments, whose sequencers and other automated processes let the musician operate between the boundaries of composition, experimentation, and performance (and no, I’m not just referring to improvisation).

This has been one of those discoveries: letting the ebow rest on the strings (lapsteels are best) and running the sound through a variety of effects that either create unique sounds over time (delays, and stacks of delays, for example) or can be manipulated as they play. Plus, with your hands free, you can directly adjust and manipulate effects without needing to keep the strings ringing with a pick or your fingers.

This is also a fantastic way to burn through 9-volt batteries on the ebow.

Sometimes I Just Want to Abandon the Guitar

I started messing around with electronics very early on; I was fortunate enough to have a variety of instruments in the house, including some synths and electronic equipment, so I’ve always had contact and connection with electronic instruments and music. The truth is, though, electric guitarists are always somewhat connected to the concepts of electronic music to begin with when it comes to effects and processing. But there are still some marked differences, enough to make me often think about quitting guitar altogether.

Now that may seem like a very melodramatic thing to say, and it is; I am somewhat of a melodramatic person to begin with. But it isn’t all that crazy or extreme: I have some very good reasons for reconsidering whether I’ll keep playing guitar in the future.

As I said, electronic instruments have always been “around,” and I have always loved the sounds of these instruments, but I’ve been equally in love with their possibilities. Combining different sounds gives a level of control over the sonic space, in a very individualized way, that I greatly desire (I do, after all, perform solo ambient guitar, with the instrument merely as a lens to access the sounds I want to express). So, some might suggest, why not just do both?

Well, I could (and often do) use both. The problem, however, is what the result ends up being, and it always comes back to a serious question about my music: if electronic instruments give me greater access to the music I make, then maybe I should just leave the guitar be. I can spend hours and hours crafting a guitar technique or tone only to scratch the surface of a tonality or musical approach that I could achieve in minutes on some kind of synth, getting past that barrier quicker and doing more with the results.

The thing is, though, that the process of building these sounds out on the guitar, even when they are emulating processes and approaches from synthetic instruments, really pushes me to think of the guitar in a new way and produces interesting results that I might not have come across had I jumped straight into synths. And this, by the way, is quite similar to what many other musicians already do. There’s a long history of guitar players thinking about solos from the perspective of, say, the saxophone or the keyboard. Paul Gilbert has often talked about “thinking like a drummer” in his approach. So this isn’t anything new.

So yeah, I probably won’t give the guitar up, but goddamn do I find myself continually looking at electronic instruments and wondering, “what if?” And, I should say, it’s increasingly likely that electronic stuff will continue to find its way into my music, so maybe I will get the best of both worlds in the end anyway.