Modulisme 129

Scot Gresham-Lancaster

Conception - Layout : P. Petit / Cover Art : CLH

In the 1970s, Scot Gresham-Lancaster worked closely with Serge Tcherepnin for a few years, helping with the construction and distribution of his Serge Modular Music System, and then went on to work at Oberheim Electronics.
In the early 1980s, he was the technical director at the Mills College Center for Contemporary Music while teaching at various universities.

He is a composer, performer, instrument builder, educator and educational technology specialist who uses computer networks to create new environments for musical and cross-discipline expression. As a member of The Hub, he is one of the early pioneers of “computer network” music, which uses the behavior of interconnected music machines to create innovative ways for performers and computers to interact.
He performed in a series of “co-located” performances, collaborating in real time with live and distant dancers, video artists and musicians in network-based performances.

You have been a pioneer of Modular Synthesis and have been touring a lot; would you please retrace your career?

This is going to be a long answer, but I will try (and probably fail) to be concise.
Growing up on the San Francisco peninsula in the 1960s was a particularly opportune circumstance. The burgeoning psychedelic movement was all around us teens. In 1969 I ended up at a “Happening” with the newly renamed “Grateful Dead” and the Merry Pranksters, and most specifically Ramon Sender working with Don Buchla, playing what was then labeled the “San Francisco Tape Music Center” modular with tape loops and real-time signal processing. I still have a few Buchla modules silkscreened with that label. Also, Will Jackson provided my first exposure to the early Serge module.
In 1971, I was enrolled in the Lenkurt Electronics student electronics training and learned the fundamentals of tube and transistor design, amid much excitement about the new “integrated circuits,” or ICs. My music director at San Carlos High School, Mike Ryan, was very knowledgeable about contemporary music and asked Bob Sheff (aka Blue Gene Tyranny) of the newly created Center for Contemporary Music (CCM) at Mills College to come give a presentation on the Moog IIIp modular that they had just acquired. CCM was a transplant of the original Tape Music Center from Divisadero to Mills College, in a deal worked out by Morton Subotnick and Music Dept. chair Margaret Lyons. So Blue Gene (before he adopted that moniker) was showing us high school juniors the basics of modular synthesis, and I was the pesky kid who stayed after and asked way too many questions. Blue said, “We have to go now, kid, but why don’t you come by the CCM and you can spend some time in the studio with the Moog on your own.”
The next week I was there, and Blue Gene let me into the room with the Moog and closed the door behind me without even telling me what a trunk line to the amps was, or really anything. So I started patching away, and after a frustrating 10 minutes I went and found him. He showed me where to plug in to get a sound, but he left immediately, so I was on my own. I returned many times and slowly got a pretty complete understanding of both the Buchla 100 (which is now in the Smithsonian collection) and that same Moog IIIp.

The Buchla 100 from Mills CCM

At that same time many of the students were building the newly minted “People’s Synthesizer,” also known by the hard-to-remember name of the “Tcherepnin” synthesizer. These legacy modules are now known as “Serge” modules. I am going to mention the obscure composer Tom Zahuranec, since he was an amazing influence and is lost to history. Here is a pointer for interested parties. I mention him because of his piece that hooked galvanic sensors (lie detectors) to a broad-leafed philodendron plant on stage and passed the voltages from the plant to the Buchla 100 modular. The audience was encouraged to use their psychic ability to influence the music. It being the late 60s, everyone was laser-focused and listening for their own interaction with this peculiar setup. This concert was very influential on me.
Another pivotal early experience was seeing the Pulsa extravaganza at Mills in 1971, which was mind-bogglingly good and, I came to find out later, a huge influence on Serge Tcherepnin as well.
My next electronic foray, while continuing to go to Mills consistently, was a year at SF State University studying with Herb Bielawa, using their Buchla 100 and a small early Serge two-panel system, and doing a lot of work with the Ampex tape decks, looping and splicing.
In 1976 I was traveling across the country and ended up in Richmond, VA, where I fell in love with Kathryn, my now wife of 47 years. She lived on Franklin Street, a block from the Virginia Commonwealth University (VCU) Electronic Music Studio, which was directed by Loran Carrier. He was in need of technical support and had just gotten a $30k NEA grant to buy a computer and more modules for the studio. He hired me. There were a bunch of Electrocomp 101s (meh), but the main studio had a significant Emu modular, which I had never worked with. It was a great modular with unique filters. I convinced Dr. Carrier to buy a 10-panel Serge Modular from the newly formed post-CalArts Serge company on Western Ave in LA. I reached out to Serge, introduced myself by way of Blue Gene, and made the deal (which included a much-needed 10% commission that I banked for my first panel). I also built an IMSAI 8080 S-100 bus system with a Cromemco D/A converter board for 8-bit CV out. 1977, not bad. We were programming it in assembly language one byte at a time … you youngsters can get off the lawn LOL

That Serge system, which was bought by O’Rourke later on…

Next, in 1978, back in Santa Cruz and working in the library at UCSC, I was hanging out with David Cope and got access to their studio. Carl Fravel was there, working on a very interesting pitch-to-CV converter that he was building for the Serge Modular system. This got me back in contact with Serge again as we were moving to LA. He offered my wife and me a job at the “factory” above the wig shop on Western Ave. While in LA I bought a 6-panel Serge system with 5 panels already completed and made the new aluminum-faced panel there. So I had a full and robust 6-panel modular that was an amazing instrument.
This is where I met Jill Fraser, Paul Young, Kevin Brahaney, Darrel Johansen, Gary Chang, Eric Drew Feldman, Peter Grenader, and really met Serge in person for the first time. That was an amazing set of people to get to know.
Jill and I did a new wave band for a while, “Science,” which for 1979 was way ahead of the live-performance curve, with a 6’4” transsexual lead singer, Jane Gaskill. I played my bass through my Serge panel for effects on a few tunes, but Jill Fraser was (and is) simply amazing with live electronics on stage.

Science (with Jill Fraser, Mark Nine and Jane Gaskill)

The money from Serge’s wonderful “factory” was not on a par with what I could be making with my electronics skills, so I moved on to work for a brief time in the nascent solar industry before landing a job at Oberheim. This was the period when the OB-X was just rolling out. They put me in charge of getting the “Standard Products” going. These are the now-coveted white-face SEM modules that, for example, make up Lyle Mays’ amazing Oberheim Eight Voice. At the time I was on the phone a lot with Dave Rossum at Emu, since he had subcontracted the polyphonic voice source for the “Oberheim Eight Voice,” which, pre-microcomputer, was pretty involved, with TTL digital logic.
I was offered a job in late 1980 at “Audio Works” in Marin, doing studio maintenance at the Record Plant in Sausalito and general Bay Area high-end synth and studio repair. It was about this time that there was a job opening for the Technical Director of the Mills CCM, my old stomping ground. There had been a sea change: Robert Ashley was leaving and David Rosenboom was moving in. So in 1981 I got that position, and it really was a major life-changing opportunity.
I worked at Mills CCM as technical director from 1981 until 1987 but remained closely associated with the place from then on, collaborating and performing at Mills almost every year for four decades. While I was technical director I got a chance to work closely with a wide range of amazing composer/performers, most importantly Pauline Oliveros and Alvin Curran, but also Anthony Braxton, Terry Riley, Alvin Lucier, Iannis Xenakis, John Cage, Sal Martirano, Ken Gaburo, James Tenney, etc.
David Rosenboom had been working closely with Don Buchla on the FOIL language and the Touché, and so I re-met Don at that time and worked for him on various projects throughout the 1980s and ’90s.
I was doing a lot of solo and ensemble performance with my Serge and early computer control throughout this period. I should mention the obscure but interesting D-SEQ, an early rhythmic trigger source developed by the technical coordinator of Serge Modular, Darrel Johansen. It was written in assembly language on the small Synertek SYM SBC. Serge and company had moved to their Haight St. location in 1982, and Darrel was thinking this might evolve into a module. I used it extensively on the piece in this Modulisme release, “Allegory of the Beach Whale.”

Darrel Johansen’s D-SEQ

It was around 1985 that a collective of composer/performers put on performances influenced by the earlier work of the “League of Automatic Music Composers.” This led to the “Network Muse” Festival in 1986 at The LAB in San Francisco. “Instead of creating an ad-hoc wired connection of computer interaction, they decided to use a hub – a general purpose connection for network data. This was less failure-prone and enabled greater collaborations” (from Wikipedia). That first “HUB” was the same Synertek SYM-1 6502 single-board computer, programmed by Phil Stone with hardware by Tim Perkis.
This became decades of touring and recordings. For those with an interest, there is an entire book and 3-CD set on John Zorn’s Tzadik label. That is an entire body of work that I won’t go into, but in this context it created a major change in my technical configuration for live performance. I will admit that Mills, and later my work as lecturer and support technician at California State University Hayward (now East Bay), barely paid the bills, and so I made a very difficult and, to this day, regret-filled decision. In 1986 I sold my 6-panel Serge modular to a graduate student with the proviso that if he were ever to sell it, he would give me first dibs. I found out later that in 2001 my former student sold my system, without contacting me, to the up-and-coming electronic musician Jim O’Rourke … rrrr, it still bothers me.
Luckily, I had bought the unit I still use quite a bit to this day, one that I was peripherally involved in the design of: an Oberheim Xpander.

Scot with his Oberheim Xpander

In 1990 I began working closely with Alvin Curran (of MEV fame), designing very early interactive Max patches for interaction with live acoustic players. These were very early days for Max, before MSP, but we used MIDI to control early Akai samplers, a Lexicon PCM 70 and my Xpander. The remaining vestige of that work is the album with the Rova Sax Quartet, “Electric Rags II.” Alvin and I also did a bunch of performing together with his piece “Shofar.”
In 1996 my ongoing work with Pauline Oliveros led to an amazing, extremely analog piece. “Echoes from the Moon received several further realizations: a 1996 performance … during a lunar eclipse at California State University, Hayward, in which approximately four hundred people lined up to ‘touch the moon’ with their voices” (G Douglas Barrett, from this link). I had arranged for Pauline’s mic on stage to be hooked up to a then-dormant radio telescope behind Stanford, to bounce radio waves off the moon and pick up the return. It was a wonderful experience for the audience members who were able to take the mic and hear their voices return from the moon about 2.5 seconds later (the round trip at light speed). Cool piece.
The HUB took a hiatus around 1997, and Tim Perkis, Chris Brown and I formed “fuzzybunny.” This was a hard-to-define ensemble that remains one of my favorites. We released a self-titled first CD on Sonore records from France. This group made amazing use of a wide variety of electronics. It looks like Chris is just playing keyboards and I am playing guitar, but that isn’t what is going on. We are interacting with live MIDI processes and effects, in my case without computers, just one small Serge panel, my Xpander and a Line 6 programmable guitar effects pedal.
I had been going to STEIM in Amsterdam throughout the 1980s and ’90s, often performing on the “Crackle Box” that the amazing Michel Waisvisz of STEIM had given me.
This led to ongoing work with composer Bert Barten and others on the Talking Trees project, which is still active. It involves taking voltages directly from trees adjacent to outdoor performance spaces and using those voltages to control, via real-time interaction, a large Eurorack modular.

The CrackleBox

Throughout the 2000s I did a series of pieces that Steve Bull and I called “Cellphonia.” These were “cellphone operas” that sometimes had modular interactivity as a component of their construction, but were primarily network-based works.
After working with David Tudor, I was introduced to Mark Holler, who had designed and built Tudor’s spectacular “Neural Net” Synthesizer. This one needs to be heard to be believed. Put simply, it is one of the strangest and most beautiful synthesizers I have worked with. This instrument was light-years ahead of the current LLMs of AI: an analog neural network on a chip.
Pauline Oliveros and I worked together at the very beginning of what became known as “telematic performance” and some of that work is important to note because the performances often had a tying thread of modular synthesis that acted as unifying drones between distant spaces.
My work over the last decade has been twofold. While I was a professor at the University of Texas at Dallas, I became close with the theater professors there and did extensive work with them.
After 40 years in Oakland, California, we have moved to Brunswick, Maine, and I have already begun new East Coast adventures. I did a duo of interactive electronics with saxophonist Titus Abbott. Last year I created an 8-channel sound installation at the Maine Maritime Museum that was a direct sonification of the migration of whales in the Gulf of Maine across a single year, where each day was a 4-second interval.
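The day-to-timeline mapping described above can be sketched roughly as follows; the data, value ranges and pitch mapping here are made up for illustration and are not taken from the actual installation:

```python
# Hypothetical sketch of the day-to-timeline mapping described above:
# each day of the year occupies a 4-second window, so a full year lasts
# 365 * 4 = 1460 seconds (about 24 minutes). The toy data and the linear
# pitch mapping are illustrative assumptions only.

SECONDS_PER_DAY = 4.0

def day_to_time(day_of_year):
    """Start time (seconds) of the window for a given day (1-365)."""
    return (day_of_year - 1) * SECONDS_PER_DAY

def value_to_pitch(value, lo, hi, midi_lo=36, midi_hi=84):
    """Linearly map a data value into a MIDI pitch range."""
    span = (hi - lo) if hi != lo else 1.0
    return midi_lo + (value - lo) / span * (midi_hi - midi_lo)

# Toy "sightings per day" data: (day_of_year, count)
sightings = [(1, 0), (100, 12), (200, 30), (365, 3)]
counts = [c for _, c in sightings]
events = [(day_to_time(d), value_to_pitch(c, min(counts), max(counts)))
          for d, c in sightings]
```

Each resulting event is a (start time, pitch) pair placed on the ~24-minute timeline.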
Believe it or not, I glossed over a whole ton of other projects and collaborations, but I am sure that is plenty.

I often think that in the 60s, and still in the 70s, when electronics were still in their infancy, anything seemed possible. You can sense in the creative work of that period that composers let themselves be surprised by the new. It seemed possible to invent in those days. How do you feel about that?

I came from this world of experimentation and wonder.
I have never been a user of drum machines, or, for that matter, of regular rhythmic material in general. I get the attraction, and the ease of falling into that, but for me the surprise and sense of fresh new sound, with electronic sounds and effects on acoustic instruments, is the reactive potential that is created by the interaction. The micro-environments created by that interaction are the most precious and new part of this new musical world.
For me, four on the floor just kills all that and makes the sound about repetition and predictability. This is just a matter of personal taste, but I enjoy the interaction and immediacy of unconstrained sounds.
There was this wonderful moment when we had spent two full days wiring up all 26 channels of Sal Martirano’s “Sal-Mar” Construction. We walked up to it and touched one of the unlabeled silver touch buttons, and the whole thing erupted into a cacophony of swirling multidimensional sound. Sal wasn’t there, so we spent the next 10 minutes trying to tame the sound down. Finally, Sal walked in laughing, reached over and touched one unlabeled button, and the whole thing was silent.
“It’s like flying a school bus full of delinquent children through an ever-changing landscape,” he said.
I had this same experience with Tudor’s analog Neural Net synthesizer, which I used on the piece in this collection, “Culture of Fire.” I have used it as a guiding principle for decades now.

Then, in the 80s, a certain notion of control emerged, a desire for synthesis that would allow the instruments of the orchestra to be remade. A desire for classicism? What was that like for you?

I sort of commented on this before, but this fake formalism is tiresome to me. It has been a source of internal conflict since moving to the East Coast, since I am fully a West Coast type, both modularly and aesthetically. A world-class improviser could listen to 15 seconds of Milton Babbitt’s most highly worked-out music and just dive into the fray with something with the same emotional edginess and even more immediacy.
That being said, I do find myself writing piano pieces and instrumental work that are sort of in that space of rigor and accuracy, but that has to do with finding expression in the context of the act of actually writing notated music. Even in that context, I try to leave room for the player to find their own way of expressing themselves. Glenn Gould playing Bach’s Goldberg Variations comes to mind.

Score Electric Rags AC

What have you been working on lately, and do you have any upcoming releases or performances?

Since leaving my “day job” behind, I have been absolutely cranking on stuff. Last year I became connected with the Portland Ableton Live collective, so I have done a deep dive into Ableton. This was driven by my decades of experience with Max/MSP, and M4L has really opened some amazing doors. I have a Push 2 and am working on ways of integrating it with my performance setup of MIDI guitar with game controller, which has been my workhorse setup since the fuzzybunny tours.
I am also in the midst of finishing off my new 6-panel Serge replacement, which includes 73/75 paperface panels, 6 SSGs, etc., and a couple of WAD Wilson Analog Delays. Anyone out there got a Reticon SAD 1024? Give a shout. I have a couple of panels from the old days as well.
My old Bay Area friend and monster sax player Phillip Greenlief has moved to Maine as well, and we have been talking about a collaboration soon. My installation for the Maritime Museum may become part of a permanent exhibit in Bangor; that’s still in the works. And of course I have to figure out a way to make it back to Marseille to do some performing with you, Philippe, and our mutual friend Jean-Marc Montera…

Scot Gresham-Lancaster & Jean-Marc Montera (live @ L’Embobineuse, 2012)

What do you usually start with when composing?

This sonification jag has me using data as the marble for sculpting new audio works. In general, my takeaway from years of live interaction with process-based music is just that: define a generative process of some sort, then point that process at something that makes or modifies sound, and start turning knobs, etc., until it starts to be rich and listenable.

How do you see the relationship between sound and composition?

Well, some European composers of late have gone to extremes with this; Tristan Murail and Gérard Grisey are the clearest examples of what I mean. I think the lessons learned from the experience of Xenakis’ music strike the sort of balance that I would strive for: a moment of immediacy of sound that is driving the perceived metamorphosis of structure forward. In a very real sense, this is where pure improvisation falls short, and why I have backed away from it recently to an extent.

How strictly do you separate improvising and composing?

Often in collaboration I am freely improvising, exercising the concepts of “Deep Listening” that Pauline Oliveros and her wonderful collaborator of many years, Stuart Dempster, enlightened me on. To really improvise with truth and the promise of shared expression with the audience, I feel that the improviser must be fully in the moment and let it flow out. If playing composed music, you approach this same state in an entirely different realm. The notation guides you to find that moment of human expression that lies within it, and you use your interaction to convey that to the listener. These are two distinct but related processes.

Performing at the EYE gallery (in LA in 1980) playing my Serge.
With Dudley Brooks and Matt Ingles

Do you find that you record straight with no overdubbing, or do you end up multi-tracking and editing tracks in post-production?

I do both, actually. Historically, most of my recorded music is just a record of straight interaction in real time with highly defined processes. The HUB, ROOM, fuzzybunny, ROVA, etc. recordings are all zero-overdub, completely live. My own solo stuff is a mixture, more of a pastiche of chunks of live material juxtaposed and, in some rather rare cases, overdubbed, or more accurately, collided with similarly derived material.

What type of instrument do you prefer to play?

It is hard to describe to someone not experienced with playing in an ensemble like the HUB, but a lot of my electroacoustic performance is based on that type of interaction. Some process is pointed at my instrument, and I am pointing the firehose of data at various sound-limiting or sound-augmenting things, controlled often, lately, by a game controller. I have used game controllers for a couple of decades now and love the multi-parameter immediacy that they provide: two joysticks with switches on them, a D-pad, four trigger buttons, and the fancy one I have been using lately has four more sliders. I came up with a trick a few years back where you hold down one button to put it in choice mode, and then you have 15 configurations to choose from for the controller. Once you get it into muscle memory, it opens up a bunch of ways of changing the routing of the joysticks. I am not sure I explained that well, but it works for me anyway.
I also really like my hacked version of the super rare GameTrak “golf game” controller. This is an XYZ controller with strings that come out of the “joystick,” giving you a super accurate Z dimension. It’s amazing with these little golfing gloves on, attached to the bright orange strings running to the joystick, kept taut by a spring-loaded reel. You can play the guitar while shooting the mix around in ambisonic space, for example. Love it!!
With fuzzybunny I started doing a lot of live interactive looping and I still use my Oberheim Echoplex and the first generation Boomerang looper in conjunction.
Of course, there are also my two panels of EuroRack stuff, plus the ever-growing collection of Serge panels.
Strangely, I also play a weird piano every day. It is a micro-grand, a Kawai EP-308 Electric Grand Piano, which also sounds great through my processing and looper.
Also, since 1998 or so I have used MIDI guitar a whole lot. It has the advantage of being a guitar, first off, but then you turn that down and suddenly it is a MIDI controller with 6 separate MIDI streams. This works perfectly with the Oberheim Xpander, since there are 6 heterophonic voices that can be changed pretty much instantly to sound completely different with program changes. Add to that that the Xpander itself has endless knobs for all the parameters, so you can reach down and adjust things and change a given patch on the fly. I have two guitars I work with: a Line 6 Variax, which has real-time acoustic modeling and instant retuning, which is fun, and a Godin SA, a broad-neck nylon-string with the 13-pin Roland MIDI guitar output for independent MIDI from each string. Pretty sweet. I often also use my Hamer Newport for just guitar sounds, since it is the best that way. I play all of these through the Line 6 POD HD500 Multi-Effect and Amp Modeler, which I have come to love.

Your compositional process is also based upon the use of acoustic instruments that you process or combine with electronics. How do you work to marry the electronics with your acoustic material?

There are three ways in which I am thinking when constructing pieces for acoustic instruments and electronics.
First, thinking like the spectralists I mentioned earlier and matching and contrasting the spectrum of the instrumentalists: matching the timbre to blur the distinction between acoustic and electronic, and also making electronic sounds that fill the parts of the full spectrum that aren’t covered by the acoustic instruments.
Second, using signal processing to alter the sounds of the instruments. This can be reverb, phase shifting, looping, amp simulation, etc. A lot of times the parameters of these elements are under the control of the score. This was first introduced to me through the “ghost scores” of Morton Subotnick that my good friends Jill Fraser and Darrel Johansen worked on when I first worked at Serge’s.
Third, and the technique I have used most often: tracking the live input of the other players, grabbing fragments of their melodic lines, and using those captured parameters to make gestures of a similar shape but differing timbre. This creates a warped call-and-response sort of setting and gives you the ability to come back a little later with material the listener still has fresh in their mind.
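The capture-and-response technique above can be sketched minimally like this, assuming note events tracked from a live player; the event format, transposition and time-stretch values are illustrative, not his actual patch:

```python
# A sketch of the capture-and-response idea above: grab a fragment of an
# incoming melodic line, keep its contour, and replay it later transposed
# and time-stretched (standing in for "differing timbre"). The event
# format and transform values are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class NoteEvent:
    pitch: int       # MIDI note number
    duration: float  # seconds

def capture_fragment(stream, length=4):
    """Take the most recent `length` events from a tracked input stream."""
    return list(stream)[-length:]

def respond(fragment, transpose=7, stretch=1.5):
    """Same melodic shape, shifted in pitch and time: a 'warped' echo."""
    return [NoteEvent(n.pitch + transpose, n.duration * stretch)
            for n in fragment]

# Toy incoming line (e.g. pitch-tracked from a sax player)
live_input = [NoteEvent(60, 0.5), NoteEvent(62, 0.25),
              NoteEvent(65, 0.25), NoteEvent(67, 1.0)]
echo = respond(capture_fragment(live_input))
```

The echo keeps the intervals of the captured line, so the listener recognizes the shape even though pitch level, timing and timbre have changed.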

Obviously you are interested in gesture, in physical movement to create the music, right? What is your favorite way to achieve such expression?

I got into this above, but it has been an ongoing problem with the sort of “data flow” performance that has characterized a lot of my work. I loved this early criticism of the HUB:
“You guys look like 6 nervous air traffic controllers up there” LOL
This is a real problem with performing electroacoustic music live. No wonder DJs bob their heads up and down with the beat. I mean, they love their “tunes,” but it also communicates to the audience that they are locked into the groove they are laying down, etc. With spectrally metamorphosing sound this becomes problematic. This is why that GameTrak controller I was mentioning is so cool. The gestures of moving the bright orange strings attached to the controller on the ground really communicate to the audience that a change is occurring.
I did a series of pieces using Geiger detectors that detected the change in radiation in the room of the performance, and that was the extreme opposite, because radon and other radiation is changing around us all the time and is invisible. So I put a changing light on it, so the audience could see the changes in radioactivity and experience them. This is a little outside the question, but it gets to this idea of data flow as gesture, which has been a big part of my work for decades now. In terms of gesture, I feel it is important to move with the sounds that you are “riding,” like Sal Martirano’s flying school bus, to let the audience get a better feel for your interaction and intention.

How can one avoid losing the spontaneity that the analogue instrument allows and that so many composers have lost since they do everything from their computer?

I use a lot of feedback in patches, and this adds an element of non-linear unpredictability. One of the things I loved about the Xpander was that the knobs were continuous, so whatever setting the parameter assigned to a knob was on, you could turn it up (or down) to 11.

The click of the mouse doesn’t sound like the turn of a knob, does it?

Since I have been doing a deep dive into the Ableton/Push environment, I have been thinking about this a lot. I think my subconscious reaction was to start building up both my EuroRack system and the large Serge modular that I have been piecing together for a few years. So I am missing the chaos and intimacy of working with a real patching system. My solution for live performance using software in the past few years was to use the Behringer X-TOUCH MINI, which has those continuous knobs I was mentioning that I like so much on the Xpander. Playing with just a trackpad or mouse is doable, but frustrating.

How were you first acquainted with Modular Synthesis? When did that happen, and what did you think of it at the time?
How does it marry with your other “compositional tricks”?

I told that story before, but to put a date on it, the first time I used a modular would have been late 1970. I had seen them in use at the “happenings” as early as 1967, and this, and the sort of open long-form improvisation that the psychedelic bands were doing, was a great inspiration.
My compositional and musical performance practice has always been eclectic. My father brought home a reel-to-reel tape recorder, and I figured out that putting Scotch tape over the erase head let me overdub; I also learned to splice when I was in 6th grade. My 7th-grade science fair project was to build a theremin from a kit, but I was also in the choir and symphonic band, and playing in rock bands while learning the beginnings of harmony. So it was all a mixture.
In 1973, I wrote a piece for choir and tape that was performed by the college choir. So these elements have always been integrated for me.

When did you buy your first system?
What was your first module or system?

I have owned a lot of various synthesizer systems over the years, but additionally I have been associated with institutions that gave me access to a lot of modular systems. It wasn’t until I was out of school and working for Serge that having a module made any sense, but I was so broke it took me about 4 months and some serious negotiating with Serge to build that first aluminum-faced panel while working there. About 6 months into my solar job in 1979, I got enough cash together to buy the Serge modular with 5 panels in it for $600 off of the Recycler in Hollywood. An amazing deal, even then.

How long did it take for you to become accustomed to patching your own synthesizer together out of its component parts?

I had electronics training and some background in circuit design. As I mentioned, I built a theremin for my 7th-grade science fair project, so it was very natural for me to expand into using all sorts of modules and rack-mount peripherals. I was most often the keyboard player in the bands I was in, although when I did play guitar I would sometimes play through a ring modulator or something.

What was the effect of that discovery on your compositional process?

The strength of composing using modular systems is the dynamic interaction between the behaviors of the modules you collide together. The title of the HUB 3-CD set is “boundary layer,” and that characterizes the concept that I push for when building a patch. The idea is to find that tipping point between chaos and order, often a fractal, recursive space that has these amazing non-linear reactive moments.

On your existence?

There is a certain thrill to corralling this unpredictable, turbulent, noisy edge and pulling it into a cohesive and expressive type of music/sound environment. This always feels like an affirmation that reaches over into my actual life: that I can be dealing with the chaos and non-linearity of “all the things” and figure out a way to make it into something beautiful and rewarding.

Quite often modularists are in need of more; their hunger for new modules is never satisfied. You own an impressive amount of gear; how do you explain that?

It is a guilty pleasure having one’s passion for making music be so much like being a kid with toys.
I can imagine how many chisels a wood carver must have. There is this great moment in an interview with Serge Tcherepnin where he explains that his design, and really what has been characterized as the “west coast” type of synthesis, depends on the concept of “patch programming.” All these various tools and subtly different devices make this sort of exploration of previously unimagined sonic spaces possible. Always searching for that next impossible sound.

Do you prefer single-maker systems, knowing your love for BugBrand or Metasonix, or making your own modular synthesizer out of individual components from whatever manufacturers match your needs?

I met Eric Barbour of Metasonix very early on, when he was better known for his knowledge of tube design and just getting into it. He was such an inspiration. Of course, having worked on the manufacturing side of the synthesizer business gave me an amazing amount of understanding of the specifics of how each module works. It is very strange to discover that components we thought would be here forever are now extinct. They change and “improve”, but the essence of something is lost. For example, the Reticon SAD 1024 BBD chip has a distinct rich quality that the later BBD chips just can’t match. The most extreme version of this was the very rare ETANN chip at the heart of David Tudor’s Neural Net Synthesizer. I have been going back and forth with Serge designing a new type of computer-controllable analog matrix mixer based on the inexpensive 4016 analog switch. If you toggle the CMOS switches at ultrasonic frequencies with pulse width modulation you can attenuate the analog signals… okay, I am geeking out, but I have always been going between serious music study and serious technology study my whole life. My work is a reflection of these two passions.
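The PWM-attenuation trick described here can be sketched numerically: chop an audio signal with an ultrasonic pulse train, low-pass the result, and the effective gain comes out roughly equal to the PWM duty cycle. This is only an illustrative simulation; all the rates and values below are my own assumptions, not the actual mixer design Serge and Scot are working on.

```python
import numpy as np

fs = 1_000_000       # simulation sample rate (assumed), 1 MHz
f_sig = 440.0        # audio-band test tone
f_pwm = 50_000.0     # ultrasonic chopping rate for the 4016 switch
duty = 0.25          # PWM duty cycle -> expected gain

t = np.arange(0, 0.02, 1 / fs)
signal = np.sin(2 * np.pi * f_sig * t)

# The CMOS switch passes the signal only while the PWM control is high
pwm = (t * f_pwm) % 1.0 < duty
chopped = signal * pwm

# A crude low-pass (moving average over one PWM period) strips the
# ultrasonic chopping, leaving the attenuated audio signal
n = int(fs / f_pwm)
smoothed = np.convolve(chopped, np.ones(n) / n, mode="same")

# The RMS ratio approximates the effective gain, i.e. the duty cycle
gain = np.sqrt(np.mean(smoothed**2) / np.mean(signal**2))
```

Sweeping `duty` from a control-voltage source is what would turn the switch into a voltage-controlled attenuator, one crosspoint of a matrix mixer.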

How has your system been evolving?

I just finally dove into the EuroRack world and I am integrating that with my Xpander setup and various guitar processors. I have broken out my old Roland PG-1000, which is a whole bunch of linear sliders and can work really well with a Lucid Yarn to make CV control lines across both systems.

Instrument building may actually be quite compositional, defining your sonic palette, each new module enriching your vocabulary. Would you say that their choice and the way you build your systems can be an integral part of your compositional process? Or is this the other way round and you go after a new module because you want to be able to sound-design some of your ideas?

The EuroRack system I just got was co-designed by Steve MacClean and Dr. Richard Boulanger (Csound maestro), so it is embedded with design ideas that I am uncovering. Serge once made the criticism that Buchla was making compositional decisions for the user, and I think that is true of many of these EuroRack designs. I like the most generalized modules that can do many things: the Expert Sleepers Disting Mk4, for example, or, on my Serge setup, the entire panel of 6 Smooth Step Generators (SSGs). They are so flexible; it is an amazing module. I am really looking at the new Random Source GTS for the same reason. In another area, spectral processing, the capabilities of Dave Rossum’s Panharmonium are opening up amazing spectral spaces that I haven’t been back to since using the Lexicon PCM 70 in that space.

Do you tend to use pure modular systems, or do you bring in outside effect and devices when playing or recording?

Ever since I started using the Xpander as my quick-change artist, I have been mixing and matching all sorts of equipment in my live setup. I have done entire performances with just the Line 6 HD500 pedal board, for example… Or with laptops in our HUB…

Would you please describe the system you used to create the music for us?

« Allegory of the Beached Whale » was done on two large Serge modulars: my 6 panels and another 10-panel system that had 3 Wilson Analog Delay (WAD) modules. Externally there was a Lexicon PCM 70; the rhythmic components were generated by the custom “D-SEQ” program written by Darrel Johansen, running on a SYM 6502 single-board computer with a custom interface for the banana-jack triggers. It was a lot of sessions cut together. Two woodwind players and a guitarist were playing directly into this elaborate patch, interacting as they closely followed the score: Richard Marriot on contrabass clarinet, Magdalene Lucke on tenor sax and Dudley Brooks on Fender Stratocaster.

« Martian Time Slip » was a real pastiche of equipment and mixed live sessions. I used the fairly obscure Creamware Pulsar PC card for all the sample manipulation and some of the DSP work, although there was a lot of Max/MSP manipulation as well. All the synthesis was done on a combination of my Oberheim Xpander, Korg Wavestation AD and some Serge modules. I also used a Lexicon PCM 70 and a Digitech harmonizer for signal processing. Although it sounds very multi-tracked, most of the recordings are from various live performances I did in 1998 and 1999. There are sections that were pieced together and overlaid in ProTools.
I used samples of Grover Gardner’s reading of Philip K. Dick’s novel of the same name. Also, my good friend, the late great orator and shaman Sam Ashley, son of my early mentors Robert and Mary Ashley, provided a wonderful vocal source sample that much of the piece is built from. Another close friend, musicologist Veniero Rizzardi, provided readings of small bits of text in Italian. Additionally, I used a text-to-speech female Italian voice as a source in some sections.

« Culture of Fire » was also a combination of technologies, inspired both by the Atomic Age and by the idea of a human-created analog of the mind. The main sound source under the layers is Mark Holler’s ETANN neural net synthesizer that he designed for David Tudor. An amazing, almost indescribable and uncontrollable instrument, but with patience it gave me amazing source material to construct the piece from. I then used a very analog Geiger counter from the 1950s to trigger my Serge patch and various other DSP devices. During that period I was also using the Boomerang looper, but not in the traditional way of most loopers. It had a mode that went backwards for the length of the loop, then jumped forward twice that amount of time and played backwards, in nearly real time. Hard to describe but a blast to engage with.

« In the Unlikely Event of a Water Landing » was a piece that I put together in several stages. I had been asked by a cello quartet from the Music Department at California State University at Hayward to write a study for them. They were led by the late great cellist Lawrence Grainger, who organized the ensemble and taught cello at the university. The quartet was written as an extension of a sketch of a string quartet of mine entitled “Flight Above Dying Skies” and had the impetus and tone of an elegy for our dying planet. On a return trip from one of my European performance tours, working on the score during the flight, I was struck by the ghoulish rote statement the flight attendants always make: “in the unlikely event of a water landing”. It prompted me to envision the fiery and terrifying reality of plummeting to earth and the preposterous nature of modern air travel. The harmonic content of the quartet centered around the illusion and seeming impossibility of flight, so an arch shape. I made the recording and was unsatisfied with it on its own, and so enlisted the assistance of my good friend and collaborator Sam Ashley. Known more for his vocal work with his father Robert Ashley, Sam had been working with Jim Horton and others doing live electronic music performances as “A A Bee Removal”. He and compatriot Ben Azaram brought their electronic processing rigs and, combined with my studio setup, we took the flawed output of the monophonic Fairlight Voice Tracker, unsuccessfully tracking the polyphonic input of the cello recording, as the source for driving our electronics, reacting to the playback onto another 4 tracks of recording. The first time I played this combination as an electroacoustic piece, it was mixed as a quadraphonic piece that tried to emulate a flight ending in a crash, and some elements of that idea remain in the resulting stereophonic mixdown.
This was a complex realization that depended on exact cues from the score for moments of intervention and electronic interaction that were meant to represent system failures leading to the “Unlikely Event”.

« Geodesic Elipsoid 1.61803989 » was based on a complex rendering of a geometric model of a geodesic ellipsoid, with two focal points at the “golden mean” distance from the diameter at the base of the ellipse around which the bilaterally symmetrical geodesic ellipsoid was built. If you are familiar with a geodesic dome, then you can imagine stretching the sphere to meet the dimensions of a symmetrical 2 × 1 oval with two focal points 1.61803989 distant from the respective far ends of the ellipse. This results in the joining members of the virtual structure having slowly varying, expanding lengths across the dome.
The model of the structural members of the geodesic ellipsoid was put into an array, and a program was written to output a frequency for each member as a plane passed through the virtual object and collided with it. The number output for each of these members was matched to a frequency in a 19-tone equal tempered (19-TET) scale. These members of the structure are referred to as “chord factors” in the parlance of geodesic math, and that was the inspiration for this whole piece: on some level, a pun on the interpolation of terms between music theory and architecture. It was realized on a combination of equipment including a rare and wonderful Buchla 400 and a custom hardware CV generator that I had built as part of my job as technical director of the Mills Center for Contemporary Music. The sophisticated design and inspiration are directly attributable to the influence of then-director David Rosenboom and of the music theoretician and composer, the late and wonderful Larry Polansky. I would be remiss in not also mentioning Phil Burke, who helped with my understanding of the intricacies of the Forth computer language as I watched the three of them, from the sidelines, developing their Hierarchical Music Specification Language.
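The length-to-pitch mapping described above can be sketched as follows. This is a hypothetical reconstruction, not the original Forth/HMSL code: the function name, reference length and base frequency are my own assumptions, and only the idea of matching a “chord factor” length to the nearest degree of a 19-tone equal tempered scale comes from the interview.

```python
import math

def chord_factor_to_19tet(length: float, base_length: float = 1.0,
                          base_freq: float = 220.0) -> float:
    """Map a strut ("chord factor") length to the nearest 19-TET pitch.

    Longer members sound lower: the length ratio becomes a pitch offset
    in 19 equal divisions of the octave, snapped to the nearest degree.
    """
    steps = 19 * math.log2(base_length / length)   # continuous 19-TET steps
    return base_freq * 2 ** (round(steps) / 19)    # snap, then back to Hz

# Example: a member half the reference length sounds one octave higher
print(chord_factor_to_19tet(0.5))
```

Feeding the array of member lengths through such a function, gated by the moving intersection plane, would yield the stream of 19-TET frequencies the piece describes.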

Can you outline how you patched and performed your Modulisme session?

ABW was constructed from a score that represented two scenarios of a whale beaching itself; it had the rather long subtitle of “conscious suicide or driven mad by parasites”. So I structured the sections as parallel timelines, running to the eventual death on the beach and the synthesized sound of gulls. The patch was quite an elaborate combination: Carl Fravel’s “Gentle Electric” pitch-to-CV converter on the acoustic instruments, multiple triggers taken into the D-SEQ, and the rhythmic structures delayed by seconds to then trigger several Negative and Positive Slews, whose timing was driven by an envelope follower run through an analog shift register.

MTS used a Fairlight Voice Tracker to coordinate the interaction of the various vocal samples, which were triggered from a PC running the Creamware Pulsar cards from a score designed around the structure of the novel’s narrative. This also generated OSC data that was converted to routing control in Pure Data, and to CV for controlling the Serge modules and the Lexicon PCM 70 reverb, which was also used as a MIDI-controlled resonant bank. All the material was created in live settings and then edited with some crossfades, but mostly live.

COF involved moving around the space near the equipment, waving the Geiger counter wand and looking for increased radioactivity to trigger the synthesis pulses, which were captured, looped and manipulated in real time. There was no score, just a mechanism for interaction and the hope that some radioactive activity would create usable and interesting pulses.

Do you pre-patch your system when playing live, or do you tend to improvise on the spot?

A lot of my performance is built around presets that I have worked on and with for months, even years ahead of the performance and I often quickly switch them up, but each one has performative aspects that I use to enhance the moment in performance. So it is a little of both. If I am using a modular setup, I usually have two or three separate signal generation paths that I am working with and sometimes, in a live context I will patch one into the other for frequency modulation or whatever.

Do you find that you record straight with no overdubbing, or do you end up multi-tracking and editing tracks in post-production?

My situation has always been built around live performances that are recorded. I then take those live performances and piece them together into a stereo track. I very rarely sync two separate recordings and overdub them, but, for example at the very beginning of ABW there is a combination of a live performance at Mills with another at the ICE house in Pasadena. That is only for 20 seconds or so, however.

Which module could you not do without, or which module do you use the most in every patch?

In terms of the Serge, I pretty reliably use the Smooth & Stepped Generator with a noise source. It is so flexible and can easily go from oscillator to envelope follower/integrator to low-pass filter, and has a frequency range from a period of 20 minutes up to 20 kHz. Very flexible. I am also often using the Wave Multiplier and Triple Waveshaper for that unique timbre. Often the later stages go through one of Serge’s excellent and unique filters.

What do you think can only be achieved with modular synthesis that other forms of electronic music cannot do, or find harder to do?

That previous explanation of my use of the SSG goes to what makes all modular synthesis unique. Range! You can go from sub-audio to much higher frequencies very easily. Also, being able to hand-tune sustained sounds allows you to make drones and textures that are just intoned, and therefore more resonant and distinctive. I haven’t mentioned it earlier, but my work with Lou Harrison and Terry Riley early on made thinking in terms of intonation very important for my synthesis work. If you listen for it, much of the work is in distinct intonation.
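A small numeric illustration of why just-intoned tuning sounds “more resonant” (the specific pitches here are my own example, not from the session): a just major third is a simple 5/4 ratio and beats against nothing, while the equal-tempered third lands slightly wide and beats audibly against the harmonics.

```python
# Major third above A 220 Hz: just (5/4) vs. 12-tone equal temperament
just_third = 220.0 * 5 / 4               # exactly 275.0 Hz, beat-free
tempered_third = 220.0 * 2 ** (4 / 12)   # four semitones, about 277.2 Hz
beat_rate = abs(tempered_third - just_third)  # slow beating, a few Hz
```

Hand-tuning a sustained oscillator until that beating disappears is the by-ear route to the just-intoned drones mentioned above.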

Have you used various forms of software modular (eg Reaktor Blocks, Softube Modular, VCVRack) or digital hardware with modular software editors (eg Nord Modular, Axoloti, Organelle), and if so what do you think of them?

I taught VCVRack to a couple of my classes when it came out, back when I was pretending to be a professor (LOL). This was a way to introduce them to the concepts of analog synthesis. I have been looking at Cardinal lately, since it is more open source than VCVRack. I mentioned the Creamware Pulsar, which was a very early version of the hardware/software synthesis model, even before Nord. We had the Nord Modular as part of my setup with fuzzybunny, mostly as a vocoder for some sections, but it was an amazing early system.


What would be the system you are dreaming of?

I am always on the prowl for new sounds, so I don’t really dream of a system. It is more, for me, about finding amazing components like the Xpander, the Korg Wavestation A/D and various unique Serge modules (WAD, SSG, WVM, etc.) and working on combining them.
That being said, Serge and I have been going back and forth about a module that I am now working on from his sketches, based on the Tudor analog neural net synthesizer, which was a singular and very interesting instrument.
I was interested to discover that David Tudor and Serge Tcherepnin had very little interaction, but talking to Serge about the ETANN and going back and forth, he came up with a really cool way of making a version of this now extinct analog ETANN chip.

Are you feeling close to some other contemporary Modularists?

Jill Fraser has been a close friend and synthesizer inspiration since the 70’s and we stay in touch.
She is working closely with Peter Grenader on the Re:Volt project, which, if you are in the LA area, I strongly recommend checking out.
Tim Perkis of course…

Tim & Scot @ SF Electronic Music Fest

Which pioneers in Modularism influenced you and why?

Those I listen to often and derive inspiration from are sadly fading, but it would be Maryanne Amacher, Éliane Radigue and Pauline Oliveros.
All three of them have been solid inspirations.
Jill and Peter just realized Mort Subotnick’s Sidewinder, which reminded me of that great work. I neglected to mention this, but in the mid-70s I spent several weeks working with the underappreciated genius Ilhan Mimaroglu, who had a very big influence on my approach to synthesis and music making in general.
Working with Alvin Curran and, by proximity, MEV, particularly Richard Teitelbaum, was a great experience too, of course.

Any advice you could share for those willing to start or develop their “Modulisme”?

Just dive in !!!
If you have the urge you will not stop. I feel I was a terrible teacher because I could never impart that idea.
I don’t think you can really, but if you are attracted to the sounds and nuance of work with modular synthesis then you find it in yourself I think.

https://scot.greshamlancaster.com/