Conception - Layout : P. Petit / Cover Art : Proefrock
At STEIM. Photo : Frank Balde
Atau Tanaka’s work sits at a point where instrument, body, and system gradually merge.
He began with analog electronics in the 1980s, working hands-on with modular setups like the Serge Modular, patching, rerouting, and shaping sound as a physical process rather than a fixed structure.
In the 1990s, this approach shifted toward performance systems that extend beyond the instrument itself. With Sensorband, alongside Zbigniew Karkowski and Edwin van der Heide, he explored how movement could be directly translated into sound. Sensors worn on the body turned gesture into signal, making performance less about playing an interface and more about activating a continuous feedback loop between action and sound.
This line of work led him to use bio-signals such as electromyography (EMG), where electrical activity produced by muscle tension is captured through electrodes on the skin and converted into control data. These signals do not track movement itself, but the intention and effort behind it, allowing sound to be shaped directly by muscular activation.
Om Ujjayi – London 2023
For Modulisme, Tanaka created a session that brings these strands together.
He works with an analog modular system while controlling it through his own muscle activity: EMG sensors capture micro-variations in tension, which are mapped to synthesis parameters (amplitude, filtering, modulation).
The patch is no longer adjusted by hand alone.
Internal bodily signals continuously reshape it.
What emerges is not a separation between control and sound production, but a single system.
The modular synthesizer behaves as an extension of the performer’s muscular activity, where even the slightest contraction can alter the sonic structure in real time.
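The chain described here (rectify the raw EMG, smooth it into an envelope, scale the envelope onto a synthesis parameter) can be sketched in a few lines. This is an illustrative sketch of the general technique only, not Tanaka's actual patch; the smoothing window and cutoff range are assumptions:

```python
# Sketch: turning a raw EMG stream into a synthesis control signal.
# The window length and cutoff range are illustrative assumptions.

def emg_envelope(samples, window=64):
    """Full-wave rectify the EMG and smooth it with a moving average."""
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        chunk = rectified[max(0, i - window + 1): i + 1]
        env.append(sum(chunk) / len(chunk))
    return env

def map_to_cutoff(env_value, lo=200.0, hi=8000.0):
    """Map a normalized envelope value (0..1) to a filter cutoff in Hz."""
    v = min(max(env_value, 0.0), 1.0)
    return lo + v * (hi - lo)

# A burst of muscle tension raises the cutoff; relaxation lowers it.
envelope = emg_envelope([0.0, 0.5, -0.8, 0.9, -0.2], window=2)
cutoffs = [map_to_cutoff(e) for e in envelope]
```

The same envelope could just as well drive amplitude or modulation depth; the point is that a continuous bodily signal, not a discrete keypress, is what reshapes the patch.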
Festival Ars Musica, Brussels. Photo : Bjorn Comhaire
We know each other from your Sensorband days. Can you tell us about your musical vision back then?
Sensorband was a trio that we had between 1993 and 2003. I was playing the BioMuse, the Dutch artist Edwin van der Heide was playing the MIDI Conductor from STEIM, and Zbigniew Karkowski was playing an infrared percussion instrument made out of scaffolding pipes. We met in 1993 at STEIM, seeing each other’s solo performances, and Zbigniew proposed to make a trio. We were excited about the idea of working with interactive sensor instruments, but in a live context, like a band. And that’s why we were called Sensorband.
We played sensor instruments, but we were making a trio, a band, like any other band – a power trio in rock, a jazz trio or a classical string trio. Three is a magic number for musical ensembles, and with Sensorband, we aimed to find the musical communication and dynamic of all trios, but done with experimental computer music performed on sensor instruments.
Back then, computer music was a studio art of programming, compiling, and composing. What made STEIM unique as a studio was that it was a place where you could invent new instruments, try them out on stage, and perform with them live. This was in 1993, before the laptop scene came. We couldn’t yet do live signal processing on audio on a laptop computer. We actually brought desktop computers on stage with our sensor instruments. But when the laptop scene finally arrived, we were interested in performing on Max and SuperCollider – digital sound synthesis languages, but not from the keyboard, not from the mouse, not in front of the computer, but on stage, in front of the audience, performing with our body gestures.
Sensorband. 1993.
You studied with Ivan Tcherepnin, so I imagine you came into contact with the Serge modular synthesis system a long time ago. What was the effect of that discovery on your compositional process? On your existence? How were you first acquainted with modular synthesis? When did that happen and what did you think of it at the time?
My teacher of electronic music at Harvard University was the Chinese-Russian composer, Ivan Tcherepnin. He did well-known works such as the Santur Opera. It turns out that Ivan Tcherepnin was the brother of Serge Tcherepnin, inventor of the Serge modular synthesizer. So I had very early access to Serge modular synths in the mid-1980s.
In fact, in the Harvard Electronic Music Studio, we had the Serge Modular serial number 2. This wasn’t the only modular synthesizer in the studio; there was also an early Buchla Model 200.
Working with modular synthesizers opened my ears into a kind of sonic awareness, but it didn’t happen by itself or right away.
Alongside the modular synths were Scully open-reel tape recorders with which we could make tape loops. This was in 1984, the year the Macintosh 128K was introduced, and I would go on to help Ivan build the MIDI studio, the digital studio across the hall.
But the main studio with the Scullys, the Serge and the Buchla was all analogue.
What was notable in the analog studio was the lack of anything resembling a piano keyboard. There were Minimoog synths on the market at that time, and Roland and Yamaha were making keyboard synths. But the synthesizers in the Harvard Electronic Studio did not have black and white plastic piano keyboards. So for me, originally a pianist, thinking about making electronic sound didn’t come from using my skills as a pianist. It came from zooming in on oscillators, slews, and the timbres of the filters.
It was my friend Morley Robertson from our experimental performance group, Trashart, who was the one who really unlocked the Serge synthesizer. We studied Allen Strange’s book, Electronic Music, and one of the exercises was the patch for David Tudor’s Rainforest. Morley got elements of Rainforest to work on the Serge, generating an infinite stream of bubbling electronic sounds.
For me, coming from classical music and playing free improv and jazz, this was a sonic awakening.
Sprouts Lisbon. Photo : Vera Marmelo
How does it marry with your other “compositional tricks”?
Composing on modular is an infinite challenge. It’s less deterministic than composing notes in a score or MIDI notes on a sequencer.
You have to let go of control and allow emergent processes to enter your musical vocabulary.
I’m not sure, still to this day, that I know how to compose for modular synths.
But I know how to let the modular synth live, and I can try to enter into a living relationship in performance with the modular synths. So of the tracks that are on this Modulisme release, the oldest track, AKS-21, was made on an EMS Synthi, the suitcase version. I jammed on the Synthi, changing the patch with its matrix of pin connectors. I generated music and sound, but then I later edited and composed, much in the way that I learned that the producer Teo Macero would recompose the jam sessions of Miles Davis’s bands. So there, the composition was an editing layer that allowed me to refine and focus the emergent sounds produced by the Synthi.
Obviously you are interested in gesture, in physical movements to create the music. I remember seeing you play in the early 2000s with the BioMuse system. Would you tell us more about that? What is your favorite way to achieve such expression?
I’ve been playing with signals from my body for over 30 years. The interfaces I use take electrical signals from the central nervous system and turn them into musical signals. In particular, I use signals from muscle tension, the electromyogram (EMG), where muscle contraction during musical gesture produces electrical activity. The neuron impulses that cause muscles to contract are picked up by electrodes, digitized, and sent to a host synthesizer as a sound source or a control signal.
I’ve used a range of different muscle interface systems over the years.
In the 1990s, it was the BioMuse, invented by Ben Knapp and Hugh Lusted at Stanford University. The BioMuse took muscle signals and brain signals, as well as performing eye tracking, and converted these bioelectrical signals to MIDI. The BioMuse was in a two-unit 19” rack, and I would interface it to MIDI synthesizers of the day – Yamaha FM synthesizers, Korg vector synthesizers, Kurzweil K2000 synths, etc.
With Catherine Musseau, Festival de l’Eau, Saint-Nazaire
As technology changed, these muscle interfaces also changed.
Biocontrol Systems produced a wireless muscle sensor. They then made just the sensor with dry electrodes, putting out a zero-to-5-volt signal that could be used directly as control voltage or as input to an Arduino system. In 2014, a consumer product called the Myo was produced by a Canadian startup called Thalmic Labs. It was beautiful. It was an easy-to-use bracelet that picked up EMG signals, and researchers in the scene hacked the software development kit, the SDK, to access raw EMG data in software like Max. Thalmic Labs was eventually acquired by Meta, Facebook’s parent company, with the idea, I think, of using muscle interaction as part of their Metaverse offer. And the CEO of Meta, Mark Zuckerberg, was suddenly seen talking about neural computing in tech demos, waving his arms around. But the Metaverse system was a failure, and the Myo disappeared into corporate vaults, taken off the market.
This is the problem of working with commercial technology. We are at the mercy of the whims of corporations and businesses.
So one year after the Myo disappeared from the market, I worked with the modular synthesizer designer Martin Klang of Rebel Technology to produce our own EMG interface. Martin had created the OWL, the OpenWare Labs signal processing framework that ran on a Cortex M4 microcontroller.
He proposed to take a Texas Instruments biosignal acquisition chip and pair it with the Cortex microcontroller running the OWL framework. We were then able to digitize the EMG signal and treat it directly in the audio signal processing chain thanks to Martin’s OWL system. This was then sent to the host synthesizer over USB or Bluetooth LE as audio or MIDI.
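The final step, sending the processed muscle signal to a host synthesizer as MIDI, amounts to quantizing a continuous envelope into 7-bit Control Change values. A minimal sketch of that idea (the controller number and channel are arbitrary illustrative choices, not details of the EAVI EMG firmware):

```python
# Sketch: quantizing a normalized EMG envelope (0.0..1.0) into MIDI
# Control Change messages. CC 1 and channel 0 are arbitrary choices here.

def envelope_to_cc(value, cc=1, channel=0):
    """Return the three bytes of a MIDI CC message for one envelope value."""
    v = min(max(value, 0.0), 1.0)
    data = int(round(v * 127))           # 7-bit controller range
    status = 0xB0 | (channel & 0x0F)     # Control Change status byte
    return bytes([status, cc, data])

def cc_stream(envelope, cc=1, channel=0):
    """Emit a message only when the quantized value changes,
    so a slowly varying envelope doesn't flood the MIDI bus."""
    out, last = [], None
    for v in envelope:
        msg = envelope_to_cc(v, cc, channel)
        if msg[2] != last:
            out.append(msg)
            last = msg[2]
    return out
```

Sending as audio over USB, the other path mentioned, skips this quantization entirely and keeps the signal at full sample resolution.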
The EAVI EMG device, as we called it, is a class compliant musical device.
In this way, it is fundamentally different from a consumer device intended to do something else that has to be hacked to make music. But throughout this evolution of EMG interfaces over 25 years, the technology keeps changing: higher-resolution sampling of the biosignal, faster signal processing computation.
As technology changes, however, the human body stays the same. And so if I perform with my body and with the muscle tension of my body, it’s that same body. My body is getting older and slowing down as the technology is speeding up. But it’s still the muscle, it’s still the EMG. And that is the musical instrument for me. I think of it like a musician thinks of an instrument. A violinist does not put down the violin to change instruments just because something new has come around. Version upgrades to the violin might exist, but despite all the efforts to make carbon fibre violins and other new materials in violin fabrication, we love the sound of the 300-year-old wooden Cremona violins from Italy. I wanted to think about technology in this slow, long way, all while accessing the most recent ideas and possibilities of gestural interaction. And this was a vision and a decision I made very early on: to say this muscle interaction is interesting enough that I can dedicate my musical life to playing this instrument, like a pianist dedicates their musical life to playing the piano.
Tacheles. 1993.
I know you are concerned with transmission and also teach, these days in London at Goldsmiths, University of London. Would you please tell us more about that?
Now, I work in a university, but not in a music department. I work in a computer science department, but one that’s very creative. There, I lead a research centre called the Centre for Sound, Technology and Culture (CSTC), where we are able to carry out technical research making new interactive devices, artistic research making new music, and then look at the cultural impact of this thinking. I carry out research that is funded by national and European research councils, and in that way collaborate with other institutions.
For example, I have a research project currently called Technologies of Touch in which I’m collaborating with the University of Arts, (UDK, Universität der Künste Berlin) in Berlin, where a music theorist, Ariane Jeßulat, and a professor of wearable computing, Berit Greinke, are working together with me. The research allows me to create new musical instrument systems using muscle interaction and put them in different contexts, artistic contexts, scientific contexts, and cultural contexts.
And it was through EU research projects like Rapid Mix that we were able to explore early machine learning technology for mapping gesture to sound.
It’s in those research projects that I work with music studios like IRCAM in Paris.
For example, we are now exploring AI timbre transfer using algorithms like RAVE.
It was in one of those research projects, Meta Gesture Music, that I released an album of my music and the music of others in the London scene.
It was in another project, Biomusical Instrument, that I worked with modular synth designer Martin Klang to create the EAVI EMG interface. So the university is an environment to explore in, with the freedom, free from commercial pressures, to do research.
I also teach and am in contact with the ideas and the energy of students who are learning and discovering these technologies. I teach in the subject of computational art, what we used to call new media art, interactive media art.
And here the students may or may not be musicians, they may or may not be artists, they may or may not be designers, but they’re interested in working with interactive technologies to produce new experiences. Sound can be one of the materials in these experiences. And so to think about the place of sound, maybe outside of music, but as a material we can sculpt with our bodies, with our modular synths, with digital signal processing, et cetera, is of great interest to me.
And to share these ideas and explorations with students and to see what students come up with is a way to reinvigorate and energize my own work.
With John Chowning. London 2025.
What have you been working on lately, and do you have any upcoming releases or performances?
In recent years, I’ve returned to composing instrumental music, instrumental music that nonetheless uses interactive body-sensing technologies. I have a piano piece, Suspensions, for muscle EMG and piano, where the solo pianist wears muscle sensors that capture the tension of their muscles during pianistic instrumental gesture. This allows the pianist, through granular synthesis, to capture the acoustic sound of the piano they are playing, live-sample it, and stretch the sound with their muscle gestures, achieving a kind of dream of the pianist: to shape sound after hitting the key. It’s an old piece, composed in 2009, and is now played by four pianists around the world.
Originally commissioned by Sarah Nichols and premiered at the Huddersfield Contemporary Music Festival, it has also been performed by the New York virtuoso Kathleen Supové, and in Europe by the Maltese pianist Trisha Dawn Williams and the Italian pianist based in Belgium, Giusy Caruso. The fact that an old piece can be played by several musicians and continues to be played interests me: creating a repertoire of body music.
One of the tracks we hear on this release is Déplacement, for the Erämaa Trio and muscle sensors. I worked with the members of the Erämaa Trio for a year, meeting them in Brussels every two months to bring them sketches of the composition, to have them try the muscle sensors, and to build the piece together. It had its premiere at the Ars Musica Festival in November 2024 in Brussels and has since been performed in Paris, in Seoul, South Korea, and in Fukuoka, Japan. We continue to play the piece, last year in Brussels at the artist’s loft of Peter Friess, YIAP, and this past October at the Royal Conservatory in Brussels.
The new commission this year is from Artzoyd Studios, a music centre in the north of France that’s an outgrowth of the legendary progressive rock band. The project is a co-production with the Geneva percussion ensemble, Eklekto.
I was asked by the artistic director of Artzoyd, Kasper Toeplitz, to compose a piece for percussion and electronics. So that’s what I’m working on now, working with musicians from the Eklekto Ensemble. In particular, over the summer, I was in the studio with Corentin Marillier to record audio data sets of his percussion playing on a range of orchestral percussion instruments. It resulted in about eight hours of recordings, which I’m using to train an AI model in IRCAM’s RAVE algorithm. I went to Geneva in November for rehearsals to try out the timbre transfer model to be played by the percussionists. The piece premieres in Geneva on the 21st of January, 2026.
Déplacement, with Erämaa Trio – Festival Ars Musica, Brussels. Photo : Bjorn Comhaire
What do you usually start with when composing?
When I start composing, I start with an idea, a concept, something that I’m interested in, in terms of the interaction of the human with sound. I then play with the idea to imagine how it might be. And so there is an idea, a concept underlying the work, and a process of exploration to see whether that idea gives an interesting music. Once that process has been started, there’s an iterative process of bouncing back and forth to refine the idea, to refine the conditions that produce the sound to make what I hope will be good music.
How do you see the relationship between sound and composition?
Sound is our material for music.
Sound has a life.
Sound has its own existence that needs to be respected.
At the same time, we have agency to shape sound and technological possibilities to sculpt it.
And so that exchange between the agency that we have to exercise our ideas and the inner life of sound is that process of composing with sound.
Performing John Cage’s Variations VII in Berlin. Photo : Lee Callaghan.
How strictly do you separate improvising and composing?
Improvising and composing are not diametrically opposed.
They are not two opposites; they are two positions along a spectrum.
So I like the term ‘comprovisation’ that combines improvisation and composition. There is always an element of improvising within a composed musical structure.
There is a compositional element to improvisation where we might think of it as composing on the fly.
So there is a deep relationship between improvisation and composition, where rather than to think of them as opposites, we can think about the energies of one that can inform the other.
Do you find that you record straight with no overdubbing, or do you end up multi-tracking and editing tracks in post-production?
I like live recording, but I also like producing. And so even with a live recording, I may go back and edit. Even a piece that might be captured in one take, I might produce, overlaying different takes.
I may use multi-track recording in this case, studio production techniques that are very different from the tool kit that I use in live performance. This is because our attention in listening to recorded music is different from our attention when we listen to a live concert. And so the editing and production of recorded music is to draw the sonic attention of the listener to things they can’t see.
Resonate festival. Belgrade 2016.
What type of instrument do you prefer to play? Do you still use modular and how has your system been evolving?
It’s interesting how modular has come back. Modular synthesis was the first way that I learned electronic music, originally in the 1980s. Then I became interested in the digital synthesis revolution – FM synthesis and the Yamaha DX7. Sampling, wavetable and vector synthesis. Additive synthesis. But quickly I was using software like Max and Pure Data, which in their graphical programming paradigm are a form of modular synthesis. Using interactive sensors and having the body enter this musical system is also a kind of modular thinking. So maybe I’ve been modular all along.
But it was nonetheless interesting and curious for me when the Eurorack revolution came about recently. It took me a while to get back into it, even with that early training on the original Buchla and Serge systems. Once I figured out who makes what, I found the Eurorack system very open and, in that modern, dependable-electronics way, reliable and reproducible in ways that the old systems were not.
NYEMF. Photo : Nicholas Croft
Instrument building may actually be quite compositional, defining your sonic palette, each new module enriching your vocabulary. Would you say that their choice and the way you build your systems can be an integral part of your compositional process? Or is this the other way round and you go after a new module because you want to be able to sound-design some of your ideas?
The choice of module for me depends on the nature of the signal that I’m working with.
And so if I used ring modulators on my muscle signal, it’s because the muscle signal is a stochastic pulse train. If I used resonators on my heartbeat, it’s because there were resonant frequencies in the pulse beat that I wanted to bring out. If I used granulation on the acoustic instrument input, it’s because I was interested in going beyond simple live synthesis to create abstract sounds from the acoustic instrument source.
So the building of the synthesis chain in any situation for me goes back to thinking about what is the source that I’m working with, the sound source that I’m working with.
Modular synthesis is unique in what it allows us to focus on and the lifelike properties of emergent processes that can unfold. It doesn’t necessarily make things easier. Maybe modular synthesis makes things harder, but in doing so, clarifies what we’re setting out to do. We can see the elements on independent patch cables, on different controls and knobs. So it allows us to reduce and clarify what it is we’re setting out musically to do.
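Ring modulation, the treatment applied to the muscle signal above, is simply the sample-by-sample product of two signals: each pulse in a sparse pulse train gates out an instant of the carrier. A toy sketch of the operation (the sample rate, pulse spacing, and carrier frequency are arbitrary illustrative values):

```python
import math

def ring_modulate(signal, carrier):
    """Ring modulation: sample-by-sample product of two signals."""
    return [a * b for a, b in zip(signal, carrier)]

# A sparse pulse train standing in for the stochastic EMG signal,
# multiplied by a sine carrier: the output is silent between pulses,
# and each pulse takes the instantaneous amplitude of the carrier.
sr = 1000                                  # assumed sample rate (Hz)
pulses = [1.0 if i % 50 == 0 else 0.0 for i in range(sr)]
carrier = [math.sin(2 * math.pi * 110 * i / sr) for i in range(sr)]
output = ring_modulate(pulses, carrier)
```

With a genuinely stochastic pulse train, the irregular timing is what gives the result its organic, unquantized character.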
SAT Montreal with Lillevan VJing. Photo : Sebastien Roy.
Would you please describe the system you used to create the music for us?
Heart Monologues is a multi-lingual poem by Jasmina Bolfek-Radovani. She came to me because she had heard my track, Heart:Beat:Monitor from my CD, Biorhythms, asking for permission to use the track. I offered instead to perform live, and we created a performance reading of the poem with Delphine Salkin. The recording is from our performance in Pula, Croatia at the Galerija Makina. Jasmina and Delphine are reciting the poem in Croatian, French and English. I am playing the modular synth with an ADDAC heartbeat module driving a Mutable Instruments resonator, and muscle signal from my EAVI EMG interface being ring modulated.
Modular setup for Heart Monologues
The instrumental piece, Déplacement, is scored for chamber music trio. Akiko Okawa on viola, Cédric De Bruycker on bass clarinet, and Quentin Meurisse on piano. The work is composed, as there is a score, but the score is an open form, drawing from composers like Lutosławski, who would compose pieces, but that had indeterminate sections that were scored in boxes, where repetition and improvisation could fill, could enter into a compositional structure.
The three musicians in the Erämaa Trio are each wearing muscle sensors, EMG sensors, that go via the EAVI-EMG board that I created in my lab. It connects over Bluetooth MIDI to a Bela Pepper in the modular rack. Each musician in the trio has their own modular rack, their own EAVI system, and their own Bela. Inside the Bela Pepper I’m running a Pure Data patch that live-samples the acoustic instrument sound and performs granular synthesis using a cloud generator authored by Alessandro Cipriani and Maurizio Giri. So in this case, the modular synth is in fact a digital synthesizer in which I’ve implemented a cloud-generator granular synthesiser. And it’s done in Pure Data, a graphical programming language for music that is itself very modular in the way you place objects on the screen and connect them up with virtual wires. So here we have a digital modular system inside a Eurorack modular case, performed by three acoustic musicians. We have the acoustic, the analog, and the digital in a kind of entanglement in Déplacement.
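A cloud-generator granular synthesiser of the kind described, live-sampled audio chopped into short enveloped grains and scattered across time, can be sketched abstractly. This is an illustrative Python sketch of the technique, not the Cipriani/Giri Pure Data patch itself; grain counts and sizes are assumptions:

```python
import math, random

def hann(n, size):
    """Hann window: a smooth grain envelope that avoids clicks at grain edges."""
    return 0.5 - 0.5 * math.cos(2 * math.pi * n / (size - 1))

def grain_cloud(buffer, n_grains=50, grain_size=441, out_len=44100, seed=0):
    """Scatter windowed grains from a sampled buffer across an output buffer."""
    rng = random.Random(seed)
    out = [0.0] * out_len
    for _ in range(n_grains):
        src = rng.randrange(0, len(buffer) - grain_size)   # read position
        dst = rng.randrange(0, out_len - grain_size)       # scatter in time
        for n in range(grain_size):
            out[dst + n] += buffer[src + n] * hann(n, grain_size)
    return out

# Example: granulate a short sampled buffer into a half-length cloud.
cloud = grain_cloud([math.sin(i / 20) for i in range(8000)],
                    n_grains=20, grain_size=200, out_len=4000)
```

In the real-time version, the "buffer" is continuously refilled from the instrument's live input, and the muscle envelope would drive parameters like grain density or position rather than a fixed random seed.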
Om Ujjayi. YIAP Brussels
Finally, the piece Om Ujjayi is my yoga performance, where I take body signals of the muscle, the heartbeat, and breathing and process them through an analogue modular synthesis patch. There, the composition is the patch and the sound source is the body. Neuron impulses of the muscle electromyogram signal feed a Mutable Instruments ring modulator module, to be filtered and mixed with the raw stochastic pulse train of the muscle.
The heartbeat signal is picked up by an ADDAC System heartbeat module and sent through a resonator followed by a low-pass filter, allowing me to tune the heartbeat sound that is produced with each pulse. My breathing is captured by a wireless headset microphone and enters a delay line.
Then I have a polyrhythmic sequence running in a Pamela’s PRO Workout by ALM Busy Circuits. The polyrhythms are programmed according to the ratios of the human biorhythm, 23 to 28 to 33. These create blips of oscillators that are then filtered. The amplitude of the blips can be modulated by brainwave signals coming from an EEG, a brainwave headset.
To have a heart pulse clip on your ear, a headset microphone picking up breath, electrodes on the arms picking up muscle tension, and a brainwave reader on your head is a lot to manage. But whether it’s in performances with the EEG or without the EEG, here the human body becomes part of the modular system and is the sound source for modular synthesis sound processing.
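The 23:28:33 polyrhythm can be thought of as three clock dividers running against a common pulse; because the three periods are pairwise coprime, the composite pattern only repeats after their product. A quick sketch under that reading (the shared step grid is an assumption about how the sequencer is being used, not a documented detail of the patch):

```python
from math import lcm

# Classic biorhythm cycle lengths, here read as step periods.
PERIODS = (23, 28, 33)

def triggers(period, length):
    """A trigger every `period` steps of a common clock."""
    return [1 if step % period == 0 else 0 for step in range(length)]

# The composite pattern repeats only after lcm(23, 28, 33) steps;
# since the periods are pairwise coprime, that is 23 * 28 * 33.
cycle = lcm(*PERIODS)
tracks = {p: triggers(p, cycle) for p in PERIODS}

# Within one full cycle, all three voices coincide only at step 0.
coincidences = [s for s in range(cycle)
                if all(tracks[p][s] for p in PERIODS)]
```

That near-total absence of coincidence is what makes such ratios feel perpetually shifting: the three voices drift against each other for over twenty thousand steps before realigning.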
Performing with Cicanoise, Festival Instants Fertiles, Saint-Nazaire France 2024.
One of the moments when I realized that the Eurorack scene was happening was when I shared the stage with the late Peter Rehberg at the Ars Musica Festival in 2016, programmed by François Bonnet of the GRM. I had known Peter since the laptop days in the late 90s, when he came to tour Japan, and through his running of the Mego label.
When Pita walked on stage with a modular and was playing with the same aesthetic that he had developed using the laptop and SuperCollider, I knew something very interesting was happening. He would go on to produce what was for me one of the most beautiful releases of music made on modular, “At GRM”, from his performances at the festival Présences Électronique in 2009 and 2016. I still don’t know how Peter managed to move from sound world to sound world with such alacrity and deftness, finding spontaneity but with a sureness of foot.
And in the gang of musicians in that scene, I recently caught up with my friend Russell Haswell, who’s also been performing with modular. With him, I talked about a lot of interesting modular companies. It’s thanks to him that I got interested in ADDAC System in Portugal. So recently, when I was in Lisbon for a concert, I went to visit André Gonçalves, the genius behind ADDAC System, to take a tour of the atelier and to see his personal studio inside the company premises. And as I’ve mentioned, I also know Martin Klang, who made some great modules under the name Rebel Technology. So whether it’s synthesists or synth builders, this community of musicians is a continuous one.
Sprouts In Between. Lisbon – Photo : Vera Marmelo
Sprouts In Between with Adriana Sá, Maria Do Mar. Lisbon – Photo : Vera Marmelo
Which pioneers in Modularism influenced you and why?
The pioneers of modular synthesis who inspired me are the Tcherepnin brothers, Ivan and Serge. Ivan taught us and composed beautiful music on the Serge modular, the Santur Opera and the Electric Flowers album. And his brother Serge, for inventing a modular synthesizer system that didn’t need specific labels and didn’t separate control from signal. This was revolutionary at the time.
I also had the good fortune in California to meet Don Buchla before he passed away. And having used the Buchla 200 system, I understood a slightly different paradigm of modular synthesis.
What interested me about Don was that he kept going: he embraced digital musical instrument design, making instruments like Thunder, Lightning, and the Marimba Lumina.
He was a modular synthesizer designer who was interested, in the way that I was, in interactive musical instruments and interactive musical instrument controllers.
w/ Ryuichi Sakamoto, David Toop, Manu Luksch and Mukul Patel.
Any advice you could share for those willing to start or develop their “Modulisme”?
The piece of advice that I have for young modular synthesists getting started is to stay simple and take time to listen to the sound.
Each module has its magic, and at the beginning you might feel you don’t have enough modules, but explore each module for its subtlety. Find the tiny adjustments that create subtle changes in the sound, and zoom into the sound until you get bored.
When you think you’re bored with the sound, that’s only the beginning of the exploration process. Take time, work with small numbers of modules, and zoom in on the sound.