Interview: Helene Hedsund

Helene Hedsund is our first Nordic Culture Point guest composer in Sirius. She is a Swedish composer and sound artist who works exclusively with surround sound. She has been active at EMS since 1994 and is a PhD candidate at the University of Birmingham. She works with a tool of her own, built in the audio programming environment SuperCollider, and in Sirius she is using choir and field recordings for a piece she plans to perform during Beast Feast 2018.

Can you describe your most typical work process?

It takes a long time, and I always have to become very familiar with the recordings I am using. I work with the sound for a long time to understand how it behaves. I can have several hours of recordings, which makes for a time-consuming process. I often start from a particular idea, such as an analysis method or a compositional twist, but sometimes the idea disappears before I finish the work. I try idea after idea, and if one does not work, I find another.

How is your setup here?

I have my computer plus an instrument and composition tool I have created in SuperCollider, played from the computer keyboard or an external MIDI keyboard. I have made several pieces with only that instrument. Most of the time I sit with it, trying out sounds and looking at the layouts and parameters I can change. I have used it since 2012 and it is growing all the time; I am still adding new things to it.
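
Her instrument is bespoke, but the basic principle of playing loaded sound material from a MIDI keyboard in SuperCollider can be sketched roughly as follows. This is only an illustrative sketch, not her code; the file path, SynthDef name and pitch mapping are assumptions.

```supercollider
// Illustrative sketch only: a simple buffer player triggered from a MIDI keyboard.
(
s.waitForBoot {
    // Hypothetical stereo recording path.
    ~buf = Buffer.read(s, "path/to/field-recording.wav");

    SynthDef(\player, { |out = 0, bufnum, rate = 1, amp = 0.3|
        var sig = PlayBuf.ar(2, bufnum, rate * BufRateScale.kr(bufnum), doneAction: 2);
        Out.ar(out, sig * amp);
    }).add;

    MIDIClient.init;
    MIDIIn.connectAll;

    // Each key plays the recording, transposed relative to middle C,
    // with velocity mapped to amplitude.
    MIDIdef.noteOn(\playKey, { |vel, note|
        Synth(\player, [
            \bufnum, ~buf,
            \rate, (note - 60).midiratio,
            \amp, vel / 127
        ]);
    });
};
)
```

A real instrument of the kind she describes would of course expose many more parameters and layouts on top of this sort of foundation.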

How did you get into surround sound?

I attended an EMS course at Fylkingen in 1997. Denis Smalley played a multi-channel piece for us, and I became totally fascinated. In 2008 I took EMS's two-year course and began working in their studios for immersive sound. Since then I have worked exclusively in surround. When working on multi-channel pieces, I walk around the room, sensing and listening as I work. That is very important to me.

Can you tell me what you are doing now in Sirius? What is the context, your material, and how do you work with it?

This is the last part of my PhD thesis, in which I have made use of recordings from the 9th of December in the years 2013-16. In the starting phase of these pieces, I simply went out wherever I happened to be that day and recorded sounds with my microphone. I use the recordings as both material and starting point for various works. A recording may not be audible in the final work, but it might have influenced the form, the frequencies or something else.

What I am doing right now is processing recordings from a walk on Centralbron in Stockholm. I have done a partial analysis on them and am trying to apply those data in the form of the five species of counterpoint. Right now it sounds as if there is a circle of singing monks here in Sirius. I am re-tuning the choir voices and their partials, and I will also use recorded, heavily filtered screams. In the closing part I intend to use the form and frequencies of the recording, reproduced through the screams, the singing and the five-species counterpoint.

In piece 1, a complicated piano chord is filtered through the spectrogram of a short but many-times time-stretched recording. I created the sonogram in AudioSculpt, saved the image and applied it as a filter on the piano sound. The ear can recognize certain movements, such as birdsong, but in a completely different form than in the original recordings. Piece 2 is based on a recording from a bus trip and on experiments with serialism; the sound is filtered according to a twelve-tone series constructed afterwards. In the third part I used the sound of a leaf blower on which a partial analysis was performed. The analysis file contains partial numbers, frequencies and amplitudes, and I used those data to process the sound of a plectrum playing the strings of a piano with the casing removed.
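
She does not spell out the exact mapping from analysis data to sound, but as a loose illustration of the general idea of using partial frequencies and amplitudes from an analysis to process another recording, here is a small SuperCollider sketch. The partial values, file path and choice of resonant filters are invented for the example, and a real AudioSculpt analysis would be time-varying rather than a static list.

```supercollider
// Illustrative sketch only: a few hard-coded partials (frequency, amplitude)
// tune a bank of narrow band-pass resonators applied to a source recording.
(
// Invented values standing in for data from a partial analysis.
~partials = [
    [220, 0.8], [447, 0.5], [903, 0.3], [1812, 0.2], [3630, 0.1]
];

// Hypothetical mono source, e.g. a plectrum on piano strings.
~src = Buffer.read(s, "path/to/plectrum-on-strings.wav");

SynthDef(\resonate, { |out = 0, bufnum, amp = 0.5|
    var dry = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), doneAction: 2);
    var wet = Mix(~partials.collect { |p|
        Resonz.ar(dry, p[0], 0.02, p[1])   // narrow band at each partial
    });
    Out.ar(out, (wet * amp).dup);
}).add;
)

Synth(\resonate, [\bufnum, ~src]);
```

In her pieces the analysis data evidently shape more than filtering, such as form and pitch material, so this is only one possible reading of the technique.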

Why 9.12?

It is a date connected to my mother, to whom I have dedicated the entire dissertation. My mother died on December 9, 2012, I was born on December 9th, and she was married on December 9th. A lot of what I have done in my doctorate is about 9.12, in different ways. Of the 90 minutes of music I am submitting now, much is influenced by the numbers 9 and 12, or by the date in one way or another, for example as time intervals, quantities or other musical parameters.

You also have a background as a programmer. How did you get on that track?

After taking all the courses at EMS, I went to City University in London and started a master's degree in 1997, but I did not get along with the supervisor, went home again and did not make music for seven years. I only meant to take a break, and I began educating myself in IT, partly because I was incredibly tired of being constantly broke. I worked as a programmer for ten years, finishing in 2009. Programming was quite fun, but I grew tired of it a while before I ended that career: more and more time went to the same tasks over and over again, and then it was not fun anymore.

How do you connect programming with composing music?

Through SuperCollider, which is one of my main tools. But it is quite different from Java, so it was not an easy switch.

Do you see any similarities between working with programming and composition, in general?

Yes, I do. I remember when I started in IT and thought it was incredibly nice to let go of the hardest part, all the decision-making. In programming, either it works or it does not. That is not the case with music, so the pain of deciding which way to go is greater. Beyond that, I think quite similarly in both cases: “to get this result, what can I do?”

Can programming and composition merge into one?

Both yes and no. In composition, the sound is what matters. There is a lot of problem solving along the way, just as in programming, but in addition you use your aesthetic sense and your hearing to judge what can be used and what should be rejected.

How do you usually use the room in your compositions?

I often divide it by frequency. I usually place the highest frequencies in the ceiling, and the register continues downwards from there. I rarely use panning, except when I was at ZKM. There I used Zirkonium, their open-source software that lets you draw paths for the sound. It is convenient and independent of the speaker setup: you just create a new layout that fits, and then you can use the composition you already have.
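
As a loose sketch of that frequency-by-height idea, the SuperCollider snippet below splits a mono source into three bands and sends them to hypothetical floor, mid-height and ceiling output channels. The crossover points and channel numbers are assumptions for illustration, not her settings, and the server would need to be booted with enough output channels (e.g. s.options.numOutputBusChannels = 6).

```supercollider
// Illustrative sketch only: route low, mid and high bands to speaker groups
// at different heights. Channel numbers and crossovers are invented.
(
SynthDef(\bandsByHeight, { |bufnum, floorOut = 0, midOut = 2, ceilOut = 4|
    var sig = PlayBuf.ar(1, bufnum, BufRateScale.kr(bufnum), doneAction: 2);
    var low = LPF.ar(sig, 400);                   // lowest register: floor speakers
    var mid = HPF.ar(LPF.ar(sig, 3000), 400);     // middle register: mid height
    var top = HPF.ar(sig, 3000);                  // highest register: ceiling
    Out.ar(floorOut, low.dup);
    Out.ar(midOut, mid.dup);
    Out.ar(ceilOut, top.dup);
}).add;
)
```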

What is the sound system like where the piece will be performed? Do you have to adapt what you are working on much to the technical setup?

I plan to perform this work at BeastFeast in Birmingham in 2018. There are two concert venues. One is the Bramall Dome Room, with approximately 20 speakers placed in a dome; the ceiling speakers are fixed, while the floor speakers are adapted to each concert, in varying numbers. The Elgar Concert Hall (ECH) is also used, a regular concert venue that has to be rigged with 100 of the house's speakers for each concert, a job that usually takes a couple of days, and a system that can be adapted to many different layouts, for example 8 channels, 7.1, 5.1 and diffusion. The speakers are of different types, for example tweeters and speakers facing the walls. I can probably take the composition and technical setup I make here and use it directly, and if I have to adapt anything, it is very easy to do.

What do you do when you’re not at Notam?

I moved back home from Birmingham to Stockholm last autumn, and I work a lot at EMS. I am spoiled: I am used to being able to simply walk over and work on my pieces in the studio, something I have realized is an incredible luxury. But everything changes; EMS has become so popular now that it can be difficult to get studio time.

What do you think of Oslo?

The pace is quieter here than in Stockholm. I would happily have moved here if the weather had not been just as bad as in Stockholm.