MuBu is a highly advanced toolkit for signal processing, analysis, machine learning and audio synthesis in Max. It is one of the most comprehensive audio processing tools available today, and can be used in everything from composition, sound design, live performance and improvisation to interactive installations and dance.
At this meetup we gather for inspiration, to show patches and to help each other. You do not need to be a MuBu expert, but familiarity with Max or other audio programming languages is an advantage.
All you need to bring is a laptop. We can help you install MuBu if you haven’t used it before.
Time: Wednesday, February 5, 2020, 18.00 to 22.00
Location: Notam, Sandakerveien 24D, building F3, Oslo, Norway
MuBu meetup is free.
When: 14th-16th of March
Application deadline: 25th of February
Application: Send an email to email@example.com
The goal of the three-day work session is for artists to document their own instrument, performance, installation and/or other interactive project.
The process is guided by Marije Baalman. Participants are invited to share their documentation and describe their process during the work session. In this way, participants can develop feedback and dialogue techniques that yield more in-depth insight into their own work and that of others.
The work session is developed in parallel with the writing of the book “Just a question of mapping – ins and outs of composing with realtime data” by Marije Baalman.
The target audience for the work session is experienced artists and designers who would like to reflect on their own practice and learn from others’ practice.
Participants are asked to have a project that uses interactive technologies. For example:
• an augmented acoustic instrument with sensors
• a digital instrument controlled with MIDI controllers, self-built interfaces and/or game controllers
• an interactive installation that uses sensors for interaction with the public
• an interactive dance setting where one or more dancers wear or are tracked by sensors that control sound, light and/or visuals
Each day we will work from 11:00 until 18:00 with a lunch break in the middle. The session will take place at NOTAM.
Day 1 – morning: Introduction & setup
- Participant and coach introductions.
- Introduction of each participant’s project.
- Introduction of Mapping my Mapping methods.
- Discussion of aims and concepts.
- Setting up instruments/installations to demonstrate.
Day 1 – afternoon: Ins and Outs & Physical description
- Gesture and output – what happens?
- Playing the instrument/installation in gestures and words.
- How are the different physical elements connected (cables, communication protocols, etc.)?
- How are the elements used?
Day 2: Computational process
- Tracing data/signal from input to output.
- What does the output that is controlled/steered/influenced by the data do?
Day 3 – morning: Writing documentation
- Clarify and structure the descriptions from previous days.
- Review, analyze and discuss the documentation.
Day 3 – afternoon: Discussion
- Discussion of all participants’ projects with Q&A.
- Review and discussion of practical and aesthetic issues that have arisen during the 3 days.
To participate, please send the following information to firstname.lastname@example.org before 25th of February 2020. Baalman will then make a selection of projects, aiming for a good diversity of projects and artists.
• Motivation for participating
• Which project you would like to document and analyze during the meeting
Please share a brief description and links to any documentation you already have of your instrument, e.g. a video online.
About Marije Baalman
Marije Baalman has built new musical interfaces since 2002 and has created various projects involving realtime data in the contexts of music, dance and installations. In addition, she has been developing the Sense/Stage wireless sensing platform since 2007, which is available through a webshop. She has held numerous workshops on realtime sensing and on mapping data from sensors to sound. Between 2011 and 2016 she worked at STEIM, developing hardware, firmware and software for artists. She was also involved in the Modality project, a toolkit for SuperCollider to access HID, MIDI and OSC controllers, and in the SuperCollider port to the Bela, an embedded platform for creating musical interfaces. She has written several articles about her artistic practice and research over the past years and contributed to three chapters in ‘The SuperCollider Book’, published by MIT Press in 2011.
Date: 24th of January, 5pm-7pm
In this workshop, Mads Kjeldgaard (composer, artist and Notam employee) will teach you the basics of the powerful and vast pattern library of SuperCollider.
We will cover algorithmic composition in SuperCollider and you will leave the workshop with a small algorithmic composition and an idea of how to write algorithmic music.
The subjects covered in the workshop are:
* What are patterns?
* Live coding patterns
* Value patterns
* Event patterns
* Tonality/scales in patterns
This workshop will be very hands-on, so please bring a laptop with SuperCollider and the sc3-plugins preinstalled (test if your setup works before the workshop, please) so that you can start writing algorithmic music immediately.
No experience with SuperCollider or programming in general is required to participate in this workshop.
Everyone is welcome!
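To get a feel for what “value patterns” are before the workshop: SuperCollider patterns are lazy streams of values that can be nested and combined. The sketch below is a rough Python analogy of that idea, not SuperCollider code; the names `seq` and `rand` are hypothetical stand-ins for pattern classes like Pseq and Prand.

```python
import itertools
import random

def seq(values, repeats=1):
    """Yield the values in order, a given number of times (cf. SC's Pseq)."""
    for _ in range(repeats):
        yield from values

def rand(values, n):
    """Yield n random choices from the values (cf. SC's Prand)."""
    for _ in range(n):
        yield random.choice(values)

# Patterns compose: one stream can be embedded in another.
melody = itertools.chain(seq([60, 62, 64], repeats=2), rand([67, 69], n=3))
notes = list(melody)
print(notes[:6])  # the deterministic seq part, then three random notes follow
```

In SuperCollider itself, an event pattern (such as Pbind) would pair streams like this with keys such as degree and dur to produce playable events.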
Diemo Schwarz: Advanced sound processing with IRCAM MuBu.
December 2nd – 6th, 11:00 – 17:00
Place: Notam in Oslo
Price: 5000 NOK for all academic employees in 50% positions or more, free for all others. The course is produced in collaboration with BEK – Bergen Center for Electronic Art.
Diemo Schwarz is one of the main developers behind MuBu, a highly advanced toolkit for signal processing, analysis, machine learning and sound synthesis in Max. MuBu is one of the most comprehensive audio processing tools available today, and can be used in anything from composition, sound design, live performance and improvisation to interactive installations and dance. The workshop will contain a mix of practical work, supervision and lectures.
Diemo Schwarz is one of the foremost international experts on real-time analysis and synthesis of sound, and is affiliated with IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris, France. Diemo Schwarz works both as a researcher and musician, combining improvised electronic music and his own gesture-controlled digital instrument.
The participants must have a good basic understanding of Max to attend this workshop. People with good skills in audio programming languages such as SuperCollider, Csound, Pd and CLM will also benefit from this workshop.
MuBu (for “multi-buffer”) is a set of Max modules for real-time and off-line multimodal signal processing (audio and movement), machine learning, and granular, concatenative or additive sound synthesis. Using the multimodal MuBu container, users can store, edit, and visualize different types of temporally synchronized channels: audio, spectra, sound descriptors, motion capture data, segmentation markers and MIDI scores. Simplified symbolic musical representations and parameters for synthesis and spatialization control can also be integrated.
MuBu integrates modules for interactive machine learning for recognition of sound or motion forms. MuBu also includes PiPo (Plugin Interface for Processing Objects) for signal analysis and processing.
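The container idea can be pictured as a set of time-aligned tracks sharing one timeline, from which a synchronized view can be read at any point in time. The following Python sketch is an illustration of that data model only; MuBu’s actual interface is a set of Max objects, and the class and method names here are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    name: str
    times: list   # timestamps in seconds, on a shared timeline
    frames: list  # one data frame (marker, vector, ...) per timestamp

@dataclass
class MultiBuffer:
    tracks: dict = field(default_factory=dict)

    def add_track(self, name, times, frames):
        assert len(times) == len(frames), "one frame per timestamp"
        self.tracks[name] = Track(name, list(times), list(frames))

    def at(self, t):
        """Return the most recent frame of every track at time t."""
        out = {}
        for name, tr in self.tracks.items():
            idx = max((i for i, ts in enumerate(tr.times) if ts <= t), default=None)
            out[name] = tr.frames[idx] if idx is not None else None
        return out

mb = MultiBuffer()
mb.add_track("markers", [0.0, 1.5], ["attack", "sustain"])
mb.add_track("mfcc", [0.0, 0.5, 1.0, 1.5], [[0.1], [0.2], [0.3], [0.4]])
print(mb.at(1.0))  # {'markers': 'attack', 'mfcc': [0.3]}
```

The point of the shared timeline is that audio, descriptors and markers recorded at different rates can still be queried together, which is what makes synchronized visualization and editing possible.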
MuBu is used in the areas of musical composition, sound design, live performance and improvisation, interactive installations, and dance. The workshop will be mainly devoted to presenting the technology and giving participants the chance to use it via “hands-on” sessions in order to acquire the skills necessary for creating compositions, installations, digital instruments and personalized creative tools.
More info on MuBu: http://forumnet.ircam.fr/product/mubu-en/
Schedule for the workshop
- introduction + examples
- basic concepts of MuBu (container, buffers, tracks, frames, matrices, editing)
- granular synthesis
- spatialised granular synthesis (multi-channel and ambisonics)
- spectral+additive analysis/synthesis
- sound analysis
- corpus-based concatenative synthesis: audio descriptors, catart
- machine learning with MuBu (classification, gesture recognition and following)
Diemo Schwarz is researcher–developer at the Sound–Music–Movement Interaction (ISMM) team at Ircam, working on sound analysis and interactive corpus-based concatenative synthesis in multiple research and musical projects at the intersection between computer science, music technology, and audio-visual creation. He is also a performer of improvised electronic music on his own gesture-controlled digital instrument based on CataRT.
Time: December 2nd – 6th, 11:00 – 17:00
Price: 5000,- NOK for all academic employees in 50% positions or more, free for all others.
Teacher: Diemo Schwarz
Number of seats: 12
Location: Notam in Oslo
For registration, send an email to email@example.com with name, residence and email address as well as information about your background knowledge and motivation to take the course. We will then send you detailed information about payment, start-up and further follow-up.
Knut Wiggen built EMS in Stockholm with new technology at the intersection between analog and digital methods. As part of this development, he constructed a composition software – MusicBox – which he verified by writing five short musical studies. On October 18, 18:00, a vinyl recording of these studies in stereo version will be released at NOTAM. The release is on the label O. Gudmundsen Minde, by Lars Mørch Finborud and Lasse Marhaug. Mastering by NOTAM’s Cato Langnes.
More about Knut Wiggen here: http://www.knut-wiggen.com/Texts/Rudi_OS_23.3.pdf
Annette Vande Gorne (1946) is a veteran of electroacoustic music. She studied composition with Pierre Schaeffer in Paris, following encounters with the acousmatic music of pioneers such as Pierre Henry and François Bayle. Her music has been performed across the world, and she is a specialist in the use of the acousmonium – a loudspeaker orchestra with more than 70 dissimilar loudspeakers. Vande Gorne is in Oslo in conjunction with nyMusikk’s micro festival TEIP, which focuses on the use of analog means as tools for work and expression.
Vande Gorne leads a workshop at NOTAM for everyone with an interest in learning more about composition techniques for analog tape. This is a unique opportunity to gain first-hand knowledge of the development of early electronic music. There is an upper limit of 10 participants.
Workshop cost: 300 NOK.
Time: Saturday, October 13, 12–16
Place: Notam, Sandakerveien 24D, building F3, 0473 Oslo
Vande Gorne’s music will be performed during the closing of the micro festival TEIP on Saturday, October 13.
A four-day micro festival about and with analog tape.
Today, electronic music consists of digital files that hold binary numbers, easily shared, sold or bought with a swipe. Before digital, however, electronic music was produced on and for large and slow machines that stored sound on reels of magnetic tape.
TEIP pays homage by presenting three pioneers in the field: Annea Lockwood, Annette Vande Gorne and Françoise Barrière. Lockwood was last performed in Norway in 1979, while Vande Gorne’s and Barrière’s music was performed by NICEM during the 1990s.
As a contrast to these works, TEIP presents recent music by Håvard Volden and Magnus Bugge. Throughout the festival, visitors can also see an installation by sound artist Atle Selnes Nielsen. TEIP is produced in its entirety in nyMusikk’s space in Platous gate 18.
The sound installation Electric rain (2018) employs a 96-channel sound system where each loudspeaker receives an individual signal. In order to fully exploit the affordances of a system of this magnitude, the sound material must also be comprehensive, and Flø uses field recordings, studio recordings and synthetic sound to create complex models of different types of rain.
The field recordings are loaded into computer memory, and sections from this buffer are selected in such a way that the same signal is never sent to more than one speaker. The selections change constantly, and listeners will never experience the same sound balance twice.
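The constraint that no two speakers play the same signal can be sketched as drawing distinct read positions from the shared buffer. The Python sketch below is an illustration of that idea under assumed parameters (buffer length, grain length, speaker count); it is not the installation’s actual code.

```python
import random

def pick_segments(buffer_len, seg_len, n_speakers, rng=random):
    """Choose one read offset per speaker so that no two speakers
    play the identical buffer segment."""
    # sample() draws without replacement, so all start offsets are distinct
    starts = rng.sample(range(buffer_len - seg_len), n_speakers)
    return {speaker: (s, s + seg_len) for speaker, s in enumerate(starts)}

# Example: 96 speakers reading 0.5 s grains from a 10 s buffer at 48 kHz
segments = pick_segments(buffer_len=480_000, seg_len=24_000, n_speakers=96)
assert len(set(segments.values())) == 96  # every speaker gets a unique segment
```

Re-running the selection at intervals gives the constantly shifting balance described above: each new draw assigns every speaker a fresh, unique slice of the recording.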
There are two types of point sources: studio recordings of single drops and synthetically generated sounds. Flø has made approximately two thousand recordings of single drops, where drops of different sizes fall on different materials, such as wood, steel, plastic, paper and fabric. These recordings are then combined into larger textures and structures using stochastic methods (controlled randomness).
A similar collection of singular drops has also been made with synthesized drops, where Flø has tried out different methods for controlling frequency, spectral balance, density and combination into structures. There is a body of research on how sound describes meteorological phenomena, and several models exist for simulating the acoustical and physical qualities of rain. Flø ended up basing parts of his work on an algorithm for rain synthesis developed by the musician Katsuhiro Chiba.
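One common way to think about such stochastic layering is as a Poisson-like process: a single density parameter controls how many drop onsets occur per second, moving the texture from drizzle to downpour. The sketch below illustrates that general approach under this assumption; it does not reproduce Chiba's algorithm or Flø's actual implementation.

```python
import random

def drop_onsets(duration, density, rng=random):
    """Generate drop onset times (in seconds) with exponentially
    distributed gaps, i.e. a Poisson process with an average of
    `density` drops per second."""
    t, onsets = 0.0, []
    while True:
        t += rng.expovariate(density)  # mean gap is 1/density seconds
        if t >= duration:
            return onsets
        onsets.append(t)

drizzle = drop_onsets(duration=10.0, density=2.0)     # sparse texture
downpour = drop_onsets(duration=10.0, density=200.0)  # dense texture
```

Each onset time would then trigger one recorded or synthesized drop; varying the density over time is what sweeps the texture between light rain and a storm.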
The programming has been executed in the high-level Max environment, and Ircam’s MuBu has been used for analysis of the sound recordings while Ircam’s Spat has been used for working with the spatial aspects. The sounds were distributed in the space according to principles about variation in placement, density and movement. The field recordings were used for creating diffuse sound fields, while the singular sounds were used as point sources with a precise placement in the space.
The loudspeakers are of Flø’s own construction, built in collaboration with NOTAM’s Thom Johansen and Hans Wilmers, as is the amplifier. The speaker elements are coaxial, which means that the treble and mid/bass elements are placed on the same axis, and not vertically displaced as in most studio and living room speakers. The result is that sound radiates equally in all directions, and the round cabinet that Flø designed has the same effect. Embedded in the cabinet is also a specially constructed filter that separates the signal to the treble element from the signal sent to the mid/bass element. Visually, the speaker does not demand much attention, and this is deliberate. Custom circuit boards for the amplifier were designed in collaboration with NOTAM’s Hans Wilmers and Thom Johansen, and as in several of Flø’s other installations, the technical aspects are executed with great attention to detail.
The installation is generative in the sense that the processes that are set in motion develop the material and its presentation in realtime. In the concert version Black Rain (Flø and van der Loo 2018), the musicians controlled the selections of drop sounds that were used, which parameter values were used for the synthetic sounds, how the sound density wandered through the concert space and so on. In general, the soundscape developed towards a synthetic character during the concert, and the stochastic element was maintained by the use of two analog feedback-loops, crafted by Rob Hordijk.
From a historical perspective, this work joins a long cultural historical tradition in literature, visual art, film and music where rain and its significance is modeled and brought forward to be experienced as material for art.
Do you want to learn SuperCollider or just become an even better sound hacker?
Sure you do – and there is simply no better way of doing that than by hanging out with like minded people.
For exactly this reason, Notam (the Norwegian Center for Technology in Music and the Arts) will be hosting a free, cozy and informal SuperCollider meetup every second Monday of the month in Oslo.
Everyone, regardless of coding experience or background, is welcome to join us for these sessions of sonic shenanigans, code sharing, performances, workshops, talks, tutorials, debugging, or helping each other get started or get further with SuperCollider.
All you have to bring is a laptop. (We can even help you install and set up SuperCollider if you’ve never used it before or barely know what to use it for.)
SuperCollider is an open source framework for audio synthesis and algorithmic composition. It is one of the most popular and widely used programming languages for sound work and is available for free on all platforms – it can even be embedded on microcomputers like the Raspberry Pi and Bela.
SuperCollider is useful for many things: algorithmic composition, generative music, all things computer music, livecoding performances, work with microcontrollers and sensors, installation work, multichannel work, DSP, research, ambisonics, or simply sound hacking.
About Skandinavisk SuperCollider Klubb
Dates for the fall/winter season of 2018/2019: 10/9, 8/10, 12/11, 10/12, 14/1
All meetups are free and start at 19.00.
Address: Sandakerveien 24D, building F3, 0473 Oslo, Norway
If you have any questions about the SuperCollider meetups, contact Mads Kjeldgaard at firstname.lastname@example.org
Simon Løffler’s work Songbirds was performed during Ultima 2018, and the songbirds in the piece were made by Hans Wilmers at NOTAM. They are small combinations of fine mechanics and electronics that sing and snap their beaks as the piece develops. An unusual aspect of this project, seen from NOTAM’s side, is that the sounds are produced mechanically and not by electroacoustic means, although the control is digital. Making the birds snap their beaks and produce sound with the bellows seen in the pictures, without creating too much noise, has been demanding, and the solution that was chosen shows the quality of the engineering competence of NOTAM’s staff.
The sound generation is based on a mechanical songbird where parts of the mechanics have been replaced by servo motors and solenoids. In order to avoid constant noise from the bellows, a quick-start electric motor starts the instant the sound should ring, and this requires tight control of the flute and the motor that moves the bellows. In addition to the flute sounds, the clicking sound from the beak was used as a percussive element in Løffler’s work, while the head movements of the birds were used scenically, all in tight coordination with the musicians from asamisimasa.
Rain as a sounding, climatic phenomenon is Asbjørn Blokkum Flø’s point of departure for this site-specific sound installation. Water is essential for all life on earth, and rain is central in the circulation of water. Rain results from large weather systems, and is also a complex sound phenomenon.
The sound of rain might seem quite arbitrary, but below the surface there are complex sound phenomena; the distribution of the drops’ positions, their size and their number are some of the elements that influence the sound of rain. In the installation, this timbral dimension is investigated by way of a large number of sound elements. One hundred individually controlled loudspeakers of Flø’s own construction fill the gallery room at Atelier Nord ANX and envelop the listener in a three-dimensional sound field. This sound field is further colored by the acoustic properties of the space as well as by the trajectories of the visitors exploring it. Custom-made software continually adjusts the variables of the rain sound, such as drop size, number and duration. This influences how the rain is heard – from light drizzle to tropical storm.
The engineering has been done at NOTAM by Asbjørn Blokkum Flø, Thom Johansen and Hans Wilmers.
Location: Atelier Nord ANX, Olaf Ryes plass 2 (entrance Sofienberggata)
Dates: Sept. 7–30, 2018
Opening hours: 13–18 during the Ultima festival (Sept. 13–22). Thursday and Friday 15–18, Saturday and Sunday 13–18 during the rest of the exhibition period.
On September 14 at 19:00 there is a concert with Asbjørn Blokkum Flø and Ernst van der Loo.
Electric rain has been made with support from Billedkunstnernes Vederlagsfond, Komponistenes Vederlagsfond and Norsk kulturfond.