In October 2025, Notam invites you to a two-day introduction to Dicy2.
If you want to start working with machine learning and artificial intelligence for music, Dicy2 is a great place to start. Machine learning for audio and music can be difficult for beginners, and the idea behind the workshop is to give participants an easy way in.
Individual guidance will be provided along the way, and participants who bring their own questions and ideas can explore them together with the teacher. The workshop will also equip you to continue your own studies afterwards and become an experienced user of Dicy2.
__________________________________________________________________________________________________
Goals of the workshop
In this hands-on workshop, we will guide participants through the design of their own interactive environments. By the end of the course, you will have the technical and musical foundation to build and perform with generative systems using Dicy2.
The workshop requires no prior knowledge, but basic familiarity with audio software is an advantage, as is some experience with Max and Ableton Live.
The workshop provides training in Dicy2 for Max and Ableton Live. Participants will gain an understanding of working with interactive systems, which will be useful for people working with, for example, improvisation, composition, acoustic music, electronic music and sound art, regardless of genre. Dicy2 can also be used to create interactive sound systems for dance, theatre, sound installations and media art.
Participants are encouraged to bring their own sound database to personalise their work. These sounds feed the program's memory, enabling real-time musical interaction based on live sound analysis. Any type of audio material can be used, whether personal recordings or external samples, but it should ideally be relatively homogeneous (e.g. monophonic pieces ("stems") or isolated instruments) to ensure optimal agent behaviour in the program.
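As an illustration of what "relatively homogeneous" can mean in practice, here is a small, hypothetical Python sketch (not part of Dicy2, and not required for the workshop) that scans a folder of uncompressed .wav files and reports each file's channel count, sample rate and duration, so you can spot outliers, such as a stereo mix among mono stems or a file at a different sample rate, before the workshop:

```python
# Quick sanity check of a personal sound database.
# Uses only the Python standard library's wave module,
# so it handles uncompressed .wav files only.
import wave
from pathlib import Path

def describe_wav(path):
    """Return (channels, sample_rate, duration_seconds) for a .wav file."""
    with wave.open(str(path), "rb") as w:
        frames = w.getnframes()
        rate = w.getframerate()
        return w.getnchannels(), rate, frames / rate

def scan_database(folder):
    """Map each .wav filename in `folder` to its basic properties,
    so mismatched files are easy to spot at a glance."""
    return {
        path.name: describe_wav(path)
        for path in sorted(Path(folder).glob("*.wav"))
    }
```

For example, if `scan_database("my_stems")` shows one file as `(2, 48000, …)` while the rest are `(1, 44100, …)`, that stereo 48 kHz file is a candidate for conversion or removal.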
__________________________________________________________________________________________________
About Dicy2
Dicy2 is both a Max library and an Ableton Live plugin for composing with interactive agents. Based on machine learning models, these agents generate musical sequences in real time and are designed to support both structured composition processes and autonomous improvisation.
The tool is the result of years of collaborative research and production with musicians and artists such as Rémi Fox, Steve Lehman, Orchestre National de Jazz, Alexandros Markeas, Pascal Dusapin, Marta Gentilucci, Rodolphe Burger and others. A dedicated development effort in 2022 led to the release of a stable, easy-to-use version of Dicy2 for both Max and Ableton Live environments.
Many software tools and services today use machine learning and artificial intelligence to generate music, but while most of them only offer ready-made music based on simple text instructions, Dicy2 is designed to act as a creative tool for musicians, composers and sound designers.
More information here:
https://forum.ircam.fr/projects/detail/dicy2/
https://forum.ircam.fr/projects/detail/dicy2-for-live/
Videos:
IRCAM Tutorials / Dicy2: Introduction
Designing Dicy2 music generation tools through artistic collaborations
__________________________________________________________________________________________________
About the teacher
Jérôme Nika (b. 1988) works with generative technologies and artificial intelligence for creative human-machine interaction.
As a researcher in the Sound Music Movement Interaction team at Ircam, Jérôme Nika focuses on how to model, learn and navigate an "artificial musical memory" in creative contexts. Rather than seeking to replace humans with artificial intelligence, this research aims to develop new creative practices.
Nika graduated from the French Grandes Écoles Télécom ParisTech and ENSTA ParisTech in 2012. In addition, he studied acoustics, signal processing and computer science applied to music (ATIAM Master, Sorbonne Université) and composition. He specialised in the application of computer science and signal processing to digital creation and music through a PhD at Ircam (Prize for Young Researchers in Science and Music, 2015; Prize for Young Researchers awarded by the French Association for Computer Music, 2016), and then as a postdoc.
Between 2018 and 2020, he worked as a freelance computer music designer and musician and was an invited researcher at Le Fresnoy - Studio National des Arts Contemporains. Since 2020, he has been working as a researcher at Ircam, and is involved in a number of artistic productions.
Jérôme Nika is the developer of Dicy2.
Read more about Jérôme Nika here:
__________________________________________________________________________________________________
Course structure
Day 1
Dicy2, introduction
Machine learning and AI
Dicy2 for Live
Dicy2 for Max
Design of interactive environments
Use of the audio database
Day 2
Real-time musical interaction based on live sound analysis
Dicy2 concepts
Audio interactions
Agents and scenarios
Feeding strategies
Further studies after the workshop
__________________________________________________________________________________________________
Practical information
Deadline for registration: 13 October 2025
Time: 27 - 28 October 2025, 11:00-17:00
Price: 1300 kroner
Language: English
Teacher: Jérôme Nika
Location: Notam, Oslo. This is a physical event and will not be posted online.
Number of places: 12
There are a limited number of places. Your place in the workshop is confirmed upon payment.