Composing and performing with Dicy2, with Jérôme Nika

In October 2025, Notam invites you to a two-day introduction to Dicy2.

If you want to start working with machine learning and artificial intelligence for music, Dicy2 may be right for you. Machine learning for sound and music can be difficult for beginners, and this workshop is designed to give participants an accessible way in.

Individual guidance will be provided along the way, and participants' own questions and ideas will be explored together. The workshop will also equip you to continue studying on your own afterwards, so that you can become an experienced Dicy2 user.
__________________________________________________________________________________________________

Goals of the workshop
This hands-on workshop will guide participants through the design of their own interactive environments. By the end of the course, they will have the technical and musical grounding to independently build and perform with generative systems using Dicy2.

The workshop requires no prior knowledge, but basic familiarity with audio software is an advantage, and some experience with Max and Ableton Live is helpful.

The workshop provides training in Dicy2 for Max and Ableton Live. Participants gain an understanding of working with interactive systems, which is useful for anyone working with, for example, improvisation, composition, acoustic music, electronic music, or sound art, regardless of genre. Dicy2 can also be used to create interactive sound systems for dance, theater, sound installations, and media art.

Participants are encouraged to bring their own sound database to personalize the experience. These sounds will feed the agent’s memory, enabling real-time musical interaction based on live audio analysis. Any type of audio material can be used — personal recordings or external samples — but ideally they should be relatively homogeneous (e.g. monophonic stems or isolated instruments) to ensure optimal agent behavior.
__________________________________________________________________________________________________

About Dicy2
Dicy2 is both a Max library and an Ableton Live plugin for composing with interactive agents. Based on machine learning models, these agents generate musical sequences in real time and are designed to support both structured compositional processes and autonomous improvisation.

The tool has emerged from several years of collaborative research and production with artists such as Rémi Fox, Steve Lehman, the Orchestre National de Jazz, Alexandros Markeas, Pascal Dusapin, Marta Gentilucci, Rodolphe Burger, and others. A dedicated development effort in 2022 led to the release of a stable, artist-friendly version of Dicy2 for both Max and Live environments.

Many pieces of software and services today use machine learning and artificial intelligence to generate music. But where most other solutions only deliver ready-made music from simple text prompts, Dicy2 is designed to function as a creative tool for musicians, composers, and sound designers.

More information here:
https://forum.ircam.fr/projects/detail/dicy2/
https://forum.ircam.fr/projects/detail/dicy2-for-live/

Videos:
IRCAM Tutorials / Dicy2: Introduction

Designing Dicy2 music generation tools through artistic collaborations

__________________________________________________________________________________________________

About the teacher
Jérôme Nika (b. 1988) works with generative technologies and artificial intelligence for human-machine creative interactions.

As a researcher in the Sound Music Movement Interaction team at Ircam, Jérôme Nika's work focuses on how to model, learn, and navigate an "artificial musical memory" in creative contexts. In opposition to a "replacement approach" in which AI would substitute for humans, this research aims at designing novel creative practices.

He graduated from the French Grandes Écoles Télécom ParisTech and ENSTA ParisTech in 2012. In addition, he studied acoustics, signal processing and computer science applied to music (ATIAM Master, Sorbonne Université) and composition. He specialised in the applications of computer science and signal processing to digital creation and music through a PhD at Ircam (Young Researcher Prize in Science and Music, 2015; Young Researcher Prize awarded by the French Association of Computer Music, 2016), and then as a postdoctoral researcher.

Between 2018 and 2020, he worked as a freelance computer music designer and musician and was an invited researcher at Le Fresnoy – Studio National des Arts Contemporains. Since 2020, he has been a permanent researcher at Ircam and is involved in numerous artistic productions.

Jérôme Nika is the developer of Dicy2.

Read more about Jérôme Nika here:


__________________________________________________________________________________________________

Course structure

Day 1
Dicy2, introduction
Machine learning and AI
Dicy2 for Live
Dicy2 for Max
Design of interactive environments
Use of the audio database

Day 2
Real-time musical interaction based on live audio analysis
Dicy2 concepts
Audio interactions
Agents and scenarios
Performance strategies
Further studies after the workshop
__________________________________________________________________________________________________

Practical information
Deadline for registration: October 13th, 2025
Time: October 27–28, 2025, 11:00–17:00
Price: 1300 kroner
Language: English
Teacher: Jérôme Nika
Location: Notam, Oslo. This is a physical event and will not be posted online.
Number of places: 12

There are a limited number of places. Your place in the workshop is confirmed upon payment.
