Organisers: Charles P Martin, Fabio Morreale, Benedikte Wallace, Hugo Scurto
The use of machine learning and AI in everyday applications has taken off in recent years. You can now buy a refrigerator with “AI” but, despite much media interest in “AI composers”, not a musical instrument (or perhaps not a good one). This workshop seeks to develop a community of NIME researchers and practitioners to analyse the roles that computational intelligence already plays in music technology and where it may play a role in future. We aim to consolidate current ML-related thought in NIME and to develop a research network focused on future work in ML-enhanced interfaces for musical performance. Notably, this objective diverges from previous NIME research, which has focused on technical implementations; instead, we aim to offer a forum for academic discussion of critical and theoretical perspectives. Our workshop will be motivated by four themes and a number of hard questions related to musical AI/ML:
- What are the roles of AI/ML in NIMEs?
- What kind of NIMEs and music technology does AI/ML afford?
- How can AI/ML affect musical practices?
- What distinguishes current/new applications of AI/ML to NIMEs from those already established?
- Can AI/ML-enabled instruments produce unique music?
- What are the potential benefits and drawbacks of using AI/ML in NIMEs?
- Is musical AI/ML a product of our techno-euphoric climate or will its effects be long-lasting?
- How might AI/ML enhance, or diminish, performers' creativity?
- What role could musical AI/ML have in education?
- What concepts other than computational creativity could drive the design of ML/AI in NIMEs?
- How can users be better included in the design process for musical AI/ML?
- What evaluation methods could be developed for AI/ML with relevance to NIMEs?
- How can practice-based artistic research complement the technical progress of AI/ML in NIMEs?
- What kinds of musical bias might the data used to train ML models encapsulate?
- How should we cope with legal issues related to data ownership in ML-based NIMEs?
- How should we cope with the environmental issues related to the training of AI/ML for NIMEs?
The workshop will involve short talks from participants to frame their research topics and/or musical practices. Abstracts for presentation will be selected by the organisers through a short round of peer review. Non-presenting participation will also be welcome. The bulk of the workshop will be focused on community building and co-design activities. The outcomes of the workshop will be used to motivate an edited volume or special issue on the use of ML/AI in NIME interfaces and performance practices.
- Introduction and provocations from organisers (30m)
- Short talks from participants introducing their interests (60m; 5m each)
- Division into theme groups: what are the main ideas behind each theme? (60m)
- Design challenge: towards a framework for AI/ML in NIMEs (60m)
This workshop will be an academic forum; as such, our technical requirements are light.
- Classroom or Seminar room
- Projector / Speakers
- HDMI and power for laptops to front of room
- WiFi and power to stream presentations, where appropriate, and to enable virtual participation and increase the potential for inclusion.
We suggest a half-day workshop.
We intend that this workshop will lead to a special journal issue or edited volume on “Critical Perspectives on AI/ML in Musical Interfaces”. The workshop proceedings (accepted abstracts) will be published on a small website, as has been done for other workshop gatherings in this area (e.g., the NeurIPS Machine Learning for Creativity and Design workshops). Abstract submission and reviewing will be arranged via EasyChair.
Charles Martin is a Lecturer at the Australian National University Research School of Computer Science and was previously a postdoctoral research fellow in the University of Oslo's Robotics and Intelligent Systems Group. His PhD, on designing digital musical instruments to support ensemble improvisation, was awarded by the Australian National University in 2016, and he also holds degrees in mathematics and music. Charles is a researcher in music technology, machine learning, and human-computer interaction. His research focuses on how intelligent systems can be deployed and evaluated in complex real-world situations, particularly in the creative arts. His musical works as a percussionist and computer musician have been performed throughout Australia, Europe, and the USA and presented at international conferences on music technology and percussion.
Fabio Morreale is a Lecturer at the University of Auckland, School of Music. He was previously a postdoctoral research fellow at Queen Mary University of London (Augmented Instruments Lab) and at the University of Trento (interAction Lab). In 2015, he was awarded a PhD in Human-Computer Interaction from the University of Trento (Italy), and he holds a master's degree in Computer Science. His research aims to critically assess the impact of technology on music creation, learning, and consumption, and to design counterpowers and alternative futures.
Benedikte Wallace is a Doctoral Research Fellow at the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion at the University of Oslo. With a background in music recording and a master's degree in Informatics, she works at the intersection of art and science. Her current research centres on sound-motion mappings using 3D motion capture, generative machine learning, and computational creativity.
Hugo Scurto is a researcher, musician, and designer. He is currently a postdoctoral fellow at EnsadLab, the arts and design research lab of the École nationale supérieure des Arts Décoratifs, Paris. He recently completed a PhD in Computer Music at IRCAM, entitled “Designing With Machine Learning for Interactive Music Dispositifs”, under the supervision of Frédéric Bevilacqua. Before that, he spent a year as a visiting researcher at Goldsmiths, University of London, working with Rebecca Fiebrink on human-centred machine learning applied to music. His research combines qualitative, quantitative, and practice-based methods to study, design, and inquire into machine learning technology situated in human musical practices. His practice seeks to create new forms of interactive learning interfaces that critically take into account the creative, cultural, and inclusive dimensions of human musical expression.