# Meldy - A mood-based melody generator

Meldy is a simple grammar-based melody generator driven by a user-provided mood expressed in the valence-arousal plane. It was developed as a project for the "Advanced Coding Tools and Methodologies" and "Computer Music: Representations and Models" courses of the MSc in Music and Acoustic Engineering at Politecnico di Milano.

## Links

### Demo Video

[demo video thumbnail]

## Project Description

[overview]

The main focus of this project is on the melody generation step.

### Mood Selection

The user is expected to provide a mood for generating the melody. We adopted the dimensional approach from Music Emotion Recognition [9] to describe moods in a two-dimensional plane, with valence (i.e. the positivity of the mood) on the x-axis and arousal (i.e. the intensity of the mood) on the y-axis.
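
For concreteness, a mood can be represented as a plain (valence, arousal) pair normalized to [-1, 1]. The following minimal Python sketch is illustrative only; its names are not taken from the actual codebase:

```python
from dataclasses import dataclass

def clamp(x: float, lo: float = -1.0, hi: float = 1.0) -> float:
    """Keep a coordinate inside the normalized [-1, 1] range."""
    return max(lo, min(hi, x))

@dataclass(frozen=True)
class Mood:
    """A point in the valence-arousal plane:
    valence = positivity (x-axis), arousal = intensity (y-axis)."""
    valence: float
    arousal: float

# e.g. a calm, happy mood: clearly positive, but not intense
calm_happy = Mood(valence=clamp(0.7), arousal=clamp(-0.4))
```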

[mood picker]

The picker is realized using p5.js [4]; the relevant code can be found at `/front-end/src/p5/mood_picker_sketch.js`.

### Melody Generation

The melody is generated from the valence and arousal values and encoded as a MusicXML score. MusicXML is better suited than other formats (e.g. MIDI) because it is meant for notation representation (rather than performance representation) and can effectively carry harmonic information that would be lost in other formats (e.g. enharmonic spelling, key signature, and scale mode [11]).
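
As a quick illustration of what MIDI loses, the snippet below (using music21, the same library employed for generation; see below) shows that F-sharp 4 and G-flat 4 collapse onto a single MIDI pitch number, while their distinct spellings survive in MusicXML:

```python
from music21 import note

f_sharp = note.Note('F#4')  # F sharp
g_flat = note.Note('G-4')   # G flat (music21 spells flats with '-')

# Both collapse onto MIDI pitch 66: the enharmonic spelling is lost in MIDI...
assert f_sharp.pitch.midi == g_flat.pitch.midi == 66

# ...while MusicXML stores the step and the alteration separately.
print(f_sharp.pitch.step, f_sharp.pitch.accidental.name)  # F sharp
print(g_flat.pitch.step, g_flat.pitch.accidental.name)    # G flat
```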

First, the mood is mapped to three main music features: the scale mode, the tempo, and the central octave of the melody, as shown in the following figure. This mapping is inspired by MER studies on music features [9].

[mood mapping]
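
A minimal sketch of such a mapping is shown below; the thresholds and values are illustrative assumptions, not the project's actual rules:

```python
def map_mood(valence: float, arousal: float):
    """Map a (valence, arousal) point in [-1, 1] x [-1, 1] to
    (scale mode, tempo, central octave). Illustrative thresholds only;
    the real mapping uses several modes and "snap" points."""
    mode = 'major' if valence >= 0 else 'minor'
    tempo_bpm = int(80 + 60 * arousal)          # 20..140 BPM over [-1, 1]
    central_octave = 5 if arousal > 0.5 else 4  # higher register when intense
    return mode, tempo_bpm, central_octave

print(map_mood(0.7, -0.4))  # ('major', 56, 4): a calm, happy melody
```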

Then, a grammar method [8] is used to generate the relative degrees (within a one-octave range) and the duration of each note of the melody. The current version uses a hand-crafted Markov chain that we built empirically by trial and error. Different rulesets are defined for different "snap" points of both valence and arousal; these can be found in `/back-end/data/grades.yml` and `/back-end/data/durations.yml`. An example of a degree chain is shown in the following picture, assuming a movable-DO interpretation (i.e. DO = 1st degree):

[grammar]
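
The sampling step itself can be sketched as a first-order Markov walk over scale degrees. The transition table below is a made-up example in the spirit of `grades.yml` (the same idea applies to `durations.yml`), not the project's actual ruleset:

```python
import random

# Hypothetical transition table: degree -> {next degree: weight}.
# The real rulesets live in /back-end/data/grades.yml and durations.yml.
DEGREE_CHAIN = {
    1: {1: 0.1, 2: 0.3, 3: 0.3, 5: 0.3},
    2: {1: 0.4, 3: 0.4, 5: 0.2},
    3: {2: 0.3, 4: 0.3, 5: 0.4},
    4: {3: 0.5, 5: 0.5},
    5: {1: 0.4, 4: 0.2, 6: 0.4},
    6: {5: 0.6, 7: 0.4},
    7: {1: 0.8, 6: 0.2},
}

def generate_degrees(length, start=1, seed=None):
    """Sample a sequence of scale degrees by walking the chain."""
    rng = random.Random(seed)
    degrees = [start]
    for _ in range(length - 1):
        options = DEGREE_CHAIN[degrees[-1]]
        degrees.append(rng.choices(list(options), weights=list(options.values()))[0])
    return degrees

print(generate_degrees(8, seed=42))  # a reproducible 8-degree walk from the tonic
```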

More details on the rationale of the melody generation can be found in `/docs/melody-generation.md`.

We employed the music21 [1] library for the actual generation of the melody; the relevant code can be found in the `MelodyGenerator` class in `/back-end/src/melody.py`. Since music21 is a Python library, this step runs on a separate back-end, which communicates with the web app as follows:

[client-server communication]
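
A minimal sketch of this kind of endpoint is shown below, with hypothetical helper names (the project's actual code lives in `/back-end/src/melody.py`): the client sends valence and arousal as query parameters and receives a MusicXML document back.

```python
from flask import Flask, Response, request
from music21 import key, note, stream, tempo
from music21.musicxml.m21ToXml import GeneralObjectExporter

app = Flask(__name__)

def build_melody(valence: float, arousal: float) -> stream.Stream:
    """Toy stand-in for the project's MelodyGenerator class."""
    s = stream.Stream()
    s.append(tempo.MetronomeMark(number=int(80 + 60 * arousal)))
    s.append(key.Key('C' if valence >= 0 else 'c'))  # uppercase = major, lowercase = minor
    for name in ('C4', 'D4', 'E4', 'G4'):            # fixed pitches for brevity
        s.append(note.Note(name, quarterLength=1.0))
    return s

@app.route('/melody')
def melody():
    valence = float(request.args.get('valence', 0))
    arousal = float(request.args.get('arousal', 0))
    xml_bytes = GeneralObjectExporter().parse(build_melody(valence, arousal))
    return Response(xml_bytes, mimetype='application/vnd.recordare.musicxml+xml')
```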

### Score Display

[score display]

The generated MusicXML is finally rendered on the web page thanks to OpenSheetMusicDisplay [2]. From this view, it's possible to play back the generated score, download it to open in a notation program (e.g. MuseScore), or go back to the mood selection view.

Navigation between views is achieved through DOM replacement of HTML fragments. The relevant code can be found in `/front-end/src/navigation.js` and inside `/front-end/src/views/`.

## Resources

  1. music21: a toolkit for computer-aided musicology.
  2. OpenSheetMusicDisplay: renders MusicXML sheet music in the browser.
  3. OSMD Audio Player: browser based audio player for MusicXML scores.
  4. p5.js: JavaScript port of Processing.
  5. webpack: a bundler for JavaScript and friends.
  6. Flask: Python micro framework for building web applications.
  7. Pipenv: Python development workflow for humans.

## Bibliography

  8. McCormack, J. (1996). Grammar based music composition. Complex Systems, 96, 321-336.
  9. Yang, Y. H., & Chen, H. H. (2011). Music Emotion Recognition. CRC Press.
  10. Cuthbert, M., Ariza, C., Hogue, B., & Oberholtzer, J. W. (2020). Music21 Documentation.
  11. Sarti, A. (2019). Computer Music: Representation and Models. Course material of the MSc in Music and Acoustic Engineering, Politecnico di Milano.

© 2020 Matteo Bernardini & Yilin Zhu