AI and Automatic Music Generation for Mindfulness

Research output: Contribution to conference › Paper › peer-review

Abstract

This paper presents an architecture for the creation of emotionally congruent music using machine-learning-aided sound synthesis. Our system generates a small corpus of music using Hidden Markov Models, and we label the pieces with emotional tags using data elicited from questionnaires. This produces a corpus of labelled music underpinned by perceptual evaluations. We then analyse participants’ galvanic skin response (GSR) while they listen to the generated pieces, alongside the emotions they describe in a questionnaire completed after listening. These analyses reveal a direct correlation between the calmness/scariness of a musical piece, the listeners’ GSR readings, and the emotions they report feeling. From these results, we will be able to estimate an emotional state from biofeedback and use it as a control signal for a machine-learning algorithm that generates new musical structures according to a perceptually informed musical-feature similarity model. Our case study suggests applications in gaming, automated soundtrack generation, and mindfulness.
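To give a rough sense of how a Hidden Markov Model can produce short musical sequences of the kind described above, the following Python sketch samples pitches from a two-state chain. It is an illustration only: the state names, pitch set, and probabilities are invented for demonstration and are not the model or parameters reported in the paper.

```python
import numpy as np

np.random.seed(7)

# Hidden states (a coarse "mood" per step) and observable MIDI pitches.
# All values are hypothetical, chosen only to illustrate the technique.
states = ["calm", "tense"]
pitches = [60, 62, 64, 65, 67, 69]  # C major scale degrees as MIDI notes

# Transition probabilities between hidden states.
transition = np.array([
    [0.9, 0.1],   # calm -> calm, calm -> tense
    [0.3, 0.7],   # tense -> calm, tense -> tense
])

# Emission probabilities: each state favours a different pitch distribution.
emission = np.array([
    [0.30, 0.25, 0.20, 0.10, 0.10, 0.05],  # "calm" favours lower degrees
    [0.05, 0.10, 0.10, 0.20, 0.25, 0.30],  # "tense" favours higher degrees
])

def generate_melody(length=16, start_state=0):
    """Sample a pitch sequence by walking the hidden-state chain."""
    state = start_state
    melody = []
    for _ in range(length):
        pitch = np.random.choice(pitches, p=emission[state])
        melody.append(int(pitch))
        state = np.random.choice(len(states), p=transition[state])
    return melody

print(generate_melody())
```

In a biofeedback-driven setting such as the one the paper proposes, an estimated emotional state could, in principle, steer which transition and emission tables are used; the sketch above only shows the generative step.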
Original language: English
Number of pages: 10
Publication status: Published - 17 Mar 2019
Event: 2019 AES International Conference on Immersive and Interactive Audio: Creating the Next Dimension of Sound Experience - York, United Kingdom
Duration: 27 Mar 2019 - 29 Mar 2019

Conference

Conference: 2019 AES International Conference on Immersive and Interactive Audio: Creating the Next Dimension of Sound Experience
Country/Territory: United Kingdom
City: York
Period: 27/03/19 - 29/03/19

Bibliographical note

© 2019 Audio Engineering Society. This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.