AI and Automatic Music Generation for Mindfulness

Research output: Contribution to conference › Paper › peer-review

Conference: 2019 AES International Conference on Immersive and Interactive Audio: Creating the Next Dimension of Sound Experience
Country: United Kingdom
Conference date(s): 27/03/19 – 29/03/19

Publication details

Date: Accepted/In press - 6 Jan 2019
Date: Published (current) - 17 Mar 2019
Number of pages: 10
Original language: English


This paper presents an architecture for the creation of emotionally congruent music using machine-learning-aided sound synthesis. Our system generates a small corpus of music using Hidden Markov Models; we label the pieces with emotional tags using data elicited from questionnaires, producing a corpus of labelled music underpinned by perceptual evaluations. We then analyse participants' galvanic skin response (GSR) while they listen to the generated pieces, alongside the emotions they describe in a post-listening questionnaire. These analyses reveal a direct correlation between the calmness/scariness of a musical piece, users' GSR readings, and the emotions they report feeling. Building on this, we will be able to estimate an emotional state from biofeedback and use it as a control signal for a machine-learning algorithm that generates new musical structures according to a perceptually informed musical feature similarity model. Our case study suggests various applications, including gaming, automated soundtrack generation, and mindfulness.
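To illustrate the generative step described above, the following sketch samples a melody from a first-order Markov chain over pitch classes — a deliberate simplification of the paper's Hidden Markov Models, with a hypothetical hand-written transition table rather than the authors' trained model:

```python
import random

# Hypothetical transition probabilities between pitch classes
# (illustrative only -- not the corpus-trained model from the paper).
TRANSITIONS = {
    "C": {"C": 0.1, "E": 0.5, "G": 0.4},
    "E": {"C": 0.3, "E": 0.1, "G": 0.6},
    "G": {"C": 0.6, "E": 0.3, "G": 0.1},
}

def generate_melody(start, length, rng):
    """Sample a note sequence by walking the transition table."""
    melody = [start]
    for _ in range(length - 1):
        choices = TRANSITIONS[melody[-1]]
        notes = list(choices)
        weights = [choices[n] for n in notes]
        melody.append(rng.choices(notes, weights=weights, k=1)[0])
    return melody

melody = generate_melody("C", 8, random.Random(0))
print(melody)
```

In the architecture the paper describes, a biofeedback-derived emotional estimate would steer which model (or which transition weights) the generator samples from; here the table is fixed for clarity.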

Bibliographical note

© 2019 Audio Engineering Society. This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy. Further copying may not be permitted; contact the publisher for details.

