Affective Calibration of Musical Feature Sets in an Emotionally Intelligent Music Composition System

Duncan Williams, Alexis Kirke, Eduardo Miranda, Ian Daly, Faustina Hwang, James Weaver, Slawomir Nasuto

Research output: Contribution to journal › Article › peer-review


We report on a player evaluation of a pilot system for dynamic video game soundtrack generation. The system being evaluated generates music using an AI-based algorithmic composition technique to create a score in real time, in response to a continuously varying emotional trajectory dictated by gameplay cues.
After a section of gameplay, players rated the system on a Likert scale for emotional congruence with the narrative and for perceived immersion in the gameplay. The generative system showed a statistically significant and consistent improvement in ratings for emotional congruence, but a decrease in perceived immersion, which might be attributed to the marked difference in instrumentation between the generated music, voiced by a solo piano timbre, and the original, fully orchestrated soundtrack. Finally, players rated selected stimuli from the generated soundtrack dataset on a two-dimensional model reflecting perceived valence and arousal. These ratings were compared with the intended emotional descriptor in the metadata accompanying specific gameplay events. Participant responses suggested strong agreement with the intended affective correlates, but also a significant amount of inter-participant variability. Individual calibration of the musical feature set, or further adjustment of it, is therefore suggested as a useful avenue for further work.
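The comparison described above, between participant ratings on a two-dimensional valence–arousal model and the intended emotional descriptor in the event metadata, can be sketched in a few lines. All descriptor names, coordinate values, and ratings below are illustrative assumptions, not data from the study; the sketch simply shows how agreement (mean distance from the intended point) and inter-participant variability (spread of those distances) might be quantified.

```python
import statistics

# Hypothetical intended valence-arousal targets attached to gameplay-event
# metadata. Descriptors and coordinates are invented for illustration.
targets = {
    "tense": (-0.6, 0.7),   # (valence, arousal), each in [-1, 1]
    "calm": (0.5, -0.4),
}

# Simulated participant ratings of stimuli on the same 2-D model.
ratings = {
    "tense": [(-0.5, 0.8), (-0.7, 0.6), (-0.2, 0.9)],
    "calm": [(0.6, -0.3), (0.4, -0.5), (0.1, 0.2)],
}

def euclidean(a, b):
    """Straight-line distance between two valence-arousal points."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

for label, target in targets.items():
    dists = [euclidean(r, target) for r in ratings[label]]
    # Small mean distance suggests agreement with the intended affective
    # correlate; the standard deviation reflects inter-participant variability.
    print(f"{label}: mean={statistics.mean(dists):.3f} "
          f"sd={statistics.stdev(dists):.3f}")
```

Under this framing, a large standard deviation for a descriptor would motivate exactly the per-participant calibration of the musical feature set that the abstract proposes.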
Original language: English
Pages (from-to): 17:1-17:13
Number of pages: 13
Journal: ACM Transactions on Applied Perception
Issue number: 3
Publication status: Published - 1 May 2017


  • Algorithmic composition, emotional congruence, music perception
