A computational model for predicting perceived musical expression in branding scenarios

Research output: Contribution to journal › Article

Author(s)

  • Steffen Lepa
  • Martin Herzog
  • Jochen Steffens
  • Andreas Schoenrock
  • Hauke Egermann

Publication details

Journal: Journal of New Music Research
Date: Accepted/In press - 29 May 2020
Date: E-pub ahead of print (current) - 16 Jun 2020
Early online date: 16/06/20
Original language: English

Abstract

We describe the development of a computational model predicting listener-perceived expressions of music in branding contexts. Representative ground truth from multi-national online listening experiments was combined with machine learning of music-branding expert knowledge and audio signal analysis toolbox outputs. A mixture of random forest and traditional regression models predicts average ratings of perceived brand image on four dimensions. The resulting cross-validated prediction accuracy (R²) was Arousal: 61%, Valence: 44%, Authenticity: 55%, and Timeliness: 74%. Audio descriptors for rhythm, instrumentation, and musical style contributed most. Adaptive sub-models for different marketing target groups further increase prediction accuracy.
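To illustrate the kind of evaluation the abstract describes, the sketch below computes a cross-validated R² score for a random-forest regressor predicting one perceptual dimension from audio descriptors. This is a minimal, hypothetical example using scikit-learn and synthetic data; the feature set, target, and model configuration are illustrative assumptions, not the authors' actual pipeline or dataset.

```python
# Hypothetical sketch: cross-validated R^2 for a random-forest regressor
# predicting one perceptual rating dimension (e.g. "Arousal") from audio
# descriptors. The data here are synthetic stand-ins, not the study's corpus.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_tracks, n_descriptors = 200, 8  # e.g. rhythm, instrumentation, style features

# Synthetic descriptor matrix and a target driven by a few descriptors plus noise
X = rng.normal(size=(n_tracks, n_descriptors))
y = 0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=n_tracks)

model = RandomForestRegressor(n_estimators=300, random_state=0)

# 5-fold cross-validation, scored as R^2 (the metric reported in the abstract)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.2f}")
```

In the paper's setting, separate models (and the adaptive sub-models for marketing target groups) would each be evaluated this way on their respective rating dimension.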

Bibliographical note

© 2020, Informa UK Limited, trading as Taylor & Francis Group. This is an author-produced version of the published paper. Uploaded in accordance with the publisher’s self-archiving policy.
