Emotion-driven Piano Music Generation via Two-stage Disentanglement and Functional Representation
Jingyue Huang (New York University)*, Ke Chen (University of California San Diego), Yi-Hsuan Yang (National Taiwan University)
Keywords: Musical features and properties -> expression and performative aspects of music; Musical features and properties -> musical affect, emotion and mood; MIR tasks -> music generation
Managing the emotional aspect remains a challenge in automatic music generation. Prior works attempt to learn various emotions at once, leading to inadequate modeling. This paper explores the disentanglement of emotions in piano performance generation through a two-stage framework. The first stage focuses on valence modeling of the lead sheet, and the second stage addresses arousal modeling by introducing performance-level attributes. To further capture the features that shape valence, an aspect less explored in previous approaches, we introduce a novel functional representation of symbolic music. This representation aims to capture the emotional impact of major-minor tonality, as well as the interactions among notes, chords, and key signatures. Objective and subjective experiments validate the effectiveness of our framework in modeling both emotional valence and arousal. We further leverage the framework in a novel application of emotion control, demonstrating its broad potential for emotion-driven music generation.
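To make the idea of a functional (key-relative) representation concrete, the following is a minimal, hypothetical Python sketch of how note tokens might be re-expressed as scale degrees relative to the key tonic, so that tonal relationships (e.g., major third versus minor third above the tonic) are explicit in the token itself. The function name to_functional_token and the token format are illustrative assumptions, not the paper's actual encoding.

    # Hypothetical sketch, not the authors' implementation: one way a
    # "functional" note representation can be built, re-expressing absolute
    # MIDI pitches as scale degrees relative to the key tonic.

    NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

    def to_functional_token(midi_pitch: int, tonic_pc: int) -> str:
        """Map an absolute MIDI pitch to a key-relative degree token.

        Degree 0 is the tonic; the octave is kept so that register
        information is not lost when the pitch is made key-relative.
        """
        degree = (midi_pitch - tonic_pc) % 12   # chromatic degree above the tonic
        octave = midi_pitch // 12 - 1           # standard MIDI octave numbering
        return f"Degree_{degree}_Oct_{octave}"

    # E4 (MIDI 64) is the major third in C major but the fifth in A minor:
    print(to_functional_token(64, NOTE_NAMES.index("C")))  # Degree_4_Oct_4
    print(to_functional_token(64, NOTE_NAMES.index("A")))  # Degree_7_Oct_4

Under such an encoding, transposing a piece changes only the key token while the note tokens stay fixed, which is one plausible way a model could learn the major-minor (valence-related) distinctions the abstract refers to.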