Self-Supervised Multi-View Learning for Disentangled Music Audio Representations

Julia Wilkins (New York University)*, Sivan Ding (New York University), Magdalena Fuentes (New York University), Juan P Bello (New York University)

This paper will be presented in person

Abstract:

Self-supervised learning (SSL) offers a powerful way to learn robust, generalizable representations without labeled data. In music, where labeled data is scarce, existing SSL methods typically rely on generated supervision and multi-view redundancy to create pretext tasks. However, these approaches often produce entangled representations and discard view-specific information. We propose a novel self-supervised multi-view learning framework for audio that explicitly incentivizes separation between private and shared representation spaces. A case study on audio disentanglement in a controlled setting demonstrates the effectiveness of our method.
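To make the shared/private separation idea concrete, the following is a minimal sketch (not the authors' actual model, whose encoders and losses are not specified in this abstract): two views of the same clip are projected into a shared space, where they are encouraged to agree, and into per-view private spaces, which are penalized for correlating with the shared space. All weights, dimensions, and the orthogonality penalty here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-view setup: each row is a feature vector for the same audio clip
# seen under two views (e.g. two augmentations). Dimensions are arbitrary.
view_a = rng.normal(size=(8, 16))  # batch of 8 clips, 16-dim features, view A
view_b = rng.normal(size=(8, 16))  # corresponding features for view B

# Linear "encoders" (stand-ins for learned networks) mapping each view
# into a shared embedding space and a view-specific private space.
W_shared = rng.normal(size=(16, 4))
W_priv_a = rng.normal(size=(16, 4))
W_priv_b = rng.normal(size=(16, 4))

shared_a, shared_b = view_a @ W_shared, view_b @ W_shared
priv_a, priv_b = view_a @ W_priv_a, view_b @ W_priv_b

def cosine(x, y):
    """Row-wise cosine similarity between two batches of vectors."""
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    return (x * y).sum(axis=1)

# Alignment term: shared embeddings of the two views should agree.
align_loss = (1.0 - cosine(shared_a, shared_b)).mean()

# Disentanglement term: private embeddings should carry information the
# shared space does not, penalized here via squared cosine similarity.
ortho_loss = (cosine(shared_a, priv_a) ** 2).mean() + (cosine(shared_b, priv_b) ** 2).mean()

total_loss = align_loss + 0.1 * ortho_loss  # 0.1 is an arbitrary weighting
print(f"alignment: {align_loss:.4f}, orthogonality: {ortho_loss:.4f}")
```

In a real training loop the encoders would be neural networks optimized on `total_loss` (plus whatever pretext objectives the framework uses), so that shared embeddings capture view-invariant content while private embeddings retain view-specific detail.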