Stem-JEPA: A Joint-Embedding Predictive Architecture for Musical Stem Compatibility Estimation
Alain Riou (Sony Computer Science Laboratories, Paris), Stefan Lattner (Sony Computer Science Laboratories, Paris), Gaëtan Hadjeres (Sony Computer Science Laboratories, Paris), Michael Anslow (Sony Computer Science Laboratories, Paris), Geoffroy Peeters (LTCI - Télécom Paris, IP Paris)
Keywords: Knowledge-driven approaches to MIR -> representations of music; Knowledge-driven approaches to MIR -> machine learning/artificial intelligence for music; Musical features and properties -> representations of music
This paper explores the automated estimation of stem compatibility: identifying audio recordings of single instruments that blend well with a given musical context. To tackle this challenge, we present Stem-JEPA, a novel Joint-Embedding Predictive Architecture (JEPA) trained on a multi-track dataset using a self-supervised learning approach. Our model comprises two networks, an encoder and a predictor, which are jointly trained to predict the embeddings of compatible stems from the embeddings of a given context, typically a mix of several instruments. Training a model in this manner allows its use for stem compatibility estimation—retrieving, aligning, or generating a stem to match a given mix—as well as for downstream tasks such as genre or key estimation, since the training paradigm requires the model to learn information related to timbre, harmony, and rhythm. We evaluate our model's performance on a retrieval task on the MUSDB18 dataset, testing its ability to find the stem missing from a mix, and complement this evaluation with a subjective user study. We also show that the learned embeddings capture temporal alignment information and, finally, evaluate the representations learned by our model on several downstream tasks, highlighting that they effectively capture meaningful musical features.
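The abstract's encoder/predictor setup can be illustrated with a toy sketch of the predictive objective. This is not the paper's implementation: the dimensions, the linear stand-ins for the two networks, and the MSE loss are all illustrative assumptions; Stem-JEPA uses deep networks trained on multi-track audio, typically with a stop-gradient or EMA target encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper).
n_frames, d_in, d_emb = 8, 16, 4

# Stand-ins for the two networks: an encoder and a predictor.
# Linear maps keep the sketch runnable; the real model uses deep networks.
W_enc = rng.normal(size=(d_in, d_emb)) / np.sqrt(d_in)
W_pred = rng.normal(size=(d_emb, d_emb)) / np.sqrt(d_emb)

def encode(x):
    """Map audio features (n_frames, d_in) to embeddings (n_frames, d_emb)."""
    return x @ W_enc

def predict(z_context):
    """Predict the compatible stem's embedding from the context embedding."""
    return z_context @ W_pred

# A 'mix' context and the stem that should be compatible with it.
context = rng.normal(size=(n_frames, d_in))
stem = rng.normal(size=(n_frames, d_in))

z_context = encode(context)
z_stem = encode(stem)       # target embedding (held fixed in practice)
z_hat = predict(z_context)  # predicted embedding of the missing stem

# Predictive loss: distance between predicted and actual stem embeddings.
loss = float(np.mean((z_hat - z_stem) ** 2))
```

At retrieval time, compatibility between a mix and a candidate stem can then be scored by the similarity between `z_hat` and the candidate's embedding.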