Can LLMs "Reason" in Music? An Evaluation of LLMs' Capability of Music Understanding and Generation
Ziya Zhou (Hong Kong University of Science and Technology)*, Yuhang Wu (Multimodal Art Projection), Zhiyue Wu (Shenzhen University), Xinyue Zhang (Multimodal Art Projection), Ruibin Yuan (CMU), Yinghao Ma (Queen Mary University of London), Lu Wang (Shenzhen University), Emmanouil Benetos (Queen Mary University of London), Wei Xue (Hong Kong University of Science and Technology), Yike Guo (Hong Kong University of Science and Technology)
Keywords: Generative Tasks -> artistically-inspired generative tasks; Creativity -> computational creativity; Creativity -> human-AI co-creativity; Human-centered MIR -> human-computer interaction; Human-centered MIR -> user-centered evaluation
Symbolic music, akin to language, can be encoded in discrete symbols. Recent research has extended the application of large language models (LLMs) such as GPT-4 and Llama 2 to the symbolic music domain, including understanding and generation. Yet little research has examined in detail how these LLMs perform on advanced music understanding and conditioned generation, especially from a multi-step reasoning perspective, a critical aspect of conditioned, editable, and interactive human-computer co-creation. This study conducts a thorough investigation of LLMs' capabilities and limitations in symbolic music processing. We find that current LLMs perform poorly on song-level multi-step music reasoning and typically fail to leverage learned music knowledge when addressing complex musical tasks. An analysis of LLMs' responses clearly highlights their strengths and weaknesses. Our findings suggest that advanced musical capability does not come for free for LLMs, and that future research should focus on bridging the gap between music knowledge and music reasoning to improve the co-creation experience for musicians.
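To make the premise concrete, the sketch below illustrates how a melody might be serialized as discrete symbols and wrapped in a multi-step reasoning prompt of the kind the abstract refers to (identify the key, then transpose, then output). This is a minimal, self-contained illustration, not the paper's evaluation code: the melody, the note-to-semitone mapping, and the prompt wording are assumptions made here for demonstration.

```python
# Illustrative sketch (assumed, not the paper's protocol): a melody encoded as
# discrete note-name tokens, plus a multi-step reasoning prompt and a reference
# transposition that could be used to check an LLM's answer automatically.

# A short C-major melody as discrete pitch tokens (simplified ABC-style note names).
MELODY_TOKENS = ["C", "E", "G", "E", "F", "D", "C"]

# Map natural note names to semitone offsets within an octave.
NOTE_TO_SEMITONE = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}
SEMITONE_TO_NOTE = {v: k for k, v in NOTE_TO_SEMITONE.items()}
SEMITONE_TO_NOTE.update({1: "C#", 3: "D#", 6: "F#", 8: "G#", 10: "A#"})


def transpose(tokens, semitones):
    """Reference transposition used as ground truth for scoring a model's reply."""
    return [SEMITONE_TO_NOTE[(NOTE_TO_SEMITONE[t] + semitones) % 12] for t in tokens]


def build_prompt(tokens, semitones):
    """Chain the task into explicit steps, mirroring a multi-step reasoning query."""
    return (
        "You are given a melody as a sequence of note names: "
        + " ".join(tokens)
        + ".\n"
        "Step 1: Identify the most likely key of the melody.\n"
        f"Step 2: Transpose every note up by {semitones} semitones.\n"
        "Step 3: Return only the transposed note sequence, space-separated."
    )


if __name__ == "__main__":
    prompt = build_prompt(MELODY_TOKENS, 2)
    expected = transpose(MELODY_TOKENS, 2)  # D F# A F# G E D
    print(prompt)
    print("Expected answer:", " ".join(expected))
    # An LLM's reply to `prompt` would then be compared against `expected`.
```

Such a setup separates the knowledge component (knowing what transposition means) from the reasoning component (carrying it out step by step over a whole sequence), which is the distinction the study's findings turn on.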