How Far Can Pretrained LLMs Go in Symbolic Music? Controlled Comparisons of Supervised and Preference-based Adaptation

Abstract

Music often shares notable parallels with language, motivating the use of pretrained large language models (LLMs) for symbolic music understanding and generation. Despite growing interest, the practical effectiveness of adapting instruction-tuned LLMs to symbolic music remains insufficiently characterized. We present a controlled comparative study of finetuning strategies for ABC-notation generation and understanding, comparing an off-the-shelf instruction-tuned backbone with domain-adapted variants and a music-specialized LLM baseline. Across multiple symbolic music corpora and evaluation signals, we distill practical insights into adaptation choices for symbolic music applications. In particular, we highlight the tradeoff between domain adaptation and preservation of the backbone's prior knowledge, as well as the distinct behaviors of the metrics used to measure domain adaptation in symbolic music.
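To make the two adaptation families concrete, here is a minimal sketch of the kinds of objectives they involve, assuming a standard causal LLM over tokenized ABC text: next-token cross-entropy for supervised finetuning, and a preference objective such as DPO (Rafailov et al., 2023) for preference-based adaptation. All names, the ABC snippet, and the choice of DPO are illustrative assumptions, not the paper's actual training setup.

```python
import torch
import torch.nn.functional as F

# Illustrative ABC-notation snippet of the kind of symbolic music text involved.
ABC_EXAMPLE = """X:1
T:Example Tune
M:4/4
K:C
CDEF GABc | c2 G2 E2 C2 |]"""


def sft_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    """Supervised adaptation: next-token cross-entropy over ABC tokens."""
    # Shift by one so each position predicts the following token.
    return F.cross_entropy(
        logits[:, :-1].reshape(-1, logits.size(-1)),
        input_ids[:, 1:].reshape(-1),
    )


def dpo_loss(
    policy_chosen_logps: torch.Tensor,
    policy_rejected_logps: torch.Tensor,
    ref_chosen_logps: torch.Tensor,
    ref_rejected_logps: torch.Tensor,
    beta: float = 0.1,
) -> torch.Tensor:
    """Preference-based adaptation via the DPO objective: reward the
    chosen continuation over the rejected one, relative to a frozen
    reference model, with strength controlled by beta."""
    margins = beta * (
        (policy_chosen_logps - policy_rejected_logps)
        - (ref_chosen_logps - ref_rejected_logps)
    )
    return -F.logsigmoid(margins).mean()
```

The reference-model terms in the DPO margin are what regularize preference tuning toward the backbone, which is one way the tradeoff between domain adaptation and preserving prior knowledge manifests in practice.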

Publication
Accepted at NLP4MusA 2026
Emmanouil Karystinaios