JAM: A Tiny Flow-based Song Generator with Fine-grained Controllability and Aesthetic Alignment

The First Model of Project Jamify

Abstract

Diffusion and flow-matching models have recently revolutionized automatic text-to-audio generation. These models are increasingly capable of generating high-quality, faithful audio outputs capturing speech and acoustic events. However, there is still much room for improvement in creative audio generation, which primarily involves music and songs. Recent open lyrics-to-song models, such as DiffRhythm, ACE-Step, and LeVo, have set an acceptable standard in automatic song generation for recreational use. However, these models lack the fine-grained word-level controllability that musicians often desire in their workflows. To the best of our knowledge, our flow-matching-based JAM is the first effort toward endowing word-level timing and duration control in song generation, allowing fine-grained vocal control. To better align the generated songs with human preferences, we implement aesthetic alignment through Direct Preference Optimization, which iteratively refines the model using a synthetic preference dataset, eliminating the need for manual data annotations. Furthermore, we aim to standardize the evaluation of lyrics-to-song models through our public evaluation dataset JAME. We show that JAM outperforms existing models on music-specific attributes.
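To make the flow-matching backbone concrete, here is a minimal sketch of the conditional flow-matching (rectified-flow) objective such models are trained with. The `model` signature, tensor shapes, and `cond` conditioning are placeholders for illustration, not JAM's actual interface:

```python
import torch

def flow_matching_loss(model, x1, cond):
    """Conditional flow-matching loss on a straight-line path.

    x1:   clean audio latents, shape (B, T, D) (assumed)
    cond: lyrics/timing conditioning (placeholder)
    """
    x0 = torch.randn_like(x1)              # Gaussian noise endpoint
    t = torch.rand(x1.size(0), 1, 1)       # per-example time in [0, 1]
    xt = (1 - t) * x0 + t * x1             # point on the linear path
    v_target = x1 - x0                     # constant velocity of that path
    v_pred = model(xt, t.view(-1), cond)   # model predicts the velocity field
    return ((v_pred - v_target) ** 2).mean()
```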

Key Contributions

đŸ—ïž Compact Architecture
At 530M parameters, JAM is less than half the size of the next smallest system (DiffRhythm-1.1B), enabling faster inference and reduced resource demands.
🎯 Fine-Grained Alignment
By accepting word- and phoneme-level timing inputs, JAM lets users control the exact placement of each vocal sound, improving rhythmic flexibility and expressive timing (see the alignment sketch after this list).
📝 Enhanced Lyric Fidelity
This precise alignment cuts WER and PER to less than a third of prior work's, as the model can attend directly to phoneme boundaries and correct misalignments.
⏱ Global Duration Control
Our novel duration mechanism controls inter-word pacing and specifies how much instrumental introduction and coda to generate, giving composers full control over song structure (see the duration sketch below).
đŸ§Ș Rigorous Evaluation
We compiled lyrics for 250 tracks released after the compared models' training cut-offs to avoid data contamination, spanning various genres for comprehensive evaluation.
🎹 Aesthetic Alignment
Using automated song-quality models like SongEval to generate synthetic preference labels, we apply alignment in multiple rounds for additional performance gains (a DPO sketch follows this list).
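As referenced in the alignment item above, the following is a hypothetical sketch of how word- or phoneme-level timings could be expanded into a frame-aligned conditioning sequence. The frame rate, function names, and silence convention are assumptions for illustration, not JAM's published interface:

```python
FRAME_RATE = 21.5  # latent frames per second (assumed value)

def align_phonemes(phonemes, total_seconds):
    """Expand (phoneme_id, start_sec, end_sec) tuples into per-frame IDs.

    Returns a list with one phoneme ID per latent frame, so the generator
    can attend to exactly which phoneme each frame should realize.
    """
    n_frames = int(total_seconds * FRAME_RATE)
    frame_ids = [0] * n_frames              # 0 = silence / instrumental (assumed)
    for pid, start, end in phonemes:
        lo = int(start * FRAME_RATE)
        hi = min(int(end * FRAME_RATE), n_frames)
        for f in range(lo, hi):
            frame_ids[f] = pid              # each frame carries its phoneme ID
    return frame_ids
```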
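The duration mechanism can likewise be illustrated with a hedged sketch: reserve an instrumental intro and coda, then scale per-word durations to fill the remaining vocal span. This scheduling scheme is an assumption for illustration only, not JAM's exact formulation:

```python
def schedule_words(word_durations, total_seconds, intro_seconds, coda_seconds):
    """Return (start, end) times per word within the vocal span.

    word_durations: nominal per-word lengths in seconds (assumed input)
    """
    vocal_span = total_seconds - intro_seconds - coda_seconds
    scale = vocal_span / sum(word_durations)   # stretch words to fit the span
    t, spans = intro_seconds, []               # vocals begin after the intro
    for d in word_durations:
        spans.append((t, t + d * scale))
        t += d * scale
    return spans                               # the coda fills the remainder
```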
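Finally, the aesthetic-alignment stage builds on Direct Preference Optimization over synthetic preference pairs. A minimal generic DPO loss looks like the following; JAM's exact adaptation to flow matching (e.g., using denoising losses as likelihood proxies) may differ:

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Generic DPO objective on preference pairs.

    logp_w / logp_l:         policy log-likelihoods of the preferred (w)
                             and rejected (l) songs in each pair
    ref_logp_w / ref_logp_l: the same under the frozen reference model
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()   # push preferred above rejected
```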

Model Comparison Demo

đŸŽ” Lyrics

In the silence of the morning light
I find my voice, I find my sight
Every dream that seemed so far
Now shines bright like a distant star

We are the ones who dare to fly
Beyond the clouds up in the sky
With every step we're breaking ground
Our hearts beat with a powerful sound
JAM (Ours)
✹ Fine-grained word-level control
📊 WER: 0.10 | Parameters: 530M
🎯 Phoneme-aligned timing
DiffRhythm
đŸŽ” Standard generation quality
📊 WER: 0.35 | Parameters: 1.1B
⚖ No fine-grained control