TY - JOUR
T1 - Fast Jukebox
T2 - Accelerating Music Generation with Knowledge Distillation
AU - Pezzat-Morales, Michel
AU - Perez-Meana, Hector
AU - Nakashika, Toru
N1 - Publisher Copyright:
© 2023 by the authors.
PY - 2023/5
Y1 - 2023/5
N2 - Featured Application: This paper presents an improvement of the Jukebox system for music generation, which significantly reduces the inference time. The Jukebox model can generate high-diversity music within a single system, which is achieved by using a hierarchical VQ-VAE architecture to compress audio in a discrete space at different compression levels. Even though the results are impressive, the inference stage is tremendously slow. To address this issue, we propose a Fast Jukebox, which uses different knowledge distillation strategies to reduce the number of parameters of the prior model for compressed space. Since the Jukebox has shown highly diverse audio generation capabilities, we used a simple compilation of songs for experimental purposes. Evaluation results obtained using emotional valence show that the proposed approach achieved a tendency towards actively pleasant, thus reducing inference time for all VQ-VAE levels without compromising quality.
KW - VQ-VAE
KW - autoregressive prediction
KW - knowledge distillation
KW - music generation
UR - http://www.scopus.com/inward/record.url?scp=85159352772&partnerID=8YFLogxK
U2 - 10.3390/app13095630
DO - 10.3390/app13095630
M3 - Article
AN - SCOPUS:85159352772
SN - 2076-3417
VL - 13
JO - Applied Sciences (Switzerland)
JF - Applied Sciences (Switzerland)
IS - 9
M1 - 5630
ER -