Entry
Simple Title
Gautam Mittal, Jesse Engel, Curtis Hawthorne, Ian Simon (2021)
Type
Paper
Year
2021
Posted at
Tags
music
Overview - What's impressive about it?
Abstract
Score-based generative models and diffusion probabilistic models have been successful at generating high-quality samples in continuous domains such as images and audio. However, due to their Langevin-inspired sampling mechanisms, their application to discrete and sequential data has been limited. In this work, we present a technique for training diffusion models on sequential data by parameterizing the discrete domain in the continuous latent space of a pre-trained variational autoencoder. Our method is non-autoregressive and learns to generate sequences of latent embeddings through the reverse process and offers parallel generation with a constant number of iterative refinement steps. We apply this technique to modeling symbolic music and show strong unconditional generation and post-hoc conditional infilling results compared to autoregressive language models operating over the same continuous embeddings.
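The core idea in the abstract can be sketched as a standard DDPM forward/reverse process run over sequences of continuous latent vectors (the VAE encoder/decoder are assumed and omitted here). This is a minimal illustrative sketch, not the paper's implementation; the noise schedule, step count, and the stand-in noise predictor are all assumptions.

```python
import numpy as np

# Toy sketch of the abstract's idea: discrete tokens are mapped to continuous
# latents by a pre-trained VAE (not shown), and a diffusion model denoises
# whole sequences of latents in parallel with a fixed number of steps.
rng = np.random.default_rng(0)

T = 50                                  # constant number of refinement steps
betas = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def q_sample(z0, t):
    """Forward process: noise clean latents z0 to step t in closed form."""
    eps = rng.standard_normal(z0.shape)
    zt = np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return zt, eps

def p_sample_loop(eps_model, shape):
    """Reverse process: start from Gaussian noise and iteratively denoise.
    Every latent in the sequence is refined in parallel at each step, so
    the step count does not grow with sequence length."""
    z = rng.standard_normal(shape)
    for t in reversed(range(T)):
        eps_hat = eps_model(z, t)
        # DDPM posterior-mean update
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            z += np.sqrt(betas[t]) * rng.standard_normal(shape)
    return z

# Stand-in for a trained noise predictor (a real model would be a neural net
# conditioned on the whole latent sequence and the timestep t).
dummy_eps_model = lambda z, t: np.zeros_like(z)

# Generate an (assumed) sequence of 8 latents of dimension 4 in parallel;
# in the paper's setting these would then be decoded back to symbolic music.
latents = p_sample_loop(dummy_eps_model, (8, 4))
```

The key point the sketch shows: generation is non-autoregressive, since the loop over `T` refinement steps updates all positions of the latent sequence at once rather than emitting tokens left to right.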