Entry
Type
Year
Posted at
Tags
Overview - What makes it notable?
A study that builds on the ideas behind CycleGAN to generate drum patterns for a given bass line, working in the audio domain rather than the symbolic one.
Abstract
The two main research threads in computer-based music generation are: the construction of autonomous music-making systems, and the design of computer-based environments to assist musicians. In the symbolic domain, the key problem of automatically arranging a piece of music was extensively studied, while relatively fewer systems tackled this challenge in the audio domain. In this contribution, we propose CycleDRUMS, a novel method for generating drums given a bass line. After converting the waveform of the bass into a mel-spectrogram, we are able to automatically generate original drums that follow the beat, sound credible and can be directly mixed with the input bass. We formulated this task as an unpaired image-to-image translation problem, and we addressed it with CycleGAN, a well-established unsupervised style transfer framework, originally designed for treating images. The choice to deploy raw audio and mel-spectrograms enabled us to better represent how humans perceive music, and to potentially draw sounds for new arrangements from the vast collection of music recordings accumulated in the last century. In the absence of an objective way of evaluating the output of both generative adversarial networks and music generative systems, we further defined a possible metric for the proposed task, partially based on human (and expert) judgement. Finally, as a comparison, we replicated our results with Pix2Pix, a paired image-to-image translation network, and we showed that our approach outperforms it.
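The abstract frames the task as unpaired image-to-image translation with CycleGAN over mel-spectrograms. Since the note does not give the actual architecture or loss weights, the following is only a minimal sketch of the standard CycleGAN objective (adversarial + cycle-consistency losses) on mel-spectrogram "images", with toy placeholder networks; the real CycleDRUMS networks and hyperparameters may differ.

```python
# Minimal sketch of the standard CycleGAN objective applied to mel-spectrogram
# "images" (bass <-> drums). Network shapes, loss weights and the LSGAN-style
# adversarial loss are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy stand-in for the CycleGAN generator (same-shape output)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class TinyDiscriminator(nn.Module):
    """Toy PatchGAN-style discriminator."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

G_bd, G_db = TinyGenerator(), TinyGenerator()   # bass->drums, drums->bass
D_d, D_b = TinyDiscriminator(), TinyDiscriminator()
adv, l1 = nn.MSELoss(), nn.L1Loss()
lambda_cyc = 10.0                               # assumed cycle-loss weight

def generator_loss(mel_bass, mel_drums):
    """One unpaired batch: adversarial + cycle-consistency terms."""
    fake_drums = G_bd(mel_bass)
    fake_bass = G_db(mel_drums)
    # Adversarial term: generated spectrograms should fool the discriminators.
    pred_d, pred_b = D_d(fake_drums), D_b(fake_bass)
    loss_adv = adv(pred_d, torch.ones_like(pred_d)) + \
               adv(pred_b, torch.ones_like(pred_b))
    # Cycle consistency: bass -> drums -> bass should recover the input.
    loss_cyc = l1(G_db(fake_drums), mel_bass) + l1(G_bd(fake_bass), mel_drums)
    return loss_adv + lambda_cyc * loss_cyc

# Example with random tensors shaped (batch, channel, n_mels, frames).
loss = generator_loss(torch.randn(2, 1, 128, 256), torch.randn(2, 1, 128, 256))
loss.backward()
```

The sketch omits the identity loss and the discriminator update that a full CycleGAN training loop would also include.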
Motivation
- The authors want to handle music at the audio level, not at the symbolic level (MIDI or sheet music)
- If drums and bass are deeply related, it should be possible to treat the pair like an image-to-image translation problem
- 5-second mel-spectrograms of the drum and bass audio are treated like the images fed to CycleGAN
- The drum and bass stems are separated using Demucs, a source-separation model similar to Spleeter (see the preprocessing sketch after this list)
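A rough sketch of the preprocessing implied by the last two bullets: separating a mix into stems with the Demucs command-line tool, then cutting the bass/drum stems into 5-second clips and converting them to mel-spectrograms with librosa. The output directory layout, file paths and mel parameters (sample rate, n_mels, hop length) are assumptions for illustration, not the paper's settings.

```python
# Rough preprocessing sketch: separate stems with Demucs, then turn
# 5-second clips of the bass/drum stems into log-mel-spectrograms.
# Paths, model/folder names and mel parameters are illustrative assumptions.
import subprocess
from pathlib import Path

import librosa
import numpy as np

def separate_stems(mix_path: str, out_dir: str = "separated") -> Path:
    """Run the Demucs CLI; it writes drums/bass/other/vocals stems to out_dir."""
    subprocess.run(["demucs", "-o", out_dir, mix_path], check=True)
    # Demucs nests results under <out_dir>/<model_name>/<track_name>/;
    # the exact folder name depends on the Demucs version and model used.
    return Path(out_dir)

def mel_clips(stem_path: str, clip_seconds: float = 5.0,
              sr: int = 22050, n_mels: int = 128, hop_length: int = 512):
    """Cut a stem into fixed-length clips and return log-mel-spectrograms."""
    y, _ = librosa.load(stem_path, sr=sr, mono=True)
    clip_len = int(clip_seconds * sr)
    clips = []
    for start in range(0, len(y) - clip_len + 1, clip_len):
        segment = y[start:start + clip_len]
        mel = librosa.feature.melspectrogram(
            y=segment, sr=sr, n_mels=n_mels, hop_length=hop_length)
        clips.append(librosa.power_to_db(mel, ref=np.max))
    return clips  # each entry is an (n_mels, frames) "image"

# Example usage (hypothetical paths):
# separate_stems("song.wav")
# bass_mels = mel_clips("separated/htdemucs/song/bass.wav")
# drum_mels = mel_clips("separated/htdemucs/song/drums.wav")
```

These mel "images" would then be fed to the two CycleGAN domains (bass and drums) sketched above.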