---
license: mit
tags:
- audio
- music
- generation
- tensorflow
---
# Musika Model: the_beatles
Model provided by: nobitachainsaw
A pretrained `the_beatles` model for Musika, a system for fast infinite waveform music generation, introduced in this paper.
## How to use
You can generate music from this pretrained `the_beatles` model using the notebook available here.
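
If you prefer to work outside the notebook, the sketch below shows one way to fetch the checkpoint files from the Hugging Face Hub so they can be used with the Musika codebase. The `repo_id` and the generation command in the comments are assumptions, not part of this model card; check the Musika repository for the exact interface.

```python
# Minimal sketch (not the official notebook): download this model's checkpoint
# files so they can be passed to the Musika generation code.
from huggingface_hub import snapshot_download

# Hypothetical repo_id based on the model name and uploader; adjust as needed.
checkpoint_dir = snapshot_download(repo_id="nobitachainsaw/the_beatles")
print(f"Checkpoint files downloaded to: {checkpoint_dir}")

# The directory can then be used with the Musika repository, for example
# (hypothetical invocation, see the Musika README for the real flags):
#   python musika_generate.py --load_path <checkpoint_dir> --seconds 120
```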
## Model description
This pretrained GAN system consists of a ResNet-style generator and discriminator. During training, stability is controlled by adapting the strength of the gradient penalty regularization on the fly; the gradient penalty weighting term is stored in `switch.npy`. The generator is conditioned on a latent coordinate system so that it can produce samples of arbitrary length. The latent representations it produces are then passed to a decoder that converts them into waveform audio. The generator has a context window of about 12 seconds of audio.
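
As a rough illustration of the adaptively weighted gradient penalty described above, the sketch below shows a WGAN-GP-style penalty whose strength is scaled by a weight loaded from `switch.npy`. The tensor shapes and the assumption that `switch.npy` holds a single scalar are illustrative; the actual Musika training code differs.

```python
# Illustrative sketch only, not the Musika training code: a gradient penalty
# whose strength is controlled by an externally stored weight (here assumed
# to be a scalar saved in switch.npy).
import numpy as np
import tensorflow as tf

gp_weight = tf.Variable(float(np.load("switch.npy")), trainable=False)

def gradient_penalty(discriminator, real, fake):
    """WGAN-GP-style penalty on interpolated samples, scaled by gp_weight.

    Assumes waveform/latent tensors of shape [batch, time, channels].
    """
    alpha = tf.random.uniform([tf.shape(real)[0], 1, 1], 0.0, 1.0)
    interp = alpha * real + (1.0 - alpha) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        scores = discriminator(interp, training=True)
    grads = tape.gradient(scores, interp)
    grad_norm = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2]) + 1e-12)
    return gp_weight * tf.reduce_mean(tf.square(grad_norm - 1.0))
```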