---
license: mit
tags:
- audiocraft
- Musicgen
- Music Generation
- micro-musicgen
---
# micro-musicgen-acid
Curated and trained by Aaron Abebe.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65829a887cec0a2080d4bb3f/Sg7T_BRLgvGkAJxla0PM7.png)
> [!WARNING]
> WARNING: **These models WILL sound bad to a lot of people.** The goal is not to create pleasant-sounding music,
> but to spark creativity by using the weird sounds of Neural Codecs for music production and sampling!

Micro-Musicgen is a new family of very small music generation models focused on experimental music and latent-space exploration.
These models have unusual abilities and drawbacks that are meant to enhance creativity when using them to make music.

- **only unconditional generation**: Trained without text conditioning to reduce model size.
- **very fast generation times**: Roughly 8 seconds for ten 10-second samples (see the timing sketch below).
- **permissive licensing**: The models are trained from scratch on royalty-free samples and handmade chops,
  which allows them to be released under the MIT License.
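
The speed claim above is straightforward to check yourself. Below is a minimal timing sketch, assuming the forked `audiocraft` from the Usage section is installed; exact numbers depend heavily on your GPU:

```python
import time

from audiocraft.models import MusicGen

# Load the checkpoint and ask for 10-second clips.
model = MusicGen.get_pretrained('pharoAIsanders420/micro-musicgen-acid')
model.set_generation_params(duration=10)

# Time a batch of 10 unconditional generations.
start = time.time()
wav = model.generate_unconditional(10)
print(f"generated {wav.shape[0]} clips in {time.time() - start:.1f}s")
```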

This is the second entry in the series, called `micro-musicgen-acid`. It was trained on various 303 sample packs as well as audio I recorded with my Behringer TD-3.

If you find this model interesting, please consider:
- following me on [GitHub](https://github.com/aaronabebe)
- following me on [Twitter](https://twitter.com/mcaaroni)
## Samples
All samples are from a single run, without cherry picking.
## Benchmarks
## Usage
Install my fork of [audiocraft](https://github.com/facebookresearch/audiocraft):
```sh
pip install -U git+https://github.com/aaronabebe/audiocraft#egg=audiocraft
```
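
To confirm the fork is importable before running the full example, a quick sanity check (this only imports the package; nothing model-specific happens yet):

```python
# Sanity check: the fork should import and expose the MusicGen model class.
import audiocraft
from audiocraft.models import MusicGen

print(audiocraft.__version__)
```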
Then, you should be able to load this model just like any other musicgen checkpoint here on the Hub:
```python
import torchaudio
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write
model = MusicGen.get_pretrained('pharoAIsanders420/micro-musicgen-acid')
model.set_generation_params(duration=10)
wav = model.generate_unconditional(10)
for idx, one_wav in enumerate(wav):
    # Will save under {idx}.wav, with loudness normalization at -14 dB LUFS.
    audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True)
```
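
The generated audio comes out at the model's native sample rate (`model.sample_rate`). If your sampler or DAW expects a different rate, one option is to resample with `torchaudio` before saving; a small sketch, with 44.1 kHz as an arbitrary example target:

```python
import torchaudio

# Resample the first generated clip from the model's native rate to 44.1 kHz.
target_sr = 44100
resampler = torchaudio.transforms.Resample(orig_freq=model.sample_rate, new_freq=target_sr)
resampled = resampler(wav[0].cpu())

# torchaudio.save expects a [channels, frames] tensor.
torchaudio.save('0_44k.wav', resampled, target_sr)
```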
## Dataset
I sourced the dataset from these royalty-free sources:
- [https://www.samplescience.info/2022/05/abstract-303.html](https://www.samplescience.info/2022/05/abstract-303.html)
- [https://www.musicradar.com/news/free-303-samples](https://www.musicradar.com/news/free-303-samples)
- [https://www.musicblip.com/product/c-am-114bpm-418mb-61loops/](https://www.musicblip.com/product/c-am-114bpm-418mb-61loops/)
- about 1 hour of different sequences I recorded from my Behringer TD-3.
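
The exact preprocessing used for this model isn't documented here, but if you want to build a similar dataset, one generic way to slice long recordings into shorter training clips with `torchaudio` looks roughly like this (paths and clip length are hypothetical placeholders):

```python
from pathlib import Path

import torchaudio

# Hypothetical locations and clip length -- adjust to your own recordings.
src = Path('td3_recordings')
dst = Path('td3_clips')
dst.mkdir(exist_ok=True)
clip_seconds = 10

for path in sorted(src.glob('*.wav')):
    audio, sr = torchaudio.load(str(path))
    clip_len = clip_seconds * sr
    # Drop the trailing remainder that is shorter than a full clip.
    for i in range(audio.shape[-1] // clip_len):
        clip = audio[:, i * clip_len:(i + 1) * clip_len]
        torchaudio.save(str(dst / f'{path.stem}_{i:03d}.wav'), clip, sr)
```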