Added colab link
README.md
CHANGED
```diff
@@ -40,8 +40,7 @@ There might be acknowledgments missing. If you find some other resemblance to a
 
 ### Using pivaenist in colab
 
-If you preferred directly using or testing the model without the need to install it, you can use
-[colab link]
+If you prefer to use or test the model directly, without installing it, you can open [this colab notebook](https://colab.research.google.com/drive/1VLbykZ1YrVlCg9UtTVjdJcN0u18f-akD?usp=sharing) and follow its instructions; the notebook also serves as a usage example.
 
 ## Installation
 
@@ -63,8 +62,6 @@ pip install -r ./pivaenist/requirements.txt
 
 The first one will clone the repository. Then, fluidsynth, a real-time MIDI synthesizer, is set up so that it can be used by the pretty-midi library. The last line installs all remaining dependencies.
 
-[More Information Needed]
-
 ## Training Details
 
 Pivaenist was trained on the MIDI files of the [MAESTRO v2.0.0 dataset](https://magenta.tensorflow.org/datasets/maestro). Preprocessing splits each note into its pitch, duration, and step; together these form one column of a 3xN matrix (which we call a song map), where N is the number of notes and the rows hold, respectively, the pitches, durations, and steps. The VAE's objective is to reconstruct these matrices; random maps can then be generated by sampling from the learned distribution and converted to MIDI files.
```
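The song-map layout from the training details can be sketched as follows. This is a minimal illustration only: the note values and the names `notes` and `song_map` are hypothetical, not code from the repository.

```python
import numpy as np

# Hypothetical notes as (pitch, duration, step) triples, where "step"
# is the time elapsed since the previous note's onset.
notes = [
    (60, 0.50, 0.00),  # C4, sounds for 0.5 s, starts immediately
    (64, 0.50, 0.25),  # E4, starts 0.25 s after C4
    (67, 1.00, 0.25),  # G4, starts 0.25 s after E4
]

# A song map is a 3xN matrix: row 0 holds pitches, row 1 durations,
# row 2 steps; each column is one note.
song_map = np.array(notes, dtype=np.float32).T
print(song_map.shape)  # (3, 3)

# To go back toward MIDI timing, absolute onset times are the
# cumulative sum of the steps, and offsets add the durations.
starts = np.cumsum(song_map[2])
ends = starts + song_map[1]
print(starts)  # [0.   0.25 0.5 ]
```

From `starts` and `ends`, a MIDI file can then be assembled, e.g. with pretty-midi as mentioned in the installation notes.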