---
license: afl-3.0
language:
- en
- ja
---
This model was trained using [so-vits-svc-fork](https://github.com/voicepaw/so-vits-svc-fork).

Hardware used
============
* CPU: AMD Ryzen 9 3900X
* RAM: 64 GB
* GPU: NVIDIA RTX 3090 24 GB

Acquiring the dataset
=====================
Software used:
* `ultimatevocalremovergui`
* `Audacity`

<h3>Step 1</h3>
Find videos, music, podcasts, or anything else that contains the voice you want to make a model of.<br>

<h3>Step 2</h3>
Snip out the parts of the videos/music you want to use for the dataset. The clearer the audio, the better; this means no background noise whatsoever. <b>Each file must be a maximum of 10 seconds!</b><br>
You can do this with Audacity or any other software you are familiar with.<br>
For a decent model, you will need about 100 samples.

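Audacity works well for snipping by hand, but for long recordings the 10-second cap can also be enforced programmatically. A minimal sketch using only Python's standard library (file names, paths, and the function name are illustrative, not from the guide):

```python
# Sketch: split a long WAV into consecutive chunks of at most 10 seconds
# using only the standard library.
import os
import wave

def split_wav(path, out_dir, max_seconds=10):
    """Write consecutive chunks of `path`, each at most `max_seconds` long.

    Returns the number of chunk files written.
    """
    os.makedirs(out_dir, exist_ok=True)
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_chunk = src.getframerate() * max_seconds
        base = os.path.splitext(os.path.basename(path))[0]
        index = 0
        while True:
            frames = src.readframes(frames_per_chunk)
            if not frames:
                break
            out_path = os.path.join(out_dir, f"{base}_{index:03d}.wav")
            with wave.open(out_path, "wb") as dst:
                dst.setparams(params)  # wave fixes nframes on close
                dst.writeframes(frames)
            index += 1
    return index
```

After splitting, listen to each chunk and discard any that cut a word in half; a clean boundary matters more than an exact length.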
<h3>Step 3</h3>
If a sample has background noise (which it most likely will), remove it via `ultimatevocalremovergui`.<br>

Removing background noises
-----------------------------------
<h3>Installing the requirements</h3>
<h3>Step 1</h3>

Install `ultimatevocalremovergui` by following these steps:<br>
* `git clone https://github.com/Anjok07/ultimatevocalremovergui`
* `cd ultimatevocalremovergui`
* `nano environment.yml`
* Fill it with the following text:

```
name: ultimatevocalremovergui
channels:
  - defaults
dependencies:
  - python=3.10
  - tk
  - pip
  - pip:
    - -r requirements.txt
```
* Save it by pressing `ctrl`+`x`, followed by `Y`, then press `enter`
* `conda env create -f environment.yml`
* `conda activate ultimatevocalremovergui`
* `python UVR.py`

<h3>Step 2</h3>

The software will now start up (this might take a bit). It will look like this: ![UVR](https://imgur.com/UKv2V7J)
First, we need to download a model:
* Click on the wrench icon next to the `Start Processing` button.
* At the top of the new window that opens, click on the tab called `Download Center`
* Select the radio button called `Demucs`
* Select `Demucs v4: htdemucs_ft`
* Click the download button underneath this combobox

Now that the model is downloaded, we are going to remove the background noise from our voice samples. To do this, do the following:
* At the top, click the `Select Input` button
* Select your voice sample
* Now click on the `Select Output` button
* <b>IMPORTANT!</b> Your output should follow this layout: `dataset_raw/{speaker_id}/**/{wav_file}.{any_format}`, for example: `dataset_raw/sinon/wav/sample1.wav`. This folder can be anywhere on your system
* Select a directory where you want the processed file to appear
* Now, under the text `CHOOSE PROCESS METHOD`, select `Demucs`
* Make sure the model is selected under the text `CHOOSE DEMUCS MODEL`
* Click on `GPU Conversion` to speed up the process
* Now click on `Start Processing` and wait until it is done
* After it's done, navigate to the folder you set as output and listen to the result. Does it sound OK? If it does, you are done; if it doesn't, don't use this file in your dataset

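Before moving on to training, it can help to verify that the folder layout and sample count match what the steps above expect. A small sketch, assuming the `dataset_raw/{speaker_id}` layout and the ~100-sample guideline (the function name and defaults are mine, not part of so-vits-svc-fork):

```python
# Sketch: sanity-check the dataset_raw layout before preprocessing.
import os

def check_dataset(root="dataset_raw", min_samples=100):
    """Return {speaker: wav_count} for every speaker folder under `root`,
    printing a warning for speakers below `min_samples`."""
    counts = {}
    for speaker in sorted(os.listdir(root)):
        speaker_dir = os.path.join(root, speaker)
        if not os.path.isdir(speaker_dir):
            continue
        n = 0
        # samples may sit in nested folders, e.g. dataset_raw/sinon/wav/
        for _, _, files in os.walk(speaker_dir):
            n += sum(1 for f in files if f.lower().endswith(".wav"))
        counts[speaker] = n
        if n < min_samples:
            print(f"warning: {speaker} has only {n} samples "
                  f"(target: {min_samples})")
    return counts
```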
Training the model
=====================
Here is a quick explanation of how I trained this model.

Software used:

* `so-vits-svc-fork` (the software that morphs your voice)
* `qpwgraph` (used to reroute the output to another process, like Discord or Telegram)

<h3>Step 1</h3>

First, install qpwgraph (here via an AUR helper on Arch-based systems):
* `paru qpwgraph`

Now, clone the so-vits-svc-fork repo:
* `git clone https://github.com/voicepaw/so-vits-svc-fork`

Then, cd into the repo:
* `cd so-vits-svc-fork`

Now, make a conda environment:
* `conda create -n so-vits-svc-fork python=3.10`

Activate the conda environment:
* `conda activate so-vits-svc-fork`

Install the requirements:
* `python -m pip install -U pip setuptools wheel`
* `pip install -U torch torchaudio --index-url https://download.pytorch.org/whl/cu118`
* `pip install -U so-vits-svc-fork` (this installs the package inside your conda environment, so you can run it anywhere on your system as long as the environment is active)

<h3>Step 2</h3>

Navigate to the directory your dataset is in. For example, if your dataset is at `/mnt/Shark/Projects/Sinon-Voice/training/dataset_raw/sinon/wav/`, navigate to `/mnt/Shark/Projects/Sinon-Voice/training`. Then run the following commands:
* `svc pre-resample`
* `svc pre-config`
* `svc pre-hubert`
* `svc train -t`

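The four commands above can also be chained in a short script, so a failed step stops the run instead of silently feeding bad output into the next stage. A sketch assuming `svc` is on `PATH` (i.e. the conda environment is active); the comments describe so-vits-svc-fork's behavior as I understand it:

```python
# Sketch: run the svc preprocessing/training pipeline from the training dir.
import subprocess

STEPS = [
    ["svc", "pre-resample"],  # resample dataset_raw/ into the dataset dir
    ["svc", "pre-config"],    # generate the training config
    ["svc", "pre-hubert"],    # extract content features
    ["svc", "train", "-t"],   # start training (the guide passes -t)
]

def run_pipeline(training_dir):
    for cmd in STEPS:
        # check=True raises CalledProcessError and aborts on the first failure
        subprocess.run(cmd, cwd=training_dir, check=True)
```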
Using the model
=====================
<h3>Step 1</h3>

Now, run the program:<br>
* `svcg`

On the right side of the application that just opened, make sure to set both the input device and the output device to default (ALSA).
![Example](https://imgur.com/ctxVKfT)

<h3>Step 2</h3>

* At the top, select your model and config files. These are located in your training folder under `logs/44k/`

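Training writes numbered checkpoints into that folder. If you are unsure which file is the newest, a small sketch that picks the generator checkpoint with the highest step count (the `G_<step>.pth` naming follows so-vits-svc's convention; the helper itself is mine):

```python
# Sketch: find the newest generator checkpoint in the log directory.
import glob
import os

def latest_checkpoint(log_dir="logs/44k"):
    """Return the G_<step>.pth path with the highest step, or None."""
    paths = glob.glob(os.path.join(log_dir, "G_*.pth"))

    def step(p):
        # "G_8000.pth" -> 8000
        name = os.path.splitext(os.path.basename(p))[0]
        return int(name.split("_")[1])

    return max(paths, key=step, default=None)
```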
<h3>Step 3</h3>

* You can now tweak some settings, for example the pitch (I recommend a value of 12 to begin with)
* Turn off `Auto predict`

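A note on the recommended pitch value of 12: the pitch setting shifts in semitones, and 12 semitones is exactly one octave, i.e. a doubling of frequency. A quick sketch of the arithmetic (the helper name is mine):

```python
# Sketch: what a semitone shift means in frequency terms (12-TET tuning).
def semitones_to_ratio(semitones):
    """Frequency multiplier for a shift of `semitones`."""
    return 2 ** (semitones / 12)

# a 150 Hz voice shifted up by 12 semitones lands at 300 Hz
print(150 * semitones_to_ratio(12))  # 300.0
```

Smaller values shift less: +7 semitones is roughly a factor of 1.5 (a perfect fifth), so adjust in small steps until the output sounds natural.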
<h3>Step 4</h3>

* After tweaking the settings to your liking, press the button called `Infer` at the very bottom to start the voice morph

<h3>Additional info</h3>

If nothing happens, take a look at the terminal and act accordingly.