---
title: ZeroRVC
emoji: 🎙️
colorFrom: gray
colorTo: gray
sdk: gradio
sdk_version: 4.37.2
app_file: app.py
pinned: false
---

# ZeroRVC

Run Retrieval-based Voice Conversion training and inference with ease.

## Features

- [x] Dataset Preparation
- [x] Hugging Face Datasets Integration
- [x] Hugging Face Accelerate Integration
- [x] Trainer API
- [x] Inference API
  - [ ] Index Support
- [ ] Tensorboard Support
- [ ] FP16 Support

## Dataset Preparation

ZeroRVC provides a simple API to prepare your dataset for training. You only need to provide the path to your audio files. The feature extraction models will be downloaded automatically, or you can provide your own with the `hubert` and `rmvpe` arguments.

```py
from zerorvc import prepare

dataset = prepare("./my-voices")
```
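If you keep local copies of the feature extraction models, you can pass them in via the `hubert` and `rmvpe` arguments instead of relying on the automatic download. The exact values these arguments accept (file paths vs. loaded model objects) may vary by zerorvc version, and the checkpoint paths below are placeholders:

```py
from zerorvc import prepare

# Use locally stored feature extractors instead of downloading them.
# Both paths are placeholders; point them at your own checkpoint files.
dataset = prepare(
    "./my-voices",
    hubert="./models/hubert.pt",  # HuBERT content feature extractor
    rmvpe="./models/rmvpe.pt",    # RMVPE pitch estimator
)
```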

Since `dataset` is a Hugging Face Dataset object, you can easily push it to the Hugging Face Hub.

```py
dataset.push_to_hub("my-rvc-dataset", token=HF_TOKEN)
```

You can then load the preprocessed dataset back at any time with `load_dataset`.

```py
from datasets import load_dataset

dataset = load_dataset("my-rvc-dataset")
```

## Training

Once you've prepared your dataset, you can start training your model with the `RVCTrainer`.

```py
from tqdm import tqdm
from zerorvc import RVCTrainer

epochs = 100
trainer = RVCTrainer(checkpoint_dir="./checkpoints")
training = tqdm(
    trainer.train(
        dataset=dataset["train"], # preprocessed dataset
        resume_from=trainer.latest_checkpoint(), # resume training from the latest checkpoint if any
        epochs=epochs, batch_size=8
    )
)

# Training loop: iterate over epochs
for checkpoint in training:
    training.set_description(
        f"Epoch {checkpoint.epoch}/{epochs} loss: (gen: {checkpoint.loss_gen:.4f}, fm: {checkpoint.loss_fm:.4f}, mel: {checkpoint.loss_mel:.4f}, kl: {checkpoint.loss_kl:.4f}, disc: {checkpoint.loss_disc:.4f})"
    )

    # Save checkpoint every 10 epochs
    if checkpoint.epoch % 10 == 0:
        checkpoint.save(checkpoint_dir=trainer.checkpoint_dir)
        # Directly push the synthesizer to the Hugging Face Hub
        checkpoint.G.push_to_hub("my-rvc-model", token=HF_TOKEN)

print("Training completed.")
```

You can also push the full set of GAN weights (generator and discriminator) to the Hugging Face Hub.

```py
checkpoint.push_to_hub("my-rvc-model", token=HF_TOKEN)
```

## Inference

ZeroRVC provides an easy API to convert your voice with the trained model.

```py
from zerorvc import RVC
import soundfile as sf

rvc = RVC.from_pretrained("my-rvc-model")
samples = rvc.convert("test.mp3")
sf.write("output.wav", samples, rvc.sr)
```