ameerazam08 committed on
Commit
ad750c5
1 Parent(s): 9e7a39a

Upload README.md

Files changed (1)
  1. README.md +100 -14
README.md CHANGED
@@ -1,14 +1,100 @@
- ---
- title: UDiffText
- emoji: 😋
- colorFrom: purple
- colorTo: blue
- sdk: gradio
- sdk_version: 3.41.0
- python_version: 3.11.4
- app_file: app.py
- pinned: true
- license: apache-2.0
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ <div align="center">
+ <div>&nbsp;</div>
+ <img src="resources/OpenVoiceLogo.jpg" width="400"/>
+
+ [Paper](https://arxiv.org/abs/2312.01479) |
+ [Website](https://research.myshell.ai/open-voice)
+
+ </div>
+
+ ## Introduction
+ As we detailed in our [paper](https://arxiv.org/abs/2312.01479) and [website](https://research.myshell.ai/open-voice), the advantages of OpenVoice are three-fold:
+
+ **1. Accurate Tone Color Cloning.**
+ OpenVoice can accurately clone the reference tone color and generate speech in multiple languages and accents.
+
+ **2. Flexible Voice Style Control.**
+ OpenVoice enables granular control over voice styles, such as emotion and accent, as well as other style parameters including rhythm, pauses, and intonation.
+
+ **3. Zero-shot Cross-lingual Voice Cloning.**
+ Neither the language of the generated speech nor the language of the reference speech needs to be present in the massive-speaker multi-lingual training dataset.
+
+ [Video](https://github.com/myshell-ai/OpenVoice/assets/40556743/3cba936f-82bf-476c-9e52-09f0f417bb2f)
+
+ <div align="center">
+ <div>&nbsp;</div>
+ <img src="resources/framework.jpg" width="800"/>
+ <div>&nbsp;</div>
+ </div>
+
+ OpenVoice has been powering the instant voice cloning capability of [myshell.ai](https://app.myshell.ai/explore) since May 2023. As of Nov 2023, the voice cloning model had been used tens of millions of times by users worldwide and has seen explosive user growth on the platform.
+
+ ## Main Contributors
+
+ - [Zengyi Qin](https://www.qinzy.tech) at MIT and MyShell
+ - [Wenliang Zhao](https://wl-zhao.github.io) at Tsinghua University
+ - [Xumin Yu](https://yuxumin.github.io) at Tsinghua University
+ - [Ethan Sun](https://twitter.com/ethan_myshell) at MyShell
+
+ ## Live Demo
+
+ <div align="center">
+ <a href="https://app.myshell.ai/explore"><img src="resources/myshell.jpg"></a>
+ &nbsp;&nbsp;&nbsp;&nbsp;
+ <a href="https://www.lepton.ai/playground/openvoice"><img src="resources/lepton.jpg"></a>
+ </div>
+
+ ## Disclaimer
+
+ This is an open-source implementation that approximates the performance of the internal voice cloning technology of [myshell.ai](https://app.myshell.ai/explore). The online version on myshell.ai has better 1) audio quality, 2) voice cloning similarity, 3) speech naturalness, and 4) computational efficiency.
+
+ ## Installation
+ Clone this repo, and run
+ ```
+ conda create -n openvoice python=3.9
+ conda activate openvoice
+ conda install pytorch==1.13.1 torchvision==0.14.1 torchaudio==0.13.1 pytorch-cuda=11.7 -c pytorch -c nvidia
+ pip install -r requirements.txt
+ ```
+ Download the checkpoint from [here](https://myshell-public-repo-hosting.s3.amazonaws.com/checkpoints_openvoice.zip) and extract it to the `checkpoints` folder.
+
+ ## Usage
+
+ **1. Flexible Voice Style Control.**
+ Please see `demo_part1.ipynb` for an example of how OpenVoice enables flexible style control over the cloned voice.
+
+ **2. Cross-Lingual Voice Cloning.**
+ Please see `demo_part2.ipynb` for examples of cloning across languages that are either seen or unseen in the MSML (massive-speaker multi-lingual) training set.
+
+ **3. Advanced Usage.**
+ The base speaker model can be replaced with any model (in any language and style) that the user prefers. Please use the `se_extractor.get_se` function as demonstrated in the demo notebooks to extract the tone color embedding for the new base speaker.
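The flow behind points 1–3 can be pictured as three stages: synthesize with a base speaker, extract a fixed-size tone color embedding from both the base output and the reference audio, then convert the base output toward the reference tone color. Below is a purely illustrative toy sketch with numpy stand-ins; none of these functions are the repo's real API (the actual entry point is `se_extractor.get_se` plus the tone color converter shown in the demo notebooks):

```python
import numpy as np

def base_speaker_tts(text: str) -> np.ndarray:
    # Toy stand-in for any single-speaker base TTS model: returns a waveform.
    return np.sin(np.linspace(0.0, 2.0 * np.pi * len(text), 16000))

def get_tone_color_embedding(audio: np.ndarray) -> np.ndarray:
    # Toy stand-in for tone color extraction (se_extractor.get_se in the repo):
    # simple waveform statistics standing in for a learned speaker embedding.
    return np.array([audio.mean(), audio.std()])

def tone_color_convert(audio: np.ndarray, source_se: np.ndarray,
                       target_se: np.ndarray) -> np.ndarray:
    # Toy stand-in for the tone color converter: shift the audio's statistics
    # from the source embedding toward the target embedding.
    scale = target_se[1] / max(source_se[1], 1e-8)
    return (audio - source_se[0]) * scale + target_se[0]

base_audio = base_speaker_tts("hello world")      # base speaker output
reference = 0.5 * np.cos(np.linspace(0.0, 40.0 * np.pi, 16000)) + 0.1
source_se = get_tone_color_embedding(base_audio)  # tone color of the base speaker
target_se = get_tone_color_embedding(reference)   # tone color to clone
cloned = tone_color_convert(base_audio, source_se, target_se)
```

Note that swapping the base speaker (point 3 above) only changes `base_speaker_tts`; the extraction and conversion stages are reused unchanged.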
+
+ **4. Tips to Generate Natural Speech.**
+ There are many readily available single- or multi-speaker TTS methods that can generate natural speech. By simply replacing the base speaker model with the model you prefer, you can push speech naturalness to the level you desire.
+
+ ## Roadmap
+
+ - [x] Inference code
+ - [x] Tone color converter model
+ - [x] Multi-style base speaker model
+ - [x] Multi-style and multi-lingual demo
+ - [ ] Base speaker model in other languages
+ - [ ] EN base speaker model with better naturalness
+
+ ## Citation
+ ```
+ @article{qin2023openvoice,
+   title={OpenVoice: Versatile Instant Voice Cloning},
+   author={Qin, Zengyi and Zhao, Wenliang and Yu, Xumin and Sun, Xin},
+   journal={arXiv preprint arXiv:2312.01479},
+   year={2023}
+ }
+ ```
+
+ ## License
+ This repository is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, which prohibits commercial use. **MyShell reserves the ability to detect whether a given audio clip was generated by OpenVoice**, regardless of whether a watermark has been added.
+
+ ## Acknowledgements
+ This open-source implementation is based on several open-source projects: [TTS](https://github.com/coqui-ai/TTS), [VITS](https://github.com/jaywalnut310/vits), and [VITS2](https://github.com/daniilrobnikov/vits2). Thanks for their awesome work!