Upload folder using huggingface_hub
Files changed:
- .gitattributes +1 -0
- README.md +103 -103
- fromtts.wav +3 -0
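For context, a commit like this one is what `huggingface_hub` produces when a local folder is pushed in a single call. A minimal sketch of that upload, assuming a local folder name and a repo id of `Borcherding/XTTS-v2_PeterDrury` (both are placeholders, not taken from this commit):

```python
# Sketch only: push a local model folder as one commit.
# folder_path and repo_id are hypothetical placeholders.
from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path="XTTS-v2_PeterDrury",           # local dir with config.json, model files, wavs, README.md
    repo_id="Borcherding/XTTS-v2_PeterDrury",   # target model repo
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
)
```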
.gitattributes CHANGED
@@ -34,3 +34,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
 reference.wav filter=lfs diff=lfs merge=lfs -text
+fromtts.wav filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,103 +1,103 @@
---
license: other
license_name: coqui-public-model-license
license_link: https://coqui.ai/cpml
library_name: coqui
pipeline_tag: text-to-speech
widget:
- text: "Once when I was six years old I saw a magnificent picture"
---

# ⓍTTS_v2 - Peter Drury Fine-Tuned Model

This repository hosts a fine-tuned version of the ⓍTTS model.

![Peter Drury](peterdrury.jpg)

Listen to a sample of the ⓍTTS_v2 - Peter Drury Fine-Tuned Model:

<audio controls>
  <source src="https://huggingface.co/Borcherding/XTTS-v2_C3PO/raw/main/sample_c3po_generated.wav" type="audio/wav">
  Your browser does not support the audio element.
</audio>

Here's a Peter Drury mp3 voice line clip from the training data:

<audio controls>
  <source src="https://huggingface.co/Borcherding/XTTS-v2_C3PO/raw/main/reference2.mp3" type="audio/mpeg">
  Your browser does not support the audio element.
</audio>

## Features
- 🎙️ **Voice Cloning**: Realistic voice cloning with just a short audio clip.
- 🌍 **Multi-Lingual Support**: Generates speech in 17 different languages while maintaining Peter Drury's voice.
- 🎭 **Emotion & Style Transfer**: Captures the emotional tone and style of the original voice.
- 🔄 **Cross-Language Cloning**: Maintains the unique voice characteristics across different languages.
- 🎧 **High-Quality Audio**: Outputs at a 24kHz sampling rate for clear and high-fidelity audio.

## Supported Languages
The model supports the following 17 languages: English (en), Spanish (es), French (fr), German (de), Italian (it), Portuguese (pt), Polish (pl), Turkish (tr), Russian (ru), Dutch (nl), Czech (cs), Arabic (ar), Chinese (zh-cn), Japanese (ja), Hungarian (hu), Korean (ko), and Hindi (hi).

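As a quick illustration of the cross-language cloning listed above, one short reference clip can drive any of the supported language codes. A minimal sketch using the 🐸TTS API shown further down this card; the checkpoint folder path is a placeholder and `reference.wav` stands in for any short clip of the target voice:

```python
# Sketch: one reference clip, several output languages (paths are placeholders).
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"
tts = TTS(model_path="./XTTS-v2_PeterDrury/",
          config_path="./XTTS-v2_PeterDrury/config.json",
          progress_bar=False).to(device)

for lang, text in [("en", "What a goal!"), ("es", "¡Qué golazo!"), ("de", "Was für ein Tor!")]:
    tts.tts_to_file(text=text,
                    file_path=f"drury_{lang}.wav",
                    speaker_wav="reference.wav",  # short clip of the target voice
                    language=lang)
```
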
## Usage in Roll Cage
🤖💬 Boost your AI experience with this Ollama add-on! Enjoy real-time audio 🎙️ and text 📝 chats, LaTeX rendering 📜, agent automations ⚙️, workflows 🔄, text-to-image 📝➡️🖼️, image-to-text 🖼️➡️🔤, and image-to-video 🖼️➡️🎥 transformations. Fine-tune text 📝, voice 🗣️, and image 🖼️ generation. Includes Windows macro controls 🖥️ and DuckDuckGo search.

[ollama_agent_roll_cage (OARC)](https://github.com/Leoleojames1/ollama_agent_roll_cage) is a completely local Python & CMD toolset add-on for the Ollama command-line interface. The OARC toolset automates the creation of agents, giving the user more control over the likely output. It provides SYSTEM prompt templates for each ./Modelfile, allowing users to design and deploy custom agents quickly. Users can select which local model file is used in agent construction with the desired system prompt.

## CoquiTTS and Resources
- 🐸💬 **CoquiTTS**: [Coqui TTS on GitHub](https://github.com/coqui-ai/TTS)
- 📚 **Documentation**: [ReadTheDocs](https://tts.readthedocs.io/en/latest/)
- 👩‍💻 **Questions**: [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions)
- 🗯 **Community**: [Discord](https://discord.gg/5eXr5seRrv)

## License
This model is licensed under the [Coqui Public Model License](https://coqui.ai/cpml). Read more about the origin story of CPML [here](https://coqui.ai/blog/tts/cpml).

## Contact
Join our 🐸Community on [Discord](https://discord.gg/fBC58unbKE) and follow us on [Twitter](https://twitter.com/coqui_ai). For inquiries, email us at [email protected].

Using 🐸TTS API:

```python
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# load the fine-tuned checkpoint from its local folder
tts = TTS(model_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/",
          config_path="D:/AI/ollama_agent_roll_cage/AgentFiles/Ignored_TTS/XTTS-v2_PeterDrury/config.json",
          progress_bar=False).to(device)

# generate speech by cloning a voice using default settings
tts.tts_to_file(text="It took me quite a long time to develop a voice, and now that I have it I'm not going to be silent.",
                file_path="output.wav",
                speaker_wav="/path/to/target/speaker.wav",
                language="en")
```
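The hard-coded Windows path above is specific to one machine. If the model files are hosted on the Hub, the checkpoint folder can be fetched first with `huggingface_hub`; a minimal sketch, assuming a repo id of `Borcherding/XTTS-v2_PeterDrury` (a placeholder, not stated in this card):

```python
# Sketch: download the checkpoint folder once, then point TTS at it (repo_id is hypothetical).
from huggingface_hub import snapshot_download
from TTS.api import TTS

local_dir = snapshot_download(repo_id="Borcherding/XTTS-v2_PeterDrury")
tts = TTS(model_path=local_dir,
          config_path=f"{local_dir}/config.json",
          progress_bar=False).to("cuda")  # or "cpu" if no GPU is available
```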

Using 🐸TTS Command line:

```console
tts --model_name tts_models/multilingual/multi-dataset/xtts_v2 \
     --text "Bugün okula gitmek istemiyorum." \
     --speaker_wav /path/to/target/speaker.wav \
     --language_idx tr \
     --use_cuda true
```

Using the model directly:

```python
from TTS.tts.configs.xtts_config import XttsConfig
from TTS.tts.models.xtts import Xtts

config = XttsConfig()
config.load_json("/path/to/xtts/config.json")
model = Xtts.init_from_config(config)
model.load_checkpoint(config, checkpoint_dir="/path/to/xtts/", eval=True)
model.cuda()

outputs = model.synthesize(
    "It took me quite a long time to develop a voice and now that I have it I am not going to be silent.",
    config,
    speaker_wav="/data/TTS-public/_refclips/3.wav",
    gpt_cond_len=3,
    language="en",
)
```
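The `synthesize` call above returns a dictionary that includes the generated waveform along with other outputs. A minimal sketch for writing that audio to disk, assuming `outputs["wav"]` holds the waveform at the model's 24 kHz rate (the output filename is illustrative):

```python
# Sketch: persist the generated audio (assumes outputs["wav"] is a 1-D waveform at 24 kHz).
import soundfile as sf

wav = outputs["wav"]
if hasattr(wav, "cpu"):        # convert a torch tensor to numpy if that is what came back
    wav = wav.cpu().numpy()
sf.write("drury_direct.wav", wav, samplerate=24000)
```
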
fromtts.wav ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f523fe36e634b1f373da63346fe3fec4eb2ffe93091d9495ca199471ff649997
+size 2027600
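The three added lines are a Git LFS pointer rather than audio; the actual ~2 MB wav is stored in LFS and resolves on download. A minimal sketch for fetching it, assuming a repo id of `Borcherding/XTTS-v2_PeterDrury` (a placeholder):

```python
# Sketch: download the real audio behind the LFS pointer (repo_id is hypothetical).
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="Borcherding/XTTS-v2_PeterDrury", filename="fromtts.wav")
print(path)  # local cache path to the downloaded wav (2,027,600 bytes per the pointer)
```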