---
license: cc-by-4.0
datasets:
- CSTR-Edinburgh/vctk
language:
- en
---
[Spaces Demo](https://huggingface.co/spaces/Akjava/matcha-tts_vctk-onnx)

Trained with Matcha-TTS (not my work; I only converted the checkpoint to ONNX) - [Github](https://github.com/shivammehta25/Matcha-TTS) | [Paper](https://arxiv.org/abs/2309.03199)

For inference instructions, see the examples on the [Github page](https://github.com/akjava/Matcha-TTS-Japanese/tree/main/examples). A minimal sketch follows.
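
As a quick orientation, here is a minimal onnxruntime sketch. The input names (`x`, `x_lengths`, `scales`, `spks`), the scale values, and the phoneme IDs below are assumptions based on the standard Matcha-TTS ONNX export; verify them with `session.get_inputs()` and generate real phoneme IDs with the Matcha-TTS text processing.

```
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("vctk_univ_simplify.onnx")
# Check the real input names first; the ones below are assumptions.
print([i.name for i in session.get_inputs()])

# Placeholder phoneme IDs -- real IDs come from the Matcha-TTS text pipeline.
x = np.array([[1, 2, 3, 4, 5]], dtype=np.int64)
x_lengths = np.array([x.shape[1]], dtype=np.int64)
scales = np.array([0.667, 1.0], dtype=np.float32)  # noise scale, length scale
spks = np.array([0], dtype=np.int64)  # VCTK speaker index

wav = session.run(None, {"x": x, "x_lengths": x_lengths, "scales": scales, "spks": spks})[0]
```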
## License
You must follow the CC-BY-4.0 license of the VCTK dataset.
### Datasets License
- The VCTK dataset is licensed under CC-BY-4.0.
### Tools License

These tools do not affect the license of the output.

- Matcha-TTS - MIT
- ONNX Simplifier - Apache2.0
- onnxruntime - MIT
### Converted Model Owner (me)
I release my converted models under the MIT License. If you want a different license, convert the model yourself.
## ONNX File Types
All models are simplified (if you need the originals, export them yourself).

Vocoder: hifigan_univ_v1 (avoids a robotic sound for some English speakers)

- vctk_univ_simplify.onnx
- vctk_univ_simplify_q8.onnx - quantized; small enough to host on a GitHub page, but 3-5x slower

Vocoder: hifigan_T2_v1 (good for English)

- vctk_t2_simplify.onnx
- vctk_t2_simplify_q8.onnx - quantized; small enough to host on a GitHub page, but 3-5x slower
## How to Convert
### Export Model
See the Matcha-TTS [ONNX export](https://github.com/shivammehta25/Matcha-TTS) documentation.

```
python -m matcha.onnx.export matcha_vctk.ckpt vctk_t2.onnx --vocoder-name "hifigan_T2_v1" --vocoder-checkpoint "generator_v1"
```
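
Before simplifying, you can sanity-check the exported graph; this is a minimal sketch using the file name from the command above:

```
import onnx

model = onnx.load("vctk_t2.onnx")
onnx.checker.check_model(model)  # raises if the graph is invalid
print([i.name for i in model.graph.input])
```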

### Simplify Model
```
from onnxsim import simplify
import onnx

import argparse

parser = argparse.ArgumentParser(description="create simplified onnx")
parser.add_argument("--input", "-i", type=str, required=True)
parser.add_argument("--output", "-o", type=str)
args = parser.parse_args()

src_model_path = args.input
if args.output is None:
    dst_model_path = src_model_path.replace(".onnx", "_simplify.onnx")
else:
    dst_model_path = args.output

# Simplify the graph (constant folding, removal of redundant nodes).
model = onnx.load(src_model_path)
model_simp, check = simplify(model)
assert check, "simplified ONNX model could not be validated"

onnx.save(model_simp, dst_model_path)
```
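
Saved as, say, `simplify_model.py` (a hypothetical file name), the script produces the `_simplify.onnx` files listed above:

```
python simplify_model.py -i vctk_t2.onnx
```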
### Quantize Model
```
from onnxruntime.quantization import quantize_dynamic, QuantType
import argparse

parser = argparse.ArgumentParser(description="create quantized onnx")
parser.add_argument("--input", "-i", type=str, required=True)
parser.add_argument("--output", "-o", type=str)
args = parser.parse_args()

src_model_path = args.input
if args.output is None:
    dst_model_path = src_model_path.replace(".onnx", "_q8.onnx")
else:
    dst_model_path = args.output

# Dynamic (weight-only) 8-bit quantization; only QUInt8 works well here.
quantized_model = quantize_dynamic(src_model_path, dst_model_path, weight_type=QuantType.QUInt8)
```
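
Saved as, say, `quantize_model.py` (again a hypothetical name), it produces the `_q8.onnx` files listed above:

```
python quantize_model.py -i vctk_t2_simplify.onnx
```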