dragonSwing committed
Commit 4b45541
Parent(s): bb37e85
Update README
README.md CHANGED
@@ -13,4 +13,15 @@ The base model is pre-trained on 16kHz sampled speech audio from Vietnamese spee
 [Paper](https://arxiv.org/abs/2006.11477)
 
 # Usage
-See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the English pre-trained model.
+See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the English pre-trained model.
+
+```python
+import torch
+from transformers import Wav2Vec2Model
+
+model = Wav2Vec2Model.from_pretrained("dragonSwing/viwav2vec2-base-3k")
+
+# Sanity check
+inputs = torch.rand([1, 16000])
+outputs = model(inputs)
+```
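
For reference, a minimal sketch of how the output of the sanity check above can be inspected, assuming the standard `transformers` `Wav2Vec2Model` output, which exposes `last_hidden_state`:

```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("dragonSwing/viwav2vec2-base-3k")
model.eval()

# Same dummy input as the sanity check: 1 second of random audio at 16kHz.
inputs = torch.rand([1, 16000])

with torch.no_grad():
    outputs = model(inputs)

# Frame-level contextual representations from the transformer encoder,
# shape (batch, num_frames, hidden_size).
print(outputs.last_hidden_state.shape)
```

These frame-level features are what a downstream fine-tuning head, such as a CTC head for speech recognition, would consume.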