---
language:
- ko
license: apache-2.0
library_name: kenlm
tags:
- audio
- automatic-speech-recognition
- text2text-generation
datasets:
- korean-wiki
---
# ko-ctc-kenlm-spelling-only-wiki
## Table of Contents
- [ko-ctc-kenlm-spelling-only-wiki](#ko-ctc-kenlm-spelling-only-wiki)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
- **Model Description** <br />
    - An n-gram LM for the acoustic model, built word-by-word at the syllable level and trained with KenLM. Use this model together with [ko-spelling-wav2vec2-conformer-del-1s](https://huggingface.co/42MARU/ko-spelling-wav2vec2-conformer-del-1s). <br />
    - It is packaged so it can be loaded and used in the HuggingFace Transformers style. <br />
    - It can also be used directly via the pyctcdecode library. <br />
    - The training data is Korean Wikipedia. <br />
    Sentences containing characters outside the spelling vocab were removed entirely, which minimizes the chance of the LM producing outliers. <br />
    This model was trained on data following the **spelling transcription** (철자전사) convention (numbers and English are kept as written). <br />
- **Developed by:** TADev (@lIlBrother)
- **Language(s):** Korean
- **License:** apache-2.0
## How to Get Started With the Model
```python
import unicodedata

import librosa
from pyctcdecode import build_ctcdecoder
from transformers import (
AutoConfig,
AutoFeatureExtractor,
AutoModelForCTC,
AutoTokenizer,
Wav2Vec2ProcessorWithLM,
)
from transformers.pipelines import AutomaticSpeechRecognitionPipeline
audio_path = ""
# ๋ชจ๋ธ๊ณผ ํ ํฌ๋์ด์ , ์์ธก์ ์ํ ๊ฐ ๋ชจ๋๋ค์ ๋ถ๋ฌ์ต๋๋ค.
model = AutoModelForCTC.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
feature_extractor = AutoFeatureExtractor.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
tokenizer = AutoTokenizer.from_pretrained("42MARU/ko-spelling-wav2vec2-conformer-del-1s")
processor = Wav2Vec2ProcessorWithLM.from_pretrained("42MARU/ko-ctc-kenlm-spelling-only-wiki")
# ์ค์ ์์ธก์ ์ํ ํ์ดํ๋ผ์ธ์ ์ ์๋ ๋ชจ๋๋ค์ ์ฝ์
.
asr_pipeline = AutomaticSpeechRecognitionPipeline(
model=model,
tokenizer=processor.tokenizer,
feature_extractor=processor.feature_extractor,
decoder=processor.decoder,
device=-1,
)
# ์์ฑํ์ผ์ ๋ถ๋ฌ์ค๊ณ beamsearch ํ๋ผ๋ฏธํฐ๋ฅผ ํน์ ํ์ฌ ์์ธก์ ์ํํฉ๋๋ค.
raw_data, _ = librosa.load(audio_path, sr=16000)
kwargs = {"decoder_kwargs": {"beam_width": 100}}
pred = asr_pipeline(inputs=raw_data, **kwargs)["text"]
# ๋ชจ๋ธ์ด ์์ ๋ถ๋ฆฌ ์ ๋์ฝ๋ ํ
์คํธ๋ก ๋์ค๋ฏ๋ก, ์ผ๋ฐ String์ผ๋ก ๋ณํํด์ค ํ์๊ฐ ์์ต๋๋ค.
result = unicodedata.normalize("NFC", pred)
print(result)
# ์๋
ํ์ธ์ 123 ํ
์คํธ์
๋๋ค.
```
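
To make the `unicodedata.normalize("NFC", ...)` step above concrete, here is a small stdlib-only sketch (independent of the model) showing why normalization is needed for jamo-decomposed output:

```python
import unicodedata

# "안녕" in composed (NFC) form: two precomposed Hangul syllables.
composed = "안녕"

# CTC decoders that emit jamo produce the decomposed (NFD) form:
# each syllable splits into leading consonant, vowel, and final consonant.
decomposed = unicodedata.normalize("NFD", composed)

print(len(composed))    # 2 code points
print(len(decomposed))  # 6 code points (3 jamo per syllable)

# Normalizing back to NFC restores the composed syllables,
# which is exactly what the post-processing step above does.
restored = unicodedata.normalize("NFC", decomposed)
print(restored == composed)  # True
```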