---
license: cc-by-nc-sa-4.0
language:
- ga
library_name: transformers
tags:
- tokenizer
- irish
---

The **Historical Irish SentencePiece tokenizer** was trained on Old, Middle, Early Modern, Classical Modern, and pre-reform Modern Irish texts from the St. Gall Glosses, the Würzburg Glosses, CELT, and the book subcorpus of the Historical Irish Corpus. The training data spans ca. 550–1926 and covers a wide variety of genres, such as bardic poetry, native Irish stories, translations and adaptations of continental epic and romance, annals, genealogies, grammatical and medical tracts, diaries, and religious writing. Because some of these texts code-switch between Irish and Latin, the vocabulary also contains some Latin subwords.

[SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing (Kudo et al., 2018)](https://arxiv.org/pdf/1808.06226.pdf) treats the input as a raw stream, thus including the space in the set of characters to use, and then applies the BPE or unigram algorithm to construct the vocabulary. This makes it possible to tokenize languages that do not separate words with spaces. All transformer models in the `transformers` library that use SentencePiece pair it with the unigram algorithm. Examples of models using SentencePiece are [ALBERT](https://huggingface.co/docs/transformers/en/model_doc/albert), [XLNet](https://huggingface.co/docs/transformers/en/model_doc/xlnet), [Marian](https://huggingface.co/docs/transformers/en/model_doc/marian), and [T5](https://huggingface.co/docs/transformers/en/model_doc/t5).
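
Because the space is kept in the symbol inventory, word-initial pieces carry the SentencePiece boundary marker `▁`. A minimal sketch for inspecting the pieces this tokenizer produces (the exact segmentation depends on the trained vocabulary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ancatmara/historical-irish-tokenizer-sentencepiece")

# Pieces that start a new word are prefixed with "▁", since
# SentencePiece treats the space as an ordinary input character.
print(tokenizer.tokenize("Boí Óengus in n-aidchi n-aili inna chotlud."))
```
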
### Use

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ancatmara/historical-irish-tokenizer-sentencepiece")
texts = ['Boí Óengus in n-aidchi n-aili inna chotlud.', 'Co n-accae ní, in n-ingin cucci for crunn síuil dó.']

tokenizer(texts, max_length=128, truncation=True)
```

Out:

```python
>>> {'input_ids': [[0, 16082, 2910, 213, 8040, 13888, 1937, 6875, 343, 3455, 2], [0, 1785, 6693, 1783, 13014, 213, 14883, 739, 12985, 279, 458, 1049, 602, 358, 1782, 2]],
'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]],
'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]}
```
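
The two sequences above come back with different lengths (11 and 16 IDs). To turn such a batch into model-ready tensors, it also needs padding; a minimal sketch continuing the snippet above with the standard `transformers` arguments, assuming the tokenizer defines a padding token and that PyTorch is installed:

```python
# Pad every sequence to the longest one in the batch and return
# PyTorch tensors; padded positions get attention_mask = 0.
batch = tokenizer(texts, max_length=128, truncation=True, padding=True, return_tensors="pt")
print(batch["input_ids"].shape)  # e.g. torch.Size([2, 16])
```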

```python
tokenizer.decode([0, 16082, 2910, 213, 8040, 13888, 1937, 6875, 343, 3455, 2])
```

Out:

```python
>>> '<s> Boí Óengus in n-aidchi n-aili inna chotlud.</s>'
```
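
To recover the text without the `<s>` and `</s>` markers, `decode` also accepts the standard `skip_special_tokens` argument:

```python
# Decode the same IDs, dropping special tokens from the output;
# this should return the plain sentence without <s>/</s>.
tokenizer.decode([0, 16082, 2910, 213, 8040, 13888, 1937, 6875, 343, 3455, 2], skip_special_tokens=True)
```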