---
language: "id"
license: "mit"
datasets:
- Indonesian Wikipedia
- id_newspapers_2018
widget:
- text: "Ibu ku sedang bekerja [MASK] supermarket."
---

# Indonesian BERT base model (uncased)

## Model description
This is a BERT base model pre-trained on Indonesian Wikipedia and Indonesian newspapers using a masked language modeling (MLM) objective. The model is uncased.
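
Because the model is uncased, its tokenizer lowercases text before tokenization, so capitalization does not affect the output. A minimal sketch (assuming the hosted tokenizer config enables lowercasing, as is standard for uncased BERT checkpoints):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('cahya/bert-base-indonesian-1.5G')

# Both capitalizations should map to the same token sequence.
print(tokenizer.tokenize("Ibu ku sedang bekerja"))
print(tokenizer.tokenize("ibu ku sedang bekerja"))
```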

This is one of several language models that have been pre-trained on Indonesian datasets. More details about
their usage on downstream tasks (text classification, text generation, etc.) are available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers).

## Intended uses & limitations

### How to use
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='cahya/bert-base-indonesian-1.5G')
>>> unmasker("Ibu ku sedang bekerja [MASK] supermarket")

[{'sequence': '[CLS] ibu ku sedang bekerja di supermarket [SEP]',
  'score': 0.7983310222625732,
  'token': 1495},
 {'sequence': '[CLS] ibu ku sedang bekerja. supermarket [SEP]',
  'score': 0.090003103017807,
  'token': 17},
 {'sequence': '[CLS] ibu ku sedang bekerja sebagai supermarket [SEP]',
  'score': 0.025469014421105385,
  'token': 1600},
 {'sequence': '[CLS] ibu ku sedang bekerja dengan supermarket [SEP]',
  'score': 0.017966199666261673,
  'token': 1555},
 {'sequence': '[CLS] ibu ku sedang bekerja untuk supermarket [SEP]',
  'score': 0.016971781849861145,
  'token': 1572}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import BertTokenizer, BertModel

model_name = 'cahya/bert-base-indonesian-1.5G'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."  # "Replace this with any text."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
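The returned `output` is a model output object whose `last_hidden_state` field holds one hidden vector per input token; a minimal sketch of inspecting it (the sequence length shown is illustrative and depends on how the text tokenizes):
```python
# BERT base produces 768-dimensional hidden states.
features = output.last_hidden_state
print(features.shape)  # e.g. torch.Size([1, 12, 768])
```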
and in TensorFlow:
```python
from transformers import BertTokenizer, TFBertModel

model_name = 'cahya/bert-base-indonesian-1.5G'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = TFBertModel.from_pretrained(model_name)
text = "Silakan diganti dengan text apa saja."  # "Replace this with any text."
encoded_input = tokenizer(text, return_tensors='tf')
output = model(encoded_input)
```

## Training data

This model was pre-trained on 522MB of Indonesian Wikipedia and 1GB of
[Indonesian newspapers](https://huggingface.co/datasets/id_newspapers_2018).
The texts are lowercased and tokenized using WordPiece with a vocabulary size of 32,000. The inputs of the model are
then of the form:

```[CLS] Sentence A [SEP] Sentence B [SEP]```
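
A minimal sketch of how the tokenizer produces this format for a sentence pair (the example sentences are made up, and the exact WordPiece splits may differ):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('cahya/bert-base-indonesian-1.5G')

# Encoding two sentences as a pair inserts [CLS] and [SEP] automatically.
encoded = tokenizer("kalimat pertama", "kalimat kedua")
print(tokenizer.convert_ids_to_tokens(encoded['input_ids']))
# e.g. ['[CLS]', 'kalimat', 'pertama', '[SEP]', 'kalimat', 'kedua', '[SEP]']
```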