---
language: su
tags:
- sundanese-roberta-base
license: mit
datasets:
- mc4
- cc100
- oscar
- wikipedia
widget:
- text: "Budi nuju <mask> di sakola."
---

## Sundanese RoBERTa Base

Sundanese RoBERTa Base is a masked language model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) architecture. It was trained on four datasets: [OSCAR](https://hf.co/datasets/oscar)'s `unshuffled_deduplicated_su` subset, the Sundanese [mC4](https://hf.co/datasets/mc4) subset, the Sundanese [CC100](https://hf.co/datasets/cc100) subset, and Sundanese [Wikipedia](https://su.wikipedia.org/).

10% of the combined dataset was held out for evaluation. The model was trained from scratch and reached an evaluation loss of 1.952 and an evaluation accuracy of 63.98%.

This model was trained using HuggingFace's Flax framework. All scripts used for training can be found in the [Files and versions](https://hf.co/w11wo/sundanese-roberta-base/tree/main) tab, along with the [Training metrics](https://hf.co/w11wo/sundanese-roberta-base/tensorboard) logged via TensorBoard.

## Model

| Model                    | #params | Arch.   | Training/Validation data (text)       |
| ------------------------ | ------- | ------- | ------------------------------------- |
| `sundanese-roberta-base` | 124M    | RoBERTa | OSCAR, mC4, CC100, Wikipedia (758 MB) |

## Evaluation Results

The model was trained for 50 epochs; the final results at the end of training were:

| train loss | valid loss | valid accuracy | total time |
| ---------- | ---------- | -------------- | ---------- |
| 1.965      | 1.952      | 0.6398         | 6:24:51    |

## How to Use

### As Masked Language Model

```python
from transformers import pipeline

pretrained_name = "w11wo/sundanese-roberta-base"

fill_mask = pipeline(
    "fill-mask",
    model=pretrained_name,
    tokenizer=pretrained_name
)

fill_mask("Budi nuju <mask> di sakola.")
```

### Feature Extraction in PyTorch

```python
from transformers import RobertaModel, RobertaTokenizerFast

pretrained_name = "w11wo/sundanese-roberta-base"
model = RobertaModel.from_pretrained(pretrained_name)
tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name)

prompt = "Budi nuju diajar di sakola."
encoded_input = tokenizer(prompt, return_tensors='pt')
output = model(**encoded_input)
```

## Disclaimer

Do consider the biases present in all four training datasets, which may carry over into this model's outputs.

## Author

Sundanese RoBERTa Base was trained and evaluated by [Wilson Wongso](https://w11wo.github.io/).