claudiatang committed
Commit 5a0c9be
Parent(s): 09374b7
End of training

Files changed:
- README.md +30 -54
- pytorch_model.bin +1 -1
README.md CHANGED
@@ -6,42 +6,32 @@ tags:
 metrics:
 - bleu
 model-index:
 - name: flan-t5-base-eng-hwp
   results: []
-language:
-- en
-library_name: transformers
-pipeline_tag: translation
 ---
 
+<!-- This model card has been generated automatically according to the information the Trainer had access to. You
+should probably proofread and complete it, then remove this comment. -->
+
+# flan-t5-base-eng-hwp
+
+This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.
-- Bleu:
-- Gen Len: 18.
+- Loss: 1.6073
+- Bleu: 4.9074
+- Gen Len: 18.8338
 
 ## Model description
 
-The [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) documentation has more details on running the model.
-
-However, to use this model to translate English to Hawaiian Pidgin, enter ``"translate English to Hawaiian Pidgin: "`` before your statement.
-
-``translate English to Hawaiian Pidgin: I went to Ala Moana today to go shopping.``
-
-## Training and evaluation data
+More information needed
 
 ## Intended uses & limitations
 
-Due to a limited set of training and evaluation data, this model has many limitations, such as not knowing certain Hawaiian Pidgin phrases or having trouble with longer sentences.
+More information needed
+
+## Training and evaluation data
+
+More information needed
 
 ## Training procedure
@@ -60,40 +50,26 @@ The following hyperparameters were used during training:
 
 | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
-| No log | 1.0 | 420 | 1.
-| 2.
-| 1.
-| 1.
-| 1.
-| 1.
-| 1.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
-| 0.
+| No log | 1.0 | 420 | 1.6234 | 3.464 | 18.8 |
+| 2.1287 | 2.0 | 840 | 1.4706 | 4.1392 | 18.8084 |
+| 1.5094 | 3.0 | 1260 | 1.4294 | 4.3477 | 18.7924 |
+| 1.2917 | 4.0 | 1680 | 1.4015 | 4.5189 | 18.8068 |
+| 1.1347 | 5.0 | 2100 | 1.3900 | 4.697 | 18.8236 |
+| 1.012 | 6.0 | 2520 | 1.4038 | 4.7522 | 18.8051 |
+| 1.012 | 7.0 | 2940 | 1.4086 | 4.8399 | 18.8177 |
+| 0.901 | 8.0 | 3360 | 1.4453 | 4.8191 | 18.8253 |
+| 0.818 | 9.0 | 3780 | 1.4678 | 4.8245 | 18.8203 |
+| 0.7511 | 10.0 | 4200 | 1.4922 | 4.951 | 18.8574 |
+| 0.693 | 11.0 | 4620 | 1.5186 | 4.9174 | 18.8363 |
+| 0.6462 | 12.0 | 5040 | 1.5487 | 5.0009 | 18.8338 |
+| 0.6462 | 13.0 | 5460 | 1.5651 | 5.021 | 18.8295 |
+| 0.6062 | 14.0 | 5880 | 1.5942 | 4.8801 | 18.8245 |
+| 0.5781 | 15.0 | 6300 | 1.6073 | 4.9074 | 18.8338 |
 
 
 ### Framework versions
 
 - Transformers 4.34.1
-- Pytorch 2.0
-- Datasets 2.14.
+- Pytorch 2.1.0+cu118
+- Datasets 2.14.6
 - Tokenizers 0.14.1
-
-## Resources
-
-- Christodouloupoulos, C., & Steedman, M. (2014). A massively parallel corpus: the Bible in 100 languages. Language Resources and Evaluation, 49(2), 375–395. https://doi.org/10.1007/s10579-014-9287-y
-- Chung, H. W., Hou, L., Longpre, S., Zoph, B., Tay, Y., Fedus, W., … Wei, J. (2022). _Scaling Instruction-Finetuned Language Models._ doi:10.48550/ARXIV.2210.11416
-- _Hawaii Pidgin_. (2017). Wycliffe. https://www.biblegateway.com/versions/Hawaii-Pidgin-HWP/ (Original work published 2000)
-- _King James Bible_. (2017). BibleGateway.com. https://www.biblegateway.com/versions/king-james-version-kjv-bible/ (Original work published 1769)
-- T5. (n.d.). Huggingface.co. https://huggingface.co/docs/transformers/model_doc/t5
-- Translation. (n.d.). Huggingface.co. Retrieved October 18, 2023, from https://huggingface.co/docs/transformers/tasks/translation
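The usage note in the removed model card says to prepend ``translate English to Hawaiian Pidgin: `` to the English input. A minimal sketch of that usage with the transformers seq2seq API; the repo id `claudiatang/flan-t5-base-eng-hwp` is assumed from the commit author and the model-index name, and the generation settings are illustrative rather than taken from the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumed repo id (commit author + model-index name); not stated explicitly in the card.
model_id = "claudiatang/flan-t5-base-eng-hwp"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The removed card's instruction: prefix the English sentence with the task string.
text = "translate English to Hawaiian Pidgin: I went to Ala Moana today to go shopping."

inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)  # illustrative generation budget
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```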
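The Bleu and Gen Len columns in the results table come from the Trainer's metric function, which is not part of this commit. A rough sketch of how such numbers are typically produced for a translation card, using the evaluate library; the prediction/reference pair below is hypothetical:

```python
import evaluate

# Hypothetical decoded output and reference, only to illustrate the metric calls;
# the actual evaluation set is not included in this commit.
predictions = ["I wen go Ala Moana fo go shopping today."]
references = [["I wen go Ala Moana today fo go shopping."]]

# "Bleu" in the table is typically the sacreBLEU corpus score.
sacrebleu = evaluate.load("sacrebleu")
bleu_score = sacrebleu.compute(predictions=predictions, references=references)["score"]

# "Gen Len" is usually the mean generated length; a whitespace-token count stands in
# here for the tokenizer-based count the Trainer normally reports.
gen_len = sum(len(p.split()) for p in predictions) / len(predictions)

print(round(bleu_score, 4), round(gen_len, 4))
```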
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:a07d94c167f066bf9af177fd56cf7185d5d0f7f5abfcde4009e9e6e8b8c48685
 size 990409330
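The pytorch_model.bin entry is a Git LFS pointer rather than the weights themselves: ``oid sha256:`` holds the SHA-256 digest of the real file and ``size`` its length in bytes. A small stdlib-only sketch of checking a locally downloaded copy against the new pointer values, assuming the file sits at ``pytorch_model.bin``:

```python
import hashlib
from pathlib import Path

# Assumed local copy of the weights file that the LFS pointer refers to.
path = Path("pytorch_model.bin")

expected_oid = "a07d94c167f066bf9af177fd56cf7185d5d0f7f5abfcde4009e9e6e8b8c48685"
expected_size = 990409330

# Hash in chunks so the ~990 MB file is never held in memory at once.
digest = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        digest.update(chunk)

print("size matches:", path.stat().st_size == expected_size)
print("oid matches:", digest.hexdigest() == expected_oid)
```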