Update README.md
README.md
---
tags:
- summarization
widget:
- text: "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"

---

# CodeTrans model for code comment generation java

Pretrained model on the programming language Java, using the t5-small model architecture. It was first released in
[this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized Java code functions: it works best with tokenized Java functions.

## Model description

This CodeTrans model is based on the `t5-small` model. It has its own SentencePiece vocabulary model. It was trained with multi-task learning on 13 supervised tasks in the software development domain and 7 unsupervised datasets, and was then fine-tuned on the code comment generation task for Java functions/methods.

## Intended uses & limitations

Here is how to use this model to generate Java function documentation using the Transformers SummarizationPipeline:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead, SummarizationPipeline

pipeline = SummarizationPipeline(
    model=AutoModelWithLMHead.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune"),
    tokenizer=AutoTokenizer.from_pretrained("SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune", skip_special_tokens=True),
    device=0
)

tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
pipeline([tokenized_code])
```

Run this example in [this colab notebook](https://github.com/agemagician/CodeTrans/blob/main/prediction/multitask/fine-tuning/code%20comment%20generation/small_model.ipynb).
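If you would rather call the model directly (for instance on a newer Transformers release, where `AutoModelForSeq2SeqLM` supersedes the deprecated `AutoModelWithLMHead`), the same generation can be sketched roughly as follows; the `max_length` value is an illustrative assumption, not a setting from this card:

```python
# Minimal sketch without SummarizationPipeline (assumed-equivalent usage).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "SEBIS/code_trans_t5_small_code_comment_generation_java_multitask_finetune"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

tokenized_code = "protected String renderUri ( URI uri ) { return uri . toASCIIString ( ) ; }"
inputs = tokenizer(tokenized_code, return_tensors="pt")           # encode the tokenized Java function
output_ids = model.generate(**inputs, max_length=64)              # max_length is an illustrative choice
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))  # decoded comment text
```
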
## Training data

The datasets for the supervised training tasks can be downloaded from [Link](https://www.dropbox.com/sh/488bq2of10r4wvw/AACs5CGIQuwtsD7j_Ls_JAORa/finetuning_dataset?dl=0&subfolder_nav_tracking=1).

## Training procedure

### Multi-task Pretraining

The model was trained on a single TPU Pod V3-8 for 260,000 steps in total, using a sequence length of 512 (batch size 4096).
It has a total of approximately 220M parameters and was trained using the encoder-decoder architecture.
The optimizer used is AdaFactor with an inverse square root learning rate schedule for pre-training.
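As a rough, illustrative sketch of that optimizer setup (this is not the original training code; the warmup step count and base learning rate are assumptions):

```python
# Illustrative only: Adafactor paired with a T5-style inverse square root LR schedule.
import torch
from transformers import Adafactor

def inverse_sqrt_lr(step: int, warmup_steps: int = 10_000) -> float:
    # Hold the rate flat for warmup_steps, then decay proportionally to 1/sqrt(step).
    return 1.0 / max(step, warmup_steps) ** 0.5

model = torch.nn.Linear(512, 512)  # stand-in for the ~220M-parameter encoder-decoder
optimizer = Adafactor(model.parameters(), lr=1.0, relative_step=False, scale_parameter=False)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=inverse_sqrt_lr)
```
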
### Fine-tuning

This model was then fine-tuned on a single TPU Pod V2-8 for 750,000 steps in total with a sequence length of 512 (batch size 256), using only the dataset containing Java code.

## Evaluation results

For the code documentation tasks, different models achieve the following results on the test set.

Test results:

| Language / Model      |      Java      |
| --------------------- | :------------: |
| CodeTrans-ST-Small    |     37.98      |
| CodeTrans-ST-Base     |     38.07      |
| CodeTrans-TF-Small    |     38.56      |
| CodeTrans-TF-Base     |     39.06      |
| CodeTrans-TF-Large    |   **39.50**    |
| CodeTrans-MT-Small    |     20.15      |
| CodeTrans-MT-Base     |     27.44      |
| CodeTrans-MT-Large    |     34.69      |
| CodeTrans-MT-TF-Small |     38.37      |
| CodeTrans-MT-TF-Base  |     38.90      |
| CodeTrans-MT-TF-Large |     39.25      |
| State of the art      |     38.17      |

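These figures come from the CodeTrans evaluation; assuming they are BLEU-style scores between generated and reference comments (the metric is not named in this card), a comparable number could be computed with a library such as `sacrebleu`. The strings below are placeholders, not data from the benchmark:

```python
# Hypothetical scoring sketch; replace the placeholder strings with real model
# outputs and reference comments from the test set.
import sacrebleu

hypotheses = ["returns the uri as an ascii string ."]          # model outputs, one per example
references = [["render the given uri as an ascii string ."]]   # one stream of reference comments
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```
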
> Created by [Ahmed Elnaggar](https://twitter.com/Elnaggar_AI) | [LinkedIn](https://www.linkedin.com/in/prof-ahmed-elnaggar/) and Wei Ding | [LinkedIn](https://www.linkedin.com/in/wei-ding-92561270/)