CodeT5 is a family of encoder-decoder language models for code from the paper: [CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation](https://arxiv.org/pdf/2109.00859.pdf) by Yue Wang, Weishi Wang, Shafiq Joty, and Steven C.H. Hoi.

The checkpoint included in this repository is denoted as **CodeT5-large** (770M), which was introduced in the paper: [CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning](https://arxiv.org/pdf/2207.01780.pdf) by Hung Le, Yue Wang, Akhilesh Deepak Gotmare, Silvio Savarese, and Steven C.H. Hoi.

## Training data
## How to use

This model can be easily loaded using the `T5ForConditionalGeneration` functionality:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration
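# Load the tokenizer and model weights.
# NOTE: the checkpoint id and the prompt below are illustrative assumptions,
# not taken verbatim from this card; adjust them to your setup.
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-large")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-large")

# Encode a code snippet with a T5-style sentinel marking the span to predict.
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Generate a short completion for the masked span and decode it below.
generated_ids = model.generate(input_ids, max_length=10)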
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```
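Here `<extra_id_0>` is the standard T5 sentinel token: it marks the span the model is asked to fill in, and the decoded output is the model's prediction for that span.
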
## BibTeX entry and citation info
```bibtex
@inproceedings{CodeT52021,
  author    = {Yue Wang and Weishi Wang and Shafiq R. Joty and Steven C. H. Hoi},
  title     = {CodeT5: Identifier-aware Unified Pre-trained Encoder-Decoder Models for Code Understanding and Generation},
  booktitle = {EMNLP},
  year      = {2021}
}

@article{CodeRL2022,
  author  = {Hung Le and Yue Wang and Akhilesh Deepak Gotmare and Silvio Savarese and Steven C.H. Hoi},
  title   = {CodeRL: Mastering Code Generation through Pretrained Models and Deep Reinforcement Learning},
  journal = {arXiv preprint},
  year    = {2022}
}
```