created model card

#1
by Ajwad - opened
---
language:
- bn
license: cc-by-nc-sa-4.0
---

# banglat5_banglaparaphrase

This repository contains the pretrained checkpoint of the model **BanglaT5** finetuned on the [BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase) dataset. BanglaT5 is a sequence-to-sequence transformer model pretrained with the "Span Corruption" objective. Models finetuned from this checkpoint achieve competitive results on the dataset.

For finetuning and inference, refer to the scripts in the official GitHub repository of [BanglaNLG](https://github.com/csebuetnlp/BanglaNLG).

**Note**: This model was pretrained using a specific normalization pipeline, available [here](https://github.com/csebuetnlp/normalizer). All finetuning scripts in the official GitHub repository use this normalization by default. If you need to adapt the pretrained model for a different task, make sure the text units are normalized using this pipeline before tokenizing to get the best results. A basic example is given below.

## Using this model in `transformers`

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from normalizer import normalize  # pip install git+https://github.com/csebuetnlp/normalizer

model = AutoModelForSeq2SeqLM.from_pretrained("csebuetnlp/banglat5_banglaparaphrase")
tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglat5_banglaparaphrase", use_fast=False)

input_sentence = ""  # fill in a Bangla sentence to paraphrase
input_ids = tokenizer(normalize(input_sentence), return_tensors="pt").input_ids
generated_tokens = model.generate(input_ids)
decoded_tokens = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)[0]

print(decoded_tokens)
```
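
When paraphrasing many sentences, tokenizing and generating in batches is usually faster than one sentence at a time. The sketch below is illustrative rather than part of the official scripts: the `batched` helper, the batch size, and the beam-search settings are all assumptions, not values from the BanglaNLG repository.

```python
def batched(items, size):
    """Yield successive chunks of `items`, each of length at most `size`."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def paraphrase_all(sentences, batch_size=8, num_beams=5, max_length=128):
    """Paraphrase a list of Bangla sentences in batches (illustrative helper)."""
    # Imports are kept inside the function so the helper above stays dependency-free.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
    from normalizer import normalize  # pip install git+https://github.com/csebuetnlp/normalizer

    model = AutoModelForSeq2SeqLM.from_pretrained("csebuetnlp/banglat5_banglaparaphrase")
    tokenizer = AutoTokenizer.from_pretrained("csebuetnlp/banglat5_banglaparaphrase", use_fast=False)

    outputs = []
    for batch in batched(sentences, batch_size):
        # Normalize every sentence before tokenizing, as required by the model.
        inputs = tokenizer([normalize(s) for s in batch], return_tensors="pt", padding=True)
        generated = model.generate(**inputs, num_beams=num_beams, max_length=max_length)
        outputs.extend(tokenizer.batch_decode(generated, skip_special_tokens=True))
    return outputs
```

The padded batch tokenization mirrors the single-sentence example above; only the list handling and beam search are new.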

## Benchmarks

* Supervised fine-tuning

| Test Set | Model | sacreBLEU | ROUGE-L | PINC | BERTScore | BERT-iBLEU |
| -------- | ----- | --------- | ------- | ---- | --------- | ---------- |
| [BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase) | [BanglaT5](https://huggingface.co/csebuetnlp/banglat5)<br>[IndicBART](https://huggingface.co/ai4bharat/IndicBART)<br>[IndicBARTSS](https://huggingface.co/ai4bharat/IndicBARTSS) | 32.8<br>5.60<br>4.90 | 63.58<br>35.61<br>33.66 | 74.40<br>80.26<br>82.10 | 94.80<br>91.50<br>91.10 | 92.18<br>91.16<br>90.95 |
| [IndicParaphrase](https://huggingface.co/datasets/ai4bharat/IndicParaphrase) | BanglaT5<br>IndicBART<br>IndicBARTSS | 11.0<br>12.0<br>10.7 | 19.99<br>21.58<br>20.59 | 74.50<br>76.83<br>77.60 | 94.80<br>93.30<br>93.10 | 87.738<br>90.65<br>90.54 |

The dataset can be found at the link below:
* **[BanglaParaphrase](https://huggingface.co/datasets/csebuetnlp/BanglaParaphrase)**

## Citation

If you use this model, please cite the following paper:

```bibtex
@article{bhattacharjee2022banglanlg,
  author     = {Abhik Bhattacharjee and Tahmid Hasan and Wasi Uddin Ahmad and Rifat Shahriyar},
  title      = {BanglaNLG: Benchmarks and Resources for Evaluating Low-Resource Natural Language Generation in Bangla},
  journal    = {CoRR},
  volume     = {abs/2205.11081},
  year       = {2022},
  url        = {https://arxiv.org/abs/2205.11081},
  eprinttype = {arXiv},
  eprint     = {2205.11081}
}
```

If you use the normalization module, please cite the following paper:

```bibtex
@inproceedings{hasan-etal-2020-low,
  title     = "Not Low-Resource Anymore: Aligner Ensembling, Batch Filtering, and New Datasets for {B}engali-{E}nglish Machine Translation",
  author    = "Hasan, Tahmid and
               Bhattacharjee, Abhik and
               Samin, Kazi and
               Hasan, Masum and
               Basak, Madhusudan and
               Rahman, M. Sohel and
               Shahriyar, Rifat",
  booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
  month     = nov,
  year      = "2020",
  address   = "Online",
  publisher = "Association for Computational Linguistics",
  url       = "https://www.aclweb.org/anthology/2020.emnlp-main.207",
  doi       = "10.18653/v1/2020.emnlp-main.207",
  pages     = "2612--2623",
  abstract  = "Despite being the seventh most widely spoken language in the world, Bengali has received much less attention in machine translation literature due to being low in resources. Most publicly available parallel corpora for Bengali are not large enough; and have rather poor quality, mostly because of incorrect sentence alignments resulting from erroneous sentence segmentation, and also because of a high volume of noise present in them. In this work, we build a customized sentence segmenter for Bengali and propose two novel methods for parallel corpus creation on low-resource setups: aligner ensembling and batch filtering. With the segmenter and the two methods combined, we compile a high-quality Bengali-English parallel corpus comprising of 2.75 million sentence pairs, more than 2 million of which were not available before. Training on neural models, we achieve an improvement of more than 9 BLEU score over previous approaches to Bengali-English machine translation. We also evaluate on a new test set of 1000 pairs made with extensive quality control. We release the segmenter, parallel corpus, and the evaluation set, thus elevating Bengali from its low-resource status. To the best of our knowledge, this is the first ever large scale study on Bengali-English machine translation. We believe our study will pave the way for future research on Bengali-English machine translation as well as other low-resource languages. Our data and code are available at https://github.com/csebuetnlp/banglanmt.",
}
```