darkproger committed
Commit 99a40c4
1 Parent(s): 5651294

Update README.md

Files changed (1):
  1. README.md +21 -1
README.md CHANGED
@@ -144,7 +144,27 @@ Cleaned Multi30K: [lang-uk/multi30k-extended-17k](https://huggingface.co/dataset
 
 ## Citation
 
-TBD
+```
+@inproceedings{paniv-etal-2024-dragoman,
+    title = "Setting up the Data Printer with Improved {E}nglish to {U}krainian Machine Translation",
+    author = "Paniv, Yurii and
+      Chaplynskyi, Dmytro and
+      Trynus, Nikita and
+      Kyrylov, Volodymyr",
+    editor = "Romanyshyn, Mariana and
+      Romanyshyn, Nataliia and
+      Hlybovets, Andrii and
+      Ignatenko, Oleksii",
+    booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024",
+    month = may,
+    year = "2024",
+    address = "Torino, Italia",
+    publisher = "ELRA and ICCL",
+    url = "https://aclanthology.org/2024.unlp-1.6",
+    pages = "41--50",
+    abstract = "To build large language models for Ukrainian we need to expand our corpora with large amounts of new algorithmic tasks expressed in natural language. Examples of task performance expressed in English are abundant, so with a high-quality translation system our community will be enabled to curate datasets faster. To aid this goal, we introduce a recipe to build a translation system using supervised finetuning of a large pretrained language model with a noisy parallel dataset of 3M pairs of Ukrainian and English sentences followed by a second phase of training using 17K examples selected by k-fold perplexity filtering on another dataset of higher quality. Our decoder-only model named Dragoman beats performance of previous state of the art encoder-decoder models on the FLORES devtest set.",
+}
+```
 
 
 
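The abstract added above mentions selecting the 17K second-phase examples by k-fold perplexity filtering. As a minimal, hypothetical sketch of that idea (not the paper's actual pipeline): split the pool into k folds, score each fold with a model fit on the other k-1 folds, and keep the lowest-perplexity examples. A toy character-bigram model stands in for the real pretrained LM scorer, and all function names here are illustrative.

```python
import math
import random
from collections import Counter


def train_bigram_lm(sentences):
    """Fit a character-bigram LM with add-one smoothing (toy stand-in for an LLM scorer)."""
    bigrams, unigrams = Counter(), Counter()
    for s in sentences:
        s = "^" + s  # start-of-sequence marker
        for a, b in zip(s, s[1:]):
            bigrams[(a, b)] += 1
            unigrams[a] += 1
    vocab = len(unigrams) + 1

    def perplexity(s):
        s = "^" + s
        n = max(len(s) - 1, 1)
        logp = 0.0
        for a, b in zip(s, s[1:]):
            p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab)
            logp += math.log(p)
        return math.exp(-logp / n)

    return perplexity


def kfold_perplexity_filter(examples, k=5, keep_fraction=0.5, seed=0):
    """Score each example with a model trained on the other k-1 folds,
    then keep the lowest-perplexity fraction of the pool."""
    rng = random.Random(seed)
    idx = list(range(len(examples)))
    rng.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]

    scored = []
    for i, fold in enumerate(folds):
        # Held-out scoring: the model never sees the fold it scores.
        train = [examples[j] for f, members in enumerate(folds) if f != i for j in members]
        ppl = train_bigram_lm(train)
        scored.extend((ppl(examples[j]), examples[j]) for j in fold)

    scored.sort(key=lambda t: t[0])
    n_keep = int(len(scored) * keep_fraction)
    return [s for _, s in scored[:n_keep]]


if __name__ == "__main__":
    pool = ["the cat sat", "the dog sat", "the cat ran", "qq##zz@@xx"] * 3
    kept = kfold_perplexity_filter(pool, k=3, keep_fraction=0.5)
    print(len(kept), "examples kept")
```

In the paper's setting the scorer would be the finetuned decoder-only model itself and the pool the higher-quality parallel dataset; the held-out scoring is what keeps each example's score from being inflated by the model having memorized it.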