Text Generation
Transformers
Inference Endpoints
machineteacher committed on
Commit 617cb07
1 Parent(s): d6e8550

Update README.md

Files changed (1): README.md +43 -0
README.md CHANGED
@@ -1,3 +1,46 @@
  ---
  license: cc-by-nc-sa-4.0
+ datasets:
+ - wi_locness
+ - matejklemen/falko_merlin
+ - paws
+ - paws-x
+ - asset
+ language:
+ - en
+ - de
+ - es
+ - ar
+ - ja
+ - ko
+ - zh
+ metrics:
+ - bleu
+ - rouge
+ - sari
+ - accuracy
+ library_name: transformers
  ---
+
+ # Model Card for mEdIT-xxl
+
+ This model was obtained by fine-tuning the `MBZUAI/bactrian-x-llama-13b-lora` model on the mEdIT dataset.
+
+ **Paper:** mEdIT: Multilingual Text Editing via Instruction Tuning
+
+ **Authors:** Vipul Raheja, Dimitris Alikaniotis, Vivek Kulkarni, Bashar Alhafni, Dhruv Kumar
+
+ ## Model Details
+
+ ### Model Description
+
+ - **Language(s) (NLP)**: Arabic, Chinese, English, German, Japanese, Korean, Spanish
+ - **Finetuned from model:** `MBZUAI/bactrian-x-llama-13b-lora`
+
+ ### Model Sources
+
+ - **Repository:** https://github.com/vipulraheja/medit
+ - **Paper:** TBA
+
+ ## How to use
+ We release the best-performing models presented in our paper.
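The "How to use" section added in this hunk is cut off here. As a rough sketch of how a checkpoint with `library_name: transformers` and a LLaMA-based backbone is typically loaded (the repository id `grammarly/medit-xxl`, the `build_prompt` helper, and the instruction template below are illustrative assumptions, not confirmed by this diff):

```python
def build_prompt(instruction: str, text: str) -> str:
    """Wrap an editing instruction and the input text into one prompt.

    NOTE: this Alpaca-style template is an assumption for illustration;
    check the paper/repository for the exact prompt format the model expects.
    """
    return (
        f"### Instruction:\n{instruction}\n\n"
        f"### Input:\n{text}\n\n"
        f"### Response:\n"
    )


def edit_text(instruction: str, text: str, max_new_tokens: int = 128) -> str:
    """Generate an edited version of `text` following `instruction`."""
    # Imported lazily so build_prompt can be used without the heavy dependency.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "grammarly/medit-xxl"  # assumed repository id
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)

    inputs = tokenizer(build_prompt(instruction, text), return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, skipping the prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
```

`edit_text("Fix grammatical errors in this sentence", "She go to school.")` would then return the corrected sentence, assuming the prompt template matches the model's training format.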