sahuPrachi committed
Commit: 4610b41 · 1 Parent(s): cb813ff
README.md updated

README.md CHANGED
@@ -18,13 +18,13 @@ tags:
 - indicnlp
 ---
 
-
+MultiIndicHeadlineGeneration is a multilingual, sequence-to-sequence pre-trained model that focuses only on Indic languages. It currently supports 11 Indian languages and is fine-tuned on the [IndicBART](https://huggingface.co/ai4bharat/IndicBART) checkpoint. You can use the MultiIndicHeadlineGeneration model to build natural language generation applications in Indian languages for tasks like summarization, headline generation, and other summarization-related tasks. Some salient features of MultiIndicHeadlineGeneration are:
 
 <ul>
 <li>Supported languages: Assamese, Bengali, Gujarati, Hindi, Marathi, Odiya, Punjabi, Kannada, Malayalam, Tamil, and Telugu. Not all of these languages are supported by mBART50 and mT5.</li>
 <li>The model is much smaller than the mBART and mT5(-base) models, so it is less computationally expensive for fine-tuning and decoding.</li>
 <li>Trained on large Indic language corpora (1.316 million paragraphs and 5.9 million unique tokens).</li>
-<li>
+<li>All languages have been represented in the Devanagari script to encourage transfer learning among the related languages.</li>
 </ul>
 
 
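The code hunks that follow assume a `tokenizer` and `model` have already been created in the unchanged part of the README. For context only, here is a minimal loading sketch: the repository id is an assumption inferred from the model name (check the model card header for the exact checkpoint id), the `Auto*` classes stand in for whatever classes the README actually imports, and the tokenizer keyword arguments follow the IndicBART model card that this model is fine-tuned from.

```python
# Loading sketch (not part of the diff). Assumptions: the checkpoint id below is a guess
# based on the model name, and the generic Auto* classes stand in for the README's imports.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

ckpt = "ai4bharat/MultiIndicHeadlineGenerationSS"  # hypothetical repo id; verify on the model card

tokenizer = AutoTokenizer.from_pretrained(ckpt, do_lower_case=False, use_fast=False, keep_accents=True)
model = AutoModelForSeq2SeqLM.from_pretrained(ckpt)

# Special-token ids referenced by the snippets in the next hunks.
bos_id = tokenizer._convert_token_to_id_with_added_voc("<s>")
eos_id = tokenizer._convert_token_to_id_with_added_voc("</s>")
pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
```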
@@ -50,9 +50,9 @@ pad_id = tokenizer._convert_token_to_id_with_added_voc("<pad>")
 
 # First tokenize the input and outputs. The format below is how MultiIndicHeadlineGenerationSS was trained, so the input should be "Paragraph </s> <2xx>", where xx is the language code. Similarly, the output should be "<2yy> Sentence </s>".
 
-inp = tokenizer("यूट्यूब या फेसबुक पर वीडियो देखते समय आप भी बफरिंग की वजह से परेशान होते हैं? इसका जवाब हां है तो जल्द ही आपकी सारी समस्या खत्म होने वाली है। दरअसल, टेलीकॉम मिनिस्टर अश्विनी वैष्णव ने पिछले सप्ताह कहा कि अगस्त के अंत तक हर-हाल में '5G' इंटरनेट लॉन्च हो जाएगा। उन्होंने यह भी कहा है कि स्पेक्ट्रम की बिक्री शुरू हो चुकी है और जून तक ये प्रोसेस खत्म होने की संभावना है।</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[
+inp = tokenizer("यूट्यूब या फेसबुक पर वीडियो देखते समय आप भी बफरिंग की वजह से परेशान होते हैं? इसका जवाब हां है तो जल्द ही आपकी सारी समस्या खत्म होने वाली है। दरअसल, टेलीकॉम मिनिस्टर अश्विनी वैष्णव ने पिछले सप्ताह कहा कि अगस्त के अंत तक हर-हाल में '5G' इंटरनेट लॉन्च हो जाएगा। उन्होंने यह भी कहा है कि स्पेक्ट्रम की बिक्री शुरू हो चुकी है और जून तक ये प्रोसेस खत्म होने की संभावना है।</s> <2hi>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[43615, 116, 4426, 46, . . . . 64001, 64006]])
 
-out = tokenizer("<2hi> 5G इंटरनेट का इंतजार हुआ खत्म:अगस्त तक देश में शुरू हो सकती है 5G सर्विस </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[
+out = tokenizer("<2hi> 5G इंटरनेट का इंतजार हुआ खत्म:अगस्त तक देश में शुरू हो सकती है 5G सर्विस </s>", add_special_tokens=False, return_tensors="pt", padding=True).input_ids # tensor([[64006, 393, 1690, . . . . 1690, 11999, 64001]])
 
 model_outputs=model(input_ids=inp, decoder_input_ids=out[:,0:-1], labels=out[:,1:])
 
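The forward call at the end of the hunk above already returns the sequence-to-sequence training loss (the labels are the target headline shifted by one token), so a single fine-tuning step only needs an optimizer around it. The sketch below is illustrative, not from the README: it reuses `model`, `inp`, and `out` from the snippet above, and the optimizer choice and learning rate are arbitrary.

```python
# Illustrative single fine-tuning step (not from the README). Assumes `model`, `inp`, and
# `out` from the snippet above; AdamW and lr=1e-4 are arbitrary choices.
import torch

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model_outputs = model(input_ids=inp, decoder_input_ids=out[:, 0:-1], labels=out[:, 1:])
model_outputs.loss.backward()   # cross-entropy over the shifted target headline
optimizer.step()
optimizer.zero_grad()
```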
@@ -70,28 +70,33 @@ model_output=model.generate(inp, use_cache=True, num_beams=4, max_length=32, min
 
 # Decode to get output strings
 decoded_output=tokenizer.decode(model_output[0], skip_special_tokens=True, clean_up_tokenization_spaces=False)
-print(decoded_output) # अगस्त के अंत तक
+print(decoded_output) # अगस्त के अंत तक शुरू हो जाएगा '5G' इंटरनेट
 
 
 ```
 
+# Note:
+If you wish to use any language written in a non-Devanagari script, you should first convert it to Devanagari using the <a href="https://github.com/anoopkunchukuttan/indic_nlp_library">Indic NLP Library</a>. After you get the output, convert it back into the original script.
+
+
+
 # Benchmarks
-Scores on the `
+Scores on the `MultiIndicHeadlineGeneration` test sets are as follows:
 
 Language | Rouge-1 / Rouge-2 / Rouge-L
 ---------|----------------------------
-as |
-bn |
-gu |
-hi |
-kn |
-ml | 58.
-mr |
-or |
-pa |
-ta | 47.
-te |
-average | 42.
+as | 46.06 / 30.02 / 44.64
+bn | 34.22 / 19.18 / 32.60
+gu | 33.49 / 17.49 / 31.79
+hi | 37.14 / 18.04 / 32.70
+kn | 64.82 / 53.91 / 64.10
+ml | 58.69 / 47.18 / 57.94
+mr | 35.20 / 19.50 / 34.08
+or | 22.51 / 9.00 / 21.62
+pa | 46.47 / 29.07 / 43.25
+ta | 47.39 / 31.39 / 45.94
+te | 37.69 / 21.89 / 36.66
+average | 42.15 / 26.97 / 40.48
 
 
 # Contributors
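The note added in the last hunk asks for non-Devanagari input to be mapped to Devanagari before tokenization and for the generated headline to be mapped back afterwards. A rough sketch of that round trip is shown below; the `UnicodeIndicTransliterator` API, the pip package name `indic-nlp-library`, and the Bengali example string are assumptions, not prescribed by the README.

```python
# Rough sketch of the script round trip described in the "# Note:" section above.
# Assumptions: the Indic NLP Library is installed (pip install indic-nlp-library) and its
# UnicodeIndicTransliterator is the conversion tool; the Bengali example text is hypothetical.
from indicnlp.transliterate.unicode_transliterate import UnicodeIndicTransliterator

bn_paragraph = "৫জি ইন্টারনেট শীঘ্রই চালু হবে"  # hypothetical Bengali input ("5G internet will launch soon")

# Bengali script -> Devanagari, since the model represents all supported languages in Devanagari.
dev_paragraph = UnicodeIndicTransliterator.transliterate(bn_paragraph, "bn", "hi")

# ... tokenize dev_paragraph + " </s> <2bn>" and generate a headline exactly as in the README
# snippet; `dev_headline` below stands in for that generated Devanagari output.
dev_headline = dev_paragraph

# Devanagari -> Bengali, to return the headline to the original script.
bn_headline = UnicodeIndicTransliterator.transliterate(dev_headline, "hi", "bn")
print(bn_headline)
```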