cointegrated committed
Commit 013152b
1 Parent(s): 080c7f9

Update README.md

Files changed (1):
  1. README.md +25 -2
README.md CHANGED
@@ -7,10 +7,33 @@ license: mit
 ---
 
 This is a small Russian paraphraser based on the [google/mt5-small](https://huggingface.co/google/mt5-small) model.
+It has rather poor paraphrasing performance, but can be fine-tuned for this or other tasks.
 
-It was obtained by taking the [alenusch/mt5small-ruparaphraser](https://huggingface.co/alenusch/mt5small-ruparaphraser) model and stripping the 96% of its vocabulary that is unrelated to the Russian language or infrequent.
+This model was created by taking the [alenusch/mt5small-ruparaphraser](https://huggingface.co/alenusch/mt5small-ruparaphraser) model and stripping the 96% of its vocabulary that is unrelated to the Russian language or infrequent.
 
 * The original model has 300M parameters, with 256M of them being input and output embeddings.
 * After shrinking the `sentencepiece` vocabulary from 250K to 20K, the model size was reduced from 1.1GB to 246MB.
 * The first 5K tokens in the new vocabulary are taken from the original `mt5-small`.
-* The next 15K tokens are the most frequent tokens obtained by tokenizing a Russian web corpus from [the Leipzig corpora collection](https://wortschatz.uni-leipzig.de/en/download/Russian).
+* The next 15K tokens are the most frequent tokens obtained by tokenizing a Russian web corpus from the [Leipzig corpora collection](https://wortschatz.uni-leipzig.de/en/download/Russian).
+
+The model can be used as follows:
+```
+# !pip install transformers sentencepiece
+import torch
+from transformers import T5ForConditionalGeneration, T5Tokenizer
+
+tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-small")
+model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-small")
+
+text = 'Ехал Грека через реку, видит Грека в реке рак. '
+inputs = tokenizer(text, return_tensors='pt')
+with torch.no_grad():
+    hypotheses = model.generate(
+        **inputs,
+        do_sample=True, top_p=0.95, num_return_sequences=10,
+        repetition_penalty=2.5,
+        max_length=32,
+    )
+for h in hypotheses:
+    print(tokenizer.decode(h, skip_special_tokens=True))
+```
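
For readers curious how the vocabulary stripping described in this README works mechanically, below is a minimal sketch of the embedding-pruning idea, not the script actually used for this commit. The `kept_ids` list is a placeholder standing in for the real selection (the first 5K original `mt5-small` tokens plus 15K corpus-frequent Russian tokens), and the matching `sentencepiece` model surgery is omitted:

```
# Sketch only: illustrates the embedding-pruning idea from the README above.
# `kept_ids` is a placeholder; the real selection combined the first 5K
# original mt5-small tokens with 15K corpus-frequent Russian tokens.
import torch
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("alenusch/mt5small-ruparaphraser")
kept_ids = list(range(20_000))  # placeholder token ids to keep

d_model = model.config.d_model

# Slice the shared input embedding matrix down to the kept rows.
old_emb = model.get_input_embeddings().weight.data
new_emb = torch.nn.Embedding(len(kept_ids), d_model)
new_emb.weight.data = old_emb[kept_ids].clone()
model.set_input_embeddings(new_emb)

# mT5 does not tie input and output embeddings, so the LM head is sliced too.
old_head = model.lm_head.weight.data
new_head = torch.nn.Linear(d_model, len(kept_ids), bias=False)
new_head.weight.data = old_head[kept_ids].clone()
model.lm_head = new_head

model.config.vocab_size = len(kept_ids)
# A sentencepiece model restricted to the same tokens must be built separately,
# so that tokenizer ids line up with the new embedding rows (omitted here).
```

This mirrors the parameter figures above: since 256M of the 300M parameters sit in the two embedding matrices, slicing them accounts for nearly all of the size reduction.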