---
license: mit
datasets:
- iryneko571/CCMatrix-v1-Ja_Zh-fused
language:
- ja
- zh
library_name: transformers
pipeline_tag: translation
widget:
- text: <-ja2zh-> フェルディナント・ラッサール \n は、プロイセンの政治学者、哲学者、法学者、社会主義者、労働運動指導者。ドイツ社会民主党の母体となる全ドイツ労働者同盟の創設者である。社会主義共和政の統一ドイツを目指しつつも、……
---
# Test notebook (Colab)
No need to set up an environment yourself; use the Colab notebook to test the model: <br>
https://colab.research.google.com/drive/1PA30HPgRooCTV-H9Wr_DZXHqC42PrgTO?usp=sharing <br>
The current translation quality is still quite rough; the problem is not missing vocabulary, the model simply stops improving. <br>
It has trouble learning more due to its ~300M parameter size and my limited training techniques.
# Release Notes
* This model was inspired by (and essentially adapted from) larryvrh/mt5-translation-ja_zh; it is finetuned from mt5-small, so it is fairly small overall, and the training method and dataset follow that project.
* Training used the trimmed and fused dataset CCMatrix-v1-Ja_Zh with a 1e-4 learning rate for 7 epochs and no weight decay, reaching a validation loss of about 1.7, where it stalled.
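The hyperparameters above can be reproduced roughly with `Seq2SeqTrainer`. The sketch below is an illustration only, not the exact script used for this model; the dataset column names (`ja`, `zh`), the batch size, and the tag handling are assumptions.

```python
# Hedged sketch of the fine-tuning setup described above
# (mt5-small, CCMatrix-v1-Ja_Zh-fused, lr 1e-4, 7 epochs, no weight decay).
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

base = "google/mt5-small"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSeq2SeqLM.from_pretrained(base)

raw = load_dataset("iryneko571/CCMatrix-v1-Ja_Zh-fused")

def preprocess(examples):
    # Column names "ja"/"zh" are assumptions; prepend the direction tag
    # so training matches the inference format shown below.
    inputs = [f"<-ja2zh-> {s}" for s in examples["ja"]]
    model_inputs = tokenizer(inputs, max_length=256, truncation=True)
    labels = tokenizer(text_target=examples["zh"], max_length=256, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

tokenized = raw.map(preprocess, batched=True)

args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-translation-ja_zh",
    learning_rate=1e-4,              # as stated in the notes
    num_train_epochs=7,              # as stated in the notes
    weight_decay=0.0,                # no weight decay
    per_device_train_batch_size=16,  # assumption, not documented
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()
```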
# Usage Guide
A more complete example of using the model through the `transformers` pipeline:
```python
from transformers import pipeline

model_name = "iryneko571/mt5-small-translation-ja_zh"

# Build a translation pipeline; the tokenizer defaults to the model's own,
# so passing tokenizer=model_name explicitly would be equivalent.
pipe = pipeline(
    "translation",
    model=model_name,
    repetition_penalty=1.4,
    batch_size=1,
    max_length=256,
)

def translate_batch(batch, language='<-ja2zh->'):
    """Translate a list of strings, prepending the language-direction tag to each one."""
    prefixed = [f'{language} {text}' for text in batch]
    translated = pipe(prefixed)
    return [item['translation_text'] for item in translated]

# Example input (taken from the widget example above); replace with your own strings
inputs = ['フェルディナント・ラッサールは、プロイセンの政治学者、哲学者、法学者、社会主義者、労働運動指導者。']

print(translate_batch(inputs))
```
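If you prefer not to use the pipeline, the checkpoint can also be called directly with `AutoModelForSeq2SeqLM`. This is a minimal sketch; the generation settings mirror the pipeline above, and the input sentence is the widget example with the same `<-ja2zh->` tag prepended.

```python
# Minimal sketch: direct generation without the pipeline (settings are illustrative)
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "iryneko571/mt5-small-translation-ja_zh"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "<-ja2zh-> フェルディナント・ラッサールは、プロイセンの政治学者、哲学者、法学者。"
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256, repetition_penalty=1.4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```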