---
language:
- de
- en
datasets:
- wmt14
pipeline_tag: translation
model-index:
- name: leukas/byt5-large-wmt14-deen
  results:
  - task:
      type: translation
      name: Translation
    dataset:
      name: wmt14
      type: wmt14
      config: de-en
      split: test
    metrics:
    - type: bleu
      value: 0.236
      name: BLEU
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiZTI1MzQ4NzYxMGExODRhYTk0NzY5MDcxOWZjOTJhY2ZkMWU3ZTM0NmNlMzI4ZDAyYTEwYzdjMzI3MmY1NzYzZCIsInZlcnNpb24iOjF9.0kBCKKpU8CUzcUWi9y9gFZn__j6bbsiukUBiKFmMbwtwaZSAsc25_hGsHLe3bnwQWJxov7_lGDXX9DK6XNiIAQ
    - type: loss
      value: 0.3008817732334137
      name: loss
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiMzIyM2I2Yjg1NWQ2NjEzOTAwN2NlNDc3NWU4NmI3ZDFhYzZlMTQyNjZlMWM0Njg5YWIzMmI5MzBjN2Y3NmMzYSIsInZlcnNpb24iOjF9.2totcDoHCyf7N9xaqAFVqlyuaoHM3hRIH5jmP4kPoD0crEcPZ0re6pg10_2Uoud4YunWMRvOpUTt8lk2cCM1BA
    - type: gen_len
      value: 20.0
      name: gen_len
      verified: true
      verifyToken: eyJhbGciOiJFZERTQSIsInR5cCI6IkpXVCJ9.eyJoYXNoIjoiNzdjODYyY2UxZGZjNmFmZGFlY2IzNzQxNzA3YjQ3MDg1OWY3YjdkMzM2NGI2Njk2ZDczODNlOWM4YjZjZDQ2MCIsInZlcnNpb24iOjF9.hRVI7QHiBlX3Yp0cFOdezmoxV6CTq20vzp8IzugYc0uTUVD5OAvcBVRLZERGoNR1b1Oi2FV3trQvDWXUQZnsAQ
---

# byt5-large-wmt14-deen

This model is released as part of the paper [Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation](https://arxiv.org/abs/2302.14220).
It is a ByT5 model fine-tuned for German-to-English translation on the WMT14 dataset.

To use the model correctly, you must prepend the input with the prompt "translate X to Y: ", where X and Y are your source and target languages (e.g. German, English).


NOTE: The `decoder_start_token_id` is 259 for ByT5 models and 250099 for mT5 models, which differs from the default value (0) used by Google's original ByT5 and mT5 models.
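The two points above (the task prefix and the non-default `decoder_start_token_id`) can be combined into a minimal usage sketch with the `transformers` library. The example input sentence and the `max_new_tokens` value are illustrative choices, not part of this model card:

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

MODEL_NAME = "leukas/byt5-large-wmt14-deen"


def build_prompt(text: str, src: str = "German", tgt: str = "English") -> str:
    """Prepend the task prefix the model was trained with."""
    return f"translate {src} to {tgt}: {text}"


if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)

    inputs = tokenizer(build_prompt("Guten Morgen!"), return_tensors="pt")
    # ByT5 models require decoder_start_token_id=259, not the default 0.
    outputs = model.generate(
        **inputs, decoder_start_token_id=259, max_new_tokens=100
    )
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Loading the large checkpoint requires several gigabytes of memory; for mT5-based checkpoints from the same paper, pass `decoder_start_token_id=250099` instead.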