Update README.md
README.md CHANGED
@@ -46,7 +46,7 @@ The model was trained on [Japanese CC-100](http://data.statmt.org/cc-100/ja.txt.
 
 The model uses a [sentencepiece](https://github.com/google/sentencepiece)-based tokenizer.
 
 # A toy prefix-tuning weight file
 
-Along with the pretrained model, we also release a [prefix-tuning](https://arxiv.org/abs/2101.00190) weight file named `smileface_suffix.task0.weight` for demonstration. The toy prefix-tuning weights here are trained to encourage the model to end every generated sentence with a smiling face emoji 😃.
+Along with the pretrained model, we also release a [prefix-tuning](https://arxiv.org/abs/2101.00190) weight file named `smileface_suffix.task0.weight` for demonstration. The toy prefix-tuning weights here are trained to encourage the model to end every generated sentence with a smiling face emoji 😃. Find the training/inference code for prefix-tuning at our GitHub repo [prefix-tuning-gpt](https://github.com/rinnakk/prefix-tuning-gpt).
 
 Here are a few samples generated with and without the toy prefix weights, respectively.
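For context on what a prefix-tuning weight file like `smileface_suffix.task0.weight` does at inference time: prefix-tuning steers a frozen language model by prepending trained key/value vectors to each attention layer. Below is a minimal single-head NumPy sketch of that mechanism with illustrative shapes; it is an assumption-laden toy, not the actual implementation in the prefix-tuning-gpt repo.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_prefix(q, k, v, prefix_k=None, prefix_v=None):
    """Single-head scaled dot-product attention.

    q, k, v: (T, d) arrays for a length-T sequence.
    prefix_k, prefix_v: optional (P, d) trained prefix vectors that are
    prepended to the keys/values, so every query also attends to them.
    """
    if prefix_k is not None:
        k = np.concatenate([prefix_k, k], axis=0)  # (P + T, d)
        v = np.concatenate([prefix_v, v], axis=0)  # (P + T, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])        # (T, P + T)
    return softmax(scores, axis=-1) @ v            # (T, d)

rng = np.random.default_rng(0)
T, P, d = 4, 2, 8
q, k, v = (rng.normal(size=(T, d)) for _ in range(3))
pk, pv = rng.normal(size=(P, d)), rng.normal(size=(P, d))

out_plain = attention_with_prefix(q, k, v)
out_prefix = attention_with_prefix(q, k, v, pk, pv)
print(out_plain.shape, out_prefix.shape)  # (4, 8) (4, 8)
```

The output length is unchanged; the prefix only shifts where attention mass goes, which is why a small weight file can bias generation (e.g. toward ending sentences with 😃) without touching the frozen model parameters.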