---
language: ja
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
widget:
- text: "未来に揺れる花 過去にもあった花"
---
# Japanese GPT2 Lyric Model
## Model description
This model generates Japanese lyrics.
## How to use
```python
import torch
from transformers import T5Tokenizer, GPT2LMHeadModel, TextGenerationPipeline
tokenizer = T5Tokenizer.from_pretrained("skytnt/gpt2-japanese-lyric-small")
model = GPT2LMHeadModel.from_pretrained("skytnt/gpt2-japanese-lyric-small")
def gen_lyric(prompt_text: str):
    # the model uses [SEP] to separate lines within a lyric
    prompt_text = prompt_text.replace("\n", "[SEP]")
    prompt_tokens = tokenizer.tokenize(prompt_text)
    prompt_token_ids = tokenizer.convert_tokens_to_ids(prompt_tokens)
    prompt_tensor = torch.LongTensor(prompt_token_ids)
    prompt_tensor = prompt_tensor.view(1, -1)
    # model forward
    output_sequences = model.generate(
        input_ids=prompt_tensor,
        max_length=512,
        top_p=0.95,
        top_k=40,
        temperature=1.0,
        do_sample=True,
        early_stopping=True,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        num_return_sequences=1
    )
    # convert model outputs to readable text
    generated_sequence = output_sequences.tolist()[0]
    generated_tokens = tokenizer.convert_ids_to_tokens(generated_sequence)
    generated_text = tokenizer.convert_tokens_to_string(generated_tokens)
    # restore line breaks, use full-width spaces, and mark the end of the song
    generated_text = "\n".join([s.strip() for s in generated_text.split('[SEP]')]).replace(' ', '\u3000').replace(
        '</s>', '\n\n---end---')
    return generated_text
print(gen_lyric("未来に揺れる花 過去にもあった花"))
```
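The snippet above imports `TextGenerationPipeline` without using it. If you prefer the pipeline API, the same generation can be expressed as a minimal sketch like the one below; note that the raw pipeline output still contains the `[SEP]` and `</s>` markers that `gen_lyric` converts back into line breaks.

```python
from transformers import TextGenerationPipeline

# a minimal sketch: wrap the model and tokenizer loaded above in a pipeline
pipe = TextGenerationPipeline(model=model, tokenizer=tokenizer)

# sampling parameters mirror the gen_lyric example
result = pipe(
    "未来に揺れる花 過去にもあった花",
    max_length=512,
    do_sample=True,
    top_p=0.95,
    top_k=40,
)

# the raw output still contains [SEP] and </s> markers,
# so the same post-processing as in gen_lyric applies
print(result[0]["generated_text"])
```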
## Training data
The training data ([click to download](https://data.anyweb.xyz/dataset/lyric.zip)) contains 48,394 Japanese lyrics collected from [NetEase Cloud Music](https://music.163.com/) using [lyric_download](https://github.com/SkyTNT/lyric_downlowd).
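The inference code suggests that each lyric was flattened into a single training string, with `[SEP]` between lines and the end-of-sequence token closing the song. A hypothetical preprocessing sketch under that assumption (the helper name is illustrative, not from the original training code):

```python
# hypothetical preprocessing sketch: flatten one lyric into a training string,
# assuming lines are joined with [SEP] and the song ends with </s>
def lyric_to_training_text(lyric: str) -> str:
    lines = [line.strip() for line in lyric.splitlines() if line.strip()]
    return "[SEP]".join(lines) + "</s>"

print(lyric_to_training_text("未来に揺れる花\n過去にもあった花"))
```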