---
language: ja
tags:
- ja
- japanese
- gpt2
- text-generation
- lm
- nlp
license: mit
widget:
- text: <s>桜[CLS]
datasets:
- skytnt/japanese-lyric
---

# Japanese GPT2 Lyric Model

## Model description

This model generates Japanese song lyrics.
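
Judging from the widget example and the generation code below, prompts follow a simple format: `<s>`, then the song title, then `[CLS]`, then an optional lyric prefix. A minimal sketch of building such a prompt:

```python
# Build a prompt in the model's assumed format: <s> + title + [CLS] + lyric prefix.
title = "桜"           # song title ("cherry blossom")
lyric_prefix = ""      # optional opening lines to continue from
prompt = "<s>" + title + "[CLS]" + lyric_prefix
print(prompt)  # -> <s>桜[CLS]
```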

## How to use

```python
import torch
from transformers import T5Tokenizer, GPT2LMHeadModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

tokenizer = T5Tokenizer.from_pretrained("skytnt/gpt2-japanese-lyric-medium")
model = GPT2LMHeadModel.from_pretrained("skytnt/gpt2-japanese-lyric-medium")
model = model.to(device)

def gen_lyric(title: str, prompt_text: str):
    if title or prompt_text:
        # Prompt format: <s> + title + [CLS] + lyric prefix; line breaks are
        # encoded as the literal marker "\n " so the tokenizer can handle them.
        prompt_text = "<s>" + title + "[CLS]" + prompt_text
        prompt_text = prompt_text.replace("\n", "\\n ")
        prompt_token_ids = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(prompt_text))
        prompt_tensor = torch.LongTensor(prompt_token_ids).view(1, -1).to(device)
    else:
        prompt_tensor = None

    # model forward
    output_sequences = model.generate(
        input_ids=prompt_tensor,
        max_length=512,
        top_p=0.95,
        top_k=40,
        temperature=1.0,
        do_sample=True,
        early_stopping=True,
        bos_token_id=tokenizer.bos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
        num_return_sequences=1,
    )

    # convert model outputs back to readable text
    generated_sequence = output_sequences.tolist()[0]
    generated_tokens = tokenizer.convert_ids_to_tokens(generated_sequence)
    generated_text = tokenizer.convert_tokens_to_string(generated_tokens)
    # restore real newlines, use full-width spaces, and strip special tokens
    generated_text = "\n".join(s.strip() for s in generated_text.split("\\n"))
    generated_text = (
        generated_text.replace(" ", "\u3000")
        .replace("<s>", "")
        .replace("</s>", "\n\n---end---")
    )
    title_and_lyric = generated_text.split("[CLS]", 1)
    if len(title_and_lyric) == 1:
        title, lyric = "", title_and_lyric[0].strip()
    else:
        title, lyric = title_and_lyric[0].strip(), title_and_lyric[1].strip()
    return f"---{title}---\n\n{lyric}"


print(gen_lyric("桜",""))

```
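
The helper above stores line breaks as the literal two-character marker `\\n ` before tokenization and restores real newlines afterwards. A minimal sketch of that round trip (pure string handling, no model required):

```python
# Line breaks become the literal marker "\n " (backslash, n, space) so the
# tokenizer treats them as ordinary text; this mirrors the replace() calls
# in gen_lyric above. The sample lyric line is an illustrative assumption.
raw = "サクラ咲く\n春の空"
encoded = raw.replace("\n", "\\n ")
decoded = "\n".join(s.strip() for s in encoded.split("\\n"))
assert decoded == raw
```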

## Training data

The [training data](https://huggingface.co/datasets/skytnt/japanese-lyric/blob/main/lyric_clean.pkl) contains 143,587 Japanese lyrics, collected from [uta-net](https://www.uta-net.com/) with [lyric_download](https://github.com/SkyTNT/lyric_downlowd).