AlekseyKorshuk committed
Commit 52c43eb
Parent(s): d39bec5

huggingartists

Files changed:
- README.md +97 -0
- config.json +40 -0
- evaluation.txt +1 -0
- flax_model.msgpack +3 -0
- merges.txt +0 -0
- optimizer.pt +3 -0
- pytorch_model.bin +3 -0
- rng_state.pth +3 -0
- scheduler.pt +3 -0
- special_tokens_map.json +1 -0
- tokenizer.json +0 -0
- tokenizer_config.json +1 -0
- trainer_state.json +180 -0
- training_args.bin +3 -0
- vocab.json +0 -0
README.md
ADDED
@@ -0,0 +1,97 @@
---
language: en
datasets:
- huggingartists/bones
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---

<div class="inline-flex flex-col" style="line-height: 1.5;">
    <div class="flex">
        <div
            style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/564dc935d7c601860b155b359d8ddf9d.1000x1000x1.png')">
        </div>
    </div>
    <div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
    <div style="text-align: center; font-size: 16px; font-weight: 800">BONES</div>
    <a href="https://genius.com/artists/bones">
        <div style="text-align: center; font-size: 14px;">@bones</div>
    </a>
</div>
+
|
27 |
+
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
|
28 |
+
|
29 |
+
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
|
30 |
+
|
31 |
+
## How does it work?
|
32 |
+
|
33 |
+
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
|
34 |
+
|
35 |
+
## Training data
|
36 |
+
|
37 |
+
The model was trained on lyrics from BONES.
|
38 |
+
|
39 |
+
Dataset is available [here](https://huggingface.co/datasets/huggingartists/bones).
|
40 |
+
And can be used with:
|
41 |
+
|
42 |
+
```python
|
43 |
+
from datasets import load_dataset
|
44 |
+
|
45 |
+
dataset = load_dataset("huggingartists/bones")
|
46 |
+
```
|
47 |
+
|
48 |
+
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/26h7sojw/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
|
49 |
+
|
50 |
+
## Training procedure
|
51 |
+
|
52 |
+
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on BONES's lyrics.
|
53 |
+
|
54 |
+
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/1yr1mvc2) for full transparency and reproducibility.
|
55 |
+
|
56 |
+
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/1yr1mvc2/artifacts) is logged and versioned.
|
57 |
+
|
58 |
+
## How to use
|
59 |
+
|
60 |
+
You can use this model directly with a pipeline for text generation:
|
61 |
+
|
62 |
+
```python
|
63 |
+
from transformers import pipeline
|
64 |
+
generator = pipeline('text-generation',
|
65 |
+
model='huggingartists/bones')
|
66 |
+
generator("I am", num_return_sequences=5)
|
67 |
+
```
|
68 |
+
|
69 |
+
Or with Transformers library:
|
70 |
+
|
71 |
+
```python
|
72 |
+
from transformers import AutoTokenizer, AutoModelWithLMHead
|
73 |
+
|
74 |
+
tokenizer = AutoTokenizer.from_pretrained("huggingartists/bones")
|
75 |
+
|
76 |
+
model = AutoModelWithLMHead.from_pretrained("huggingartists/bones")
|
77 |
+
```
|
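
`AutoModelWithLMHead` is deprecated in more recent Transformers releases; a minimal sketch of the equivalent loading and sampling with `AutoModelForCausalLM` (not from the original card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huggingartists/bones")
model = AutoModelForCausalLM.from_pretrained("huggingartists/bones")

# Encode a prompt and sample a continuation
inputs = tokenizer("I am", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    do_sample=True,
    max_length=50,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```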

## Limitations and bias

The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).

In addition, the data present in the artist's lyrics further affects the text generated by the model.

## About

*Built by Aleksey Korshuk*

[![Follow](https://img.shields.io/github/followers/AlekseyKorshuk?style=social)](https://github.com/AlekseyKorshuk)

[![Follow](https://img.shields.io/twitter/follow/alekseykorshuk?style=social)](https://twitter.com/intent/follow?screen_name=alekseykorshuk)

[![Follow](https://img.shields.io/badge/dynamic/json?color=blue&label=Telegram%20Channel&query=%24.result&url=https%3A%2F%2Fapi.telegram.org%2Fbot1929545866%3AAAFGhV-KKnegEcLiyYJxsc4zV6C-bdPEBtQ%2FgetChatMemberCount%3Fchat_id%3D-1001253621662&style=social&logo=telegram)](https://t.me/joinchat/_CQ04KjcJ-4yZTky)

For more details, visit the project repository.

[![GitHub stars](https://img.shields.io/github/stars/AlekseyKorshuk/huggingartists?style=social)](https://github.com/AlekseyKorshuk/huggingartists)
config.json
ADDED
@@ -0,0 +1,40 @@
{
  "_name_or_path": "gpt2",
  "activation_function": "gelu_new",
  "architectures": [
    "GPT2LMHeadModel"
  ],
  "attn_pdrop": 0.1,
  "bos_token_id": 50256,
  "embd_pdrop": 0.1,
  "eos_token_id": 50256,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gpt2",
  "n_ctx": 1024,
  "n_embd": 768,
  "n_head": 12,
  "n_inner": null,
  "n_layer": 12,
  "n_positions": 1024,
  "resid_pdrop": 0.1,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 200,
      "min_length": 100,
      "temperature": 1.0,
      "top_p": 0.95
    }
  },
  "torch_dtype": "float32",
  "transformers_version": "4.11.2",
  "use_cache": true,
  "vocab_size": 50257
}
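
The `task_specific_params.text-generation` block above carries the default sampling settings for this model. A minimal sketch (not part of this commit) that passes the same settings explicitly to `generate()`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("huggingartists/bones")
model = AutoModelForCausalLM.from_pretrained("huggingartists/bones")

inputs = tokenizer("I am", return_tensors="pt")
output_ids = model.generate(
    **inputs,
    do_sample=True,     # mirrors task_specific_params["text-generation"]
    max_length=200,
    min_length=100,
    temperature=1.0,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```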
evaluation.txt
ADDED
@@ -0,0 +1 @@
{"eval_loss": 3.516153573989868, "eval_runtime": 9.1233, "eval_samples_per_second": 22.141, "eval_steps_per_second": 2.85, "epoch": 1.0}
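
The reported `eval_loss` is a mean cross-entropy per token (in nats), so it can be converted to a perplexity for easier reading; a small sketch of the arithmetic (only the loss value comes from this file):

```python
import math

eval_loss = 3.516153573989868           # from evaluation.txt
perplexity = math.exp(eval_loss)        # cross-entropy (nats/token) -> perplexity
print(f"validation perplexity ≈ {perplexity:.1f}")  # ≈ 33.7
```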
flax_model.msgpack
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:c146337b2b4d723f9b1a29771e7afdc0fa3f1f1342f2f4a706eed9ae1650a841
size 497764120
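
The diff shows only the Git LFS pointer (object hash and size), not the ~498 MB weight file itself. A minimal sketch (not from the repo) for fetching the actual binary through `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

# Downloads the real flax_model.msgpack that the LFS pointer above refers to
path = hf_hub_download(repo_id="huggingartists/bones", filename="flax_model.msgpack")
print(path)
```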
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
optimizer.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a804f5ac6f1ed67c1037d889a362a20aec52eb0cdfb6d28fe02ca0b23e9f6aec
size 995603825
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:66cef7f28919b6cff8829e7d3bd04f4098aee3c6d27e26eeec13955081f43504
size 510403817
rng_state.pth
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:99966cfb2c88711419cbaac8b6daf330700497742a0d28a569bfa56c42a3fa22
size 14503
scheduler.pt
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:874f414e996bed5c01ac5b2a3b90399694a3bb1b56ce1bf829358b7185235dd2
size 623
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "unk_token": "<|endoftext|>"}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"unk_token": "<|endoftext|>", "bos_token": "<|endoftext|>", "eos_token": "<|endoftext|>", "add_prefix_space": false, "model_max_length": 1024, "special_tokens_map_file": null, "name_or_path": "gpt2", "tokenizer_class": "GPT2Tokenizer"}
trainer_state.json
ADDED
@@ -0,0 +1,180 @@
{
  "best_metric": 3.516153573989868,
  "best_model_checkpoint": "output/bones/checkpoint-131",
  "epoch": 1.0,
  "global_step": 131,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {
      "epoch": 0.04,
      "learning_rate": 0.00013670742670262692,
      "loss": 4.0743,
      "step": 5
    },
    {
      "epoch": 0.08,
      "learning_rate": 0.00013523678052634687,
      "loss": 4.0532,
      "step": 10
    },
    {
      "epoch": 0.11,
      "learning_rate": 0.00013280918103490095,
      "loss": 3.8244,
      "step": 15
    },
    {
      "epoch": 0.15,
      "learning_rate": 0.00012945949034742042,
      "loss": 3.8818,
      "step": 20
    },
    {
      "epoch": 0.19,
      "learning_rate": 0.00012523581249268407,
      "loss": 3.7574,
      "step": 25
    },
    {
      "epoch": 0.23,
      "learning_rate": 0.00012019880259978666,
      "loss": 3.8753,
      "step": 30
    },
    {
      "epoch": 0.27,
      "learning_rate": 0.00011442079584574986,
      "loss": 3.6832,
      "step": 35
    },
    {
      "epoch": 0.31,
      "learning_rate": 0.00010798476866903087,
      "loss": 3.764,
      "step": 40
    },
    {
      "epoch": 0.34,
      "learning_rate": 0.00010098314716666811,
      "loss": 3.7562,
      "step": 45
    },
    {
      "epoch": 0.38,
      "learning_rate": 9.351647978736063e-05,
      "loss": 3.8048,
      "step": 50
    },
    {
      "epoch": 0.42,
      "learning_rate": 8.5691993381587e-05,
      "loss": 3.8516,
      "step": 55
    },
    {
      "epoch": 0.46,
      "learning_rate": 7.762205334494898e-05,
      "loss": 3.7629,
      "step": 60
    },
    {
      "epoch": 0.5,
      "learning_rate": 6.942254996821776e-05,
      "loss": 3.7416,
      "step": 65
    },
    {
      "epoch": 0.53,
      "learning_rate": 6.121123416728538e-05,
      "loss": 3.5754,
      "step": 70
    },
    {
      "epoch": 0.57,
      "learning_rate": 5.310602649316754e-05,
      "loss": 3.625,
      "step": 75
    },
    {
      "epoch": 0.61,
      "learning_rate": 4.5223323705920566e-05,
      "loss": 3.5876,
      "step": 80
    },
    {
      "epoch": 0.65,
      "learning_rate": 3.7676327231320786e-05,
      "loss": 3.5927,
      "step": 85
    },
    {
      "epoch": 0.69,
      "learning_rate": 3.0573417504900444e-05,
      "loss": 3.4549,
      "step": 90
    },
    {
      "epoch": 0.73,
      "learning_rate": 2.401659754895943e-05,
      "loss": 3.8261,
      "step": 95
    },
    {
      "epoch": 0.76,
      "learning_rate": 1.8100028133934438e-05,
      "loss": 3.6442,
      "step": 100
    },
    {
      "epoch": 0.8,
      "learning_rate": 1.2908675560288951e-05,
      "loss": 3.7345,
      "step": 105
    },
    {
      "epoch": 0.84,
      "learning_rate": 8.517091479772992e-06,
      "loss": 3.4828,
      "step": 110
    },
    {
      "epoch": 0.88,
      "learning_rate": 4.988342278719811e-06,
      "loss": 3.5471,
      "step": 115
    },
    {
      "epoch": 0.92,
      "learning_rate": 2.3731033982246404e-06,
      "loss": 3.5867,
      "step": 120
    },
    {
      "epoch": 0.95,
      "learning_rate": 7.089315974356758e-07,
      "loss": 3.4037,
      "step": 125
    },
    {
      "epoch": 0.99,
      "learning_rate": 1.9725610793441152e-08,
      "loss": 3.6132,
      "step": 130
    },
    {
      "epoch": 1.0,
      "eval_loss": 3.516153573989868,
      "eval_runtime": 9.0194,
      "eval_samples_per_second": 22.396,
      "eval_steps_per_second": 2.883,
      "step": 131
    }
  ],
  "max_steps": 131,
  "num_train_epochs": 1,
  "total_flos": 136133148672000.0,
  "trial_name": null,
  "trial_params": null
}
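
The `log_history` array above records the loss and learning-rate schedule every 5 steps, plus a final evaluation entry. A small sketch (not from the repo) for inspecting the curve from a local copy of `trainer_state.json`:

```python
import json

with open("trainer_state.json") as f:
    state = json.load(f)

# Training entries carry "loss"; the last entry holds the eval metrics instead
for entry in state["log_history"]:
    if "loss" in entry:
        print(f"step {entry['step']:>3}  lr {entry['learning_rate']:.2e}  loss {entry['loss']:.4f}")
```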
training_args.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:5e18bd18597806a84e61c11d161145705196f624e949e9d9a4749ae99181a2a2
size 2863
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff